id | title | text | formulas | url
---|---|---|---|---|
7125 | Center (group theory) | Set of elements that commute with every element of a group
In abstract algebra, the center of a group "G" is the set of elements that commute with every element of "G". It is denoted Z("G"), from German "Zentrum," meaning "center". In set-builder notation,
Z("G") = {"z" ∈ "G"| ∀"g" ∈ "G", "zg"
"gz"}.
The center is a normal subgroup, Z("G") ⊲ "G", and also a characteristic subgroup, but is not necessarily fully characteristic. The quotient group, "G" / Z("G"), is isomorphic to the inner automorphism group, Inn("G").
A group "G" is abelian if and only if Z("G") = "G". At the other extreme, a group is said to be centerless if Z("G") is trivial; i.e., consists only of the identity element.
The elements of the center are central elements.
As a subgroup.
The center of "G" is always a subgroup of "G". In particular:
Furthermore, the center of "G" is always an abelian and normal subgroup of "G". Since all elements of Z("G") commute, it is closed under conjugation.
A group homomorphism "f" : "G" → "H" might not restrict to a homomorphism between their centers. The image elements "f" ("g") commute with the image "f" ( "G" ), but they need not commute with all of "H" unless "f" is surjective. Thus the center mapping formula_0 is not a functor between categories Grp and Ab, since it does not induce a map of arrows.
Conjugacy classes and centralizers.
By definition, an element is central whenever its conjugacy class contains only the element itself; i.e. Cl("g") = {"g"}.
The center is the intersection of all the centralizers of elements of "G": formula_1 As centralizers are subgroups, this again shows that the center is a subgroup.
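For a small finite group these characterizations can be verified directly by brute force. The following minimal Python sketch, given here purely as an illustration (the helper "compose" and the choice of the symmetric group S3 are arbitrary), computes the center of S3 and checks that it equals the intersection of all centralizers:
```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))          # the symmetric group S3

# The center: elements commuting with every element of G.
center = [z for z in G if all(compose(z, g) == compose(g, z) for g in G)]

def centralizer(g):
    return {x for x in G if compose(x, g) == compose(g, x)}

intersection = set(G)
for g in G:
    intersection &= centralizer(g)

print(center)                        # only the identity: S3 is centerless
print(set(center) == intersection)   # True: center = intersection of centralizers
```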
Conjugation.
Consider the map "f" : "G" → Aut("G"), from "G" to the automorphism group of "G" defined by "f"("g") = "ϕ""g", where "ϕ""g" is the automorphism of "G" defined by
"f"("g")("h") = "ϕ""g"("h") = "ghg"−1.
The function, "f" is a group homomorphism, and its kernel is precisely the center of "G", and its image is called the inner automorphism group of "G", denoted Inn("G"). By the first isomorphism theorem we get,
"G"/Z("G") ≃ Inn("G").
The cokernel of this map is the group Out("G") of outer automorphisms, and these form the exact sequence
1 ⟶ Z("G") ⟶ "G" ⟶ Aut("G") ⟶ Out("G") ⟶ 1.
Higher centers.
Quotienting out by the center of a group yields a sequence of groups called the upper central series:
("G"0 = "G") ⟶ ("G"1 = "G"0/Z("G"0)) ⟶ ("G"2 = "G"1/Z("G"1)) ⟶ ⋯
The kernel of the map "G" → "Gi" is the "i"th center of "G" (second center, third center, etc.), denoted Z"i"("G"). Concretely, the ("i"+1)-st center comprises the elements that commute with all elements up to an element of the "i"th center. Following this definition, one can define the 0th center of a group to be the identity subgroup. This can be continued to transfinite ordinals by transfinite induction; the union of all the higher centers is called the hypercenter.
The ascending chain of subgroups
1 ≤ Z("G") ≤ Z2("G") ≤ ⋯
stabilizes at "i" (equivalently, Z"i"("G") = Zi+1("G")) if and only if "G""i" is centerless.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G\\to Z(G)"
},
{
"math_id": 1,
"text": "Z(G) = \\bigcap_{g\\in G} Z_G(g)."
},
{
"math_id": 2,
"text": " \\begin{pmatrix}\n 1 & 0 & z\\\\\n 0 & 1 & 0\\\\\n 0 & 0 & 1\n \\end{pmatrix}"
},
{
"math_id": 3,
"text": "U(n)"
},
{
"math_id": 4,
"text": "\\left\\{ e^{i\\theta} \\cdot I_n \\mid \\theta \\in [0, 2\\pi) \\right\\}"
},
{
"math_id": 5,
"text": "\\operatorname{SU}(n)"
},
{
"math_id": 6,
"text": "\\left\\lbrace e^{i\\theta} \\cdot I_n \\mid \\theta = \\frac{2k\\pi}{n}, k = 0, 1, \\dots, n-1 \\right\\rbrace "
}
]
| https://en.wikipedia.org/wiki?curid=7125 |
71252852 | Scalar chromodynamics | Quantum chromodynamics with scalar matter
In quantum field theory, scalar chromodynamics, also known as scalar quantum chromodynamics or scalar QCD, is a gauge theory consisting of a gauge field coupled to a scalar field. This theory is used, for example, to model the Higgs sector of the Standard Model.
It arises from a coupling of a scalar field to gauge fields. Scalar fields are used to model certain particles in particle physics; the most important example is the Higgs boson. Gauge fields are used to model forces in particle physics: they are force carriers. When applied to the Higgs sector, these are the gauge fields appearing in electroweak theory, described by Glashow–Weinberg–Salam theory.
Matter content and Lagrangian.
Matter content.
This article discusses the theory on flat spacetime formula_0, commonly known as Minkowski space.
The model consists of a complex vector valued scalar field formula_1 minimally coupled to a gauge field formula_2.
The gauge group of the theory is a Lie group formula_3. Commonly, this is formula_4 for some formula_5, though many details hold even when we don't concretely fix formula_3.
The scalar field can be treated as a function formula_6, where formula_7 is the data of a representation of formula_3. Then formula_8 is a vector space. The 'scalar' refers to how formula_1 transforms (trivially) under the action of the Lorentz group, despite formula_1 being vector valued. For concreteness, the representation is often chosen to be the fundamental representation. For formula_4, this fundamental representation is formula_9. Another common representation is the adjoint representation. In this representation, varying the Lagrangian below to find the equations of motion gives the Yang–Mills–Higgs equation.
Each component of the gauge field is a function formula_10 where formula_11 is the Lie algebra of formula_3 from the Lie group–Lie algebra correspondence. From a geometric point of view, formula_2 are the components of a principal connection under a global choice of trivialization (which can be made due to the theory being on flat spacetime).
Lagrangian.
The Lagrangian density arises from minimally coupling the Klein–Gordon Lagrangian (with a potential) to the Yang–Mills Lagrangian. Here the scalar field formula_1 is in the fundamental representation of formula_4:
Scalar QCD Lagrangian density
formula_12
where
formula_13 is the field strength of the gauge field, given by formula_14,
formula_15 is the covariant derivative of the scalar field, given by formula_16,
formula_17 is the coupling constant,
formula_18 is the potential,
and formula_19 denotes the matrix trace.
This straightforwardly generalizes to an arbitrary gauge group formula_3, where formula_1 takes values in an arbitrary representation formula_20 equipped with an invariant inner product formula_21, by replacing formula_22.
Gauge invariance.
The model is invariant under gauge transformations, which at the group level is a function formula_23, and at the algebra level is a function formula_24.
At the group level, the transformations of fields is
formula_25
formula_26
From the geometric viewpoint, formula_27 is a global change of trivialization. This is why it is a misnomer to call gauge symmetry a "symmetry": it is really a redundancy in the description of the system.
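A crude numerical illustration of this invariance can be given for constant SU(2) gauge fields, for which the derivative terms drop out of the field strength and a constant gauge transformation acts purely by conjugation. The coupling value, the Euclidean index sum standing in for the Minkowski contraction (which is irrelevant to the invariance check), and all variable names below are assumptions of this sketch rather than conventions of the article:
```python
import numpy as np

rng = np.random.default_rng(0)
g = 0.7                                            # coupling constant (arbitrary value)

# Pauli matrices: a basis of Hermitian traceless 2x2 matrices.
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def random_hermitian_traceless():
    return np.einsum('a,aij->ij', rng.normal(size=3), sigma)

A = [random_hermitian_traceless() for _ in range(4)]   # constant gauge-field components

def field_strength(A):
    # For constant fields the derivative terms vanish: F_mn = i g [A_m, A_n].
    return [[1j * g * (A[m] @ A[n] - A[n] @ A[m]) for n in range(4)] for m in range(4)]

def trace_FF(F):
    # Euclidean index sum; metric signs do not affect the invariance check.
    return sum(np.trace(F[m][n] @ F[m][n]) for m in range(4) for n in range(4)).real

# A constant SU(2) gauge transformation U = cos(t) I + i sin(t) n.sigma.
t = rng.uniform(0, np.pi)
n = rng.normal(size=3); n /= np.linalg.norm(n)
U = np.cos(t) * np.eye(2) + 1j * np.sin(t) * np.einsum('a,aij->ij', n, sigma)

A_gauge = [U @ Am @ U.conj().T for Am in A]             # A_m -> U A_m U^{-1}

print(np.isclose(trace_FF(field_strength(A)),
                 trace_FF(field_strength(A_gauge))))    # True: tr(F F) is unchanged
```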
Curved spacetime.
The theory admits a generalization to a curved spacetime formula_28, but this requires more subtle definitions for many objects appearing in the theory. For example, the scalar field must be viewed as a section of an associated vector bundle with fibre formula_8. This is still true on flat spacetime, but the flatness of the base space allows the section to be viewed as a function formula_29, which is conceptually simpler.
Higgs mechanism.
If the potential is minimized at a non-zero value of formula_1, this model exhibits the Higgs mechanism. In fact the Higgs boson of the Standard Model is modeled by this theory with the choice formula_30; the Higgs boson is also coupled to electromagnetism.
Examples.
By concretely choosing a potential formula_8, some familiar theories can be recovered.
Taking formula_31 gives Yang–Mills minimally coupled to a Klein–Gordon field with mass formula_28.
Taking formula_32 gives the potential for the Higgs boson in the Standard Model.
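As a small numerical check of the Higgs-type potential just mentioned, the sketch below locates the minimum of formula_32 in the variable "x" = φ†φ and compares it with the closed-form value μH2/(2λ); the parameter values are arbitrary illustrative choices:
```python
import numpy as np

lambda_, mu_H = 0.13, 88.4           # arbitrary illustrative values

def V(x):                            # potential as a function of x = phi^dagger phi
    return lambda_ * x**2 - mu_H**2 * x

x = np.linspace(0.0, 2 * mu_H**2 / lambda_, 200001)
x_min_numeric = x[np.argmin(V(x))]
x_min_exact = mu_H**2 / (2 * lambda_)

print(x_min_numeric, x_min_exact)    # agree to within the grid spacing
```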
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^{1,3}"
},
{
"math_id": 1,
"text": "\\phi"
},
{
"math_id": 2,
"text": "A_\\mu"
},
{
"math_id": 3,
"text": "G"
},
{
"math_id": 4,
"text": "\\text{SU}(N)"
},
{
"math_id": 5,
"text": "N"
},
{
"math_id": 6,
"text": "\\phi: \\mathbb{R}^{1,3}\\rightarrow V"
},
{
"math_id": 7,
"text": "(V, \\rho, G)"
},
{
"math_id": 8,
"text": "V"
},
{
"math_id": 9,
"text": "\\mathbb{C}^N"
},
{
"math_id": 10,
"text": "A_\\mu: \\mathbb{R}^{1,3} \\rightarrow \\mathfrak{g}"
},
{
"math_id": 11,
"text": "\\mathfrak{g}"
},
{
"math_id": 12,
"text": "\\mathcal{L} = -\\frac{1}{4} \\text{tr}(F_{\\mu\\nu}F^{\\mu\\nu}) + (D_\\mu \\phi)^\\dagger D^\\mu \\phi - V(\\phi) "
},
{
"math_id": 13,
"text": "F_{\\mu\\nu}"
},
{
"math_id": 14,
"text": "F_{\\mu\\nu} = \\partial_\\mu A_\\nu - \\partial_\\nu A_\\mu + ig[A_\\mu, A_\\nu]"
},
{
"math_id": 15,
"text": "D_\\mu \\phi"
},
{
"math_id": 16,
"text": "D_\\mu \\phi = \\partial_\\mu \\phi - ig\\rho(A_\\mu)\\phi."
},
{
"math_id": 17,
"text": "g"
},
{
"math_id": 18,
"text": "V(\\phi)"
},
{
"math_id": 19,
"text": "\\text{tr}"
},
{
"math_id": 20,
"text": "\\rho"
},
{
"math_id": 21,
"text": "\\langle \\cdot , \\cdot \\rangle"
},
{
"math_id": 22,
"text": "(D_\\mu \\phi)^\\dagger D^\\mu \\phi \\mapsto \\langle D_\\mu \\phi, D^\\mu \\phi \\rangle"
},
{
"math_id": 23,
"text": "U:\\mathbb{R}^{1,3}\\rightarrow G"
},
{
"math_id": 24,
"text": "\\alpha:\\mathbb{R}^{1,3}\\rightarrow \\mathfrak{g}"
},
{
"math_id": 25,
"text": "\\phi(x) \\mapsto U(x)\\phi(x)"
},
{
"math_id": 26,
"text": "A_\\mu(x) \\mapsto UA_\\mu U^{-1} - \\frac{i}{g}(\\partial_\\mu U) U^{-1}."
},
{
"math_id": 27,
"text": "U(x)"
},
{
"math_id": 28,
"text": "M"
},
{
"math_id": 29,
"text": "M \\rightarrow V"
},
{
"math_id": 30,
"text": "G = \\text{SU}(2)"
},
{
"math_id": 31,
"text": "V(\\phi) = M^2\\phi^\\dagger \\phi"
},
{
"math_id": 32,
"text": "V(\\phi) = \\lambda (\\phi^\\dagger \\phi)^2 - \\mu_H^2\\phi^\\dagger \\phi"
}
]
| https://en.wikipedia.org/wiki?curid=71252852 |
7125851 | Turning radius | Minimum dimension for a vehicle to make a turn
The turning radius (alternatively, turning diameter or turning circle) of a vehicle defines the minimum dimension (typically the radius or diameter, respectively) of available space required for that vehicle to make a semi-circular U-turn without skidding. The Oxford English Dictionary describes turning circle as "the smallest circle within which a ship, motor vehicle, etc., can be turned round completely". The term thus refers to a theoretical minimal circle in which for example an aeroplane, a ground vehicle or a watercraft can be turned around.
The terms ("radius", "diameter", or "circle") can have different meanings; refer to the section.
Definition.
On wheeled vehicles with the common type of front wheel steering (i.e. one, two or even four wheels at the front capable of steering), the vehicle's "turning diameter" measures the minimum space needed to turn the vehicle around while the steering is set to its maximum displacement from the central 'straight ahead' position - i.e. either extreme left or right. If a marker pen were placed on the point of the vehicle furthest from the center of the turn, the diameter of the circle traced during the turn would define the value of that vehicle's turning diameter. Mathematically, the "turning radius" would be half of the turning diameter.
The curb-to-curb turning radius, which considers the chassis and wheels only without body protrusions, can be expressed as a simplified function of the wheelbase, tire width, and steering angle:
formula_0
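As an illustration, the simplified formula above can be evaluated directly; the wheelbase, tire width and steering angle below are arbitrary example values, not figures from any cited vehicle:
```python
import math

def curb_to_curb_radius(wheelbase_m, tire_width_m, steering_angle_deg):
    return wheelbase_m / math.sin(math.radians(steering_angle_deg)) + tire_width_m / 2

# e.g. a 2.7 m wheelbase, 0.225 m tires and 33 degrees of maximum steering angle
print(round(curb_to_curb_radius(2.7, 0.225, 33.0), 2), "m")   # about 5.07 m
```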
Aircraft have a similar minimum turning circle concept, generally associated with a standard rate turn, in which an aircraft enters a coordinated turn which changes its heading at a rate of 3° per second, or 180° in one minute. In this case, the turning radius (in nautical miles) depends on the true airspeed formula_1 (in knots) as:
formula_2
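Likewise, with the true airspeed in knots the expression above yields the radius in nautical miles (1 NM = 1852 m); the airspeed used below is an arbitrary example value:
```python
import math

def standard_rate_turn_radius_nm(true_airspeed_knots):
    return true_airspeed_knots / (60 * math.pi)

r_nm = standard_rate_turn_radius_nm(120.0)
print(round(r_nm, 3), "NM =", round(r_nm * 1852), "m")   # about 0.637 NM = 1179 m
```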
Turning diameter is sometimes used in everyday language as a generalized term rather than with numerical figures. For example, a wheeled vehicle with a very small turning circle may be described as having a "tight turning radius", meaning that it is easier to turn around very tight corners. Wheeled vehicles with four-wheel steering will have a smaller turning radius than vehicles that steer wheels on one axle.
Exceptions.
Technically, the minimum possible turning circle for a vehicle would be where it does not move either forwards or backwards while turning and simply pivots on its central axis. For a rectangular vehicle capable of doing this, the smallest turning circle would be equal to the diagonal length of the vehicle. As an example, some boats can be turned in this way, generally by using azimuth thrusters.
Some wheeled vehicles are designed to spin around their central axis by making all wheels steerable, such as certain lawnmowers and wheelchairs as they do not follow a circular path as they turn. In this case the vehicle is referred to as a "zero turning radius" vehicle. Some camera dollies used in the film industry have a "round" mode which allows them to spin around their z axis by allowing synchronized inverse rotation of their left and right wheel sets, effectively giving them "zero" turning radius.
Many conventionally steerable vehicles (only one axle with steerable wheels) can reverse the direction of travel in a space smaller than the stated turning radius by executing a specialized maneuver, such as a J-turn or similar skid, or in a discontinuous motion such as a three-point turn.
Alternative nomenclature.
Other terms are sometimes used synonymously for turning diameter, which can lead to confusion.
Turning radius and diameter.
The automotive term "turning radius" has been used as equivalent and interchangeable with the "turning diameter". For example, the 2017 Audi A4 is specified by the manufacturer as having a turning diameter (curb-to-curb) of . Mathematically, the radius of a circle is half the diameter, so the correct turning radius in this example would be = m. However, another source lists the turning radius of the same vehicle as also being 11.6 m, which is the turning diameter.
In practice, the values of turning diameter tend to be listed more frequently in vehicle specifications, so the term turning diameter will therefore be more correct in most cases. The turning diameter will always give a higher number for a given vehicle, and the turning diameter measurement is usually preferred by automotive manufacturers. Such mixing of terms can lead to confusion among consumers.
Turning circle.
The term "turning circle" is another term also sometimes used synonymously for the turning diameter. Some argue that turning circle is less ambiguous than turning radius, but "turning circle" may introduce its own ambiguities since the same circle can be defined by multiple measurements, including the radius formula_3, diameter (formula_4, twice as big), or circumference (formula_5, about 6.28 times as big). For example, "Motor Trend" refers to a "curb-to-curb turning circle" of a 2008 Cadillac CTS as , but the terminology is not yet settled. AutoChannel.com refers to the "turning radius" of the same car as .
Turning circle is also sometimes used to refer to the path swept in the manoeuvre, i.e. the arc, or the circle's circumference in the case when the manoeuvre makes a complete turn.
Different measurement methods.
There are two methods for measuring the vehicle turning diameter which will give slightly different results. These two methods are called wall-to-wall and curb-to-curb (US spelling), or alternatively kerb-to-kerb (UK spelling).
The wall-to-wall turning circle is the minimum distance between two walls, both of which exceed the height of the vehicle, in which the vehicle can make a U-turn. The kerb-to-kerb turning circle is the minimum distance between two raised curbs, both of which are lower than the lowest body protrusions, in which the vehicle can make a U-turn. The wall-to-wall turning circle is greater than the kerb-to-kerb measure for the same vehicle because of the front and rear body overhangs. One can find these two ways of measuring the turning circle used in auto specifications, for example, a van might be listed as having a turning circle (in meters) of 12.1 (C) / 12.4 (W).
Curb-to-curb.
A curb or curb-to-curb turning circle will show the straight-line distance from one side of the circle to the other, through the center. The name "curb-to-curb" indicates that a street would have to be this wide before the car can make a U-turn without a wheel hitting the street curb. If the curb were built up as high as the car and the same U-turn were attempted, parts of the car (such as the bumper) would strike this wall.
The kerb-to-kerb turning circle can be smaller than the full turning circle, as it refers to only a partial circle (~180°), with the vehicle starting alongside one kerb. To perform a U-turn in a forward direction only, the centre of the turn is not coincident with the centre of the road, so a complete circle would not be possible without driving onto the pavement to complete the manoeuvre. It also does not take into account the parts of the vehicle that overhang the wheels, whereas the 'turning circle' does.
Wall-to-wall.
The name wall or wall-to-wall turning circle denotes how far apart the two walls would have to be to allow a U-turn without scraping the walls.
Legal requirements for road vehicles.
Road vehicles must be able to carry out a 360-degree turn on an annulus of specified outer and inner radii, measured wall-to-wall; the outer radius of this annulus is 12.5 m. In addition, when entering this annulus, no part of the vehicle may overreach a tangent, drawn at the outer, 12.5 m limit of the annulus, by more than a specified distance.
New Zealand requires that road vehicles be able to perform a 360-degree turn within a circle of a specified diameter, measured wall-to-wall. The only parts of the vehicle that may reach over this limitation are collapsible mirrors.
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "turning\\ radius = \\frac{wheelbase}{\\sin{\\left ( steering\\ angle \\right )}} + \\frac{tire\\ width}{2}"
},
{
"math_id": 1,
"text": "v_t"
},
{
"math_id": 2,
"text": "turning\\ radius = \\frac{v_t}{60 \\pi}"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "d = 2 \\cdot r"
},
{
"math_id": 5,
"text": "2 \\pi r"
}
]
| https://en.wikipedia.org/wiki?curid=7125851 |
712617 | Power rating | Highest power input allowed to flow through electrical or mechanical equipment
In electrical engineering and mechanical engineering, the power rating of equipment is the highest power input allowed to flow through particular equipment. According to the particular discipline, the term "power" may refer to electrical or mechanical power. A power rating can also involve average and maximum power, which may vary depending on the kind of equipment and its application.
Power rating limits are usually set as a guideline by the manufacturers, protecting the equipment, and simplifying the design of larger systems, by providing a level of operation under which the equipment will not be damaged while allowing for a certain safety margin.
Equipment types.
Dissipative equipment.
In equipment that primarily dissipates electric power or converts it into mechanical power, such as resistors and speakers, the power rating given is usually the maximum power that can be safely dissipated by the equipment. The usual reason for this limit is heat, although in certain electromechanical devices, particularly speakers, it is to prevent mechanical damage. When heat is the limiting factor, the power rating is easily calculated. First, the amount of heat that can be safely dissipated by the device, formula_0, must be calculated. This is related to the maximum safe operating temperature, the ambient temperature or temperature range in which the device will be operated, and the method of cooling. If formula_1 is the maximum safe operating temperature of the device, formula_2 is the ambient temperature, and formula_3 is the total thermal resistance between the device and ambient, then the maximum heat dissipation is given by
formula_4
If all power in a device is dissipated as heat, then this is also the power rating.
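As a brief illustration of this calculation, the sketch below evaluates formula_4 for an example component; the temperature limits and thermal resistance are arbitrary assumed values:
```python
def power_rating_w(t_device_max_c, t_ambient_c, theta_da_c_per_w):
    # P_D,max = (T_D,max - T_A) / theta_DA
    return (t_device_max_c - t_ambient_c) / theta_da_c_per_w

# e.g. a semiconductor limited to 150 C, in a 25 C ambient, with 62.5 C/W to ambient
print(power_rating_w(150.0, 25.0, 62.5), "W")   # 2.0 W
```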
Mechanical equipment.
Equipment is generally rated by the power it will deliver, for example, at the shaft of an electric or hydraulic motor. The power input to the equipment will be greater owing to the less than 100% efficiency of the device. Efficiency of a device is often defined as the ratio of output power to the sum of output power and losses. In some types of equipment, it is possible to measure or calculate losses directly. This allows efficiency to be calculated with greater precision than the quotient of input power over output power, where relatively small measurement uncertainty will greatly affect the resulting calculated efficiency.
Power converting equipment.
In devices that primarily convert between different forms of electric power, such as transformers, or transport it from one location to another, such as transmission lines, the power rating almost always refers to the maximum power flow through the device, not dissipation within it. The usual reason for the limit is heat, and the maximum heat dissipation is calculated as above.
Power ratings are usually given in watts for real power and volt-amperes for apparent power, although for devices intended for use in large power systems, both may be given in a per-unit system. Cables are usually rated by giving their maximum voltage and their ampacity. As the power rating depends on the method of cooling, different ratings may be specified for air cooling, water cooling, etc.
Average vs. maximum.
For AC-operated devices (e.g. coaxial cable, loudspeakers), there may even be two power ratings, a maximum (peak) power rating and an average power rating. For such devices, the peak power rating usually specifies the low frequency or pulse energy, while the average power rating limits high-frequency operation. The average power rating depends on some assumptions about how the device is going to be used. For example, the EIA rating method for loudspeakers uses a shaped noise signal that simulates music and allows a peak excursion of 6 dB, so an EIA rating of 50 watts corresponds to a 200-watt peak rating.
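The 6 dB figure can be checked with a short calculation, since a 6 dB allowance corresponds to a power ratio of roughly four; the sketch below is illustrative only:
```python
eia_rating_w = 50.0
peak_ratio = 10 ** (6 / 10)                          # 6 dB as a power ratio, about 3.98
print(round(eia_rating_w * peak_ratio), "W peak")    # about 199 W, i.e. roughly 200 W
```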
Maximum continuous rating.
Maximum continuous rating (MCR) is defined as the maximum output (MW) that an electric power generating station is capable of producing continuously under normal conditions over a year. Under ideal conditions, the actual output could be higher than the MCR.
Within shipping, ships usually operate at the nominal continuous rating (NCR), which is 85% of 90% of MCR (i.e. about 76.5% of MCR). The 90% MCR figure is usually the contractual output for which the propeller is designed. Thus, the usual output at which ships are operated is around 75% to 77% of MCR.
Other definitions.
In some fields of engineering, even a more complex set of power ratings is used. For example, helicopter engines are rated for continuous power (which does not have a time constraint), takeoff and hover power rating (defined as half to one-hour operation), maximum contingency power (which can be sustained for two-three minutes), and emergency (half a minute) power rating.
For electrical motors, a similar kind of information is conveyed by the "service factor", which is a multiplier that, when applied to the rated output power, gives the power level a motor can sustain for shorter periods of time. The service factor is typically in the 1.15-1.4 range, with the figure being lower for higher-power motors. For every hour of operation at the service-factor-adjusted power rating, a motor loses two to three hours of life at nominal power, i.e. its service life is reduced to less than half for continued operation at this level. The service factor is defined in the ANSI/NEMA MG 1 standard, and is generally used in the United States. There is no IEC standard for the service factor.
Exceeding the power rating of a device by more than the margin of safety set by the manufacturer usually does damage to the device by causing its operating temperature to exceed safe levels. In semiconductors, irreparable damage can occur very quickly. Exceeding the power rating of most devices for a very short period of time is not harmful, although doing so regularly can sometimes cause cumulative damage.
Power ratings for electrical apparatus and transmission lines are a function of the duration of the proposed load and the ambient temperature; a transmission line or transformer, for example, can carry significantly more load in cold weather than in hot weather. Momentary overloads, causing high temperatures and deterioration of insulation, may be considered an acceptable trade-off in emergency situations. The power rating of switching devices varies depending on the circuit voltage as well as the current. In certain aerospace or military applications, a device may carry a much higher rating than would be accepted in devices intended to operate for long service life.
Examples.
Audio amplifiers.
Audio amplifier power ratings are typically established by driving the device under test to the onset of clipping, to a predetermined distortion level, variable per manufacturer or per product line. Driving an amplifier to 1% distortion levels will yield a higher rating than driving it to 0.01% distortion levels. Similarly, testing an amplifier at a single mid-range frequency, or testing just one channel of a two-channel amplifier, will yield a higher rating than if it is tested throughout its intended frequency range with both channels working. Manufacturers can use these methods to market amplifiers whose published maximum power output includes some amount of clipping in order to show higher numbers.
For instance, the Federal Trade Commission (FTC) established an amplifier rating system in which the device is tested with both channels driven throughout its advertised frequency range, at no more than its published distortion level. The Electronic Industries Association (EIA) rating system, however, determines amplifier power by measuring a single channel at 1,000 Hz, with a 1% distortion level—1% clipping. Using the EIA method rates an amplifier 10 to 20% higher than the FTC method.
Photovoltaic modules.
The nominal power of a photovoltaic module is determined by measuring current and voltage while varying resistance under defined illumination. The conditions are specified in standards such as IEC 61215, IEC 61646 and UL 1703; specifically, the light intensity is 1000 W/m2, with a spectrum similar to sunlight hitting the Earth's surface at latitude 35° N in the summer (airmass 1.5) and temperature of the cells at 25 °C. The power is measured while varying the resistive load on the module between open and closed circuit.
The maximum power measured is the nominal power of the module in watts. This is often written as "Wp" (watt-peak); the notation is informal, since adding suffixes to standardized units falls outside the standard. The nominal power divided by the light power that falls on the module (area × 1000 W/m2) is the "efficiency".
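As a short illustration of this definition, the sketch below computes the efficiency of a hypothetical module; the nominal power and area are assumed example values:
```python
def module_efficiency(nominal_power_w, area_m2, irradiance_w_per_m2=1000.0):
    # efficiency = nominal power / (area * 1000 W/m^2)
    return nominal_power_w / (area_m2 * irradiance_w_per_m2)

# e.g. a 400 W module with an area of 1.95 m^2
print(f"{module_efficiency(400.0, 1.95):.1%}")   # about 20.5%
```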
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P_{D,max}"
},
{
"math_id": 1,
"text": "T_{D,max}"
},
{
"math_id": 2,
"text": "T_{A}"
},
{
"math_id": 3,
"text": "\\theta_{DA}"
},
{
"math_id": 4,
"text": "P_{D,max} = \\frac{T_{D,max} - T_{A}}{\\theta_{DA}}"
}
]
| https://en.wikipedia.org/wiki?curid=712617 |
712625 | Scree | Broken rock fragments at base of cliff
Scree is a collection of broken rock fragments at the base of a cliff or other steep rocky mass that has accumulated through periodic rockfall. Landforms associated with these materials are often called talus deposits. Talus deposits typically have a concave upwards form, where the maximum inclination corresponds to the angle of repose of the mean debris particle size. The exact definition of scree in the primary literature is somewhat relaxed, and it often overlaps with both "talus" and "colluvium".
The term "scree" comes from the Old Norse term for landslide, "skriða", while the term "talus" is a French word meaning a slope or embankment.
In high-altitude arctic and subarctic regions, scree slopes and talus deposits are typically adjacent to hills and river valleys. These steep slopes usually originate from late-Pleistocene periglacial processes. Notable scree sites in Eastern North America include the Ice Caves at White Rocks National Recreation Area in southern Vermont and Ice Mountain in eastern West Virginia in the Appalachian Mountains. Screes are most abundant in the Pyrenees, Alps, Variscan, Apennine, Orocantabrian, and Carpathian Mountains, Iberian peninsula, and Northern Europe.
Description.
The term "scree" is applied both to an unstable steep mountain slope composed of rock fragments and other debris, and to the mixture of rock fragments and debris itself. It is loosely synonymous with "talus", material that accumulates at the base of a projecting mass of rock, or "talus slope", a landform composed of talus. The term "scree" is sometimes used more broadly for any sheet of loose rock fragments mantling a slope, while "talus" is used more narrowly for material that accumulates at the base of a cliff or other rocky slope from which it has obviously eroded.
Scree is formed by rockfall, which distinguishes it from "colluvium". Colluvium is rock fragments or soil that is deposited by rainwash, sheetwash, or slow downhill creep, usually at the base of gentle slopes or hillsides. However, the terms "scree", "talus", and sometimes "colluvium" tend to be used interchangeably. The term "talus deposit" is sometimes used to distinguish the landform from the material of which it is made.
Scree slopes are often assumed to be close to the angle of repose. This is the slope at which a pile of granular material becomes mechanically unstable. However, careful examination of scree slopes shows that only those that are either rapidly accumulating new material, or are experiencing rapid removal of material from their bases, are close to the angle of repose. Most scree slopes are less steep, and they often show a concave shape, so that the foot of the slope is less steep than the top of the slope.
Scree with large, boulder-sized rock fragments may form talus caves, or human-sized passages formed in-between boulders.
Formation.
The formation of scree and talus deposits is the result of physical and chemical weathering acting on a rock face, and erosive processes transporting the material downslope.
There are five main stages of scree slope evolution: (1) accumulation, (2) consolidation, (3) weathering, (4) encroaching vegetation, and finally, (5) slope degradation.
Scree slopes form as a result of accumulated loose, coarse-grained material. Within the scree slope itself, however, there is generally good sorting of sediment by size: larger particles accumulate more rapidly at the bottom of the slope. Cementation occurs as fine-grained material fills in gaps between debris. The speed of consolidation depends on the composition of the slope; clayey components will bind debris together faster than sandy ones. Should weathering outpace the supply of sediment, plants may take root. Plant roots diminish cohesive forces between the coarse and fine components, degrading the slope. The predominant processes that degrade a rock slope depend largely on the regional climate (see below), but also on the thermal and topographic stresses governing the parent rock material. Example process domains include the physical, chemical, and biotic weathering regimes described below.
Physical weathering processes.
Scree formation is commonly attributed to the formation of ice within mountain rock slopes. The presence of joints, fractures, and other heterogeneities in the rock wall can allow precipitation, groundwater, and surface runoff to flow through the rock. If the temperature drops below the freezing point of the fluid contained within the rock, during particularly cold evenings, for example, this water can freeze. Since water expands by 9% when it freezes, it can generate large forces that either create new cracks or wedge blocks into an unstable position. Special boundary conditions (rapid freezing and water confinement) may be required for this to happen. Freeze-thaw scree production is thought to be most common during the spring and fall, when the daily temperatures fluctuate around the freezing point of water, and snow melt produces ample free water.
The efficiency of freeze-thaw processes in scree production is a subject of ongoing debate. Many researchers believe that ice formation in large open fracture systems cannot generate high enough pressures to force the fracturing apart of parent rocks, and instead suggest that the water and ice simply flow out of the fractures as pressure builds. Many argue that frost heaving, like that known to act in soil in permafrost areas, may play an important role in cliff degradation in cold places.
Eventually, a rock slope may be completely covered by its own scree, so that production of new material ceases. The slope is then said to be "mantled" with debris. However, since these deposits are still unconsolidated, there is still a possibility of the deposit slopes themselves failing. If the talus deposit pile shifts and the particles exceed the angle of repose, the scree itself may slide and fail.
Chemical weathering processes.
Phenomena such as acid rain may also contribute to the chemical degradation of rocks and produce more loose sediments.
Biotic weathering processes.
Biotic processes often intersect with both physical and chemical weathering regimes, as the organisms that interact with rocks can mechanically or chemically alter them.
Lichen frequently grow on the surface of, or within, rocks. Particularly during the initial colonization process, the lichen often inserts its hyphae into small fractures or mineral cleavage planes that exist in the host rock. As the lichen grows, the hyphae expand and force the fractures to widen. This increases the potential of fragmentation, possibly leading to rockfalls. During the growth of the lichen thallus, small fragments of the host rock can be incorporated into the biological structure and weaken the rock.
Freeze-thaw action of the entire lichen body due to microclimatic changes in moisture content can alternately cause thermal contraction and expansion, which also stresses the host rock. Lichen also produce a number of organic acids as metabolic byproducts. These often react with the host rock, dissolving minerals, and breaking down the substrate into unconsolidated sediments.
Interactions with surrounding landscape.
Scree often collects at the base of glaciers, concealing them from their environment. For example, Lech dl Dragon, in the Sella group of the Dolomites, is derived from the melting waters of a glacier and is hidden under a thick layer of scree. Debris cover on a glacier affects the energy balance and, therefore, the melting process. Whether the glacier ice begins melting more rapidly or more slowly is determined by the thickness of the layer of scree on its surface.
The amount of energy reaching the surface of the ice below the debris can be estimated via the one-dimensional, homogeneous material assumption of Fourier's Law:
formula_0,
where "k" is the thermal conductivity of the debris material, "Ts" is the ambient temperature above the debris surface, "Ti" is the temperature at the lower surface of the debris, and "d" is the thickness of the debris layer.
Debris with a low thermal conductivity value, or a high thermal resistivity, will not efficiently transfer energy through to the glacier, meaning the amount of heat energy reaching the ice surface is substantially lessened. This can act to insulate the glacier from incoming radiation.
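A brief numerical sketch of this one-dimensional conduction estimate is given below; the thermal conductivity, temperatures and thicknesses are illustrative assumptions, chosen only to show how the conductive flux falls off with debris thickness:
```python
def heat_flux_magnitude_w_per_m2(k_w_per_m_k, t_surface_c, t_ice_c, thickness_m):
    # magnitude of the conductive flux through the debris layer: k (Ts - Ti) / d
    return k_w_per_m_k * (t_surface_c - t_ice_c) / thickness_m

# e.g. rocky debris with k ~ 1.0 W/(m K), a 5 C surface over 0 C ice
for d in (0.02, 0.10, 0.50):                  # debris thickness in metres
    print(d, "m:", round(heat_flux_magnitude_w_per_m2(1.0, 5.0, 0.0, d)), "W/m^2")
```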
The albedo, or the ability of a material to reflect incoming radiation energy, is also an important quality to consider. Generally, the debris will have a lower albedo than the glacier ice it covers, and will thus reflect less incoming solar radiation. Instead, the debris will absorb radiation energy and transfer it through the cover layer to the debris-ice interface.
If the ice is covered by a relatively thin layer of debris (less than around 2 centimeters thick), the albedo effect is most important. As scree accumulates atop the glacier, the ice's albedo will begin to decrease. The thin debris layer absorbs incoming solar radiation and transfers the heat to the upper surface of the ice, which then uses this energy in the process of melting.
However, once the debris cover reaches 2 or more centimeters in thickness, the albedo effect begins to dissipate. Instead, the debris blanket will act to insulate the glacier, preventing incoming radiation from penetrating the scree and reaching the ice surface. In addition to rocky debris, thick snow cover can form an insulating blanket between the cold winter atmosphere and subnivean spaces in screes. As a result, soil, bedrock, and also subterranean voids in screes do not freeze at high elevations.
Microclimates.
A scree has many small interstitial voids, while an ice cave has a few large hollows. Due to cold air seepage and air circulation, the bottom of scree slopes have a thermal regime similar to ice caves.
Because subsurface ice is separated from the surface by thin, permeable sheets of sediment, screes experience cold air seepage from the bottom of the slope where sediment is thinnest. This freezing circulating air maintains internal scree temperatures 6.8–9.0 °C colder than external scree temperatures. These <0 °C thermal anomalies occur up to 1000 m below sites with mean annual air temperatures of 0 °C.
Patchy permafrost, which forms under conditions <0 °C, probably exists at the bottom of some scree slopes despite mean annual air temperatures of 6.8–7.5 °C.
Biodiversity.
During the last glacial period, a narrow ice-free corridor formed in the Scandinavian ice sheet, introducing taiga species to the terrain. These boreal plants and animals still live in modern alpine and subarctic tundra, as well as high-altitude coniferous forests and mires.
Scree microclimates maintained by circulating freezing air create microhabitats that support taiga plants and animals that could not otherwise survive regional conditions.
A Czech Republic Academy of Sciences research team led by physical chemist Vlastimil Růžička, analyzing 66 scree slopes, published a paper in "Journal of Natural History" in 2012, reporting that: "This microhabitat, as well as interstitial spaces between scree blocks elsewhere on this slope, supports an important assemblage of boreal and arctic bryophytes, pteridophytes, and arthropods that are disjunct from their normal ranges far to the north. This freezing scree slope represents a classic example of a palaeo refugium that significantly contributes to [the] protection and maintenance of regional landscape biodiversity."
Ice Mountain, a massive scree in West Virginia, supports distinctly different distributions of plant and animal species than northern latitudes.
Scree running.
Scree running is the activity of running down a scree slope; which can be very quick, as the scree moves with the runner. Some scree slopes are no longer possible to run, because the stones have been moved towards the bottom.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q = -k \\left ( \\frac{T_s-T_i}{d} \\right )"
}
]
| https://en.wikipedia.org/wiki?curid=712625 |
71262627 | Fourth power law | Rule in road engineering
The fourth power law (also known as the fourth power rule) states that the stress on a road caused by a motor vehicle increases in proportion to the fourth power of its axle load: the greater the axle load, the disproportionately greater the damage to the road. This law was discovered in the course of a series of scientific experiments in the United States in the late 1950s and was decisive for the development of standard construction methods in road construction.
Background.
At the beginning of the 1950s, the American Association of State Highway Officials (AASHO) dealt with the question of how the size of the axle load affects the service life of a road pavement. For this purpose, a test track was built in Ottawa, Illinois, which consisted of six loops, each with two lanes. The lanes were paved with both asphalt and concrete of varying thicknesses. In the two-year test, trucks with different axle loads then drove the roads almost continuously. The test was called the AASHO Road Test.
When evaluating the series of tests, it was found that there is a connection between the thickness of the pavement, the number of load transfers and the axle load, and that these have a direct effect on the service life and condition of a road. The service life of the road decreases approximately in proportion to the fourth power of the axle load.
The accuracy of the law of the fourth power is disputed among experts, since the test results depend on many other factors, such as climatic conditions, in addition to the factors mentioned above.
Calculation examples.
This example illustrates how a car and a truck affect the surface of a road differently according to the fourth power law.
The "load" on the road from one axle (2 wheels) is 10 times greater for a truck than for a car. However, the fourth power law says that the "stress" on (damage to) the road is this ratio raised to the fourth power, i.e.
formula_0 times as large.
The road stress ratio of truck to car is therefore 10,000 to 1.
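The same ratio can be computed for any pair of axle loads, as in the following minimal sketch (the axle loads are illustrative assumptions):
```python
def relative_road_stress(axle_load, reference_axle_load):
    # relative road damage scales with the fourth power of the axle-load ratio
    return (axle_load / reference_axle_load) ** 4

print(relative_road_stress(10_000, 1_000))   # 10 t truck axle vs 1 t car axle -> 10000.0
```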
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "10^4=10\\cdot10\\cdot10\\cdot10=10,000"
}
]
| https://en.wikipedia.org/wiki?curid=71262627 |
712675 | Operator theory | Mathematical field of study
In mathematics, operator theory is the study of linear operators on function spaces, beginning with differential operators and integral operators. The operators may be presented abstractly by their characteristics, such as bounded linear operators or closed operators, and consideration may be given to nonlinear operators. The study, which depends heavily on the topology of function spaces, is a branch of functional analysis.
If a collection of operators forms an algebra over a field, then it is an operator algebra. The description of operator algebras is part of operator theory.
Single operator theory.
Single operator theory deals with the properties and classification of operators, considered one at a time. For example, the classification of normal operators in terms of their spectra falls into this category.
Spectrum of operators.
The spectral theorem is any of a number of results about linear operators or about matrices. In broad terms the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces.
The spectral theorem also provides a canonical decomposition, called the spectral decomposition, eigenvalue decomposition, or eigendecomposition, of the underlying vector space on which the operator acts.
Normal operators.
A normal operator on a complex Hilbert space "H" is a continuous linear operator "N" : "H" → "H" that commutes with its hermitian adjoint "N*", that is: "NN*" = "N*N".
Normal operators are important because the spectral theorem holds for them. Today, the class of normal operators is well understood. Examples of normal operators are unitary operators ("N"* = "N"−1), Hermitian (self-adjoint) operators ("N"* = "N"), skew-Hermitian operators ("N"* = −"N"), and positive operators ("N" = "MM"* for some operator "M").
The spectral theorem extends to a more general class of matrices. Let "A" be an operator on a finite-dimensional inner product space. "A" is said to be normal if "A"* "A" = "A A"*. One can show that "A" is normal if and only if it is unitarily diagonalizable: By the Schur decomposition, we have "A" = "U T U"*, where "U" is unitary and "T" upper triangular.
Since "A" is normal, "T T"* = "T"* "T". Therefore, "T" must be diagonal since normal upper triangular matrices are diagonal. The converse is obvious.
In other words, "A" is normal if and only if there exists a unitary matrix "U" such that
formula_0
where "D" is a diagonal matrix. Then, the entries of the diagonal of "D" are the eigenvalues of "A". The column vectors of "U" are the eigenvectors of "A" and they are orthonormal. Unlike the Hermitian case, the entries of "D" need not be real.
Polar decomposition.
The polar decomposition of any bounded linear operator "A" between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator.
The polar decomposition for matrices generalizes as follows: if "A" is a bounded linear operator then there is a unique factorization of "A" as a product "A" = "UP" where "U" is a partial isometry, "P" is a non-negative self-adjoint operator and the initial space of "U" is the closure of the range of "P".
The operator "U" must be weakened to a partial isometry, rather than unitary, because of the following issues. If "A" is the one-sided shift on "l"2(N), then |"A"| = ("A*A")1/2 = "I". So if "A" = "U" |"A"|, "U" must be "A", which is not unitary.
The existence of a polar decomposition is a consequence of Douglas' lemma:
<templatestyles src="Math_theorem/styles.css" />
Lemma — If "A", "B" are bounded operators on a Hilbert space "H", and "A*A" ≤ "B*B", then there exists a contraction "C" such that "A" = "CB". Furthermore, "C" is unique if "Ker"("B*") ⊂ "Ker"("C").
The operator "C" can be defined by "C"("Bh") = "Ah", extended by continuity to the closure of "Ran"("B"), and by zero on the orthogonal complement of Ran("B"). The operator "C" is well-defined since "A*A" ≤ "B*B" implies Ker("B") ⊂ Ker("A"). The lemma then follows.
In particular, if "A*A" = "B*B", then "C" is a partial isometry, which is unique if Ker("B*") ⊂ Ker("C").
In general, for any bounded operator "A",
formula_1
where ("A*A")1/2 is the unique positive square root of "A*A" given by the usual functional calculus. So by the lemma, we have
formula_2
for some partial isometry "U", which is unique if Ker("A") ⊂ Ker("U"). (Note Ker("A") = Ker("A*A") = Ker("B") = Ker("B*"), where "B" = "B*" = ("A*A")1/2.) Take "P" to be ("A*A")1/2 and one obtains the polar decomposition "A" = "UP". Notice that an analogous argument can be used to show "A = P'U' ", where "P' " is positive and "U' " a partial isometry.
When "H" is finite dimensional, "U" can be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version of singular value decomposition.
By property of the continuous functional calculus, |"A"| is in the C*-algebra generated by "A". A similar but weaker statement holds for the partial isometry: the polar part "U" is in the von Neumann algebra generated by "A". If "A" is invertible, "U" will be in the C*-algebra generated by "A" as well.
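For matrices, the singular-value-decomposition construction mentioned above can be carried out explicitly; the following sketch is an illustration (the random test matrix is an arbitrary choice), computing "U" = "W V"* and "P" = "V"Σ"V"* from "A" = "W"Σ"V"*:
```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

W, s, Vh = np.linalg.svd(A)              # A = W diag(s) Vh
U = W @ Vh                               # unitary here (a partial isometry in general)
P = Vh.conj().T @ np.diag(s) @ Vh        # the non-negative factor (A*A)^(1/2)

print(np.allclose(A, U @ P))                                   # True: A = U P
print(np.allclose(U @ U.conj().T, np.eye(4)))                  # True: U is unitary
print(np.allclose(P, P.conj().T) and
      np.all(np.linalg.eigvalsh(P) >= -1e-12))                 # True: P is positive semidefinite
```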
Connection with complex analysis.
Many operators that are studied are operators on Hilbert spaces of holomorphic functions, and the study of the operator is intimately linked to questions in function theory.
For example, Beurling's theorem describes the invariant subspaces of the unilateral shift in terms of inner functions, which are bounded holomorphic functions on the unit disk with unimodular boundary values almost everywhere on the circle. Beurling interpreted the unilateral shift as multiplication by the independent variable on the Hardy space. The success in studying multiplication operators, and more generally Toeplitz operators (which are multiplication, followed by projection onto the Hardy space) has inspired the study of similar questions on other spaces, such as the Bergman space.
Operator algebras.
The theory of operator algebras brings algebras of operators such as C*-algebras to the fore.
C*-algebras.
A C*-algebra, "A", is a Banach algebra over the field of complex numbers, together with a map * : "A" → "A". One writes "x*" for the image of an element "x" of "A". The map * has the following properties:
Remark. The first three identities say that "A" is a *-algebra. The last identity is called the C* identity and is equivalent to:
formula_8
The C*-identity is a very strong requirement. For instance, together with the spectral radius formula, it implies that the C*-norm is uniquely determined by the algebraic structure:
formula_9
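For the C*-algebra of complex "n" × "n" matrices this formula can be verified numerically, as in the following short sketch (the random test matrix is an arbitrary choice):
```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))

lhs = np.linalg.norm(x, 2) ** 2                              # operator 2-norm squared
rhs = np.max(np.abs(np.linalg.eigvals(x.conj().T @ x)))      # spectral radius of x* x

print(np.isclose(lhs, rhs))   # True
```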
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A = U D U^* "
},
{
"math_id": 1,
"text": "A^*A = (A^*A)^{\\frac{1}{2}} (A^*A)^{\\frac{1}{2}},"
},
{
"math_id": 2,
"text": "A = U (A^*A)^{\\frac{1}{2}}"
},
{
"math_id": 3,
"text": " x^{**} = (x^*)^* = x "
},
{
"math_id": 4,
"text": " (x + y)^* = x^* + y^* "
},
{
"math_id": 5,
"text": " (x y)^* = y^* x^*"
},
{
"math_id": 6,
"text": " (\\lambda x)^* = \\overline{\\lambda} x^* ."
},
{
"math_id": 7,
"text": " \\|x^* x \\| = \\left\\|x\\right\\| \\left\\|x^*\\right\\|."
},
{
"math_id": 8,
"text": "\\|xx^*\\| = \\|x\\|^2,"
},
{
"math_id": 9,
"text": " \\|x\\|^2 = \\|x^* x\\| = \\sup\\{|\\lambda| : x^* x - \\lambda \\,1 \\text{ is not invertible} \\}."
}
]
| https://en.wikipedia.org/wiki?curid=712675 |
71271638 | Kaniadakis Gamma distribution | Continuous probability distribution
The Kaniadakis Generalized Gamma distribution (or κ-Generalized Gamma distribution) is a four-parameter family of continuous statistical distributions, supported on a semi-infinite interval [0,∞), which arises from Kaniadakis statistics. It is one example of a Kaniadakis distribution. The κ-Gamma is a deformation of the Generalized Gamma distribution.
Definitions.
Probability density function.
The Kaniadakis "κ"-Gamma distribution has the following probability density function:
formula_0
valid for formula_1, where formula_2 is the entropic index associated with the Kaniadakis entropy, formula_3, formula_4 is the scale parameter, and formula_5 is the shape parameter.
The ordinary generalized Gamma distribution is recovered as formula_6: formula_7.
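As a numerical sanity check, the density above can be implemented directly using the κ-exponential exp_κ("z") = (√(1 + κ2"z"2) + κ"z")1/κ and integrated over [0, ∞); the parameter values below are arbitrary admissible choices, and the integral should come out close to 1:
```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

kappa, alpha, beta, nu = 0.2, 1.5, 2.0, 1.2      # arbitrary values with 0 < nu < 1/kappa

def exp_kappa(z, k):
    return (np.sqrt(1.0 + k * k * z * z) + k * z) ** (1.0 / k)

def pdf(x):
    c = ((1 + kappa * nu) * (2 * kappa) ** nu
         * gamma(0.5 / kappa + 0.5 * nu) / gamma(0.5 / kappa - 0.5 * nu)
         * alpha * beta ** nu / gamma(nu))
    return c * x ** (alpha * nu - 1) * exp_kappa(-beta * x ** alpha, kappa)

total, _ = quad(pdf, 0, np.inf)
print(total)   # approximately 1
```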
Cumulative distribution function.
The cumulative distribution function of "κ"-Gamma distribution assumes the form:
formula_8
valid for formula_1, where formula_2. The cumulative Generalized Gamma distribution is recovered in the classical limit formula_6.
Properties.
Moments and mode.
The "κ"-Gamma distribution has moment of order formula_9 given by
formula_10
The moment of order formula_9 of the "κ"-Gamma distribution is finite for formula_11.
The mode is given by:
formula_12
Asymptotic behavior.
The "κ"-Gamma distribution behaves asymptotically as follows:
formula_13
formula_14
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \nf_{_{\\kappa}}(x) = \n(1 + \\kappa \\nu) (2 \\kappa)^\\nu \\frac{\\Gamma \\big(\\frac{1}{2 \\kappa} + \\frac{\\nu}{2} \\big)}{\\Gamma \\big(\\frac{1}{2 \\kappa} - \\frac{\\nu}{2} \\big)} \\frac{\\alpha \\beta^\\nu}{\\Gamma \\big(\\nu\\big)} x^{\\alpha \\nu - 1} \\exp_\\kappa(-\\beta x^\\alpha)\n"
},
{
"math_id": 1,
"text": "x \\geq 0"
},
{
"math_id": 2,
"text": "0 \\leq |\\kappa| < 1"
},
{
"math_id": 3,
"text": "0 < \\nu < 1/\\kappa"
},
{
"math_id": 4,
"text": "\\beta > 0"
},
{
"math_id": 5,
"text": "\\alpha > 0"
},
{
"math_id": 6,
"text": "\\kappa \\rightarrow 0"
},
{
"math_id": 7,
"text": "f_{_{0}}(x) = \\frac{|\\alpha| \\beta ^\\nu }{\\Gamma \\left( \\nu \\right)} x^{\\alpha \\nu - 1} \\exp_\\kappa(-\\beta x^\\alpha)"
},
{
"math_id": 8,
"text": "F_\\kappa(x) = (1 + \\kappa \\nu) (2 \\kappa)^\\nu \\frac{\\Gamma \\big(\\frac{1}{2 \\kappa} + \\frac{\\nu}{2} \\big)}{\\Gamma \\big(\\frac{1}{2 \\kappa} - \\frac{\\nu}{2} \\big)} \\frac{\\alpha \\beta^\\nu}{\\Gamma \\big(\\nu\\big)} \\int_0^x z^{\\alpha \\nu - 1} \\exp_\\kappa(-\\beta z^\\alpha) dz "
},
{
"math_id": 9,
"text": "m"
},
{
"math_id": 10,
"text": "\\operatorname{E}[X^m] = \\beta^{-m/ \\alpha} \\frac{(1 + \\kappa \\nu) (2 \\kappa)^{-m/\\alpha}}{1 + \\kappa \\big( \\nu + \\frac{m}{\\alpha}\\big)} \\frac{\\Gamma \\big( \\nu + \\frac{m}{ \\alpha } \\big) }{\\Gamma(\\nu)} \\frac{\\Gamma\\Big(\\frac{1}{2\\kappa} + \\frac{\\nu}{2}\\Big)}{\\Gamma\\Big(\\frac{1}{2\\kappa} - \\frac{\\nu}{2}\\Big)} \\frac{\\Gamma\\Big(\\frac{1}{2\\kappa} - \\frac{\\nu}{2} - \\frac{m}{2\\alpha}\\Big)}{\\Gamma\\Big(\\frac{1}{2\\kappa} + \\frac{\\nu}{2} + \\frac{m}{2\\alpha}\\Big)}"
},
{
"math_id": 11,
"text": "0 < \\nu + m/\\alpha < 1/\\kappa"
},
{
"math_id": 12,
"text": "x_{\\textrm{mode}} = \\beta^{-1/\\alpha} \\Bigg( \\nu - \\frac{1}{\\alpha} \\Bigg)^{\\frac{1}{\\alpha}} \\Bigg[ 1 - \\kappa^2 \\bigg( \\nu - \\frac{1}{\\alpha}\\bigg)^2\\Bigg]^{-\\frac{1}{2\\alpha}} "
},
{
"math_id": 13,
"text": "\\lim_{x \\to +\\infty} f_\\kappa (x) \\sim (2\\kappa \\beta)^{-1/\\kappa} (1 + \\kappa \\nu) (2 \\kappa)^\\nu \\frac{\\Gamma \\big(\\frac{1}{2 \\kappa} + \\frac{\\nu}{2} \\big)}{\\Gamma \\big(\\frac{1}{2 \\kappa} - \\frac{\\nu}{2} \\big)} \\frac{\\alpha \\beta^\\nu}{\\Gamma \\big(\\nu\\big)} x^{\\alpha \\nu - 1 - \\alpha /\\kappa}"
},
{
"math_id": 14,
"text": "\\lim_{x \\to 0^+} f_\\kappa (x) = (1 + \\kappa \\nu) (2 \\kappa)^\\nu \\frac{\\Gamma \\big(\\frac{1}{2 \\kappa} + \\frac{\\nu}{2} \\big)}{\\Gamma \\big(\\frac{1}{2 \\kappa} - \\frac{\\nu}{2} \\big)} \\frac{\\alpha \\beta^\\nu}{\\Gamma \\big(\\nu\\big)} x^{\\alpha \\nu - 1}"
},
{
"math_id": 15,
"text": "\\alpha = \\nu = 1"
},
{
"math_id": 16,
"text": "\\alpha = 1"
},
{
"math_id": 17,
"text": "\\nu = n = "
},
{
"math_id": 18,
"text": "\\alpha = 2"
},
{
"math_id": 19,
"text": "\\nu = 1/2 "
},
{
"math_id": 20,
"text": "\\kappa = 0"
},
{
"math_id": 21,
"text": "\\nu = "
},
{
"math_id": 22,
"text": "\\nu > 0 "
},
{
"math_id": 23,
"text": "\\nu = 1 "
},
{
"math_id": 24,
"text": "\\nu = "
},
{
"math_id": 25,
"text": "\\nu = 3/2 "
},
{
"math_id": 26,
"text": "\\nu = 1/\\alpha "
}
]
| https://en.wikipedia.org/wiki?curid=71271638 |
7127168 | Friction loss | Loss of fluid flow through friction
In fluid dynamics, friction loss (or frictional loss) is the head loss that occurs in a containment such as a pipe or duct due to the effect of the fluid's viscosity near the surface of the containment.
Engineering.
Friction loss is a significant engineering concern wherever fluids are made to flow, whether entirely enclosed in a pipe or duct, or with a surface open to the air.
Calculating volumetric flow.
In the following discussion, we define volumetric flow rate V̇ (i.e. volume of fluid flowing per time) as
formula_0
where
r = radius of the pipe (for a pipe of circular section, the internal radius of the pipe).
v = mean velocity of fluid flowing through the pipe.
A = cross sectional area of the pipe.
In long pipes, the loss in pressure (assuming the pipe is level) is proportional to the length of pipe involved.
Friction loss is then the change in pressure Δp per unit length of pipe "L"
formula_1
When the pressure is expressed in terms of the equivalent height of a column of that fluid, as is common with water, the friction loss is expressed as "S", the "head loss" per length of pipe, a dimensionless quantity also known as the "hydraulic slope".
formula_2
where
ρ = density of the fluid (SI kg / m3)
g = the local acceleration due to gravity.
Characterizing friction loss.
Friction loss, which is due to the shear stress between the pipe surface and the fluid flowing within, depends on the conditions of flow and the physical properties of the system. These conditions can be encapsulated into a dimensionless number Re, known as the Reynolds number
formula_3
where "V" is the mean fluid velocity and "D" the diameter of the (cylindrical) pipe. In this expression, the properties of the fluid itself are reduced to the kinematic viscosity ν
formula_4
where
μ = viscosity of the fluid (SI kg / m • s)
Friction loss in straight pipe.
The friction loss in uniform, straight sections of pipe, known as "major loss", is caused by the effects of viscosity, the movement of fluid molecules against each other or against the (possibly rough) wall of the pipe. Here, it is greatly affected by whether the flow is laminar (Re < 2000) or turbulent (Re > 4000); these two regimes are treated separately below.
Form friction.
Factors other than straight pipe flow induce friction loss; these are known as "minor loss" and arise, for example, from fittings such as bends, elbows, tees and valves, and from sudden changes of cross-section at entrances, exits, expansions and contractions.
For the purposes of calculating the total friction loss of a system, the sources of form friction are sometimes reduced to an equivalent length of pipe.
Surface roughness.
The roughness of the surface of the pipe or duct affects the fluid flow in the regime of turbulent flow. Usually denoted by ε, values used for calculations of water flow, for some representative materials are:
Values used in calculating friction loss in ducts (for, e.g., air) are:
Calculating friction loss.
Hagen–Poiseuille Equation.
Laminar flow is encountered in practice with very viscous fluids, such as motor oil, flowing through small-diameter tubes, at low velocity. Friction loss under conditions of laminar flow follow the Hagen–Poiseuille equation, which is an exact solution to the Navier-Stokes equations. For a circular pipe with a fluid of density "ρ" and viscosity "μ", the hydraulic slope "S" can be expressed
formula_5
In laminar flow (that is, with Re < ~2000), the hydraulic slope is proportional to the flow velocity.
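As a quick numerical sketch (not from the source) of the laminar-flow relation above, the following Python snippet evaluates the hydraulic slope from the Hagen–Poiseuille expression; the oil-in-a-small-tube numbers are purely illustrative:
```python
import math

def hydraulic_slope_laminar(V, D, nu, g=9.81):
    """Head loss per unit pipe length S for laminar flow (Hagen-Poiseuille):
    S = (64/Re) * V^2 / (2*g*D), with Re = V*D/nu."""
    Re = V * D / nu
    if Re >= 2000:
        raise ValueError("Re >= ~2000: the laminar formula is not applicable")
    return (64.0 / Re) * V**2 / (2.0 * g * D)

# Illustrative example: a viscous oil (nu ~ 1e-4 m^2/s) at 0.5 m/s in a 10 mm tube
print(hydraulic_slope_laminar(V=0.5, D=0.01, nu=1e-4))
```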
Darcy–Weisbach Equation.
In many practical engineering applications, the fluid flow is more rapid, therefore turbulent rather than laminar. Under turbulent flow, the friction loss is found to be roughly proportional to the square of the flow velocity and inversely proportional to the pipe diameter, that is, the friction loss follows the phenomenological Darcy–Weisbach equation in which the "hydraulic slope" "S" can be expressed
formula_6
where we have introduced the Darcy friction factor "f""D" (but see "Confusion with the Fanning friction factor");
"f""D" = Darcy friction factor
Note that the value of this dimensionless factor depends on the pipe diameter "D" and the roughness of the pipe surface ε. Furthermore, it varies as well with the flow velocity "V" and on the physical properties of the fluid (usually cast together into the Reynolds number Re). Thus, the friction loss is not precisely proportional to the flow velocity squared, nor to the inverse of the pipe diameter: the friction factor takes account of the remaining dependency on these parameters.
From experimental measurements, the general features of the variation of "f""D" are, for fixed "relative roughness" ε / "D" and for Reynolds number Re = "V" "D" / ν > ~2000,
The experimentally measured values of "f""D" are fit to reasonable accuracy by the (recursive) Colebrook–White equation, depicted graphically in the Moody chart which plots friction factor "f""D" versus Reynolds number Re for selected values of relative roughness ε / "D".
Calculating friction loss for water in a pipe.
In a design problem, one may select pipe for a particular hydraulic slope "S" based on the candidate pipe's diameter "D" and its roughness ε.
With these quantities as inputs, the friction factor "f""D" can be expressed in closed form in the Colebrook–White equation or other fitting function, and the flow volume "Q" and flow velocity "V" can be calculated therefrom.
In the case of water (ρ = 1 g/cc, μ = 1 g/m/s) flowing through a 12-inch (300 mm) Schedule-40 PVC pipe (ε = 0.0015 mm, "D" = 11.938 in.), a hydraulic slope "S" = 0.01 (1%) is reached at a flow rate "Q" = 157 lps (liters per second), or at a velocity "V" = 2.17 m/s (meters per second).
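The quoted figures can be checked with a short script that solves the Colebrook–White equation by fixed-point iteration; this is a sketch, with the kinematic viscosity of water approximated as 1.0e-6 m2/s, and it reproduces a hydraulic slope close to 0.01 for the stated pipe and flow:
```python
import math

def colebrook_f(Re, rel_rough, tol=1e-10):
    """Darcy friction factor from the Colebrook-White equation,
    solved by fixed-point iteration on 1/sqrt(f)."""
    x = 8.0  # initial guess for 1/sqrt(f)
    for _ in range(100):
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x_new**2

D   = 11.938 * 0.0254      # pipe inner diameter, m
eps = 0.0015e-3            # roughness, m
Q   = 0.157                # flow rate, m^3/s
nu  = 1.0e-6               # kinematic viscosity of water, m^2/s
g   = 9.81

A  = math.pi * D**2 / 4
V  = Q / A                 # about 2.17 m/s
Re = V * D / nu
f  = colebrook_f(Re, eps / D)
S  = f * V**2 / (2 * g * D)
print(V, Re, f, S)         # S comes out close to 0.01
```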
The following table gives Reynolds number Re, Darcy friction factor "f""D", flow rate "Q", and velocity "V" such that hydraulic slope "S" = "h""f" / "L" = 0.01, for a variety of nominal pipe (NPS) sizes.
Note that the cited sources recommend that flow velocity be kept below 5 feet / second (~1.5 m/s).
Also note that the given "f""D" in this table is actually a quantity adopted by the NFPA and the industry, known as C, which has the imperial units "psi/(100 gpm"2"ft)" and can be calculated using the following relation:
formula_7
where formula_8 is the pressure loss in psi, formula_9 is the flow in units of 100 gpm, and formula_10 is the length of the pipe in units of 100 ft.
Calculating friction loss for air in a duct.
Friction loss takes place as a gas, say air, flows through duct work.
The difference in the character of the flow from the case of water in a pipe stems from the differing Reynolds number Re and the roughness of the duct.
The friction loss is customarily given as pressure loss for a given duct length, Δ"p" / "L", in units of (US) inches of water for 100 feet or (SI) kg / m2 / s2.
For specific choices of duct material, and assuming air at standard temperature and pressure (STP), standard charts can be used to calculate the expected friction loss. The chart exhibited in this section can be used to graphically determine the required diameter of duct to be installed in an application where the volume of flow is determined and where the goal is to keep the pressure loss per unit length of duct "S" below some target value in all portions of the system under study. First, select the desired pressure loss Δ"p" / "L", say 1 kg / m2 / s2 (0.12 in H2O per 100 ft) on the vertical axis (ordinate). Next scan horizontally to the needed flow volume "Q", say 1 m3 / s (2000 cfm): the choice of duct with diameter "D" = 0.5 m (20 in.) will result in a pressure loss rate Δ"p" / "L" less than the target value. Note in passing that selecting a duct with diameter "D" = 0.6 m (24 in.) will result in a loss Δ"p" / "L" of 0.02 kg / m2 / s2 (0.02 in H2O per 100 ft), illustrating the great gains in blower efficiency to be achieved by using modestly larger ducts.
The following table gives flow rate "Q" such that friction loss per unit length Δ"p" / "L" (SI kg / m2 / s2) is 0.082, 0.245, and 0.816, respectively, for a variety of nominal duct sizes. The three values chosen for friction loss correspond to, in US units inch water column per 100 feet, 0.01, .03, and 0.1. Note that, in approximation, for a given value of flow volume, a step up in duct size (say from 100mm to 120mm) will reduce the friction loss by a factor of 3.
Note that, for the chart and table presented here, flow is in the turbulent, smooth pipe domain, with R* < 5 in all cases.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\dot{V} = \\pi r^2 v"
},
{
"math_id": 1,
"text": "\\frac{ \\Delta p }{ L }. "
},
{
"math_id": 2,
"text": "S = \\frac{h_f }{ L } = \\frac{ 1 }{ \\rho \\mathrm{g} } \\frac{ \\Delta p }{ L } ."
},
{
"math_id": 3,
"text": "\\mathrm{Re}=\\frac{1}{\\nu}VD"
},
{
"math_id": 4,
"text": "\\nu=\\frac{\\mu}{\\rho}"
},
{
"math_id": 5,
"text": "S = \\frac{64}{\\mathrm{Re}} \\frac{V^2}{2gD} = \\frac{64\\nu}{2g} \\frac{V}{D^2}"
},
{
"math_id": 6,
"text": "S = f_D \\frac{ 1 }{ 2g } \\frac{V^2}{D} "
},
{
"math_id": 7,
"text": " \\Delta P_f' = CQ'^2L' "
},
{
"math_id": 8,
"text": "\\Delta P_f'"
},
{
"math_id": 9,
"text": "Q'"
},
{
"math_id": 10,
"text": "L'"
}
]
| https://en.wikipedia.org/wiki?curid=7127168 |
71272605 | Kaniadakis Gaussian distribution | Continuous probability distribution
The Kaniadakis Gaussian distribution (also known as "κ"-Gaussian distribution) is a probability distribution which arises as a generalization of the Gaussian distribution from the maximization of the Kaniadakis entropy under appropriate constraints. It is one example of a Kaniadakis "κ"-distribution. The κ-Gaussian distribution has been applied successfully to describe several complex systems in economics, geophysics, and astrophysics, among many other fields.
The κ-Gaussian distribution is a particular case of the κ-Generalized Gamma distribution.
Definitions.
Probability density function.
The general form of the centered Kaniadakis "κ"-Gaussian probability density function is:
formula_0
where formula_1 is the entropic index associated with the Kaniadakis entropy, formula_2 is the scale parameter, and
formula_3
is the normalization constant.
The standard Normal distribution is recovered in the limit formula_4
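For illustration, the density above can be evaluated numerically. The sketch below assumes the usual Kaniadakis exponential exp_κ(u) = (sqrt(1 + κ^2 u^2) + κ u)^(1/κ), which is not restated in this section, and uses the normalization constant given above; it is valid for 0 < κ < 1.
```python
import numpy as np
from scipy.special import gamma

def exp_k(u, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(u) as kappa -> 0."""
    if kappa == 0:
        return np.exp(u)
    return (np.sqrt(1.0 + kappa**2 * u**2) + kappa * u) ** (1.0 / kappa)

def kgaussian_pdf(x, kappa, beta):
    """Centered kappa-Gaussian density Z_k * exp_k(-beta * x^2), 0 < kappa < 1."""
    Z = (np.sqrt(2.0 * beta * kappa / np.pi) * (1.0 + 0.5 * kappa)
         * gamma(1.0 / (2.0 * kappa) + 0.25) / gamma(1.0 / (2.0 * kappa) - 0.25))
    return Z * exp_k(-beta * x**2, kappa)

x = np.linspace(-3, 3, 7)
print(kgaussian_pdf(x, kappa=0.3, beta=0.5))
```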
Cumulative distribution function.
The cumulative distribution function of the "κ"-Gaussian distribution is given by formula_5, where formula_6 is the Kaniadakis "κ"-Error function, a generalization of the ordinary error function formula_7, which is recovered as formula_8.
Properties.
Moments, mean and variance.
The centered "κ"-Gaussian distribution has all odd-order moments equal to zero; in particular, its mean is zero.
The variance is finite for formula_9 and is given by:
formula_10
Kurtosis.
The kurtosis of the centered "κ"-Gaussian distribution may be computed through:
formula_11
which can be written as formula_12. Thus, the kurtosis of the centered "κ"-Gaussian distribution is given by formula_13 or formula_14.
κ-Error function.
The Kaniadakis "κ"-Error function (or "κ"-Error function) is a one-parameter generalization of the ordinary error function defined as:
formula_15
Although the error function cannot be expressed in terms of elementary functions, numerical approximations are commonly employed.
For a random variable X distributed according to a κ-Gaussian distribution with mean 0 and standard deviation formula_16, the κ-Error function gives the probability that X falls in the interval formula_17.
Applications.
The "κ"-Gaussian distribution has been applied in several areas, such as:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \nf_{_{\\kappa}}(x) = Z_\\kappa \\exp_\\kappa(-\\beta x^2)\n"
},
{
"math_id": 1,
"text": "|\\kappa| < 1"
},
{
"math_id": 2,
"text": "\\beta > 0"
},
{
"math_id": 3,
"text": " \nZ_\\kappa = \\sqrt{\\frac{2 \\beta \\kappa}{ \\pi } } \\Bigg( 1 + \\frac{1}{2}\\kappa \\Bigg) \n\\frac{ \\Gamma \\Big( \\frac{1}{2 \\kappa} + \\frac{1}{4}\\Big)}{ \\Gamma \\Big( \\frac{1}{2 \\kappa} - \\frac{1}{4}\\Big) } \n"
},
{
"math_id": 4,
"text": "\\kappa \\rightarrow 0."
},
{
"math_id": 5,
"text": "F_\\kappa(x) = \n\\frac{1}{2} + \\frac{1}{2} \\textrm{erf}_\\kappa \\big( \\sqrt{\\beta} x\\big)"
},
{
"math_id": 6,
"text": "\\textrm{erf}_\\kappa(x) = \\Big( 2+ \\kappa \\Big) \\sqrt{ \\frac{2 \\kappa}{\\pi} } \\frac{\\Gamma\\Big( \\frac{1}{2\\kappa} + \\frac{1}{4} \\Big)}{ \\Gamma\\Big( \\frac{1}{2\\kappa} - \\frac{1}{4} \\Big) } \\int_0^x \\exp_\\kappa(-t^2 )\ndt"
},
{
"math_id": 7,
"text": "\\textrm{erf}(x)"
},
{
"math_id": 8,
"text": "\\kappa \\rightarrow 0"
},
{
"math_id": 9,
"text": "\\kappa < 2/3"
},
{
"math_id": 10,
"text": "\\operatorname{Var}[X] = \\sigma_\\kappa^2 = \\frac{1}{\\beta} \\frac{2 + \\kappa}{2 - \\kappa} \\frac{4\\kappa}{4 - 9 \\kappa^2 } \\left[\\frac{\\Gamma \\left( \\frac{1}{2\\kappa} + \\frac{1}{ 4 }\\right)}{\\Gamma \\left( \\frac{1}{2\\kappa} - \\frac{1}{ 4 }\\right)}\\right]^2 "
},
{
"math_id": 11,
"text": "\\operatorname{Kurt}[X] = \\operatorname{E}\\left[\\frac{X^4}{\\sigma_\\kappa^4}\\right] "
},
{
"math_id": 12,
"text": "\\operatorname{Kurt}[X] = \\frac{2 Z_\\kappa}{\\sigma_\\kappa^4} \\int_0^\\infty x^4 \\, \\exp_\\kappa \\left( -\\beta x^2 \\right) dx "
},
{
"math_id": 13,
"text": "\\operatorname{Kurt}[X] = \\frac{3\\sqrt \\pi Z_\\kappa}{ 2 \\beta^{2/3} \\sigma_\\kappa^4 } \\frac{|2 \\kappa|^{-5/2}}{1+\\frac{5}{2} |\\kappa| } \\frac{\\Gamma \\left( \\frac{1}{|2 \\kappa| } - \\frac{5}{4} \\right)}{\\Gamma \\left( \\frac{1}{|2 \\kappa| } + \\frac{5}{4} \\right)} "
},
{
"math_id": 14,
"text": "\\operatorname{Kurt}[X] = \\frac{ 3\\beta^{11/6}\\sqrt{2 \\kappa} }{ 2 } \\frac{|2 \\kappa|^{-5/2}}{1+\\frac{5}{2} |\\kappa| } \\Bigg( 1 + \\frac{1}{2}\\kappa \\Bigg) \\left(\\frac{2 - \\kappa}{2 + \\kappa} \\right)^2 \\left( \\frac{4 - 9 \\kappa^2 }{4\\kappa} \\right)^2 \\left[\\frac{\\Gamma \\Big( \\frac{1}{2\\kappa} - \\frac{1}{ 4 }\\Big)}{\\Gamma \\Big( \\frac{1}{2\\kappa} + \\frac{1}{ 4 }\\Big)}\\right]^3 \\frac{\\Gamma \\left( \\frac{1}{|2 \\kappa| } - \\frac{5}{4} \\right)}{\\Gamma \\left( \\frac{1}{|2 \\kappa| } + \\frac{5}{4} \\right)} "
},
{
"math_id": 15,
"text": "\\operatorname{erf}_\\kappa(x) = \\Big( 2+ \\kappa \\Big) \\sqrt{ \\frac{2 \\kappa}{\\pi} } \\frac{\\Gamma\\Big( \\frac{1}{2\\kappa} + \\frac{1}{4} \\Big)}{ \\Gamma\\Big( \\frac{1}{2\\kappa} - \\frac{1}{4} \\Big) } \\int_0^x \\exp_\\kappa(-t^2 )\ndt"
},
{
"math_id": 16,
"text": "\\sqrt \\beta"
},
{
"math_id": 17,
"text": "[-x, \\, x]"
}
]
| https://en.wikipedia.org/wiki?curid=71272605 |
71272853 | Kaniadakis logistic distribution | Probability distribution
The Kaniadakis Logistic distribution (also known as the "κ"-Logistic distribution) is a generalized version of the Logistic distribution associated with the Kaniadakis statistics. It is one example of a Kaniadakis distribution. The κ-Logistic probability distribution describes the population kinetics behavior of bosonic (formula_0) or fermionic (formula_1) character.
Definitions.
Probability density function.
The Kaniadakis "κ"-Logistic distribution is a four-parameter family of continuous statistical distributions, which is part of a class of statistical distributions emerging from the Kaniadakis κ-statistics. This distribution has the following probability density function:
formula_2
valid for formula_3, where formula_4 is the entropic index associated with the Kaniadakis entropy, formula_5 is the rate parameter, formula_6, and formula_7 is the shape parameter.
The Logistic distribution is recovered as formula_8
Cumulative distribution function.
The cumulative distribution function of "κ"-Logistic is given by
formula_9
valid for formula_3. The cumulative Logistic distribution is recovered in the classical limit formula_10.
Survival and hazard functions.
The survival distribution function of "κ"-Logistic distribution is given by
formula_11
valid for formula_3. The survival Logistic distribution is recovered in the classical limit formula_10.
The hazard function associated with the "κ"-Logistic distribution is obtained by the solution of the following evolution equation: formula_12, with formula_13, where formula_14 is the hazard function:
formula_15
The cumulative Kaniadakis "κ"-Logistic distribution is related to the hazard function by the following expression:
formula_16
where formula_17 is the cumulative hazard function. The cumulative hazard function of the Logistic distribution is recovered in the classical limit formula_10.
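As a small numerical sketch (assuming, as in the κ-Gaussian case, the Kaniadakis exponential exp_κ(u) = (sqrt(1 + κ^2 u^2) + κ u)^(1/κ), which is not restated here), the cumulative and survival functions can be evaluated directly; the consistency check F_κ + S_κ = 1 follows from the identity exp_κ(-u) = 1/exp_κ(u):
```python
import numpy as np

def exp_k(u, kappa):
    """Kaniadakis kappa-exponential (for 0 < kappa < 1)."""
    return (np.sqrt(1.0 + kappa**2 * u**2) + kappa * u) ** (1.0 / kappa)

def klogistic_cdf(x, kappa, alpha, beta, lam):
    e = exp_k(-beta * x**alpha, kappa)
    return (1.0 - e) / (1.0 + (lam - 1.0) * e)

def klogistic_survival(x, kappa, alpha, beta, lam):
    return lam / (exp_k(beta * x**alpha, kappa) + lam - 1.0)

x = np.linspace(0.0, 5.0, 6)
F = klogistic_cdf(x, kappa=0.4, alpha=1.5, beta=1.0, lam=2.0)
S = klogistic_survival(x, kappa=0.4, alpha=1.5, beta=1.0, lam=2.0)
print(np.max(np.abs(F + S - 1.0)))   # ~0, since F + S = 1
```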
Applications.
The "κ"-Logistic distribution has been applied in several areas, such as:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "0 < \\lambda < 1"
},
{
"math_id": 1,
"text": " \\lambda > 1"
},
{
"math_id": 2,
"text": " \nf_{_{\\kappa}}(x) = \n\\frac{\\lambda \\alpha \\beta x^{\\alpha-1}}{\\sqrt{1+\\kappa^2 \\beta^2 x^{2\\alpha} }} \\frac{ \\exp_\\kappa(-\\beta x^\\alpha) }{ [ 1 + (\\lambda - 1) \\exp_\\kappa(-\\beta x^\\alpha)]^2 }\n"
},
{
"math_id": 3,
"text": "x \\geq 0"
},
{
"math_id": 4,
"text": "0 \\leq |\\kappa| < 1"
},
{
"math_id": 5,
"text": "\\beta > 0"
},
{
"math_id": 6,
"text": "\\lambda > 0"
},
{
"math_id": 7,
"text": "\\alpha > 0"
},
{
"math_id": 8,
"text": "\\kappa \\rightarrow 0."
},
{
"math_id": 9,
"text": "F_\\kappa(x) = \n\\frac{ 1 - \\exp_\\kappa(-\\beta x^\\alpha) }{ 1 + (\\lambda - 1) \\exp_\\kappa(-\\beta x^\\alpha) } "
},
{
"math_id": 10,
"text": "\\kappa \\rightarrow 0"
},
{
"math_id": 11,
"text": "S_\\kappa(x) = \n\\frac{\\lambda}{\\exp_\\kappa(\\beta x^\\alpha) + \\lambda - 1}"
},
{
"math_id": 12,
"text": "\\frac{ S_\\kappa(x) }{ dx } = -h_\\kappa S_\\kappa(x) \\left( 1 - \\frac{ \\lambda -1 }{ \\lambda } S_\\kappa(x) \\right) "
},
{
"math_id": 13,
"text": "S_\\kappa(0) = 1"
},
{
"math_id": 14,
"text": "h_\\kappa"
},
{
"math_id": 15,
"text": "h_\\kappa = \\frac{\\alpha \\beta x^{\\alpha-1}}{\\sqrt{1+\\kappa^2 \\beta^2 x^{2\\alpha} }} "
},
{
"math_id": 16,
"text": "S_\\kappa = e^{-H_\\kappa(x)} "
},
{
"math_id": 17,
"text": "H_\\kappa (x) = \\int_0^x h_\\kappa(z) dz "
},
{
"math_id": 18,
"text": "\\lambda = 1"
},
{
"math_id": 19,
"text": "\\lambda = 2"
},
{
"math_id": 20,
"text": "\\alpha = 1"
},
{
"math_id": 21,
"text": "\\kappa = 0"
}
]
| https://en.wikipedia.org/wiki?curid=71272853 |
71292446 | Recurrent event analysis | Recurrent event analysis
Recurrent event analysis is a branch of survival analysis that analyzes the time until recurrences occur, such as recurrences of traits or diseases. Recurrent events are often analyzed in social sciences and medical studies, for example recurring infections, depressions or cancer recurrences. Recurrent event analysis attempts to answer certain questions, such as: how many recurrences occur on average within a certain time interval? Which factors are associated with a higher or lower risk of recurrence?
The processes which generate events repeatedly over time are referred to as recurrent event processes, which are different from processes analyzed in time-to-event analysis: whereas time-to-event analysis focuses on the time to a single terminal event, individuals may be at risk for subsequent events after the first in recurrent event analysis, until they are censored.
Introduction.
Objectives of recurrent event analysis include:
Notation and frameworks.
For a single recurrent event process starting at formula_0, let formula_1 denote the event times, where formula_2 is the time of the formula_3th event. The associated "counting process" formula_4 records the cumulative number of events generated by the process; specifically, formula_5 is the number of events occurring over the time interval formula_6.
Models for recurrent events can be specified by considering the probability distribution for the number of recurrences in short intervals formula_7, given the history of event occurrence before time formula_8. The "intensity function" describes the instantaneous probability of an event occurring at time formula_8, conditional on the process history, and describes the process mathematically. Define the process history as formula_9; then the intensity is formally defined as formula_10. When a heterogeneous group of individuals or processes is considered, the assumption of a common event intensity is no longer plausible. Greater generality can be achieved by incorporating fixed or time-varying covariates in the intensity function.
Description of recurrent event data.
As a counterpart of the Kaplan–Meier curve, which is used to describe the time to a terminal event, recurrent event data can be described using the mean cumulative function, which is the average number of cumulative events experienced by an individual in the study at each point in time since the start of follow-up.
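As an illustrative sketch (not from the source), the mean cumulative function can be estimated from simulated recurrent event data; here each subject's events follow a homogeneous Poisson process and censoring is ignored for simplicity:
```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_subject(rate, follow_up):
    """Event times of a homogeneous Poisson recurrent event process on [0, follow_up]."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > follow_up:
            return np.array(times)
        times.append(t)

subjects = [simulate_subject(rate=0.5, follow_up=10.0) for _ in range(200)]

def mean_cumulative_function(subjects, grid):
    """Average number of events experienced per subject by each time on the grid."""
    counts = np.array([[np.sum(ev <= t) for t in grid] for ev in subjects])
    return counts.mean(axis=0)

grid = np.linspace(0, 10, 11)
print(mean_cumulative_function(subjects, grid))  # roughly 0.5 * t for this simple process
```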
Statistical models for recurrent event data.
Poisson model.
The Poisson model is a popular model for recurrent event data, which models the number of recurrences that have occurred. Poisson regression assumes that the number of recurrences has a Poisson distribution with a fixed rate of recurrence over time. The logarithm of the expected number of recurrences is modeled by a linear combination of explanatory variables.
Marginal means/rates model.
The marginal means/rates model considers all recurrent events of the same subject as a single counting process and does not require time-varying covariates to reflect the past history of the process, which makes it a more flexible model. Instead, the full history of the counting process may influence the mean function of recurrent events.
Multi-state model.
In multi-state models, the recurrent event processes of individuals are described by different states. The different states may describe the recurrence number, or whether the subject is at risk of recurrence. A change of state is called a transition (or an event) and is central in this framework, which is fully characterized through estimation of transition probabilities between states and transition intensities that are defined as instantaneous hazards of progression to one state, conditional on occupying another state.
Extended Cox proportional hazards (PH) models.
Extensions of the Cox proportional hazard models are popular models in social sciences and medical science to assess associations between variables and risk of recurrence, or to predict recurrent event outcomes. Many extensions of survival models based on the Cox proportional hazards approach have been proposed to handle recurrent event data. These models can be characterized by four model components:
Well-known examples of Cox-based recurrent event models are the Andersen and Gill model, the Prentice, Williams and Peterson model, and the Wei–Lin–Weissfeld model.
Correlated event times within subjects.
Time to recurrence is often correlated within subjects, as some subjects are frailer and therefore more prone to experiencing recurrences. If the correlated nature of the data is ignored, the confidence intervals (CI) for the estimated rates could be artificially narrow, which may lead to false positive findings.
Robust variance.
It is possible to use robust 'sandwich' estimators for the variance of regression coefficients. Robust variance estimators are based on a jackknife estimate, which anticipates correlation within subjects and provides robust standard errors.
Frailty models.
In frailty models, a random effect is included in the recurrent event model which describes the individual excess risk that can not be explained by the included covariates. The frailty term induces dependence among the recurrence times within subjects.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t = 0"
},
{
"math_id": 1,
"text": "0 \\leq T_1 < T_2 < \\dots "
},
{
"math_id": 2,
"text": "T_k"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "\\{N(t), 0 \\leq t\\}"
},
{
"math_id": 5,
"text": "N(t) = \\sum_{k=1}^{\\infty}I(T_k \\leq t)"
},
{
"math_id": 6,
"text": "[0, t]"
},
{
"math_id": 7,
"text": "[t, t + \\Delta t)"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "H(t) = \\{N(s): 0 \\leq s < t \\}"
},
{
"math_id": 10,
"text": "\\lambda(t|H(t)) = \\lim_{\\Delta t \\downarrow 0}\\frac{P(N(t + \\Delta t) - N(t) = 1)}{\\Delta t}."
}
]
| https://en.wikipedia.org/wiki?curid=71292446 |
71299604 | Wasserstein GAN | Proposed generative adversarial network variant
<templatestyles src="Machine learning/styles.css"/>
The Wasserstein Generative Adversarial Network (WGAN) is a variant of generative adversarial network (GAN) proposed in 2017 that aims to "improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches".
Compared with the original GAN discriminator, the Wasserstein GAN discriminator provides a better learning signal to the generator. This allows the training to be more stable when the generator is learning distributions in very high-dimensional spaces.
Motivation.
The GAN game.
The original GAN method is based on the GAN game, a zero-sum game with 2 players: generator and discriminator. The game is defined over a probability space formula_0. The generator's strategy set is the set of all probability measures formula_1 on formula_2, and the discriminator's strategy set is the set of measurable functions formula_3.
The objective of the game is formula_4
The generator aims to minimize it, and the discriminator aims to maximize it.
A basic theorem of the GAN game states that<templatestyles src="Math_theorem/styles.css" />
Theorem (the optimal discriminator computes the Jensen–Shannon divergence) — For any fixed generator strategy formula_1, let the optimal reply be formula_5, then
formula_6
where the derivative is the Radon–Nikodym derivative, and formula_7 is the Jensen–Shannon divergence.
Repeat the GAN game many times, each time with the generator moving first, and the discriminator moving second. Each time the generator formula_1 changes, the discriminator must adapt by approaching the ideal formula_8
Since we are really interested in formula_9, the discriminator function formula_10 is by itself rather uninteresting. It merely keeps track of the likelihood ratio between the generator distribution and the reference distribution. At equilibrium, the discriminator is just outputting formula_11 constantly, having given up trying to perceive any difference.
Concretely, in the GAN game, let us fix a generator formula_1, and improve the discriminator step-by-step, with formula_12 being the discriminator at step formula_13. Then we (ideally) have formula_14, so we see that the discriminator is actually lower-bounding formula_15.
Wasserstein distance.
Thus, we see that the point of the discriminator is mainly as a critic to provide feedback for the generator, about "how far it is from perfection", where "far" is defined as Jensen–Shannon divergence.
Naturally, this brings the possibility of using a different criterion of farness. There are many possible divergences to choose from, such as the f-divergence family, which would give the f-GAN.
The Wasserstein GAN is obtained by using the Wasserstein metric, which satisfies a "dual representation theorem" that renders it highly efficient to compute:
<templatestyles src="Math_theorem/styles.css" />
Theorem (Kantorovich–Rubinstein duality) — When the probability space formula_16 is a metric space, then
for any fixed formula_17, formula_18
where formula_19 is the Lipschitz norm.
A proof can be found in the main page on Wasserstein metric.
Definition.
By the Kantorovich–Rubinstein duality, the definition of Wasserstein GAN is clear:<templatestyles src="Template:Blockquote/styles.css" />A Wasserstein GAN game is defined by a probability space formula_0, where formula_16 is a metric space, and a constant formula_17.
There are 2 players: generator and discriminator (also called "critic").
The generator's strategy set is the set of all probability measures formula_1 on formula_2.
The discriminator's strategy set is the set of measurable functions of type formula_20 with bounded Lipschitz-norm: formula_21.
The Wasserstein GAN game is a zero-sum game, with objective function formula_22
The generator goes first, and the discriminator goes second. The generator aims to minimize the objective, and the discriminator aims to maximize the objective: formula_23
By the Kantorovich–Rubinstein duality, for any generator strategy formula_1, the optimal reply by the discriminator is formula_24, such that formula_25. Consequently, if the discriminator is good, the generator would be constantly pushed to minimize formula_26, and the optimal strategy for the generator is just formula_27, as it should be.
Comparison with GAN.
In the Wasserstein GAN game, the discriminator provides a better gradient than in the GAN game.
Consider for example a game on the real line where both formula_1 and formula_9 are Gaussian. Then the optimal Wasserstein critic formula_28 and the optimal GAN discriminator formula_10 are plotted as below:
For a fixed discriminator, the generator needs to minimize the objective formula_29 in the GAN game, and formula_30 in the Wasserstein GAN game.
Let formula_1 be parametrized by formula_31; then we can perform stochastic gradient descent by using two unbiased estimators of the gradient: formula_32 and formula_33, where we used the reparametrization trick.
As shown, the generator in GAN is motivated to let its formula_1 "slide down the peak" of formula_35. Similarly for the generator in Wasserstein GAN.
For Wasserstein GAN, formula_28 has gradient 1 almost everywhere, while for GAN, formula_34 has flat gradient in the middle, and steep gradient elsewhere. As a result, the variance for the estimator in GAN is usually much larger than that in Wasserstein GAN. See also Figure 3 of.
The problem with formula_7 is much more severe in actual machine learning situations. Consider training a GAN to generate ImageNet, a collection of photos of size 256-by-256. The space of all such photos is formula_36, and the distribution of ImageNet pictures, formula_9, concentrates on a manifold of much lower dimension in it. Consequently, any generator strategy formula_1 would almost surely be entirely disjoint from formula_9, making formula_37. Thus, a good discriminator can almost perfectly distinguish formula_9 from formula_1, as well as any formula_38 close to formula_1. Thus, the gradient formula_39, creating no learning signal for the generator.
Detailed theorems can be found in.
Training Wasserstein GANs.
Training the generator in Wasserstein GAN is just gradient descent, the same as in GAN (or most deep learning methods), but training the discriminator is different, as the discriminator is now restricted to have bounded Lipschitz norm. There are several methods for this.
Upper-bounding the Lipschitz norm.
Let the discriminator function formula_10 be implemented by a multilayer perceptron: formula_40, where formula_41, and formula_42 is a fixed activation function with formula_43. For example, the hyperbolic tangent function formula_44 satisfies the requirement.
Then, for any formula_45, letting formula_46, we have by the chain rule: formula_47. Thus, the Lipschitz norm of formula_10 is upper-bounded by formula_48, where formula_49 is the operator norm of the matrix, that is, its largest singular value (also called the spectral norm; for non-normal matrices this generally differs from the spectral radius).
Since formula_43, we have formula_50, and consequently the upper bound formula_51. Thus, if we can upper-bound the operator norm formula_52 of each matrix, we can upper-bound the Lipschitz norm of formula_10.
Weight clipping.
For any formula_53 matrix formula_54, let formula_55; then formula_56. Hence, by clipping all entries of formula_54 to within some interval formula_57, we can bound formula_58.
This is the weight clipping method, proposed by the original paper.
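A minimal PyTorch-style sketch of one critic update with weight clipping; the network architecture, learning rate, clipping bound and the tensors `real` and `fake` are placeholders rather than values taken from the source, and the sign convention follows the objective formula_22 above (the critic maximizes it by minimizing its negation):
```python
import torch

# Hypothetical critic; the generator and the data pipeline are omitted.
critic = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
opt_c = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
clip = 0.01  # clipping bound c (placeholder value)

def critic_step(real, fake):
    """One critic update: maximize E[D(fake)] - E[D(real)] (the objective above)
    by minimizing its negation, then clip every weight to [-clip, clip]."""
    opt_c.zero_grad()
    loss = critic(real).mean() - critic(fake).mean()
    loss.backward()
    opt_c.step()
    for p in critic.parameters():       # weight clipping bounds each weight matrix
        p.data.clamp_(-clip, clip)
    return loss.item()
```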
Spectral normalization.
The spectral norm formula_58 can be efficiently estimated by the following power iteration:<templatestyles src="Template:Blockquote/styles.css" />INPUT matrix formula_54 and initial guess formula_45
Iterate formula_59 to convergence formula_60. This is the leading right-singular vector of formula_54, with singular value formula_58.
RETURN formula_61
By reassigning formula_62 after each update of the discriminator, we can upper bound formula_63, and thus upper bound formula_64.
The algorithm can be further accelerated by memoization: At step formula_13, store formula_65. Then at step formula_66, use formula_65 as the initial guess for the algorithm. Since formula_67 is very close to formula_68, so is formula_65 close to formula_69, so this allows rapid convergence.
This is the spectral normalization method.
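A numpy sketch of this estimate, applying the power iteration to the matrix WᵀW (so it also handles rectangular weight matrices) and checking the result against a full singular value decomposition:
```python
import numpy as np

def spectral_norm(W, n_iter=100, x=None):
    """Estimate ||W||_s, the largest singular value of W, by power iteration on W^T W."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(W.shape[1]) if x is None else x   # initial guess
    for _ in range(n_iter):
        x = W.T @ (W @ x)
        x /= np.linalg.norm(x)
    return np.linalg.norm(W @ x)        # ||W x*||_2 at the converged x*

W = np.random.default_rng(1).standard_normal((64, 32))
print(spectral_norm(W), np.linalg.svd(W, compute_uv=False)[0])  # the two agree

W_normalized = W / spectral_norm(W)     # the reassignment W <- W / ||W||_s from the text
```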
Gradient penalty.
Instead of strictly bounding formula_70, we can simply add a "gradient penalty" term for the discriminator, of the form formula_71, where formula_72 is a fixed distribution used to estimate how much the discriminator has violated the Lipschitz norm requirement.
The discriminator, in attempting to minimize the new loss function, would naturally bring formula_73 close to formula_74 everywhere, thus making formula_75.
This is the gradient penalty method.
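A PyTorch-style sketch of the penalty term; `critic` and the batch of samples `x_hat` drawn from formula_72 are placeholders (in the WGAN-GP paper, formula_72 is taken to be random interpolates between real and generated samples, and the target value formula_74 is 1):
```python
import torch

def gradient_penalty(critic, x_hat, target=1.0):
    """E[(||grad_x D(x)||_2 - target)^2] over a batch x_hat from the penalty distribution."""
    x_hat = x_hat.clone().requires_grad_(True)
    d_out = critic(x_hat)
    grads, = torch.autograd.grad(outputs=d_out.sum(), inputs=x_hat, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return ((grad_norm - target) ** 2).mean()
```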
References.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(\\Omega, \\mathcal B, \\mu_{ref})"
},
{
"math_id": 1,
"text": "\\mu_G"
},
{
"math_id": 2,
"text": "(\\Omega, \\mathcal B)"
},
{
"math_id": 3,
"text": "D: \\Omega \\to [0, 1]"
},
{
"math_id": 4,
"text": "L(\\mu_G, D) := \\mathbb{E}_{x\\sim \\mu_{ref}}[\\ln D(x)] + \\mathbb{E}_{x\\sim \\mu_G}[\\ln (1-D(x))]."
},
{
"math_id": 5,
"text": "D^* = \\arg\\max_{D} L(\\mu_G, D)"
},
{
"math_id": 6,
"text": "\\begin{align}\nD^*(x) &= \\frac{d\\mu_{ref}}{d(\\mu_{ref} + \\mu_G)}\\\\\nL(\\mu_G, D^*) &= 2D_{JS}(\\mu_{ref}; \\mu_G) - 2\\ln 2,\n\\end{align}"
},
{
"math_id": 7,
"text": "D_{JS}"
},
{
"math_id": 8,
"text": "D^*(x) = \\frac{d\\mu_{ref}}{d(\\mu_{ref} + \\mu_G)}."
},
{
"math_id": 9,
"text": "\\mu_{ref}"
},
{
"math_id": 10,
"text": "D"
},
{
"math_id": 11,
"text": "\\frac 12"
},
{
"math_id": 12,
"text": "\\mu_{D, t}"
},
{
"math_id": 13,
"text": "t"
},
{
"math_id": 14,
"text": "L(\\mu_G, \\mu_{D, 1}) \\leq L(\\mu_G, \\mu_{D, 2}) \\leq \\cdots \\leq \\max_{\\mu_D} L(\\mu_G, \\mu_D) = 2D_{JS}(\\mu_{ref} \\| \\mu_G) - 2\\ln 2,"
},
{
"math_id": 15,
"text": "D_{JS}(\\mu_{ref} \\| \\mu_G)"
},
{
"math_id": 16,
"text": "\\Omega"
},
{
"math_id": 17,
"text": "K > 0"
},
{
"math_id": 18,
"text": "W_1(\\mu, \\nu) = \\frac 1 K\\sup_{\\|f\\|_L \\leq K} \\mathbb{E}_{x\\sim \\mu}[f(x)] -\\mathbb E_{y\\sim \\nu}[f(y)]"
},
{
"math_id": 19,
"text": "\\|\\cdot\\|_L"
},
{
"math_id": 20,
"text": "D: \\Omega \\to \\R"
},
{
"math_id": 21,
"text": "\\|D\\|_L \\leq K"
},
{
"math_id": 22,
"text": "L_{WGAN}(\\mu_G, D) := \\mathbb{E}_{x\\sim \\mu_G}[D(x)] -\\mathbb E_{x\\sim \\mu_{ref}}[D(x)]."
},
{
"math_id": 23,
"text": "\\min_{\\mu_G} \\max_{D} L_{WGAN}(\\mu_G, D)."
},
{
"math_id": 24,
"text": "D^*"
},
{
"math_id": 25,
"text": " L_{WGAN}(\\mu_G, D^*) = K \\cdot W_1(\\mu_G, \\mu_{ref})."
},
{
"math_id": 26,
"text": " W_1(\\mu_G, \\mu_{ref})"
},
{
"math_id": 27,
"text": " \\mu_G = \\mu_{ref}"
},
{
"math_id": 28,
"text": "D_{WGAN}"
},
{
"math_id": 29,
"text": "\\mathbb E_{x\\sim \\mu_G} [\\ln(1-D(x))]"
},
{
"math_id": 30,
"text": "\\mathbb E_{x\\sim \\mu_G} [D_{WGAN}(x)]"
},
{
"math_id": 31,
"text": "\\theta"
},
{
"math_id": 32,
"text": "\\nabla_{\\theta} \\mathbb E_{x\\sim \\mu_G} [\\ln(1-D(x))] = \\mathbb E_{x\\sim \\mu_G} [\\ln(1-D(x))\\cdot \\nabla_{\\theta} \\ln\\rho_{\\mu_G}(x)]"
},
{
"math_id": 33,
"text": "\\nabla_{\\theta} \\mathbb E_{x\\sim \\mu_G} [D_{WGAN}(x)] = \\mathbb E_{x\\sim \\mu_G} [D_{WGAN}(x)\\cdot \\nabla_{\\theta} \\ln\\rho_{\\mu_G}(x)]"
},
{
"math_id": 34,
"text": "\\ln(1-D)"
},
{
"math_id": 35,
"text": "\\ln(1-D(x))"
},
{
"math_id": 36,
"text": "\\R^{256^2}"
},
{
"math_id": 37,
"text": "D_{JS}(\\mu_G \\| \\mu_{ref}) = +\\infty"
},
{
"math_id": 38,
"text": "\\mu_G'"
},
{
"math_id": 39,
"text": "\\nabla_{\\mu_G} L(\\mu_G, D) \\approx 0"
},
{
"math_id": 40,
"text": "D = D_n \\circ D_{n-1} \\circ \\cdots \\circ D_1"
},
{
"math_id": 41,
"text": "D_i(x) = h(W_i x)"
},
{
"math_id": 42,
"text": "h:\\R \\to \\R"
},
{
"math_id": 43,
"text": "\\sup_x |h'(x)| \\leq 1"
},
{
"math_id": 44,
"text": "h = \\tanh"
},
{
"math_id": 45,
"text": "x"
},
{
"math_id": 46,
"text": "x_i = (D_i \\circ D_{i-1} \\circ \\cdots \\circ D_1)(x)"
},
{
"math_id": 47,
"text": "d D(x) = diag(h'(W_n x_{n-1})) \\cdot W_n \\cdot diag(h'(W_{n-1} x_{n-2})) \\cdot W_{n-1} \\cdots diag(h'(W_1 x)) \\cdot W_1 \\cdot dx"
},
{
"math_id": 48,
"text": "\\|D \\|_L \\leq \\sup_{x}\\| diag(h'(W_n x_{n-1})) \\cdot W_n \\cdot diag(h'(W_{n-1} x_{n-2})) \\cdot W_{n-1} \\cdots diag(h'(W_1 x)) \\cdot W_1\\|_F"
},
{
"math_id": 49,
"text": "\\|\\cdot\\|_s"
},
{
"math_id": 50,
"text": "\\|diag(h'(W_i x_{i-1}))\\|_s = \\max_j |h'(W_i x_{i-1, j})| \\leq 1"
},
{
"math_id": 51,
"text": "\\|D \\|_L \\leq \\prod_{i=1}^n \\|W_i \\|_s"
},
{
"math_id": 52,
"text": "\\|W_i\\|_s"
},
{
"math_id": 53,
"text": "m\\times l"
},
{
"math_id": 54,
"text": "W"
},
{
"math_id": 55,
"text": "c = \\max_{i, j} |W_{i, j}|"
},
{
"math_id": 56,
"text": "\\|W\\|_s^2 = \\sup_{\\|x\\|_2=1}\\|W x\\|_2^2 = \\sup_{\\|x\\|_2=1}\\sum_{i}\\left(\\sum_j W_{i, j} x_j\\right)^2 = \\sup_{\\|x\\|_2=1}\\sum_{i, j, k}W_{ij}W_{ik}x_jx_k \\leq c^2 ml^2"
},
{
"math_id": 57,
"text": "[-c, c]"
},
{
"math_id": 58,
"text": "\\|W\\|_s"
},
{
"math_id": 59,
"text": "x \\mapsto \\frac{1}{\\|Wx\\|_2}Wx"
},
{
"math_id": 60,
"text": "x^*"
},
{
"math_id": 61,
"text": "x^*, \\|Wx^*\\|_2"
},
{
"math_id": 62,
"text": "W_i \\leftarrow \\frac{W_i}{\\|W_i\\|_s}"
},
{
"math_id": 63,
"text": "\\|W_i\\|_s \\leq 1"
},
{
"math_id": 64,
"text": "\\|D \\|_L"
},
{
"math_id": 65,
"text": "x^*_i(t)"
},
{
"math_id": 66,
"text": "t+1"
},
{
"math_id": 67,
"text": "W_i(t+1)"
},
{
"math_id": 68,
"text": "W_i(t)"
},
{
"math_id": 69,
"text": "x^*_i(t+1)"
},
{
"math_id": 70,
"text": "\\|D\\|_L"
},
{
"math_id": 71,
"text": "\\mathbb{E}_{x\\sim\\hat\\mu}[(\\|\\nabla D(x)\\|_2 - a)^2]"
},
{
"math_id": 72,
"text": "\\hat \\mu"
},
{
"math_id": 73,
"text": "\\nabla D(x)"
},
{
"math_id": 74,
"text": "a"
},
{
"math_id": 75,
"text": "\\|D\\|_L \\approx a"
}
]
| https://en.wikipedia.org/wiki?curid=71299604 |
71300141 | Inception score | The Inception Score (IS) is an algorithm used to assess the quality of images created by a generative image model such as a generative adversarial network (GAN). The score is calculated based on the output of a separate, pretrained Inceptionv3 image classification model applied to a sample of (typically around 30,000) images generated by the generative model. The Inception Score is maximized when the following conditions are true:
It has been somewhat superseded by the related Fréchet inception distance. While the Inception Score only evaluates the distribution of generated images, the FID compares the distribution of generated images with the distribution of a set of real images ("ground truth").
Definition.
Let there be two spaces, the space of images formula_0 and the space of labels formula_1. The space of labels is finite.
Let formula_2 be a probability distribution over formula_0 that we wish to judge.
Let a discriminator be a function of type formula_3, where formula_4 is the set of all probability distributions on formula_1. For any image formula_5, and any label formula_6, let formula_7 be the probability that image formula_5 has label formula_6, according to the discriminator. It is usually implemented as an Inception-v3 network trained on ImageNet.
The Inception Score of formula_2 relative to formula_8 is formula_9. Equivalent rewrites include formula_10 and formula_11. The quantity formula_12 is nonnegative by Jensen's inequality.
Pseudocode:<templatestyles src="Template:Blockquote/styles.css" />INPUT discriminator formula_8.
INPUT generator formula_13.
Sample images formula_14 from generator.
Compute formula_15, the probability distribution over labels conditional on image formula_14.
Average the results to obtain formula_16, an empirical estimate of formula_17.
Sample more images formula_14 from generator, and for each, compute formula_18.
Average the results, and take the exponential of the average.
RETURN the result.
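The pseudocode above amounts to a few lines of numpy, shown here as a sketch operating on a matrix of predicted label probabilities rather than on an actual Inception network:
```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """Exp of the average KL divergence between p(y|x_i) and the marginal p(y).

    p_yx: array of shape (num_images, num_classes); each row is a predicted
    label distribution p_dis(. | x_i) for one generated image.
    """
    p_y = p_yx.mean(axis=0, keepdims=True)            # empirical marginal over labels
    kl = np.sum(p_yx * (np.log(p_yx + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))

rng = np.random.default_rng(0)
# Sharp, diverse predictions -> score close to the number of classes
sharp = np.eye(10)[rng.integers(0, 10, size=1000)] * 0.999 + 0.0001
print(inception_score(sharp))                        # close to 10
# Completely indistinct predictions -> score close to 1
print(inception_score(np.full((1000, 10), 0.1)))     # close to 1
```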
Interpretation.
A higher inception score is interpreted as "better", as it means that formula_2 is a "sharp and distinct" collection of pictures.
formula_19, where formula_20 is the total number of possible labels.
formula_21 if and only if, for almost all formula_22, formula_23. That means formula_2 is completely "indistinct". That is, for any image formula_5 sampled from formula_2, the discriminator returns exactly the same label predictions formula_24.
The highest inception score formula_20 is achieved if and only if the two conditions are both true: formula_25 for almost all formula_22 (each generated image is classified with certainty), and formula_26 for every label formula_6 (the predicted labels are spread uniformly over all formula_20 classes).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Omega_X"
},
{
"math_id": 1,
"text": "\\Omega_Y"
},
{
"math_id": 2,
"text": "p_{gen}"
},
{
"math_id": 3,
"text": "p_{dis}:\\Omega_X \\to M(\\Omega_Y)"
},
{
"math_id": 4,
"text": "M(\\Omega_Y)"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": "p_{dis}(y|x)"
},
{
"math_id": 8,
"text": "p_{dis}"
},
{
"math_id": 9,
"text": "IS(p_{gen}, p_{dis}) := \\exp\\left( \\mathbb E_{x\\sim p_{gen}}\\left[\n\t D_{KL} \\left(p_{dis}(\\cdot | x) \\| \\int p_{dis}(\\cdot | x) p_{gen}(x)dx \\right)\n\t \\right]\\right)"
},
{
"math_id": 10,
"text": "\\ln IS(p_{gen}, p_{dis}) := \\mathbb E_{x\\sim p_{gen}}\\left[\n\t\t D_{KL} \\left(p_{dis}(\\cdot | x) \\| \\mathbb E_{x\\sim p_{gen}}[p_{dis}(\\cdot | x)]\\right)\n\t\t \\right]"
},
{
"math_id": 11,
"text": "\\ln IS(p_{gen}, p_{dis}) := \n\t\t H[\\mathbb E_{x\\sim p_{gen}}[p_{dis}(\\cdot | x)]]\n\t\t -\\mathbb E_{x\\sim p_{gen}}[ H[p_{dis}(\\cdot | x)]]"
},
{
"math_id": 12,
"text": "\\ln IS"
},
{
"math_id": 13,
"text": "g"
},
{
"math_id": 14,
"text": "x_i"
},
{
"math_id": 15,
"text": "p_{dis}(\\cdot |x_i)"
},
{
"math_id": 16,
"text": "\\hat p"
},
{
"math_id": 17,
"text": "\\int p_{dis}(\\cdot | x) p_{gen}(x)dx "
},
{
"math_id": 18,
"text": "D_{KL} \\left(p_{dis}(\\cdot | x_i) \\| \\hat p\\right)"
},
{
"math_id": 19,
"text": "\\ln IS(p_{gen}, p_{dis}) \\in [0, \\ln N]"
},
{
"math_id": 20,
"text": "N"
},
{
"math_id": 21,
"text": "\\ln IS(p_{gen}, p_{dis}) = 0"
},
{
"math_id": 22,
"text": "x\\sim p_{gen}"
},
{
"math_id": 23,
"text": "p_{dis}(\\cdot | x) = \\int p_{dis}(\\cdot | x) p_{gen}(x)dx"
},
{
"math_id": 24,
"text": "p_{dis}(\\cdot | x)"
},
{
"math_id": 25,
"text": "H_y[p_{dis}(y|x)] = 0"
},
{
"math_id": 26,
"text": "\\mathbb E_{x\\sim p_{gen}}[p_{dis}(y | x)] = \\frac 1 N"
}
]
| https://en.wikipedia.org/wiki?curid=71300141 |
71307878 | Poisson-Dirichlet distribution | Definition and first properties of the Poisson-Dirichlet distributions
In probability theory, Poisson-Dirichlet distributions are probability distributions on the set of nonnegative, non-increasing sequences with sum 1, depending on two parameters formula_0 and formula_1. It can be defined as follows. One considers independent random variables formula_2 such that formula_3 follows the beta distribution of parameters formula_4 and formula_5. Then, the Poisson-Dirichlet distribution formula_6 of parameters formula_7 and formula_8 is the law of the random decreasing sequence containing formula_9 and the products formula_10. This definition is due to Jim Pitman and Marc Yor. It generalizes Kingman's law, which corresponds to the particular case formula_11.
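A short sketch of sampling (approximately) from formula_6 follows directly from this residual-allocation construction, truncating to finitely many pieces:
```python
import numpy as np

def sample_pd(alpha, theta, n_sticks=1000, rng=None):
    """Approximate draw from PD(alpha, theta) by stick breaking:
    Y_n ~ Beta(1 - alpha, theta + n*alpha); weights are Y_n * prod_{k<n}(1 - Y_k),
    returned in decreasing order (truncated after n_sticks pieces)."""
    rng = np.random.default_rng() if rng is None else rng
    n = np.arange(1, n_sticks + 1)
    Y = rng.beta(1.0 - alpha, theta + n * alpha)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - Y)[:-1]))
    weights = Y * remaining
    return np.sort(weights)[::-1]

w = sample_pd(alpha=0.5, theta=1.0, rng=np.random.default_rng(0))
print(w[:5], w.sum())   # leading weights; total mass close to 1 after truncation
```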
Number theory.
Patrick Billingsley has proven the following result: if formula_12 is a uniform random integer in formula_13, if formula_14 is a fixed integer, and if formula_15 are the formula_16 largest prime divisors of formula_12 (with formula_17 arbitrarily defined if formula_12 has fewer than formula_18 prime factors), then the joint distribution of formula_19 converges to the law of the first formula_16 elements of a formula_20-distributed random sequence, as formula_21 goes to infinity.
Random permutations and Ewens's sampling formula.
The Poisson-Dirichlet distribution of parameters formula_22 and formula_23 is also the limiting distribution, for formula_21 going to infinity, of the sequence formula_24, where formula_25 is the length of the formula_26 largest cycle of a uniformly distributed permutation of order formula_27. If for formula_28, one replaces the uniform distribution by the distribution formula_29 on formula_30 such that formula_31, where formula_32 is the number of cycles of the permutation formula_33, then we get the Poisson-Dirichlet distribution of parameters formula_22 and formula_8. The probability distribution formula_29 is called Ewens's distribution, and comes from the Ewens's sampling formula, first introduced by Warren Ewens in population genetics, in order to describe the probabilities associated with counts of how many different alleles are observed a given number of times in the sample.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha \\in [0,1)"
},
{
"math_id": 1,
"text": "\\theta \\in (-\\alpha, \\infty)"
},
{
"math_id": 2,
"text": "(Y_n)_{n \\geq 1}"
},
{
"math_id": 3,
"text": "Y_n"
},
{
"math_id": 4,
"text": "1-\\alpha"
},
{
"math_id": 5,
"text": "\\theta+n \\alpha"
},
{
"math_id": 6,
"text": "PD(\\alpha, \\theta)"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "\\theta"
},
{
"math_id": 9,
"text": "Y_1"
},
{
"math_id": 10,
"text": "Y_n \\prod_{k=1}^{n-1}(1-Y_k)"
},
{
"math_id": 11,
"text": "\\alpha = 0"
},
{
"math_id": 12,
"text": "n "
},
{
"math_id": 13,
"text": "\\{2,3,\\dots,N\\}"
},
{
"math_id": 14,
"text": " k \\geq 1 "
},
{
"math_id": 15,
"text": " p_1 \\geq p_2 \\geq \\dots \\geq p_k "
},
{
"math_id": 16,
"text": " k "
},
{
"math_id": 17,
"text": " p_j "
},
{
"math_id": 18,
"text": " j "
},
{
"math_id": 19,
"text": "(\\log p_1/\\log n, \\log p_2/\\log n, \\dots, \\log p_k/\\log n)"
},
{
"math_id": 20,
"text": " PD(0,1) "
},
{
"math_id": 21,
"text": " N "
},
{
"math_id": 22,
"text": " \\alpha = 0 "
},
{
"math_id": 23,
"text": "\\theta = 1"
},
{
"math_id": 24,
"text": "(\\ell_1/N, \\ell_2/N, \\ell_3/N, \\dots)"
},
{
"math_id": 25,
"text": "\\ell_j"
},
{
"math_id": 26,
"text": "j^{\\operatorname{th}}"
},
{
"math_id": 27,
"text": "N"
},
{
"math_id": 28,
"text": "\\theta > 0"
},
{
"math_id": 29,
"text": "\\mathbb{P}_{N, \\theta}"
},
{
"math_id": 30,
"text": "\\mathfrak{S}_N"
},
{
"math_id": 31,
"text": "\\mathbb{P}_{N, \\theta} (\\sigma) = \\frac{\\theta^{n(\\sigma)}}{\\theta (\\theta+ 1) \\dots (\\theta + n-1)} "
},
{
"math_id": 32,
"text": "n(\\sigma)"
},
{
"math_id": 33,
"text": "\\sigma"
}
]
| https://en.wikipedia.org/wiki?curid=71307878 |
71308775 | Carathéodory function | In mathematical analysis, a Carathéodory function (or Carathéodory integrand) is a multivariable function that allows us to solve the following problem effectively: A composition of two Lebesgue-measurable functions does not have to be Lebesgue-measurable as well. Nevertheless, a composition of a measurable function with a continuous function is indeed Lebesgue-measurable, but in many situations, continuity is a too restrictive assumption. Carathéodory functions are more general than continuous functions, but still allow a composition with Lebesgue-measurable function to be measurable. Carathéodory functions play a significant role in calculus of variation, and it is named after the Greek mathematician Constantin Carathéodory.
Definition.
formula_0, for formula_1 endowed with the Lebesgue measure, is a Carathéodory function if:
1. The mapping formula_2 is Lebesgue-measurable for every formula_3.
2. the mapping formula_4 is continuous for almost every formula_5.
The main merit of Carathéodory functions is the following: if formula_6 is a Carathéodory function and formula_7 is Lebesgue-measurable, then the composition formula_8 is Lebesgue-measurable.
Example.
Many problems in the calculus of variations are formulated in the following way: find the minimizer of the functional formula_9, where formula_10 is the Sobolev space, the space consisting of all functions formula_11 that are weakly differentiable and such that the function itself and all its first-order derivatives are in formula_12; and where formula_13 for some formula_14, a Carathéodory function.
The fact that formula_15 is a Carathéodory function ensures that formula_13 is well-defined.
p-growth.
If formula_14 is Carathéodory and satisfies formula_16 for some formula_17 (this condition is called "p-growth"), then formula_18, that is, formula_13 is finite, and the functional is continuous in the strong topology (i.e. in the norm) of formula_10.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " W:\\Omega\\times\\mathbb{R}^{N}\\rightarrow\\mathbb{R}\\cup\\left\\{ +\\infty\\right\\} "
},
{
"math_id": 1,
"text": " \\Omega\\subseteq\\mathbb{R}^{d} "
},
{
"math_id": 2,
"text": " x\\mapsto W\\left(x,\\xi\\right) "
},
{
"math_id": 3,
"text": " \\xi\\in\\mathbb{R}^{N} "
},
{
"math_id": 4,
"text": " \\xi\\mapsto W\\left(x,\\xi\\right) "
},
{
"math_id": 5,
"text": " x\\in\\Omega "
},
{
"math_id": 6,
"text": " W:\\Omega\\times\\mathbb{R}^{N}\\rightarrow\\mathbb{R} "
},
{
"math_id": 7,
"text": " u:\\Omega\\rightarrow\\mathbb{R}^{N} "
},
{
"math_id": 8,
"text": " x\\mapsto W\\left(x,u\\left(x\\right)\\right) "
},
{
"math_id": 9,
"text": " \\mathcal{F}:W^{1,p}\\left(\\Omega;\\mathbb{R}^{m}\\right)\\rightarrow\\mathbb{R}\\cup\\left\\{ +\\infty\\right\\} "
},
{
"math_id": 10,
"text": " W^{1,p}\\left(\\Omega;\\mathbb{R}^{m}\\right) "
},
{
"math_id": 11,
"text": " u:\\Omega\\rightarrow\\mathbb{R}^{m} "
},
{
"math_id": 12,
"text": " L^{p}\\left(\\Omega;\\mathbb{R}^{m}\\right) "
},
{
"math_id": 13,
"text": " \\mathcal{F}\\left[u\\right]=\\int_{\\Omega}W\\left(x,u\\left(x\\right),\\nabla u\\left(x\\right)\\right)dx "
},
{
"math_id": 14,
"text": " W:\\Omega\\times\\mathbb{R}^{m}\\times\\mathbb{R}^{d\\times m}\\rightarrow\\mathbb{R} "
},
{
"math_id": 15,
"text": " W "
},
{
"math_id": 16,
"text": " \\left|W\\left(x,v,A\\right)\\right|\\leq C\\left(1+\\left|v\\right|^{p}+\\left|A\\right|^{p}\\right) "
},
{
"math_id": 17,
"text": " C>0 "
},
{
"math_id": 18,
"text": " \\mathcal{F}:W^{1,p}\\left(\\Omega;\\mathbb{R}^{m}\\right)\\rightarrow\\mathbb{R} "
}
]
| https://en.wikipedia.org/wiki?curid=71308775 |
71308823 | Capability curve | Capability curve of an electrical generator describes the limits of the active (MW) and reactive power (MVAr) that the generator can provide. The curve represents a boundary of all operating points in the MW/MVAr plane; it is typically drawn with the real power on the horizontal axis, and, for the synchronous generator, resembles a letter D in shape, thus another name for the same curve, D-curve. In some sources the axes are switched, and the curve gets a dome-shaped appearance.
Synchronous generators.
For a traditional synchronous generator the curve consists of multiple segments, each due to some physical constraint: the armature (stator) current limit, a circular arc about the origin described by formula_0; the field (rotor) current limit, which bounds operation on the lagging (overexcited) side; and the stator core end heating limit, which bounds operation on the leading (underexcited) side.
The corners between the sections of the curve define the limits of the power factor (PF) that the generator can sustain at its nameplate capacity (the illustration has the PF ticks placed at 0.85 lagging and 0.95 leading angles). In practice, the prime mover (a power source that drives the generator) is designed for less active power than the generator is capable of (due to the fact that in real life a generator always has to deliver some reactive power), so a "prime mover limit" (a vertical dashed line on the illustration) changes the constraints somewhat (in the example, the leading PF limit, now at the intersection of the prime mover limit and core end heating limit, lowers to 0.93).
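As a simple numerical illustration of the armature-current (MVA) circle alone, the reactive power available at a given active power output can be computed as below; the field-current and core-end-heating limits are machine-specific and are not modeled in this sketch:
```python
import math

def reactive_headroom_mva(p_mw, s_rated_mva):
    """Reactive power available under the armature (MVA) limit alone,
    from MW^2 + MVAr^2 = S_rated^2; the field-current and core-end-heating
    limits (not modeled here) further restrict the lagging and leading sides."""
    if p_mw > s_rated_mva:
        raise ValueError("active power exceeds the MVA rating")
    return math.sqrt(s_rated_mva**2 - p_mw**2)

# e.g. a 100 MVA machine loaded to 85 MW (0.85 PF), armature limit only
print(reactive_headroom_mva(85.0, 100.0))   # about 52.7 MVAr
```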
Due to high cost of a generator, a set of sensors and limiters will trigger the alarm when the generator approaches the capability-set boundary and, if no action is taken by the operator, will disconnect the generator from the grid.
The D-curve for a particular generator can be expanded by improved cooling. Hydrogen-cooled turbo generator's cooling can be improved by increasing the hydrogen pressure, larger generators, from 300 MVA, use more efficient water cooling.
The practical D-curve of a typical synchronous generator has one more limitation, minimum load. The minimum real power requirement means that the left-side of a D-curve is detached from the vertical axis. Although some generators are designed to be able to operate at zero load (as synchronous condensers), operation at real power levels between zero and the minimum is not possible even with these designs.
Wind and solar photovoltaics generators.
The inverter-based resources (like solar photovoltaic (PV) generators, doubly-fed induction generators and full-converter wind generators, also known as "Type 3" and "Type 4" turbines) need to have reactive capabilities in order to contribute to the grid stability, yet their contribution is quite different from the synchronous generators and is limited by internal voltage, temperature, and current constraints. Due to flexibility allowed by the presence of the power converter, the doubly-fed and full-converter wind generators on the market have different shapes of the capability curve: "triangular", "rectangular", "D-shape" (the latter one resembles the D-curve of a synchronous generator). The rectangular and D-shapes of the curve theoretically allow using the generator to provide voltage regulation services even when the unit does not produce any active energy (due to low wind or no sun), essentially working as a STATCOM, but not all designs include this feature. The fixed speed wind turbines without a power converter (also known as "Type 1" and "Type 2") cannot be used for voltage control. They simply absorb the reactive power (like any typical induction machine), so a switched capacitor bank is usually used to correct the power factor to unity.
Older PV generators were intended for distribution networks. Since the current state of these networks does not include voltage regulation, the inverters in these units were operating at unity power factor. When PV devices started to appear in transmission networks, inverters with reactive power capability appeared on the market. Since the power limit of an inverter is based on the maximum total current, the natural shape of the capability curve is similar to a semicircle, and at full capacity the real power always needs to be lowered if reactive power is to be produced or absorbed. Theoretically the PV generators can be used as STATCOMs, although in practice the solar plants are disconnected at night.
Effects on electricity pricing.
For a synchronous generator operating "inside" its D-curve, the marginal cost of providing reactive power is close to zero. However, once the generator's operating point reaches the corners of the D-curve, increasing the reactive power output will require reduction of the real (active) power. Since electricity market payments are typically based on real power, the generating company will have a disincentive to provide more reactive power if requested by the independent system operator. Therefore, reactive power management (voltage control) is separated into an ancillary service with its own tariffs, like the "Reactive Supply and Voltage Control from Generation Sources" (GSR) in the US.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{MW}^2 + {MVAr}^2 = Limit"
}
]
| https://en.wikipedia.org/wiki?curid=71308823 |
71313889 | Laplace's approximation | Laplace's approximation
Laplace's approximation provides an analytical expression for a posterior probability distribution by fitting a Gaussian distribution with a mean equal to the MAP solution and precision equal to the observed Fisher information. The approximation is justified by the Bernstein–von Mises theorem, which states that, under regularity conditions, the error of the approximation tends to 0 as the number of data points tends to infinity.
For example, consider a regression or classification model with data set formula_0 comprising inputs formula_1 and outputs formula_2 with (unknown) parameter vector formula_3 of length formula_4. The likelihood is denoted formula_5 and the parameter prior formula_6. Suppose one wants to approximate the joint density of outputs and parameters formula_7. Bayes' formula reads:
formula_8
The joint is equal to the product of the likelihood and the prior and by Bayes' rule, equal to the product of the marginal likelihood formula_9 and posterior formula_10. Seen as a function of formula_3 the joint is an un-normalised density.
In Laplace's approximation, we approximate the joint by an un-normalised Gaussian formula_11, where we use formula_12 to denote approximate density, formula_13 for un-normalised density and formula_14 the normalisation constant of formula_13 (independent of formula_3). Since the marginal likelihood formula_9 doesn't depend on the parameter formula_3 and the posterior formula_10 normalises over formula_3 we can immediately identify them with formula_14 and formula_15 of our approximation, respectively.
Laplace's approximation is
formula_16
where we have defined
formula_17
where formula_18 is the location of a mode of the joint target density, also known as the maximum a posteriori or MAP point, and formula_19 is the formula_20 positive definite matrix of second derivatives of the negative log joint target density at the mode formula_21. Thus, the Gaussian approximation matches the value and the log-curvature of the un-normalised target density at the mode. The value of formula_18 is usually found using a gradient-based method.
In summary, we have
formula_22
for the approximate posterior over formula_3 and the approximate log marginal likelihood respectively.
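A self-contained numerical sketch of the procedure, using a generic optimizer and a finite-difference Hessian; the test target is a standard Gaussian, for which the exact log normalisation constant is known:
```python
import numpy as np
from scipy.optimize import minimize

def laplace_approximation(neg_log_joint, theta0):
    """Gaussian approximation q(theta) = N(theta_hat, S) and a log marginal
    likelihood estimate, given the negative log joint density -log p(y, theta | x)."""
    res = minimize(neg_log_joint, theta0, method="BFGS")
    theta_hat = res.x
    D = theta_hat.size
    # Finite-difference Hessian of the negative log joint at the mode
    eps = 1e-4
    H = np.zeros((D, D))
    for i in range(D):
        for j in range(D):
            e_i, e_j = np.eye(D)[i] * eps, np.eye(D)[j] * eps
            H[i, j] = (neg_log_joint(theta_hat + e_i + e_j)
                       - neg_log_joint(theta_hat + e_i - e_j)
                       - neg_log_joint(theta_hat - e_i + e_j)
                       + neg_log_joint(theta_hat - e_i - e_j)) / (4 * eps**2)
    S = np.linalg.inv(H)
    log_Z = (-neg_log_joint(theta_hat)
             + 0.5 * np.linalg.slogdet(S)[1] + 0.5 * D * np.log(2 * np.pi))
    return theta_hat, S, log_Z

# Sanity check: un-normalised log density -0.5 * theta' theta in two dimensions
neg_log_joint = lambda th: 0.5 * np.dot(th, th)
print(laplace_approximation(neg_log_joint, np.zeros(2))[2])  # ~log(2*pi), the exact value here
```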
The main weaknesses of Laplace's approximation are that it is symmetric around the mode and that it is very local: the entire approximation is derived from properties at a single point of the target density. Laplace's method is widely used and was pioneered in the context of neural networks by David MacKay, and for Gaussian processes by Williams and Barber.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\{x_n,y_n\\}_{n=1,\\ldots,N}"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "\\theta"
},
{
"math_id": 4,
"text": "D"
},
{
"math_id": 5,
"text": "p({\\bf y}|{\\bf x},\\theta)"
},
{
"math_id": 6,
"text": "p(\\theta)"
},
{
"math_id": 7,
"text": "p({\\bf y},\\theta|{\\bf x})"
},
{
"math_id": 8,
"text": "\np({\\bf y},\\theta|{\\bf x})\\;=\\;p({\\bf y}|{\\bf x},\\theta)p(\\theta|{\\bf x})\\;=\\;p({\\bf y}|{\\bf x})p(\\theta|{\\bf y},{\\bf x})\\;\\simeq\\;\\tilde q(\\theta)\\;=\\;Zq(\\theta).\n"
},
{
"math_id": 9,
"text": "p({\\bf y}|{\\bf x})"
},
{
"math_id": 10,
"text": "p(\\theta|{\\bf y},{\\bf x})"
},
{
"math_id": 11,
"text": "\\tilde q(\\theta)=Zq(\\theta)"
},
{
"math_id": 12,
"text": "q"
},
{
"math_id": 13,
"text": "\\tilde q"
},
{
"math_id": 14,
"text": "Z"
},
{
"math_id": 15,
"text": "q(\\theta)"
},
{
"math_id": 16,
"text": "\np({\\bf y},\\theta|{\\bf x})\\;\\simeq\\;p({\\bf y},\\hat\\theta|{\\bf x})\\exp\\big(-\\tfrac{1}{2}(\\theta-\\hat\\theta)^\\top S^{-1}(\\theta-\\hat\\theta)\\big)\\;=\\;\\tilde q(\\theta),\n"
},
{
"math_id": 17,
"text": "\\begin{align}\n\\hat\\theta &\\;=\\; \\operatorname{argmax}_\\theta \\log p({\\bf y},\\theta|{\\bf x}),\\\\\nS^{-1} &\\;=\\; -\\left.\\nabla_\\theta\\nabla_\\theta\\log p({\\bf y},\\theta|{\\bf x})\\right|_{\\theta=\\hat\\theta}, \n\\end{align}"
},
{
"math_id": 18,
"text": "\\hat\\theta"
},
{
"math_id": 19,
"text": "S^{-1}"
},
{
"math_id": 20,
"text": "D\\times D"
},
{
"math_id": 21,
"text": "\\theta=\\hat\\theta"
},
{
"math_id": 22,
"text": "\\begin{align}\nq(\\theta) &\\;=\\; {\\cal N}(\\theta|\\mu=\\hat\\theta,\\Sigma=S),\\\\\n\\log Z &\\;=\\; \\log p({\\bf y},\\hat\\theta|{\\bf x}) + \\tfrac{1}{2}\\log|S| + \\tfrac{D}{2}\\log(2\\pi),\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=71313889 |
71318 | 100-year flood | Indication of the likelihood of flooding
A 100-year flood is a flood event that has on average a 1 in 100 chance (1% probability) of being equaled or exceeded in any given year.
A 100-year flood is also referred to as a 1% flood. For coastal or lake flooding, a 100-year flood is generally expressed as a flood elevation or depth, and may include wave effects. For river systems, a 100-year flood is generally expressed as a flowrate. Based on the expected 100-year flood flow rate, the flood water level can be mapped as an area of inundation. The resulting floodplain map is referred to as the 100-year floodplain. Estimates of the 100-year flood flowrate and other streamflow statistics for any stream in the United States are available. In the UK, the Environment Agency publishes a comprehensive map of all areas at risk of a 1 in 100 year flood. Areas near the coast of an ocean or large lake also can be flooded by combinations of tide, storm surge, and waves. Maps of the riverine or coastal 100-year floodplain may figure importantly in building permits, environmental regulations, and flood insurance. These analyses generally represent 20th-century climate.
Probability.
A common misunderstanding is that a 100-year flood is likely to occur only once in a 100-year period. In fact, there is approximately a 63.4% chance of one or more 100-year floods occurring in any 100-year period. On the Danube River at Passau, Germany, the actual intervals between 100-year floods during 1501 to 2013 ranged from 37 to 192 years. The probability Pe that one or more floods occurring during any period will exceed a given flood threshold can be expressed, using the binomial distribution, as
formula_0
where T is the threshold return period (e.g. 100-yr, 50-yr, 25-yr, and so forth), and n is the number of years in the period. The probability of exceedance Pe is also described as the natural, inherent, or hydrologic risk of failure. However, the expected value of the number of 100-year floods occurring in any 100-year period is 1.
Ten-year floods have a 10% chance of occurring in any given year (Pe =0.10); 500-year have a 0.2% chance of occurring in any given year (Pe =0.002); etc. The percent chance of an X-year flood occurring in a single year is 100/X. A similar analysis is commonly applied to coastal flooding or rainfall data. The recurrence interval of a storm is rarely identical to that of an associated riverine flood, because of rainfall timing and location variations among different drainage basins.
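For illustration (a small computational sketch, not taken from the cited sources), the exceedance probability formula_0 can be evaluated directly:
def prob_exceedance(T, n):
    """Probability of one or more T-year floods occurring in an n-year period."""
    return 1.0 - (1.0 - 1.0 / T) ** n

print(prob_exceedance(100, 100))  # about 0.634, the 63.4% figure quoted above
print(prob_exceedance(100, 30))   # about 0.26 over a 30-year period
print(prob_exceedance(500, 1))    # 0.002, the annual chance of a 500-year flood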
The field of extreme value theory was created to model rare events such as 100-year floods for the purposes of civil engineering. This theory is most commonly applied to the maximum or minimum observed stream flows of a given river. In desert areas where there are only ephemeral washes, this method is applied to the maximum observed rainfall over a given period of time (24-hours, 6-hours, or 3-hours). The extreme value analysis only considers the most extreme event observed in a given year. So, between the large spring runoff and a heavy summer rain storm, whichever resulted in more runoff would be considered the extreme event, while the smaller event would be ignored in the analysis (even though both may have been capable of causing terrible flooding in their own right).
Statistical assumptions.
There are a number of assumptions that are made to complete the analysis that determines the 100-year flood. First, the extreme events observed in each year must be independent from year to year. In other words, the maximum river flow rate from 1984 cannot be found to be significantly correlated with the observed flow rate in 1985, which cannot be correlated with 1986, and so forth. The second assumption is that the observed extreme events must come from the same probability density function. The third assumption is that the probability distribution relates to the largest storm (rainfall or river flow rate measurement) that occurs in any one year. The fourth assumption is that the probability distribution function is stationary, meaning that the mean (average), standard deviation and maximum and minimum values are not increasing or decreasing over time. This concept is referred to as stationarity.
The first assumption is often but not always valid and should be tested on a case-by-case basis. The second assumption is often valid if the extreme events are observed under similar climate conditions. For example, if the extreme events on record all come from late summer thunderstorms (as is the case in the southwest U.S.), or from snow pack melting (as is the case in north-central U.S.), then this assumption should be valid. If, however, there are some extreme events taken from thunder storms, others from snow pack melting, and others from hurricanes, then this assumption is most likely not valid. The third assumption is only a problem when trying to forecast a low, but maximum flow event (for example, an event smaller than a 2-year flood). Since this is not typically a goal in extreme analysis, or in civil engineering design, then the situation rarely presents itself.
The final assumption about stationarity is difficult to test from data for a single site because of the large uncertainties in even the longest flood records (see next section). More broadly, substantial evidence of climate change strongly suggests that the probability distribution is also changing and that managing flood risks in the future will become even more difficult. The simplest implication of this is that most of the historical data represent 20th-century climate and might not be valid for extreme event analysis in the 21st century.
Probability uncertainty.
When these assumptions are violated, there is an "unknown" amount of uncertainty introduced into the reported value of what the 100-year flood means in terms of rainfall intensity, or flood depth. When all of the inputs are known, the uncertainty can be measured in the form of a confidence interval. For example, one might say there is a 95% chance that the 100-year flood is greater than X, but less than Y.
Direct statistical analysis to estimate the 100-year riverine flood is possible only at the relatively few locations where an annual series of maximum instantaneous flood discharges has been recorded. In the United States as of 2014, taxpayers have supported such records for at least 60 years at fewer than 2,600 locations, for at least 90 years at fewer than 500, and for at least 120 years at only 11. For comparison, the total area of the nation is about , so there are perhaps 3,000 stream reaches that drain watersheds of and 300,000 reaches that drain . In urban areas, 100-year flood estimates are needed for watersheds as small as . For reaches without sufficient data for direct analysis, 100-year flood estimates are derived from indirect statistical analysis of flood records at other locations in a hydrologically similar region or from other hydrologic models. Similarly for coastal floods, tide gauge data exist for only about 1,450 sites worldwide, of which only about 950 added information to the global data center between January 2010 and March 2016.
Much longer records of flood elevations exist at a few locations around the world, such as the Danube River at Passau, Germany, but they must be evaluated carefully for accuracy and completeness before any statistical interpretation.
For an individual stream reach, the uncertainties in any analysis can be large, so 100-year flood estimates have large individual uncertainties for most stream reaches. For the largest recorded flood at any specific location, or any potentially larger event, the recurrence interval always is poorly known. Spatial variability adds more uncertainty, because a flood peak observed at different locations on the same stream during the same event commonly represents a different recurrence interval at each location. If an extreme storm drops enough rain on one branch of a river to cause a 100-year flood, but no rain falls over another branch, the flood wave downstream from their junction might have a recurrence interval of only 10 years. Conversely, a storm that produces a 25-year flood simultaneously in each branch might form a 100-year flood downstream. During a time of flooding, news accounts necessarily simplify the story by reporting the greatest damage and largest recurrence interval estimated at any location. The public can easily and incorrectly conclude that the recurrence interval applies to all stream reaches in the flood area.
Observed intervals between floods.
Peak elevations of 14 floods as early as 1501 on the Danube River at Passau, Germany, reveal great variability in the actual intervals between floods. Flood events greater than the 50-year flood occurred at intervals of 4 to 192 years since 1501, and the 50-year flood of 2002 was followed only 11 years later by a 500-year flood. Only half of the intervals between 50- and 100-year floods were within 50 percent of the nominal average interval. Similarly, the intervals between 5-year floods during 1955 to 2007 ranged from 5 months to 16 years, and only half were within 2.5 to 7.5 years.
Regulatory use.
In the United States, the 100-year flood provides the risk basis for flood insurance rates. Complete information on the National Flood Insurance Program (NFIP) is publicly available. A "regulatory flood" or "base flood" is routinely established for river reaches through a science-based rule-making process targeted to a 100-year flood at the historical average recurrence interval. In addition to historical flood data, the process accounts for previously established regulatory values, the effects of flood-control reservoirs, and changes in land use in the watershed. Coastal flood hazards have been mapped by a similar approach that includes the relevant physical processes. Most areas where serious floods can occur in the United States have been mapped consistently in this manner. On average nationwide, those 100-year flood estimates are sufficient for the purposes of the NFIP and offer reasonable estimates of future flood risk, if the future is like the past. Approximately 3% of the U.S. population lives in areas subject to the 1% annual chance coastal flood hazard.
In theory, removing homes and businesses from areas that flood repeatedly can protect people and reduce insurance losses, but in practice it is difficult for people to retreat from established neighborhoods.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P_{e}=1-\\left[ 1-\\left( \\frac{1}{T} \\right) \\right]^{n}"
}
]
| https://en.wikipedia.org/wiki?curid=71318 |
7132002 | Wedderburn's little theorem | Result in algebra
In mathematics, Wedderburn's little theorem states that every finite division ring is a field. In other words, for finite rings, there is no distinction between domains, division rings and fields.
The Artin–Zorn theorem generalizes the theorem to alternative rings: every finite alternative division ring is a field.
History.
The original proof was given by Joseph Wedderburn in 1905, who went on to prove the theorem in two other ways. Another proof was given by Leonard Eugene Dickson shortly after Wedderburn's original proof, and Dickson acknowledged Wedderburn's priority. However, as noted in , Wedderburn's first proof was incorrect – it had a gap – and his subsequent proofs appeared only after he had read Dickson's correct proof. On this basis, Parshall argues that Dickson should be credited with the first correct proof.
A simplified version of the proof was later given by Ernst Witt. Witt's proof is sketched below. Alternatively, the theorem is a consequence of the Skolem–Noether theorem by the following argument. Let formula_0 be a finite division algebra with center formula_1. Let formula_2, and let formula_3 denote the cardinality of formula_1. Every maximal subfield of formula_0 has formula_4 elements; so they are isomorphic and thus are conjugate by Skolem–Noether. But a finite group (the multiplicative group of formula_0 in our case) cannot be a union of conjugates of a proper subgroup; hence, formula_5.
A later "group-theoretic" proof was given by Ted Kaczynski in 1964. This proof, Kaczynski's first published piece of mathematical writing, was a short, two-page note which also acknowledged the earlier historical proofs.
Relationship to the Brauer group of a finite field.
The theorem is essentially equivalent to saying that the Brauer group of a finite field is trivial. In fact, this characterization immediately yields a proof of the theorem as follows: let "K" be a finite field. Since the Herbrand quotient vanishes by finiteness, formula_6 coincides with formula_7, which in turn vanishes by Hilbert 90.
The triviality of the Brauer group can also be obtained by direct computation, as follows. Let formula_8 and let formula_9 be a finite extension of degree formula_10 so that formula_11 Then formula_12 is a cyclic group of order formula_10 and the standard method of computing cohomology of finite cyclic groups shows that
formula_13
where the norm map formula_14 is given by
formula_15
Taking formula_16 to be a generator of the cyclic group formula_17 we find that formula_18 has order formula_19 and therefore it must be a generator of formula_20. This implies that formula_21 is surjective, and therefore formula_22 is trivial.
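The order count used in this computation can be checked with a short arithmetic sketch (the values of q and n below are chosen arbitrarily): in the cyclic group of order q^n − 1, the norm of a generator is its ((q^n − 1)/(q − 1))-th power, whose order works out to q − 1, so the norm map is surjective.
from math import gcd

q, n = 3, 4                                 # arbitrary small example: K = GF(3), L = GF(81)
group_order = q**n - 1                      # order of the cyclic group L*
norm_exponent = (q**n - 1) // (q - 1)       # the norm raises a generator g to this power

# order of g**norm_exponent in a cyclic group of order group_order generated by g
order_of_norm = group_order // gcd(group_order, norm_exponent)
assert order_of_norm == q - 1               # the norm of a generator generates K*
print(order_of_norm)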
Proof.
Let "A" be a finite domain. For each nonzero "x" in "A", the two maps
formula_23
are injective by the cancellation property, and thus, surjective by counting. It follows from elementary group theory that the nonzero elements of formula_24 form a group under multiplication. Thus, formula_24 is a skew-field.
To prove that every finite skew-field is a field, we use strong induction on the size of the skew-field. Thus, let formula_24 be a skew-field, and assume that all skew-fields that are proper subsets of formula_24 are fields. Since the center formula_25 of formula_24 is a field, formula_24 is a vector space over formula_25 with finite dimension formula_26. Our objective is then to show formula_5. If formula_3 is the order of formula_25, then formula_24 has order formula_27. Note that because formula_25 contains the distinct elements formula_28 and formula_29, formula_30. For each formula_31 in formula_24 that is not in the center, the centralizer formula_32 of formula_31 is clearly a skew-field and thus a field, by the induction hypothesis, and because formula_32 can be viewed as a vector space over formula_25 and formula_24 can be viewed as a vector space over formula_32, we have that formula_32 has order formula_33 where formula_34 divides formula_26 and is less than formula_26. Viewing formula_35, formula_36, and formula_37 as groups under multiplication, we can write the class equation
formula_38
where the sum is taken over the conjugacy classes not contained within formula_35, and the formula_34 are defined so that for each conjugacy class, the order of formula_37 for any formula_31 in the class is formula_39. formula_40 and formula_41 both admit polynomial factorization in terms of cyclotomic polynomials
formula_42
The cyclotomic polynomials on formula_43 are in formula_44 and respect the following identities:
formula_45 and formula_46.
Because each formula_34 is a proper divisor of formula_26,
formula_47 divides both formula_48 and each formula_49 in formula_44,
so by the above class equation, formula_50 must divide formula_51, and therefore by taking the norms
formula_52.
To see that this forces formula_26 to be formula_29, we will show
formula_53
for formula_54 using factorization over the complex numbers. In the polynomial identity
formula_55
where formula_56 runs over the primitive formula_26-th roots of unity, set formula_31 to be formula_3 and then take absolute values
formula_57
For formula_54, we see that for each primitive formula_26-th root of unity formula_56,
formula_58
because of the location of formula_3, formula_29, and formula_56 in the complex plane. Thus
formula_59 | [
{
"math_id": 0,
"text": "D"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "[D:k]=n^{2}"
},
{
"math_id": 3,
"text": "q"
},
{
"math_id": 4,
"text": "q^{n}"
},
{
"math_id": 5,
"text": "n = 1"
},
{
"math_id": 6,
"text": "\\operatorname{Br}(K) = H^2(K^{\\text{al}}/K)"
},
{
"math_id": 7,
"text": "H^1(K^{\\text{al}}/K)"
},
{
"math_id": 8,
"text": "|K| = q,"
},
{
"math_id": 9,
"text": "L/K"
},
{
"math_id": 10,
"text": "n,"
},
{
"math_id": 11,
"text": "|L|=q^n."
},
{
"math_id": 12,
"text": "\\mathrm{Gal}(L/K)"
},
{
"math_id": 13,
"text": " H^2(L/K) = K^{\\times}/N_{L/K}(L^{\\times}),"
},
{
"math_id": 14,
"text": " N_{L/K}:L^{\\times} \\to K^{\\times} "
},
{
"math_id": 15,
"text": " N_{L/K}(\\alpha) = \\prod_{\\sigma \\in \\mathrm{Gal}(L/K)} \\sigma(\\alpha) = \\alpha \\cdot \\alpha^{q} \\cdot \\alpha^{q^2} \\cdots \\alpha^{q^{n-1}} = \\alpha^{\\frac{q^n-1}{q-1}}."
},
{
"math_id": 16,
"text": "\\alpha"
},
{
"math_id": 17,
"text": "L^{\\times},"
},
{
"math_id": 18,
"text": "N_{L/K}(\\alpha)"
},
{
"math_id": 19,
"text": "q-1,"
},
{
"math_id": 20,
"text": "K^{\\times}"
},
{
"math_id": 21,
"text": "N_{L/K}"
},
{
"math_id": 22,
"text": "H^{2}(L/K)"
},
{
"math_id": 23,
"text": "a \\mapsto ax, a \\mapsto xa: A \\to A"
},
{
"math_id": 24,
"text": "A"
},
{
"math_id": 25,
"text": "Z(A)"
},
{
"math_id": 26,
"text": "n"
},
{
"math_id": 27,
"text": "{q}^{n}"
},
{
"math_id": 28,
"text": "0"
},
{
"math_id": 29,
"text": "1"
},
{
"math_id": 30,
"text": "q>1"
},
{
"math_id": 31,
"text": "x"
},
{
"math_id": 32,
"text": "{Z}_{x}"
},
{
"math_id": 33,
"text": "{q}^{d}"
},
{
"math_id": 34,
"text": "d"
},
{
"math_id": 35,
"text": "{Z(A)}^{*}"
},
{
"math_id": 36,
"text": "A^{*}"
},
{
"math_id": 37,
"text": "{Z}^{*}_{x}"
},
{
"math_id": 38,
"text": "q^n - 1 = q - 1 + \\sum {q^n - 1 \\over q^d - 1}"
},
{
"math_id": 39,
"text": "{q}^{d} - 1"
},
{
"math_id": 40,
"text": "{q}^{n} - 1"
},
{
"math_id": 41,
"text": "q^{d} - 1"
},
{
"math_id": 42,
"text": "\\Phi_f(q)."
},
{
"math_id": 43,
"text": "\\Q"
},
{
"math_id": 44,
"text": "\\Z[X]"
},
{
"math_id": 45,
"text": "x^n-1 = \\prod_{m|n} \\Phi_m(x)"
},
{
"math_id": 46,
"text": "x^d-1 = \\prod_{m|d} \\Phi_m(x)"
},
{
"math_id": 47,
"text": "\\Phi_n(x)"
},
{
"math_id": 48,
"text": "{x}^{n} - 1"
},
{
"math_id": 49,
"text": "{x^n - 1 \\over x^d - 1}"
},
{
"math_id": 50,
"text": "\\Phi_n(q)"
},
{
"math_id": 51,
"text": "q - 1"
},
{
"math_id": 52,
"text": "|\\Phi_n(q)| \\leq q-1"
},
{
"math_id": 53,
"text": "|\\Phi_n(q)| > q-1"
},
{
"math_id": 54,
"text": "n>1"
},
{
"math_id": 55,
"text": "\\Phi_n(x) = \\prod (x - \\zeta),"
},
{
"math_id": 56,
"text": "\\zeta"
},
{
"math_id": 57,
"text": "|\\Phi_n(q)| = \\prod |q - \\zeta|."
},
{
"math_id": 58,
"text": "|q-\\zeta| > |q-1|"
},
{
"math_id": 59,
"text": "|\\Phi_n(q)| > q-1."
}
]
| https://en.wikipedia.org/wiki?curid=7132002 |
713220 | Organic composition of capital | Concept created by Karl Marx
The organic composition of capital (OCC) is a concept created by Karl Marx in his theory of capitalism, which was simultaneously his critique of the political economy of his time. It is derived from his more basic concepts of 'value composition of capital' and 'technical composition of capital'. Marx defines the organic composition of capital as "the value-composition of capital, in so far as it is determined by its technical composition and mirrors the changes of the latter". The 'technical composition of capital' measures the relation between the elements of constant capital (plant, equipment and materials) and variable capital (wage workers). It is 'technical' because no valuation is here involved. In contrast, the 'value composition of capital' is the ratio between the value of the elements of constant capital involved in production and the value of the labor. Marx found that the special concept of 'organic composition of capital' was sometimes useful in analysis, since it assumes that the relative values of all the elements of capital are constant.
Overview.
In Book I of Capital, Marx made the simplifying assumption that all valuation was in terms of what is often called labor-values (and he called 'values'). In Book III, however, he states that equilibrium values between industries could not be directly proportional to their labor content. The latter only determined equilibrium values in his pre-capitalist 'Simple Commodity Production', where the producers owned their means of production and natural resources were used freely. In Book III, first he assumed that land could be used freely and showed that the equilibrium prices were his 'prices of production'. Later, when he introduced land ownership and the rent on land, the equilibrium prices were to be 'modified production prices' that took the rent of land into account. The implication of this is that the valuation used for the 'value composition of capital' had to be modified accordingly, since the labor-values used throughout Book I were an expedient he used in order not to complicate the communication of this theory excessively. But Marx was not able to complete Books II and III to his satisfaction.
The various distinct concepts related to the composition of capital are often used in contemporary Marxian economics as a theoretical alternative to similar neo-classical concepts. The neoclassical concept most similar to the increasing organic composition of capital is capital deepening.
Marx's concept of constant capital is the monetary value of the plant, equipment and materials that are tied up in the production process. And his concept of variable capital is the money value that is tied up in the payment of wages. The concept of OCC does not apply to "all" capital assets, only to those invested in production (i.e. it excludes assets that are in the 'sphere of consumption', such as homes).
In "Capital Vol. 3" Marx demonstrates that the organic composition of capital decisively influences industrial profitability. According to Marx, the OCC expresses the specific form which the capitalist mode of production gives to the relationship between means of production and labor power, determining the productivity of labor and the creation of a surplus product. This relationship has both technical and social aspects, reflecting the fact that simultaneously consumable use values and commercial exchange-values are being produced.
Marx argues that a rising organic composition of capital is a necessary effect of capital accumulation and competition in the sphere of production, at least in the long term. This means that the share of constant capital in the total capital outlay increases, and that labor input per product unit declines.
In his discussion, Marx leaves out of account components of capital other than labour-power and means of production invested in, such as the faux frais of production (incidental expenses). The full importance of the OCC emerges in chapter 8 of the third volume of "Das Kapital".
Ratios.
The value composition of capital (VCC) is usually expressed as a ratio of constant capital to variable capital, or formula_0. Other measures are also used in the Marxian literature. One is formula_1. This is the ratio of constant capital to newly produced value (roughly, what modern economists call "value added"), i.e., surplus-value + variable capital and close to the concept of a capital/output ratio. Less common is the measure used by Paul M. Sweezy, i.e., formula_2, the ratio of constant capital to the total capital invested.
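For illustration (a toy computation with hypothetical money figures, not drawn from the literature discussed here), the three measures can be compared directly:
# hypothetical annual flows, in money units
c = 800.0   # constant capital
v = 100.0   # variable capital
s = 80.0    # surplus value

vcc_marx = c / v             # the usual value composition of capital
vcc_new_value = c / (s + v)  # constant capital relative to newly produced value
vcc_sweezy = c / (c + v)     # Sweezy's measure: constant capital in total capital

print(vcc_marx, round(vcc_new_value, 2), round(vcc_sweezy, 3))  # 8.0 4.44 0.889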
The total capital tied up by a capitalist enterprise includes more than fixed assets, materials and wages/salaries; it also includes liquid funds, reserves and other financial assets.
For instance, an employer must normally reserve funds to pay for ongoing operating expenses, until these are recouped from product sales.
Measures.
An empirical proxy measure for the technical composition of capital (TCC) is the average amount of fixed equipment and materials used per worker (capital intensity), or the ratio of the average amount of equipment & materials used to the total hours worked. The value composition of capital (VCC) is usually measured by summing the value of fixed capital ("Cf") and intermediate expenditures (circulating capital or "Cc") and dividing the total by the value of labour costs (V). The estimation procedure is not simple, for example because compensation of employees includes more than wages and part of the tax levy constitutes an element of surplus value.
In modern national accounts, an empirical proxy of the flow of variable capital is the wage-payments associated with productive activity in an accounting period, and a proxy for constant capital (flow measure) is depreciation charges + intermediate consumption; a stock measure of constant capital would be the fixed capital stock plus the average value of inventories held during the period of account (usually a year). However, because the "circulating" component of constant capital (denoted "Cc") includes purchases of external services and other operating costs, the stock of Cc is sometimes measured as the flow of intermediate consumption divided by the average inventory level.
The variable capital actually tied up by an enterprise at any point in time will usually be less than the annual flow value, because wages can in part be paid out of revenues received from ongoing product sales. Thus, the capital reserves held by an enterprise for paying wages may, at any time, be only 1/10 or so of their annual flow value.
The most accurate quantitative estimates for the OCC refer to the outlays in specific sectors, e.g. manufacturing.
Examples.
By any of these measures, the plant- and machinery-intensive oil industry would have a high organic composition of capital, while labor-intensive businesses such as catering would tend to have a low OCC. The OCC varies according to differences in production technology, between sectors of an economy, or according to changes in production technology over time.
The OCC and crises.
The magnitude of the OCC is important in Marxist crisis theory because of its impact on the average rate of profit. The implication of a rise in the organic composition of capital is a declining rate of profit; for every new increase in surplus-value realised as profit from sales, an even larger corresponding increase in constant capital investment becomes necessary.
But this represents only a "tendency", Marx argues, because the fall of the rate of profit can be offset by counteracting influences. The main ones include a rising rate of exploitation of labour, the cheapening of the elements of constant capital through rising productivity, the depression of wages below the value of labour power, relative overpopulation, and foreign trade.
Because numerous different factors can affect profitability, the overall effects of a rising OCC on average industrial profitability therefore really have to be evaluated empirically in a longer time-span, e.g. 20–25 years.
Insofar as the trajectory of capitalist development is, as Marx argues, ruled by the quest for extra surplus-value, the economic fate of the system can be summarised as an interaction between the tendency of the profit rate to decline, and the factors that counteract it: in other words, the permanent battle to reduce costs, increase sales and increase profits.
The hypothetical final result of the rising OCC would be full automation of the production process, in which case labour-costs would be near-zero. This is argued to herald the end of capitalism's functioning as both a profit generating economic system for capitalists, and as a social system, among other things because the capitalist system does not contain a means for distributing incomes other than that based on labour-effort, and full automation would negate the concept of exploitation.
Marx and Ricardo.
The different organic compositions of capital of different branches of industry raised a problem for the classical economic schema of David Ricardo and others, who could not reconcile their labor-cost theory of price with the existence of differences in the OCC between sectors. The latter imply different profit rates in different industries. Also, while market competition would establish a ruling price level for a type of output, different enterprises would use more or less labour to produce it. For these reasons, values "produced" and prices "realised" by different producers would quantitatively diverge.
Marx either solved this problem with his theory of prices of production and the tendency for profitability differentials to be levelled out through competition, or he failed to solve it, according to which side of the debate over the transformation problem one finds convincing.
Others see this "problem" (the development of a mathematical relationship between prices and labor-values) as a false one, rejecting the idea that Marx aimed to use his labor theory value to understand relative prices. Here the argument is that he aimed to reveal only the "social nature" or "deep structure" of capitalist society.
In a third interpretation, Marx aspired both to relate values and prices, and offer a social critique, because both of these were necessary to make his case truly convincing. Here, the separate concepts of product-values and product-prices are regarded as essential for a theory of market "dynamics" and capitalist competition; it is argued that price behaviour in aggregate cannot be understood or theorised about at all without reference to value-relations, explicitly or implicitly.
Historical trends.
There has been a lengthy theoretical and statistical dispute among Marxian economists about whether the organic composition of capital really does tend to, or has to rise historically, as Marx predicted, or, to put it differently, whether in aggregate technological progress has a "labor-saving bias", and causes the average profit rate to decline.
One sort of question asked is, why capitalists would introduce new technology, if doing so would result specifically in a lower profit rate on capital invested? Marx's reply is essentially that:
The statistical and historical evidence about the Kondratiev waves of capitalist development from the 1830s onwards is certainly favourable to Marx's theory of the rising organic composition of capital. It is difficult to find industries where the "secular" historical trend is one of an increase in the share of wages in the total capital outlay. Generally, the opposite is the case.
However, it has been argued that the value of physical capital is notoriously difficult to measure empirically in an accurate way; and statistical time-series for economic variables over long periods are also susceptible to errors and distortions. The owners of a business may not even know exactly what the physical assets they use are currently worth, or what their business is currently worth, as a going concern. That worth is hypothetical until such time as the business is sold and paid for. However, the modern trend in official accounting standards is certainly for assets to be valued more and more at their "current market value", or current replacement cost, rather than at historic (original acquisition) cost.
In addition, during severe economic slumps, physical capital assets are subject to devaluation, lie idle or are destroyed, while workers become unemployed; the empirical effect is to "reduce" the organic composition of capital. Likewise, non-profitable war production can also lower the average OCC.
Finally, a technological revolution can also radically change the proportions between constant and variable capital, reducing the cost of constant capital, and lowering the OCC. In that case, operating costs are reduced in a short span of time, or cheaper alternatives substitute for the inputs traditionally used.
Much less discussed in the economic literature is the effect on the organic composition of capital of the growth of the services sector in the developed countries. For example, does the widespread use of computers in labour-intensive services lower the OCC?
References.
<templatestyles src="Reflist/styles.css" />
Karl Marx, "The General Law of Capitalist Accumulation". | [
{
"math_id": 0,
"text": "{c \\over v}"
},
{
"math_id": 1,
"text": "{c \\over {s+v}}"
},
{
"math_id": 2,
"text": "{c \\over {c+v}}"
}
]
| https://en.wikipedia.org/wiki?curid=713220 |
71322459 | Kaniadakis distribution | In statistics, a Kaniadakis distribution (also known as κ-distribution) is a statistical distribution that emerges from the Kaniadakis statistics. There are several families of Kaniadakis distributions related to different constraints used in the maximization of the Kaniadakis entropy, such as the κ-Exponential distribution, κ-Gaussian distribution, Kaniadakis κ-Gamma distribution and κ-Weibull distribution. The κ-distributions have been applied to model a vast phenomenology of experimental statistical distributions in natural or artificial complex systems, for example in epidemiology, quantum statistics, astrophysics and cosmology, geophysics, economics, and machine learning.
The κ-distributions are written as functions of the κ-deformed exponential, taking the form
formula_0
where formula_1 is the Kaniadakis κ-exponential function. This form enables the power-law description of complex systems following the consistent κ-generalized statistical theory.
The κ-distribution becomes the common Boltzmann distribution at low energies, while it has a power-law tail at high energies, a feature of great interest to many researchers.
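As a small numerical illustration (a sketch, not taken from the cited works), the κ-exponential can be implemented directly and checked against the ordinary exponential for small κ:
import numpy as np

def exp_kappa(x, kappa):
    # Kaniadakis kappa-exponential; recovers the ordinary exponential as kappa -> 0
    if kappa == 0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

x = np.linspace(-2.0, 2.0, 5)
print(np.allclose(exp_kappa(x, 1e-6), np.exp(x), rtol=1e-4))   # True
print(exp_kappa(10.0, 0.5) / np.exp(10.0))  # grows as a power law, much more slowly than exp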
Common Kaniadakis distributions.
κ-Distribution Type IV.
The Kaniadakis distribution of Type IV (or κ-Distribution Type IV) is a three-parameter family of continuous statistical distributions.
The κ-Distribution Type IV distribution has the following probability density function:
formula_17
valid for formula_18, where formula_19 is the entropic index associated with the Kaniadakis entropy, formula_20 is the scale parameter, and formula_15 is the shape parameter.
The cumulative distribution function of κ-Distribution Type IV assumes the form:
formula_21
The κ-Distribution Type IV does not admit a classical version, since the probability density function and its cumulative distribution function reduce to zero in the classical limit formula_9.
Its moment of order formula_22 is given by
formula_23
The moment of order formula_22 of the κ-Distribution Type IV is finite for formula_24.
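These expressions can be checked numerically (an illustrative sketch with arbitrarily chosen parameter values, not taken from the cited works): the probability density function should integrate to one and should equal the derivative of the cumulative distribution function.
import numpy as np
from scipy.integrate import quad

def exp_kappa(x, kappa):
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def pdf_type4(x, kappa, alpha, beta):
    u = beta * x**alpha
    front = (alpha / kappa) * (2.0 * kappa * beta) ** (1.0 / kappa)
    bracket = 1.0 - kappa * u / np.sqrt(1.0 + kappa**2 * u**2)
    return front * bracket * x ** (-1.0 + alpha / kappa) * exp_kappa(-u, kappa)

def cdf_type4(x, kappa, alpha, beta):
    return (2.0 * kappa * beta) ** (1.0 / kappa) * x ** (alpha / kappa) * exp_kappa(-beta * x**alpha, kappa)

kappa, alpha, beta = 0.5, 2.0, 1.0             # arbitrary admissible parameter values
total, _ = quad(pdf_type4, 0.0, np.inf, args=(kappa, alpha, beta))
print(total)                                    # close to 1

# the density should be the derivative of the cumulative distribution
h = 1e-6
x0 = 1.3
print((cdf_type4(x0 + h, kappa, alpha, beta) - cdf_type4(x0 - h, kappa, alpha, beta)) / (2.0 * h))
print(pdf_type4(x0, kappa, alpha, beta))        # the two values agree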
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " f_i=\\exp_{\\kappa}(-\\beta E_i+\\beta \\mu) "
},
{
"math_id": 1,
"text": " \\exp_{\\kappa}(x)=(\\sqrt{1+ \\kappa^2 x^2}+\\kappa x)^{1/\\kappa} "
},
{
"math_id": 2,
"text": "\\kappa \\rightarrow 0."
},
{
"math_id": 3,
"text": "\\kappa, \\alpha, \\beta, \\nu"
},
{
"math_id": 4,
"text": "\\alpha = \\nu = 1"
},
{
"math_id": 5,
"text": "\\alpha = 1"
},
{
"math_id": 6,
"text": "\\nu = n = "
},
{
"math_id": 7,
"text": "\\alpha = 2"
},
{
"math_id": 8,
"text": "\\nu = 1/2 "
},
{
"math_id": 9,
"text": "\\kappa \\rightarrow 0"
},
{
"math_id": 10,
"text": "\\nu = "
},
{
"math_id": 11,
"text": "\\nu > 0 "
},
{
"math_id": 12,
"text": "\\nu = 1 "
},
{
"math_id": 13,
"text": "\\nu = "
},
{
"math_id": 14,
"text": "\\nu = 3/2 "
},
{
"math_id": 15,
"text": "\\alpha > 0"
},
{
"math_id": 16,
"text": "\\nu = 1/\\alpha "
},
{
"math_id": 17,
"text": " \nf_{_{\\kappa}}(x) = \\frac{\\alpha}{\\kappa} (2\\kappa \\beta )^{1/\\kappa} \\left(1 - \\frac{\\kappa \\beta x^\\alpha}{\\sqrt{1+\\kappa^2\\beta^2x^{2\\alpha} } } \\right) x^{ -1 + \\alpha / \\kappa} \\exp_\\kappa(-\\beta x^\\alpha)\n"
},
{
"math_id": 18,
"text": "x \\geq 0"
},
{
"math_id": 19,
"text": "0 \\leq |\\kappa| < 1"
},
{
"math_id": 20,
"text": "\\beta > 0"
},
{
"math_id": 21,
"text": "F_\\kappa(x) = (2\\kappa \\beta )^{1/\\kappa} x^{\\alpha / \\kappa} \\exp_\\kappa(-\\beta x^\\alpha) "
},
{
"math_id": 22,
"text": "m"
},
{
"math_id": 23,
"text": "\\operatorname{E}[X^m] = \\frac{(2 \\kappa \\beta)^{-m/\\alpha} }{ 1 + \\kappa \\frac{ m }{ 2\\alpha } } \\frac{\\Gamma\\Big(\\frac{1}{\\kappa} + \\frac{m}{\\alpha}\\Big) \\Gamma\\Big(1 - \\frac{m}{2\\alpha}\\Big)}{\\Gamma\\Big(\\frac{1}{\\kappa} + \\frac{m}{2\\alpha}\\Big)}"
},
{
"math_id": 24,
"text": "m < 2\\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=71322459 |
7133142 | Okishio's theorem | Economic theorem regarding rate of profit
Okishio's theorem is a theorem formulated by Japanese economist Nobuo Okishio. It has had a major impact on debates about Marx's theory of value. Intuitively, it can be understood as saying that if one capitalist raises his profits by introducing a new technique that cuts his costs, the collective or general rate of profit in society goes up for all capitalists. In 1961, Okishio established this theorem under the assumption that the real wage remains constant. Thus, the theorem isolates the effect of pure innovation from any consequent changes in the wage.
For this reason the theorem, first proposed in 1961, excited great interest and controversy because, according to Okishio, it contradicts Marx's law of the tendency of the rate of profit to fall. Marx had claimed that the new general rate of profit, after a new technique has spread throughout the branch where it has been introduced, would be lower than before. In modern words, the capitalists would be caught in a rationality trap or prisoner's dilemma: that which is rational from the point of view of a single capitalist, turns out to be irrational for the system as a whole, for the collective of all capitalists. This result was widely understood, including by Marx himself, as establishing that capitalism contained inherent limits to its own success. Okishio's theorem was therefore received in the West as establishing that Marx's proof of this fundamental result was inconsistent.
More precisely, the theorem says that the general rate of profit in the economy as a whole will be higher if a new technique of production is introduced in which, at the prices prevailing at the time that the change is introduced, the unit cost of output in one industry is less than the pre-change unit cost. The theorem, as Okishio (1961:88) points out, does not apply to non-basic branches of industry.
The proof of the theorem may be most easily understood as an application of the Perron–Frobenius theorem. This latter theorem comes from a branch of linear algebra known as the theory of nonnegative matrices. A good source text for the basic theory is Seneta (1973). The statement of Okishio's theorem, and the controversies surrounding it, may however be understood intuitively without reference to, or in-depth knowledge of, the Perron–Frobenius theorem or the general theory of nonnegative matrices.
Sraffa model.
The argument of Nobuo Okishio, a Japanese economist, is based on a Sraffa-model. The economy consists of two departments I and II, where I is the investment goods department (means of production) and II is the consumption goods department, where the consumption goods for workers are produced. The coefficients of production tell how much of the several inputs is necessary to produce one unit of output of a given commodity ("production of commodities by means of commodities"). In the model below two outputs exist: formula_0, the quantity of investment goods, and formula_1, the quantity of consumption goods.
The coefficients of production are defined as follows: formula_2 and formula_3 are the inputs of investment goods and of labour, respectively, needed to produce one unit of investment goods, while formula_4 and formula_5 are the corresponding inputs needed to produce one unit of consumption goods.
The worker receives a wage at a certain wage rate w (per unit of labour), which is defined by a certain quantity of consumption goods.
Thus, the wage costs per unit of output, measured in consumption goods, are formula_6 in department I and formula_7 in department II.
The economy is then described by the following equations:
formula_8
formula_9
In department I expenses for investment goods or for "constant capital" are:
formula_13
and for "variable capital":
formula_14
In Department II expenses for "constant capital" are:
formula_15
and for "variable capital":
formula_16
"(The constant and variable capital of the economy as a whole is a weighted sum of these capitals of the two departments. See below for the relative magnitudes of the two departments which serve as weights for summing up constant and variable capitals.)"
Now the following assumptions are made:
Okishio, following some Marxist tradition, assumes a constant real wage rate equal to the value of labour power, that is, the wage rate must allow the workers to buy the basket of consumption goods necessary to reproduce their labour power. So, in this example it is assumed that workers get two pieces of consumption goods per hour of labour in order to reproduce their labour power; with the consumption good as numéraire, formula_17, this means formula_18.
A technique of production is defined according to Sraffa by its coefficients of production. A technique might, for example, be numerically specified by the following coefficients of production: formula_19, formula_20, formula_21 and formula_22.
From this an equilibrium growth path can be computed. The price for the investment goods is computed as (not shown here): formula_23, and the profit rate is: formula_24. The equilibrium system of equations then is:
formula_25
formula_26
Introduction of technical progress.
A single firm of department I is supposed to use the same technique of production as the department as a whole. So, the technique of production of this firm is described by the following:
formula_8
formula_27
Now this firm introduces technical progress by introducing a technique in which fewer working hours are needed to produce one unit of output; the respective production coefficient is reduced, say, by half from formula_20 to formula_28. This already increases the technical composition of capital, because to produce one unit of output (investment goods) only half as many working hours are needed, while as many investment goods as before are needed. In addition to this, it is assumed that the labour-saving technique goes hand in hand with a higher productive consumption of investment goods, so that the respective production coefficient is increased from, say, formula_19 to formula_29.
This firm, after having adopted the new technique of production is now described by the following equation, keeping in mind that at first prices and the wage rate remain the same as long as only this one firm has changed its technique of production:
formula_30
So this firm has increased its rate of profit from formula_31 to formula_32. This accords with Marx's argument that firms introduce new techniques only if this raises the rate of profit.
Marx expected, however, that once the new technique has spread through the whole branch, that is, once it has been adopted by the other firms of the branch, the new equilibrium rate of profit will again be somewhat lower, not only for the pioneering firm, but for the branch and the economy as a whole. The traditional reasoning is that only "living labour" can produce value, whereas constant capital, the expenses for investment goods, does not create value. The value of constant capital is only transferred to the final products. Because the new technique is labour-saving on the one hand, while outlays for investment goods have been increased on the other, the rate of profit must finally be lower.
Let us assume the new technique spreads through all of department I. Computing the new equilibrium rate of growth and the new price formula_10, under the assumption that a new general rate of profit is established, gives:
formula_33
formula_34
If the new technique is generally adopted inside department I, the new equilibrium general rate of profit is somewhat lower than the profit rate, the pioneering firm had at the beginning (formula_35), but it is still higher than the old prevailing general rate of profit: formula_36 larger than formula_37.
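Both equilibria can be reproduced numerically (an illustrative sketch assuming SciPy is available; the function and variable names are ad hoc) by solving the two price equations for formula_10 and formula_12 before and after the new technique has spread through department I:
from scipy.optimize import fsolve

def price_system(unknowns, a11, a21, a12, a22, w):
    p1, r = unknowns
    eq1 = (a11 * p1 + a21 * w) * (1.0 + r) - p1     # department I, with p2 = 1
    eq2 = (a12 * p1 + a22 * w) * (1.0 + r) - 1.0    # department II
    return [eq1, eq2]

w = 2.0   # real wage rate: two pieces of consumption goods per hour of labour

# old technique: a11 = 0.8, a21 = 0.1, a12 = 0.4, a22 = 0.1
p1_old, r_old = fsolve(price_system, [1.0, 0.1], args=(0.8, 0.1, 0.4, 0.1, w))
print(round(p1_old, 2), round(r_old, 4))   # about 1.78 and 0.0961

# new technique generally adopted in department I: a11 = 0.85, a21 = 0.05
p1_new, r_new = fsolve(price_system, [1.0, 0.1], args=(0.85, 0.05, 0.4, 0.1, w))
print(round(p1_new, 2), round(r_new, 4))   # about 1.77 and 0.1030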
Result.
Nobuo Okishio proved this generally, which can be interpreted as a refutation of Marx's law of the tendency of the rate of profit to fall. This proof has also been confirmed if the model is extended to include not only circulating capital but also fixed capital. Mechanisation, defined as increased inputs of machinery per unit of output combined with the same or reduced amount of labour-input, necessarily lowers the maximum rate of profit.
Marxist responses.
Some Marxists simply dropped the law of the tendency of the rate of profit to fall, claiming that there are enough other reasons to criticise capitalism, that the tendency for crises can be established without the law, so that it is not an essential feature of Marx's economic theory. Others would say that the law helps to explain the recurrent cycle of crises, but cannot be used as a tool to explain the long term developments of the capitalist economy.
Others argued that Marx's law holds if one assumes a constant ‘’wage share’’ instead of a constant real wage ‘’rate’’. Then, the prisoner's dilemma works like this: The first firm to introduce technical progress by increasing its outlay for constant capital achieves an extra profit. But as soon as this new technique has spread through the branch and all firms have increased their outlays for constant capital also, workers adjust wages in proportion to the higher productivity of labour. The outlays for constant capital having increased, wages having been increased now also, this means that for all firms the rate of profit is lower.
However, Marx did not know the law of a constant wage share. Mathematically the rate of profit could always be stabilised by decreasing the wage share. In our example, for instance, the rise of the rate of profit goes hand in hand with a decrease of the wage share from formula_38 to formula_39, see computations below. However, a reduction in the wage share is not possible in neoclassical models due to the assumption that wages equal the marginal product of labour.
A third response is to reject the whole framework of the Sraffa-models, especially the comparative static method. In a capitalist economy entrepreneurs do not wait until the economy has reached a new equilibrium path but the introduction of new production techniques is an ongoing process. Marx's law could be valid if an ever-larger portion of production is invested per working place instead of in new additional working places. Such an ongoing process cannot be described by the comparative static method of the Sraffa models.
According to Alfred Müller the Okishio theorem could be true, if there was a coordination amongst capitalists for the whole economy, a centrally planned capitalist economy, which is a contradiction in itself. In a capitalist economy, in which means of production are private property, economy-wide planning is not possible. The individual capitalists follow their individual interests and do not cooperate to achieve a general high rate of growth or rate of profit.
Model in physical terms.
Dual system of equations.
Up to now it was sufficient to describe only monetary variables. In order to expand the analysis to compute for instance the value of constant capital "c", variable capital "v" and surplus value (or profit) "s" for the economy as a whole or to compute the ratios between these magnitudes like rate of surplus value "s"/"v" or value composition of capital, it is necessary to know the relative size of one department with respect to the other. If both departments I (investment goods) and II (consumption goods) are to grow continuously in equilibrium there must be a certain proportion of size between these two departments. This proportion can be found by modelling continuous growth on the physical (or material) level in opposition to the monetary level.
In the equations above, a general rate of profit, equal for all branches, was computed given the coefficients of production and the real wage rate, whereby a price had to be arbitrarily determined as numéraire. In this case the price formula_11 for the consumption good formula_1 was set equal to 1 (numéraire) and the price for the investment good formula_0 was then computed. Thus, in money terms, the conditions for steady growth were established.
General equations.
To establish this steady growth also in terms of the material level, the following must hold:
formula_40
formula_41
Thus, an additional magnitude K must be determined, which describes the relative size of the two branches I and II whereby I has a weight of 1 and department II has the weight of "K".
If it is assumed that total profits are used for investment in order to produce more in the next period of production on the given technical level, then the rate of profit "r" is equal to the rate of growth "g".
Numerical examples.
In the first numerical example with rate of profit formula_42 we have:
formula_43
formula_44
The weight of department II is formula_45.
For the second numerical example with rate of profit formula_46 we get:
formula_47
formula_48
Now, the weight of department II is formula_49. The rates of growth "g" are equal to the rates of profit "r", respectively.
In each of the two numerical examples, the left-hand side of the first equation gives the input of formula_0 and the left-hand side of the second equation gives the input of formula_1. On the right-hand side of the first equation is the output of one unit of formula_0, and on the right-hand side of the second equation the output of K units of formula_1.
The input of formula_0 multiplied by the price formula_10 gives the monetary value of constant capital "c". Multiplication of input formula_1 with the set price formula_17 gives the monetary value of variable capital v. One unit of output formula_0 and K units of output formula_1 multiplied by their prices formula_10 and formula_11 respectively gives total sales of the economy "c" + "v" + "s".
Subtracting from total sales the value of constant capital plus variable capital ("c" + "v") gives profits "s".
Now the value composition of capital "c"/"v", the rate of surplus value "s"/"v", and the "wage share" "v"/("s" + "v") can be computed.
With the first example the wage share is formula_38 and with the second example formula_39. The rates of surplus value are, respectively, 0.706 and 1.389. The value composition of capital "c"/"v" is in the first example 6.34 and in the second 12.49. According to the formula
formula_50
for the two numerical examples rates of profit can be computed, giving formula_37 and formula_36, respectively. These are the same rates of profit as were computed directly in monetary terms.
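These value ratios can likewise be reproduced numerically (an illustrative sketch assuming SciPy is available; the names are ad hoc) by solving the price system and the physical system of the first example and then forming "c", "v" and "s":
from scipy.optimize import fsolve

a11, a21, a12, a22, w = 0.8, 0.1, 0.4, 0.1, 2.0   # first numerical example, with p2 = 1

def price_system(unknowns):
    p1, r = unknowns
    return [(a11 * p1 + a21 * w) * (1.0 + r) - p1,
            (a12 * p1 + a22 * w) * (1.0 + r) - 1.0]

def physical_system(unknowns):
    K, g = unknowns
    return [(a11 + K * a12) * (1.0 + g) - 1.0,
            (a21 * w + K * a22 * w) * (1.0 + g) - K]

p1, r = fsolve(price_system, [1.0, 0.1])
K, g = fsolve(physical_system, [0.3, 0.1])

c = a11 * p1 + K * a12 * p1      # constant capital of departments I and II
v = a21 * w + K * a22 * w        # variable capital of departments I and II (p2 = 1)
s = (p1 + K) - c - v             # total sales minus c and v gives surplus value

print(round(K, 4), round(g, 4))                                  # about 0.2808 and 0.0961
print(round(c / v, 2), round(s / v, 3), round(v / (s + v), 3))   # about 6.34, 0.706 and 0.586
print(round((s / v) / (c / v + 1.0), 4))                         # the rate of profit, about 0.0961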
Comparative static analysis.
The problem with these examples is that they are based on comparative statics. The comparison is between different economies each on an equilibrium growth path. Models of dis-equilibrium lead to other results. If capitalists raise the technical composition of capital because thereby the rate of profit is raised, this might lead to an ongoing process in which the economy has not enough time to reach a new equilibrium growth path. There is a continuing process of increasing the technical composition of capital to the detriment of job creation resulting at least on the labour market in stagnation. The law of the tendency of the rate of profit to fall nowadays usually is interpreted in terms of disequilibrium analysis, not the least in reaction to the Okishio critique.
David Laibman and Okishio's theorem.
Between 1999 and 2004, David Laibman, a Marxist economist, published at least nine pieces dealing with the Temporal single-system interpretation (TSSI) of Marx's value theory.
His "The Okishio Theorem and Its Critics" was the first published response to the temporalist critique of Okishio's theorem. The theorem was widely thought to have disproved Karl Marx's law of the tendential fall in the rate of profit, but proponents of the TSSI claim that the Okishio theorem is false and that their work refutes it. Laibman argued that the theorem is true and that TSSI research does not refute it.
In his lead paper in a symposium carried in "Research in Political Economy" in 1999, Laibman's key argument was that the falling rate of profit exhibited in Kliman (1996) depended crucially on the paper's assumption that there is fixed capital which lasts forever. Laibman claimed that if there is any depreciation or premature scrapping of old, less productive, fixed capital: (1) productivity will increase, which will cause the temporally determined value rate of profit to rise; (2) this value rate of profit will therefore "converge toward" Okishio's material rate of profit; and thus (3) this value rate "is governed by" the material rate of profit.
These and other arguments were answered in Alan Freeman and Andrew Kliman's (2000) lead paper in a second symposium, published the following year in the same journal. In his response, Laibman chose not to defend claims (1) through (3). He instead put forward a "Temporal-Value Profit-Rate Tracking Theorem" that he described as "propos[ing] that [the temporally determined value rate of profit] must "eventually" follow the trend of [Okishio's material rate of profit]" The "Tracking Theorem" states, in part: "If the material rate [of profit] rises to an asymptote, the value rate either falls to an asymptote, or first falls and then rises to an asymptote permanently below the material rate" Kliman argues that this statement "contradicts claims (1) through (3) as well as Laibman's characterization of the 'Tracking Theorem.' If the physical [i.e. material] rate of profit rises forever, while the value rate of profit falls forever, the value rate is certainly not following the trend of the physical [i.e. material] rate, not even eventually."
In the same paper, Laibman claimed that Okishio's theorem was true, even though the path of the temporally determined value rate of profit can diverge forever from the path of Okishio's material rate of profit. He wrote, "If a viable technical change is made, and the real wage rate is constant, the new MATERIAL rate of profit must be higher than the old one. That is all that Okishio, or Roemer, or Foley, or I, or anyone else has ever claimed!" In other words, proponents of the Okishio theorem have always been talking about how the rate of profit would behave only in the case in which input and output prices happened to be equal. Kliman and Freeman suggested that this statement of Laibman's was simply "an effort to absolve the physicalist tradition of error." Okishio's theorem, they argued, has always been understood as a disproof of Marx's law of the tendential fall in the rate of profit, and Marx's law does not pertain to an imaginary special case in which input and output prices happen for some reason to be equal. | [
{
"math_id": 0,
"text": "x_1"
},
{
"math_id": 1,
"text": "x_2"
},
{
"math_id": 2,
"text": "a_{11}"
},
{
"math_id": 3,
"text": "a_{21}"
},
{
"math_id": 4,
"text": "a_{12}"
},
{
"math_id": 5,
"text": "a_{22}"
},
{
"math_id": 6,
"text": "w \\cdot a_{21}"
},
{
"math_id": 7,
"text": "w \\cdot a_{22}"
},
{
"math_id": 8,
"text": "(a_{11} x_1 p_1 + a_{21} w x_1 p_2) (1+r) = x_1 p_1"
},
{
"math_id": 9,
"text": "(a_{12} x_2 p_1 + a_{22} w x_2 p_2) (1+r) = x_2 p_2"
},
{
"math_id": 10,
"text": "p_1"
},
{
"math_id": 11,
"text": "p_2"
},
{
"math_id": 12,
"text": "r"
},
{
"math_id": 13,
"text": "a_{11} x_1 p_1"
},
{
"math_id": 14,
"text": "a_{21} w x_1 p_2"
},
{
"math_id": 15,
"text": "a_{12} x_2p_1"
},
{
"math_id": 16,
"text": "a_{22} w x_2 p_2."
},
{
"math_id": 17,
"text": "p_2 = 1"
},
{
"math_id": 18,
"text": "w = 2 p_2 = 2."
},
{
"math_id": 19,
"text": "a_{11}=0.8"
},
{
"math_id": 20,
"text": "a_{21}=0.1"
},
{
"math_id": 21,
"text": "a_{12}=0.4"
},
{
"math_id": 22,
"text": "a_{22}=0.1"
},
{
"math_id": 23,
"text": "p_1= 1.78"
},
{
"math_id": 24,
"text": "r = 0.0961 = 9.61 \\%"
},
{
"math_id": 25,
"text": "(0.8 \\cdot 1 \\cdot 1.78 + 0.1 \\cdot 2 \\cdot 1 \\cdot 1) \\cdot (1+0.0961) = 1 \\cdot 1.78"
},
{
"math_id": 26,
"text": "(0.4 \\cdot 1 \\cdot 1.78 + 0.1 \\cdot 2 \\cdot 1 \\cdot 1) \\cdot (1+0.0961) = 1 \\cdot 1"
},
{
"math_id": 27,
"text": "= (0.8 \\cdot 1 \\cdot 1.78 + 0{,}1 \\cdot 2 \\cdot 1 \\cdot 1) \\cdot (1+0.0961) = 1 \\cdot 1.78"
},
{
"math_id": 28,
"text": "a_{21}=0.05"
},
{
"math_id": 29,
"text": "a_{11}=0.85"
},
{
"math_id": 30,
"text": "= (0.85 \\cdot 1 \\cdot 1.78 + 0.05 \\cdot 2 \\cdot 1 \\cdot 1) \\cdot (1+0.1036) = 1 \\cdot 1.78"
},
{
"math_id": 31,
"text": "r = 9{,}61 \\%"
},
{
"math_id": 32,
"text": "10{,}36 \\%"
},
{
"math_id": 33,
"text": "(0.85 \\cdot 1 \\cdot 1.77 + 0.05 \\cdot 2 \\cdot 1 \\cdot 1) \\cdot (1+0.1030) = 1 \\cdot 1.77"
},
{
"math_id": 34,
"text": "(0.4 \\cdot 1 \\cdot 1.77 + 0.1 \\cdot 2 \\cdot 1 \\cdot 1) \\cdot (1+0.1030) = 1 \\cdot 1"
},
{
"math_id": 35,
"text": "10.36 \\%"
},
{
"math_id": 36,
"text": "10.30 \\%"
},
{
"math_id": 37,
"text": "9.61 \\%"
},
{
"math_id": 38,
"text": "58.6 \\%"
},
{
"math_id": 39,
"text": "41.9 \\%"
},
{
"math_id": 40,
"text": "(a_{11} x_1 + K a_{12} x_2) (1+g) = x_1"
},
{
"math_id": 41,
"text": "(a_{21} w \\cdot x_1 + K a_{22} \\cdot w x_2) (1+g) = K x_2"
},
{
"math_id": 42,
"text": "r = 9.61 \\%"
},
{
"math_id": 43,
"text": "(0.8 \\cdot 1 + 0.2808 \\cdot 0.4 \\cdot 1) \\cdot (1+0.0961) = 1"
},
{
"math_id": 44,
"text": "(0.1 \\cdot 2 \\cdot 1 + 0.2808 \\cdot 0.1 \\cdot 2 \\cdot 1) \\cdot (1+0.0961) = 0.2808 \\cdot 1"
},
{
"math_id": 45,
"text": "K = 0.2808"
},
{
"math_id": 46,
"text": "r = 10.30 \\%"
},
{
"math_id": 47,
"text": "(0.85 \\cdot 1 + 0.14154 \\cdot 0.4 \\cdot 1) \\cdot (1+0.1030) = 1"
},
{
"math_id": 48,
"text": "(0.1 \\cdot 2 \\cdot 1 + 0.14154 \\cdot 0.05 \\cdot 2 \\cdot 1) (1+0.1030) = 0.14154 \\cdot 1"
},
{
"math_id": 49,
"text": "K = 0.14154"
},
{
"math_id": 50,
"text": " \\text{Rate of profit }p = {{s \\over v} \\over {{c \\over v} + 1}}"
}
]
| https://en.wikipedia.org/wiki?curid=7133142 |
7133473 | Commutation matrix | In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix K("m","n") is the "nm" × "mn" matrix which, for any "m" × "n" matrix A, transforms vec(A) into vec(AT):
K("m","n") vec(A) = vec(AT) .
Here vec(A) is the "mn" × 1 column vector obtained by stacking the columns of A on top of one another:
formula_1
where A = [A"i","j"]. In other words, vec(A) is the vector obtained by vectorizing A in column-major order. Similarly, vec(AT) is the vector obtained by vectorizing A in row-major order.
In the context of quantum information theory, the commutation matrix is sometimes referred to as the swap matrix or swap operator
formula_5
formula_6
This property is often used in developing the higher order statistics of Wishart covariance matrices.
formula_7
This property is the reason that this matrix is referred to as the "swap operator" in the context of quantum information theory.
formula_8
formula_9
Where the "p,q" entry of "n x m" block-matrix K"i,j" is given by
formula_10
For example,
formula_11
Code.
For both square and rectangular matrices of codice_0 rows and codice_1 columns, the commutation matrix can be generated by the code below.
Python.
import numpy as np
def comm_mat(m, n):
    # determine permutation applied by K
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    # apply this permutation to the rows (i.e. to each column) of identity matrix and return result
    return np.eye(m * n)[w, :]
Alternatively, a version without imports:
def delta(i, j):
    return int(i == j)
def comm_mat(m, n):
    # determine permutation applied by K
    v = [m * j + i for i in range(m) for j in range(n)]
    # apply this permutation to the rows (i.e. to each column) of identity matrix
    I = [[delta(i, j) for j in range(m * n)] for i in range(m * n)]
    return [I[i] for i in v]
MATLAB.
function P = com_mat(m, n)
% determine permutation applied by K
A = reshape(1:m*n, m, n);
v = reshape(A', 1, []);
% apply this permutation to the rows (i.e. to each column) of identity matrix
P = eye(m*n);
P = P(v,:);
R.
comm_mat = function(m, n){
  # determine permutation applied by K
  i = 1:(m * n)
  j = NULL
  for (k in 1:m) {
    j = c(j, m * 0:(n-1) + k)
  }
  # build K as a sparse permutation matrix with ones at positions (i, j)
  Matrix::sparseMatrix(
    i = i, j = j, x = 1
  )
}
Example.
Let formula_0 denote the following formula_12 matrix:
formula_13
formula_0 has the following column-major and row-major vectorizations (respectively):
formula_14
The associated commutation matrix is
formula_15
(where each formula_16 denotes a zero). As expected, the following holds:
formula_17
formula_18
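As a quick illustrative check (added here), the Python function comm_mat defined above reproduces this example numerically; NumPy is assumed available.
import numpy as np
A = np.array([[1, 4], [2, 5], [3, 6]])            # the 3 x 2 matrix A of the example
K = comm_mat(3, 2)                                # commutation matrix K^(3,2) built by the code above
v_col = A.flatten(order="F")                      # vec(A)   = [1, 2, 3, 4, 5, 6]
v_row = A.T.flatten(order="F")                    # vec(A^T) = [1, 4, 2, 5, 3, 6]
assert np.array_equal(K @ v_col, v_row)           # K vec(A) = vec(A^T)
assert np.array_equal(K.T @ K, np.eye(6))         # K is a permutation matrix, hence orthogonal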
References.
<templatestyles src="Reflist/styles.css" />
[[Category:Linear algebra]]
[[Category:Matrices]]
[[Category:Articles with example Python (programming language) code]]
[[Category:Articles with example MATLAB/Octave code]] | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "\\operatorname{vec}(\\mathbf{A}) = [\\mathbf{A}_{1,1}, \\ldots, \\mathbf{A}_{m,1}, \\mathbf{A}_{1,2}, \\ldots, \\mathbf{A}_{m,2}, \\ldots, \\mathbf{A}_{1,n}, \\ldots, \\mathbf{A}_{m,n}]^{\\mathrm{T}}"
},
{
"math_id": 2,
"text": "\\mathbf P_\\pi"
},
{
"math_id": 3,
"text": "\\pi"
},
{
"math_id": 4,
"text": "\\{1,\\dots,mn\\}"
},
{
"math_id": 5,
"text": "\n\\pi(i + m(j-1)) = j + n(i-1), \\quad i = 1,\\dots,m, \\quad j = 1,\\dots,n.\n"
},
{
"math_id": 6,
"text": "\\mathbf{K}^{(r, m)} (\\mathbf{A} \\otimes \\mathbf{B}) \\mathbf{K}^{(n, q)} = \\mathbf{B} \\otimes \\mathbf{A}."
},
{
"math_id": 7,
"text": " \\mathbf{K}^{(r,m)}(\\mathbf v \\otimes \\mathbf w) = \\mathbf w \\otimes \\mathbf v."
},
{
"math_id": 8,
"text": "\\mathbf{K}^{(r, m)} = \\sum_{i=1}^r \\sum_{j=1}^m \\left(\\mathbf{e}_{r,i} {\\mathbf{e}_{m,j}}^{\\mathrm{T}}\\right) \\otimes \\left(\\mathbf{e}_{m,j} {\\mathbf{e}_{r,i}}^{\\mathrm{T}}\\right)\n= \n\\sum_{i=1}^r \\sum_{j=1}^m \n\\left(\\mathbf{e}_{r,i} \\otimes \\mathbf{e}_{m,j}\\right) \n\\left( \\mathbf{e}_{m,j} \\otimes \\mathbf{e}_{r,i}\\right)^{\\mathrm{T}}\n."
},
{
"math_id": 9,
"text": " \n\\mathbf{K}^{(m,n)} = \\begin{bmatrix}\n\\mathbf{K}_{1,1} & \\cdots & \\mathbf{K}_{1,n}\\\\\n\\vdots & \\ddots & \\vdots\\\\\n\\mathbf{K}_{m,1} & \\cdots & \\mathbf{K}_{m,n},\n\\end{bmatrix},\n"
},
{
"math_id": 10,
"text": "\n\\mathbf K_{ij}(p,q) = \\begin{cases}\n1 & i=q \\text{ and } j = p,\\\\\n0 & \\text{otherwise}.\n\\end{cases}\n"
},
{
"math_id": 11,
"text": "\n\\mathbf K^{(3,4)} = \n\\left[\\begin{array}{ccc|ccc|ccc|ccc}\n1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\\\\n\\hline\n0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\\\\n\\hline\n0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\end{array}\\right].\n"
},
{
"math_id": 12,
"text": "3 \\times 2"
},
{
"math_id": 13,
"text": "\nA = \n\\begin{bmatrix} \n 1 & 4 \\\\ \n 2 & 5 \\\\\n 3 & 6 \\\\\n\\end{bmatrix}.\n"
},
{
"math_id": 14,
"text": "\n\\mathbf v_{\\text{col}} = \\operatorname{vec}(A) = \n\\begin{bmatrix} \n 1 \\\\ \n 2 \\\\\n 3 \\\\\n 4 \\\\ \n 5 \\\\ \n 6 \\\\ \n\\end{bmatrix} \n, \\quad \\mathbf v_{\\text{row}} = \\operatorname{vec}(A^{\\mathrm T}) =\n\\begin{bmatrix} \n 1 \\\\ \n 4 \\\\\n 2 \\\\\n 5 \\\\ \n 3 \\\\ \n 6 \\\\ \n\\end{bmatrix}. "
},
{
"math_id": 15,
"text": " K = \\mathbf K^{(3,2)} = \\begin{bmatrix}\n 1 & \\cdot & \\cdot & \\cdot & \\cdot & \\cdot \\\\\n \\cdot & \\cdot & \\cdot & 1 & \\cdot & \\cdot \\\\\n \\cdot & 1 & \\cdot & \\cdot & \\cdot & \\cdot \\\\\n \\cdot & \\cdot & \\cdot & \\cdot & 1 & \\cdot \\\\\n \\cdot & \\cdot & 1 & \\cdot & \\cdot & \\cdot \\\\\n \\cdot & \\cdot & \\cdot & \\cdot & \\cdot & 1 \\\\\n \\end{bmatrix},\n"
},
{
"math_id": 16,
"text": "\\cdot"
},
{
"math_id": 17,
"text": " K^\\mathrm{T} K = KK^\\mathrm{T} =\\mathbf I_6 "
},
{
"math_id": 18,
"text": " K \\mathbf v_{\\text{col}} = \\mathbf v_{\\text{row}} "
}
]
| https://en.wikipedia.org/wiki?curid=7133473 |
713354 | Epicycloid | Plane curve traced by a point on a circle rolled around another circle
In geometry, an epicycloid (also called hypercycloid) is a plane curve produced by tracing the path of a chosen point on the circumference of a circle—called an "epicycle"—which rolls without slipping around a fixed circle. It is a particular kind of roulette.
An epicycloid with a minor radius (R2) of 0 is a circle. This is a degenerate form.
Equations.
If the smaller circle has radius formula_0, and the larger circle has radius formula_1, then the parametric equations for the curve can be given by either:
formula_2
or:
formula_3
This can be written in a more concise form using complex numbers as
formula_4
where formula_5 and the larger circle has radius formula_6.
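As an illustrative sketch (added here, not part of the standard presentation), the curve can be sampled numerically from the second parametric form above using NumPy:
import numpy as np
def epicycloid(r, k, num=1000):
    # x = r(k+1)cos(t) - r cos((k+1)t),  y = r(k+1)sin(t) - r sin((k+1)t)
    t = np.linspace(0.0, 2.0 * np.pi, num)
    x = r * (k + 1) * np.cos(t) - r * np.cos((k + 1) * t)
    y = r * (k + 1) * np.sin(t) - r * np.sin((k + 1) * t)
    return x, y
# example: k = 2 gives a nephroid (two cusps) traced by a rolling circle of radius r = 1
x, y = epicycloid(1.0, 2)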
Area and Arc Length.
(Assuming the initial point lies on the larger circle.) When formula_7 is a positive integer, the area formula_8 and arc length formula_9 of this epicycloid are
formula_10
formula_11
This means that the epicycloid encloses an area formula_12 times that of the original stationary circle.
If formula_7 is a positive integer, then the curve is closed, and has k cusps (i.e., sharp corners).
If formula_7 is a rational number, say formula_13 expressed as an irreducible fraction, then the curve has formula_14 cusps.
If formula_7 is an irrational number, then the curve never closes, and forms a dense subset of the space between the larger circle and a circle of radius formula_15.
The distance formula_16 from the origin to the point formula_14 on the small circle varies up and down as
formula_17
where formula_18 is the radius of the fixed circle and formula_19 is the diameter of the rolling circle.
The epicycloid is a special kind of epitrochoid.
An epicycle with one cusp is a cardioid, two cusps is a nephroid.
An epicycloid and its evolute are similar.
Proof.
We assume that the position of formula_14 is what we want to solve for, formula_20 is the angle from the tangential point to the moving point formula_14, and formula_21 is the angle from the starting point to the tangential point.
Since there is no sliding between the two circles, we have that
formula_22
By the definition of angle (which is the ratio of arc length to radius), we have that
formula_23
and
formula_24.
From these two conditions, we get the identity
formula_25.
By calculating, we get the relation between formula_20 and formula_21, which is
formula_26.
From the figure, we see the position of the point formula_14 on the small circle clearly.
formula_27
formula_28 | [
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "R = kr"
},
{
"math_id": 2,
"text": "\\begin{align}\n& x (\\theta) = (R + r) \\cos \\theta \\ - r \\cos \\left( \\frac{R + r}{r} \\theta \\right) \\\\\n& y (\\theta) = (R + r) \\sin \\theta \\ - r \\sin \\left( \\frac{R + r}{r} \\theta \\right)\n\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}\n& x (\\theta) = r (k + 1) \\cos \\theta - r \\cos \\left( (k + 1) \\theta \\right) \\\\\n& y (\\theta) = r (k + 1) \\sin \\theta - r \\sin \\left( (k + 1) \\theta \\right).\n\\end{align}"
},
{
"math_id": 4,
"text": "z(\\theta) = r \\left( (k + 1)e^{ i\\theta} - e^{i(k+1)\\theta} \\right) "
},
{
"math_id": 5,
"text": "\\theta \\in [0, 2\\pi],"
},
{
"math_id": 6,
"text": "kr"
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "s"
},
{
"math_id": 10,
"text": "A=(k+1)(k+2)\\pi r^2,"
},
{
"math_id": 11,
"text": "s=8(k+1)r."
},
{
"math_id": 12,
"text": "\\frac{(k+1)(k+2)}{k^2}"
},
{
"math_id": 13,
"text": "k = p/q"
},
{
"math_id": 14,
"text": "p"
},
{
"math_id": 15,
"text": "R + 2r"
},
{
"math_id": 16,
"text": "\\overline{OP}"
},
{
"math_id": 17,
"text": "R \\leq \\overline{OP} \\leq R+2r "
},
{
"math_id": 18,
"text": "R"
},
{
"math_id": 19,
"text": "2r"
},
{
"math_id": 20,
"text": "\\alpha"
},
{
"math_id": 21,
"text": "\\theta"
},
{
"math_id": 22,
"text": "\\ell_R=\\ell_r"
},
{
"math_id": 23,
"text": "\\ell_R= \\theta R"
},
{
"math_id": 24,
"text": "\\ell_r= \\alpha r"
},
{
"math_id": 25,
"text": "\\theta R=\\alpha r"
},
{
"math_id": 26,
"text": "\\alpha =\\frac{R}{r} \\theta"
},
{
"math_id": 27,
"text": " x=\\left( R+r \\right)\\cos \\theta -r\\cos\\left( \\theta+\\alpha \\right) =\\left( R+r \\right)\\cos \\theta -r\\cos\\left( \\frac{R+r}{r}\\theta \\right)"
},
{
"math_id": 28,
"text": "y=\\left( R+r \\right)\\sin \\theta -r\\sin\\left( \\theta+\\alpha \\right) =\\left( R+r \\right)\\sin \\theta -r\\sin\\left( \\frac{R+r}{r}\\theta \\right)"
}
]
| https://en.wikipedia.org/wiki?curid=713354 |
71337783 | N = 1 supersymmetric Yang–Mills theory | Supersymmetric generalization of Yang–Mills
In theoretical physics, more specifically in quantum field theory and supersymmetry, supersymmetric Yang–Mills, also known as super Yang–Mills and abbreviated to SYM, is a supersymmetric generalization of Yang–Mills theory, which is a gauge theory that plays an important part in the mathematical formulation of forces in particle physics. It is a special case of 4D N = 1 global supersymmetry.
Super Yang–Mills was studied by Julius Wess and Bruno Zumino in which they demonstrated the supergauge-invariance of the theory and wrote down its action, alongside the action of the Wess–Zumino model, another early supersymmetric field theory.
The treatment in this article largely follows that of Figueroa-O'Farrill's lectures on supersymmetry and of Tong.
While N = 4 supersymmetric Yang–Mills theory is also a supersymmetric Yang–Mills theory, it has very different properties to formula_0 supersymmetric Yang–Mills theory, which is the theory discussed in this article. The formula_1 supersymmetric Yang–Mills theory was studied by Seiberg and Witten in Seiberg–Witten theory. All three theories are based in formula_2 super Minkowski spaces.
The supersymmetric Yang–Mills action.
Preliminary treatment.
A first treatment can be done without defining superspace, instead defining the theory in terms of familiar fields in non-supersymmetric quantum field theory.
Spacetime and matter content.
The base spacetime is flat spacetime (Minkowski space).
SYM is a gauge theory, and there is an associated gauge group formula_3 to the theory. The gauge group has associated Lie algebra formula_4.
The field content then consists of
For gauge-invariance, the gauge field formula_5 is necessarily massless. This means its superpartner formula_6 is also massless if supersymmetry is to hold. Therefore formula_6 can be written in terms of two Weyl spinors which are conjugate to one another: formula_8, and the theory can be formulated in terms of the Weyl spinor field formula_9 instead of formula_6.
Supersymmetric pure electromagnetic theory.
When formula_10, the conceptual difficulties simplify somewhat, and this is in some sense the simplest gauge theory. The field content is simply a (co-)vector field formula_5, a Majorana spinor formula_6 and an auxiliary real scalar field formula_7.
The field strength tensor is defined as usual as formula_12.
The Lagrangian written down by Wess and Zumino is then
formula_13
This can be generalized to include a coupling constant formula_14, and theta term formula_15, where formula_16 is the dual field strength tensor
formula_17
and formula_18 is the alternating tensor or totally antisymmetric tensor. If we also replace the field formula_6 with the Weyl spinor formula_9, then a supersymmetric action can be written as
Supersymmetric Maxwell theory (preliminary form)
formula_19
This can be viewed as a supersymmetric generalization of a pure formula_20 gauge theory, also known as Maxwell theory or pure electromagnetic theory.
Supersymmetric Yang–Mills theory (preliminary treatment).
In full generality, we must define the gluon field strength tensor,
formula_21
and the covariant derivative of the adjoint Weyl spinor,
formula_22
To write down the action, an invariant inner product on formula_4 is needed: the Killing form formula_23 is such an inner product, and in a typical abuse of notation we write formula_24 simply as formula_25, suggestive of the fact that the invariant inner product arises as the trace in some representation of formula_4.
Supersymmetric Yang–Mills then readily generalizes from supersymmetric Maxwell theory. A simple version is
formula_26
while a more general version is given by
Supersymmetric Yang–Mills theory (preliminary form)
formula_27
Superspace treatment.
Superspace and superfield content.
The base superspace is formula_0 super Minkowski space.
The theory is defined in terms of a single adjoint-valued real superfield formula_28, fixed to be in Wess–Zumino gauge.
Supersymmetric Maxwell theory on superspace.
The theory is defined in terms of a superfield arising from taking covariant derivatives of formula_28:
formula_29.
The supersymmetric action is then written down, with a complex coupling constant formula_30, as
Supersymmetric Maxwell theory (superspace form)
formula_31
where h.c. indicates the Hermitian conjugate of the preceding term.
Supersymmetric Yang–Mills on superspace.
For non-abelian gauge theory, instead define
formula_32
and formula_33. Then the action is
Supersymmetric Yang–Mills theory (superspace form)
formula_34
Symmetries of the action.
Supersymmetry.
For the simplified Yang–Mills action on Minkowski space (not on superspace), the supersymmetry transformations are
formula_35
formula_36
where formula_37.
For the Yang–Mills action on superspace, since formula_38 is chiral, so are fields built from formula_38. Then integrating over half of superspace, formula_39, gives a supersymmetric action.
An important observation is that the Wess–Zumino gauge is not a supersymmetric gauge, that is, it is not preserved by supersymmetry. However, it is possible to do a compensating gauge transformation to return to Wess–Zumino gauge. Then, after a supersymmetry transformation and the compensating gauge transformation, the superfields transform as
formula_40
formula_41
formula_42
Gauge symmetry.
The preliminary theory defined on spacetime is manifestly gauge invariant as it is built from terms studied in non-supersymmetric gauge theory which are gauge invariant.
The superfield formulation requires a theory of generalized gauge transformations. (Not supergauge transformations, which would be transformations in a theory with local supersymmetry).
Generalized abelian gauge transformations.
Such a transformation is parametrized by a chiral superfield formula_43, under which the real superfield transforms as
formula_44
In particular, upon expanding formula_28 and formula_43 appropriately into constituent superfields, then formula_28 contains a vector superfield formula_5 while formula_43 contains a scalar superfield formula_45, such that
formula_46
The chiral superfield used to define the action,
formula_47
is gauge invariant.
Generalized non-abelian gauge transformations.
The chiral superfield is adjoint valued. The transformation of formula_28 is prescribed by
formula_48,
from which the transformation for formula_28 can be derived using the Baker–Campbell–Hausdorff formula.
The chiral superfield formula_49 is not invariant but transforms by conjugation:
formula_50,
so that upon tracing in the action, the action is gauge-invariant.
Extra classical symmetries.
Superconformal symmetry.
As a classical theory, supersymmetric Yang–Mills theory admits a larger set of symmetries, described at the algebra level by the superconformal algebra. Just as the super Poincaré algebra is a supersymmetric extension of the Poincaré algebra, the superconformal algebra is a supersymmetric extension of the conformal algebra which also contains a spinorial generator of conformal supersymmetry formula_51.
Conformal invariance is broken in the quantum theory by trace and conformal anomalies.
While the quantum formula_0 supersymmetric Yang–Mills theory does not have superconformal symmetry, quantum N = 4 supersymmetric Yang–Mills theory does.
R-symmetry.
The formula_52 R-symmetry for formula_0 supersymmetry is a symmetry of the classical theory, but not of the quantum theory due to an anomaly.
Adding matter.
Abelian gauge.
Matter can be added in the form of Wess–Zumino model type superfields formula_11. Under a gauge transformation,
formula_53,
and instead of using just formula_54 as the Lagrangian as in the Wess–Zumino model, for gauge invariance it must be replaced with formula_55
This gives a supersymmetric analogue to QED. The action can be written
formula_56
For formula_57 flavours, we instead have formula_57 superfields formula_58, and the action can be written
formula_59
with implicit summation.
However, for a well-defined quantum theory, a theory such as that defined above suffers a gauge anomaly. We are obliged to add a partner formula_60 to each chiral superfield formula_11 (distinct from the idea of superpartners, and from conjugate superfields), which has opposite charge. This gives the action
formula_61
Non-Abelian gauge.
For non-abelian gauge, matter chiral superfields formula_11 are now valued in a representation formula_62 of the gauge group:
formula_63.
The Wess–Zumino kinetic term must be adjusted to formula_64.
Then a simple SQCD action would be to take formula_62 to be the fundamental representation, and add the Wess–Zumino term:
formula_65.
More general and detailed forms of the super QCD action are given in that article.
Fayet–Iliopoulos term.
When the center of the Lie algebra formula_4 is non-trivial, there is an extra term which can be added to the action known as the Fayet–Iliopoulos term.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{N} = 1"
},
{
"math_id": 1,
"text": "\\mathcal{N} = 2"
},
{
"math_id": 2,
"text": "d = 4"
},
{
"math_id": 3,
"text": "G"
},
{
"math_id": 4,
"text": "\\mathfrak{g}"
},
{
"math_id": 5,
"text": "A_\\mu"
},
{
"math_id": 6,
"text": "\\Psi"
},
{
"math_id": 7,
"text": "D"
},
{
"math_id": 8,
"text": "\\Psi = (\\lambda, \\bar \\lambda)"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "G = U(1)"
},
{
"math_id": 11,
"text": "\\Phi"
},
{
"math_id": 12,
"text": "F_{\\mu\\nu} := \\partial_\\mu A_\\nu - \\partial_\\nu A_\\mu"
},
{
"math_id": 13,
"text": "\\mathcal{L} = - \\frac{1}{4}F_{\\mu\\nu}F^{\\mu\\nu} - \\frac{i}{2}\\bar\\Psi\\gamma^\\mu \\partial_\\mu\\Psi + \\frac{1}{2}D^2."
},
{
"math_id": 14,
"text": "e"
},
{
"math_id": 15,
"text": "\\propto \\vartheta F_{\\mu\\nu}*F^{\\mu\\nu}"
},
{
"math_id": 16,
"text": "*F^{\\mu\\nu}"
},
{
"math_id": 17,
"text": "*F^{\\mu\\nu} = \\frac{1}{2}\\epsilon^{\\mu\\nu\\rho\\sigma}F_{\\rho\\sigma}."
},
{
"math_id": 18,
"text": "\\epsilon^{\\mu\\nu\\rho\\sigma}"
},
{
"math_id": 19,
"text": "S_{\\text{SMaxwell}} = \\int d^4x \\left[-\\frac{1}{4e^2}F_{\\mu\\nu}F^{\\mu\\nu} + \\frac{\\vartheta}{32\\pi^2}F_{\\mu\\nu}*F^{\\mu\\nu} - \\frac{i}{e^2}\\lambda\\sigma^\\mu\\partial_\\mu \\bar\\lambda + \\frac{1}{2e^2}D^2\\right] "
},
{
"math_id": 20,
"text": "U(1)"
},
{
"math_id": 21,
"text": " F_{\\mu\\nu} = \\partial_\\mu A_\\nu - \\partial_\\nu A_\\mu - i[A_\\mu, A_\\nu]"
},
{
"math_id": 22,
"text": " D_\\mu \\lambda = \\partial_\\mu\\lambda - i[A_\\mu, \\lambda]."
},
{
"math_id": 23,
"text": "B(\\cdot, \\cdot)"
},
{
"math_id": 24,
"text": "B"
},
{
"math_id": 25,
"text": "\\text{Tr}"
},
{
"math_id": 26,
"text": "S_{\\text{SYM}} = \\int d^4x \\text{Tr}\\left[-\\frac{1}{4}F_{\\mu\\nu}F^{\\mu\\nu} - \\frac{1}{2}\\bar\\Psi \\gamma^\\mu D_\\mu \\Psi\\right] "
},
{
"math_id": 27,
"text": "S_{\\text{SYM}} = \\int d^4x \\text{Tr}\\left[-\\frac{1}{2g^2}F_{\\mu\\nu}F^{\\mu\\nu} + \\frac{\\vartheta}{16\\pi^2}F_{\\mu\\nu}*F^{\\mu\\nu} - \\frac{2i}{g^2}\\lambda\\sigma^\\mu D_\\mu \\bar\\lambda + \\frac{1}{g^2}D^2\\right] "
},
{
"math_id": 28,
"text": "V"
},
{
"math_id": 29,
"text": "W_\\alpha = -\\frac{1}{4}\\mathcal{\\bar D^2}\\mathcal{D}_\\alpha V"
},
{
"math_id": 30,
"text": "\\tau = \\frac{\\vartheta}{2\\pi} + \\frac{4\\pi i}{e}"
},
{
"math_id": 31,
"text": "S_{\\text{SMaxwell}} = -\\int d^4x \\left[\\int d^2 \\theta \\frac{i\\tau}{16\\pi}W^\\alpha W_\\alpha + \\text{h.c.}\\right] "
},
{
"math_id": 32,
"text": "W_\\alpha = -\\frac{1}{8}\\bar\\mathcal{D}^2(e^{-2V}\\mathcal{D}_\\alpha e^{2V})"
},
{
"math_id": 33,
"text": "\\tau = \\frac{\\vartheta}{2\\pi} + \\frac{4\\pi i}{g}"
},
{
"math_id": 34,
"text": "S_{\\text{SYM}} = -\\int d^4x \\text{Tr}\\left[\\int d^2 \\theta \\frac{i\\tau}{8\\pi}W^\\alpha W_\\alpha + \\text{h.c.}\\right] "
},
{
"math_id": 35,
"text": " \\delta_\\epsilon A_\\mu = \\bar\\epsilon \\gamma_\\mu \\Psi "
},
{
"math_id": 36,
"text": " \\delta_\\epsilon \\Psi = -\\frac{1}{2} F_{\\mu\\nu}\\gamma^{\\mu\\nu}\\epsilon "
},
{
"math_id": 37,
"text": "\\gamma^{\\mu\\nu} = \\frac{1}{2}(\\gamma^\\mu \\gamma^\\nu - \\gamma^\\nu \\gamma^\\mu)"
},
{
"math_id": 38,
"text": "W_\\alpha"
},
{
"math_id": 39,
"text": "\\int d^2\\theta"
},
{
"math_id": 40,
"text": "\\delta A_\\mu = \\epsilon \\sigma_\\mu \\bar \\lambda + \\lambda \\sigma_\\mu \\bar \\epsilon,"
},
{
"math_id": 41,
"text": "\\delta \\lambda = \\epsilon D + (\\sigma^{\\mu\\nu}\\epsilon)F_{\\mu\\nu}"
},
{
"math_id": 42,
"text": "\\delta D = i\\epsilon \\sigma^\\mu\\partial_\\mu \\bar \\lambda - i \\partial_\\mu \\lambda \\bar \\sigma^\\mu \\bar \\epsilon."
},
{
"math_id": 43,
"text": "\\Omega"
},
{
"math_id": 44,
"text": "V \\mapsto V + i(\\Omega - \\Omega^\\dagger)."
},
{
"math_id": 45,
"text": "\\omega"
},
{
"math_id": 46,
"text": " A_\\mu \\mapsto A_\\mu - 2 \\partial_\\mu (\\text{Re}\\,\\omega) =: A_\\mu + \\partial_\\mu \\alpha."
},
{
"math_id": 47,
"text": " W_\\alpha = -\\frac{1}{4} \\bar\\mathcal{D}^2 \\mathcal{D}_\\alpha V,"
},
{
"math_id": 48,
"text": "e^{2V} \\mapsto e^{-2i\\Omega^\\dagger}e^{2V}e^{2i\\Omega}"
},
{
"math_id": 49,
"text": "W_\\alpha = -\\frac{1}{8} \\bar\\mathcal{D}^2(e^{-2V} \\mathcal{D}_\\alpha e^{2V})"
},
{
"math_id": 50,
"text": "W_\\alpha \\mapsto e^{2i\\Omega}W_\\alpha e^{-2i\\Omega}"
},
{
"math_id": 51,
"text": "S_\\alpha"
},
{
"math_id": 52,
"text": "\\text{U}(1)"
},
{
"math_id": 53,
"text": " \\Phi \\mapsto \\exp(- 2iq\\Omega)\\Phi "
},
{
"math_id": 54,
"text": "\\Phi^\\dagger \\Phi"
},
{
"math_id": 55,
"text": "\\Phi^\\dagger e^{2q V} \\Phi."
},
{
"math_id": 56,
"text": " S_{\\text{SMaxwell}} + \\int d^4x \\, \\int d^4\\theta \\, \\Phi^\\dagger e^{2qV} \\Phi."
},
{
"math_id": 57,
"text": "N_f"
},
{
"math_id": 58,
"text": "\\Phi_i"
},
{
"math_id": 59,
"text": " S_{\\text{SMaxwell}} + \\int d^4x \\, \\int d^4\\theta \\, \\Phi_i^\\dagger e^{2q_iV} \\Phi_i."
},
{
"math_id": 60,
"text": "\\tilde \\Phi"
},
{
"math_id": 61,
"text": " S_{\\text{SQED}} = S_{\\text{SMaxwell}} + \\int d^4x \\, \\int d^4\\theta \\, \\Phi_i^\\dagger e^{2q_iV} \\Phi_i + \\tilde\\Phi_i^\\dagger e^{-2q_iV} \\tilde\\Phi_i."
},
{
"math_id": 62,
"text": "R"
},
{
"math_id": 63,
"text": "\\Phi \\mapsto \\exp(-2i\\Omega)\\Phi"
},
{
"math_id": 64,
"text": "\\Phi^\\dagger e^{2V} \\Phi"
},
{
"math_id": 65,
"text": "S_{\\text{SYM}} + \\int d^4x \\, d^4\\theta \\, \\Phi^\\dagger e^{2V} \\Phi"
}
]
| https://en.wikipedia.org/wiki?curid=71337783 |
71343520 | Kaniadakis Erlang distribution | Continuous probability distribution
The Kaniadakis Erlang distribution (or κ-Erlang Gamma distribution) is a family of continuous statistical distributions, which is a particular case of the κ-Gamma distribution, when formula_0 and formula_1 positive integer. The first member of this family is the κ-exponential distribution of Type I. The κ-Erlang is a κ-deformed version of the Erlang distribution. It is one example of a Kaniadakis distribution.
Characterization.
Probability density function.
The Kaniadakis "κ"-Erlang distribution has the following probability density function:
formula_2
valid for formula_3 and formula_4, where formula_5 is the entropic index associated with the Kaniadakis entropy.
The ordinary Erlang Distribution is recovered as formula_6.
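As an illustrative numerical sketch (not from the original text), the density can be evaluated in Python once the Kaniadakis exponential is supplied; the code below assumes the standard form exp_κ(x) = (√(1 + κ²x²) + κx)^(1/κ).
import math
def exp_kappa(x, kappa):
    # assumed Kaniadakis exponential; reduces to the ordinary exponential for kappa = 0
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)
def kappa_erlang_pdf(x, n, kappa):
    # normalization (1/(n-1)!) * prod_{m=0}^{n} [1 + (2m - n) kappa], as in the density above
    norm = math.prod(1.0 + (2 * m - n) * kappa for m in range(n + 1)) / math.factorial(n - 1)
    return norm * x ** (n - 1) * exp_kappa(-x, kappa)
# sanity check: for kappa = 0 and n = 2 this reduces to the ordinary Erlang density x exp(-x)
print(kappa_erlang_pdf(1.5, 2, 0.0), 1.5 * math.exp(-1.5))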
Cumulative distribution function.
The cumulative distribution function of "κ"-Erlang distribution assumes the form:
formula_7
valid for formula_3, where formula_5. The cumulative Erlang distribution is recovered in the classical limit formula_6.
Survival distribution and hazard functions.
The survival function of the "κ"-Erlang distribution is given by:
formula_8
The survival function of the "κ"-Erlang distribution enables the determination of hazard functions in closed form through the solution of the "κ"-rate equation:
formula_9
where formula_10 is the hazard function.
Family distribution.
A family of "κ"-distributions arises from the "κ"-Erlang distribution, each associated with a specific value of formula_11, valid for formula_12 and formula_5. Such members are determined from the "κ"-Erlang cumulative distribution, which can be rewritten as:
formula_13
where
formula_14
formula_15
with
formula_16
formula_17
formula_18
formula_19
formula_20
First member.
The first member (formula_21) of the "κ"-Erlang family is the "κ"-Exponential distribution of type I, in which the probability density function and the cumulative distribution function are defined as:
formula_22
formula_23
Second member.
The second member (formula_24) of the "κ"-Erlang family has the probability density function and the cumulative distribution function defined as:
formula_25
formula_26
Third member.
The third member (formula_27) has the probability density function and the cumulative distribution function defined as:
formula_28
formula_29
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha = 1"
},
{
"math_id": 1,
"text": "\\nu = n = "
},
{
"math_id": 2,
"text": " \nf_{_{\\kappa}}(x) = \\frac{1}{ (n - 1)! } \\prod_{m = 0}^n \\left[ 1 + (2m -n)\\kappa \\right] x^{n - 1} \\exp_\\kappa(-x)\n"
},
{
"math_id": 3,
"text": "x \\geq 0"
},
{
"math_id": 4,
"text": "n = \\textrm{positive} \\,\\,\\textrm{integer} "
},
{
"math_id": 5,
"text": "0 \\leq |\\kappa| < 1"
},
{
"math_id": 6,
"text": "\\kappa \\rightarrow 0"
},
{
"math_id": 7,
"text": "F_\\kappa(x) = \\frac{1}{ (n - 1)! } \\prod_{m = 0}^n \\left[ 1 + (2m -n)\\kappa \\right] \\int_0^x z^{n - 1} \\exp_\\kappa(-z) dz "
},
{
"math_id": 8,
"text": "S_\\kappa(x) = 1 - \\frac{1}{ (n - 1)! } \\prod_{m = 0}^n \\left[ 1 + (2m -n)\\kappa \\right] \\int_0^x z^{n - 1} \\exp_\\kappa(-z) dz "
},
{
"math_id": 9,
"text": "\\frac{ S_\\kappa(x) }{ dx } = -h_\\kappa S_\\kappa(x) "
},
{
"math_id": 10,
"text": "h_\\kappa"
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "x \\ge 0"
},
{
"math_id": 13,
"text": "F_\\kappa(x) = 1 - \\left[ R_\\kappa(x) + Q_\\kappa(x) \\sqrt{1 + \\kappa^2 x^2} \\right] \\exp_\\kappa(-x) "
},
{
"math_id": 14,
"text": "Q_\\kappa(x) = N_\\kappa \\sum_{m=0}^{n-3} \\left( m + 1 \\right) c_{m+1} x^m + \\frac{N_\\kappa}{1-n^2\\kappa^2} x^{n-1} "
},
{
"math_id": 15,
"text": "R_\\kappa(x) = N_\\kappa \\sum_{m=0}^{n} c_{m} x^m "
},
{
"math_id": 16,
"text": "N_\\kappa = \\frac{1}{ (n - 1)! } \\prod_{m = 0}^n \\left[ 1 + (2m -n)\\kappa \\right] "
},
{
"math_id": 17,
"text": "c_n = \\frac{ n\\kappa^2 }{ 1 - n^2 \\kappa^2} "
},
{
"math_id": 18,
"text": "c_{n - 1} =0 "
},
{
"math_id": 19,
"text": "c_{n - 2} = \\frac{ n - 1 }{ (1 - n^2 \\kappa^2) [1 - (n-2)^2\\kappa^2]} "
},
{
"math_id": 20,
"text": "c_m = \\frac{ (m + 1)(m+2) }{ 1 - m^2 \\kappa^2} c_{m+2} \\quad \\textrm{for} \\quad 0 \\leq m \\leq n-3 "
},
{
"math_id": 21,
"text": "n = 1"
},
{
"math_id": 22,
"text": " \nf_{_{\\kappa}}(x) = (1 - \\kappa^2) \\exp_\\kappa(-x)\n"
},
{
"math_id": 23,
"text": "F_\\kappa(x) = 1-\\Big(\\sqrt{1+\\kappa^2 x^2} + \\kappa^2 x \\Big)\\exp_k({-x)}\n"
},
{
"math_id": 24,
"text": "n = 2"
},
{
"math_id": 25,
"text": " \nf_{_{\\kappa}}(x) = (1 - 4\\kappa^2)\\,x \\,\\exp_\\kappa(-x)\n"
},
{
"math_id": 26,
"text": "F_\\kappa(x) = 1-\\left(2\\kappa^2 x^2 + 1 + x\\sqrt{1+\\kappa^2 x^2} \\right) \\exp_k({-x)}\n"
},
{
"math_id": 27,
"text": "n = 3"
},
{
"math_id": 28,
"text": " \nf_{_{\\kappa}}(x) = \\frac{1}{2} (1 - \\kappa^2) (1 - 9\\kappa^2)\\,x^2 \\,\\exp_\\kappa(-x)\n"
},
{
"math_id": 29,
"text": "F_\\kappa(x) = 1-\\left\\{ \\frac{3}{2} \\kappa^2(1 - \\kappa^2)x^3 + x + \\left[ 1 + \\frac{1}{2}(1-\\kappa^2)x^2 \\right] \\sqrt{1+\\kappa^2 x^2}\\right\\} \\exp_\\kappa(-x)\n"
},
{
"math_id": 30,
"text": "\\kappa = 0"
}
]
| https://en.wikipedia.org/wiki?curid=71343520 |
71350420 | Gauss separation algorithm | Carl Friedrich Gauss, in his treatise "Allgemeine Theorie des Erdmagnetismus", presented a method, the Gauss separation algorithm, of partitioning the magnetic field vector, Bformula_0, measured over the surface of a sphere into two components, internal and external, arising from electric currents (per the Biot–Savart law) flowing in the volumes interior and exterior to the spherical surface, respectively. The method employs spherical harmonics. When radial currents flow through the surface of interest, the decomposition is more complex, involving the decomposition of the field into poloidal and toroidal components. In this case, an additional term (the toroidal component) accounts for the contribution of the radial current to the magnetic field on the surface.
The method is commonly used in studies of terrestrial and planetary magnetism, to relate measurements of magnetic fields either at the planetary surface or in orbit above the planet to currents flowing in the planet's interior (internal currents) and its magnetosphere (external currents). Ionospheric currents would be exterior to the planet's surface, but might be internal currents from the vantage point of a satellite orbiting the planet.
{
"math_id": 0,
"text": "(r, \\theta, \\phi)"
}
]
| https://en.wikipedia.org/wiki?curid=71350420 |
7135084 | Randomized response | Randomised response is a research method used in structured survey interview. It was first proposed by S. L. Warner in 1965 and later modified by B. G. Greenberg and coauthors in 1969. It allows respondents to respond to sensitive issues (such as criminal behavior or sexuality) while maintaining confidentiality. Chance decides, unknown to the interviewer, whether the question is to be answered truthfully, or "yes", regardless of the truth.
For example, social scientists have used it to ask people whether they use drugs, whether they have illegally installed telephones, or whether they have evaded paying taxes. Before abortions were legal, social scientists used the method to ask women whether they had had abortions.
The concept is somewhat similar to plausible deniability. Plausible deniability allows the subject to credibly say that they did not make a statement, while the randomized response technique allows the subject to credibly say that they had not been truthful when making a statement.
Example.
With a coin.
A person is asked if they had sex with a prostitute this month. Before they answer, they flip a coin. They are then instructed to answer "yes" if the coin comes up tails, and truthfully, if it comes up heads. Only they know whether their answer reflects the toss of the coin or their true experience. It is very important to assume that people who get heads will answer truthfully, otherwise the surveyor is not able to speculate.
Half the people—or half the questionnaire population—get tails and the other half get heads when they flip the coin. Therefore, half of those people will answer "yes" regardless of whether they have done it. The other half will answer truthfully according to their experience. So whatever proportion of the group said "no", the true proportion that did not have sex with a prostitute is double that, based on the assumption that the two halves are probably close to the same because the sample is large and randomized. For example, if 20% of the population surveyed said "no", then the true fraction that did not have sex with a prostitute is 40%.
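A small illustrative simulation (added here, not part of the original survey literature) shows how doubling the observed "no" share recovers the true share:
import random
def simulate_coin_design(true_no_rate, num_respondents=100_000, seed=0):
    rng = random.Random(seed)
    observed_no = 0
    for _ in range(num_respondents):
        tails = rng.random() < 0.5                     # tails: forced "yes"
        truly_no = rng.random() < true_no_rate
        if not tails and truly_no:                     # only truthful (heads) respondents can say "no"
            observed_no += 1
    return 2 * observed_no / num_respondents           # estimate of the true "no" proportion
print(simulate_coin_design(0.40))                      # should be close to 0.40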
With cards.
The same question can be asked with three cards which are unmarked on one side, and bear a question on the other side. The cards are randomly mixed, and laid in front of the subject. The subject takes one card, turns it over, and answers the question on it truthfully with either "yes" or "no".
The researcher does not know which question has been asked.
Under the assumption that the "yes" and "no" answers to the control questions cancel each other out, the number of subjects who have had sex with a prostitute is triple that of all "yes" answers in excess of the "no" answers.
Original version.
Warner's original version (1965) is slightly different: The sensitive question is worded in two dichotomous alternatives, and chance decides, unknown to the interviewer, which one is to be answered honestly. The interviewer gets a "yes" or "no" without knowing which of the two questions it answers. For mathematical reasons chance cannot be "fair" (1/2 and 1/2). Let formula_0 be the probability of answering the sensitive question and formula_1 the true proportion of those interviewed bearing the embarrassing property; then the proportion of "yes"-answers formula_2 is composed as follows:
formula_3
Transformed to yield EP:
formula_4
Example.
The interviewed are asked to secretly throw a die and answer the first question only if they throw a 6, otherwise the second question (formula_5). The "yes"-answers are now composed of consumers who have thrown a 6 and non-consumers who have thrown a different number. Let the result be 75 "yes"-answers out of 100 interviewed (formula_6).
Inserted into the formula, you get
formula_7
If all interviewed have answered honestly then their true proportion of consumers is 1/8 (= 12.5%).
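The die example can likewise be checked with a short illustrative simulation (added here), which simply applies formula_4 to simulated answers:
import random
def warner_estimate(true_ep, p=1/6, num_respondents=100_000, seed=1):
    # p is the probability of being directed to the sensitive question (here: throwing a 6)
    rng = random.Random(seed)
    yes_answers = 0
    for _ in range(num_respondents):
        sensitive = rng.random() < p
        consumer = rng.random() < true_ep
        # honest answer: "yes" to the sensitive question if a consumer,
        # "yes" to the complementary question if not a consumer
        if (sensitive and consumer) or (not sensitive and not consumer):
            yes_answers += 1
    ya = yes_answers / num_respondents
    return (ya + p - 1) / (2 * p - 1)                  # EP = (YA + p - 1) / (2p - 1)
print(warner_estimate(1/8))                            # should be close to 0.125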
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "EP"
},
{
"math_id": 2,
"text": "YA"
},
{
"math_id": 3,
"text": "YA = p\\times EP + (1 - p)(1 - EP)"
},
{
"math_id": 4,
"text": "EP = \\frac{YA + p - 1}{2p - 1}"
},
{
"math_id": 5,
"text": "p=\\tfrac{1}{6}"
},
{
"math_id": 6,
"text": "YA=\\tfrac{3}{4}"
},
{
"math_id": 7,
"text": "EP = (\\tfrac{3}{4} + \\tfrac{1}{6} - 1) / (2\\times \\tfrac{1}{6} - 1) = \\tfrac{1}{8}"
}
]
| https://en.wikipedia.org/wiki?curid=7135084 |
7136284 | Duplication and elimination matrices | In mathematics, especially in linear algebra and matrix theory, the duplication matrix and the elimination matrix are linear transformations used for transforming half-vectorizations of matrices into vectorizations or (respectively) vice versa.
Duplication matrix.
The duplication matrix formula_0 is the unique formula_1 matrix which, for any formula_2 symmetric matrix formula_3, transforms formula_4 into formula_5:
formula_6.
For the formula_7 symmetric matrix formula_8, this transformation reads
formula_9
The explicit formula for calculating the duplication matrix for a formula_10 matrix is:
formula_11
Where: formula_12 is a unit vector of order formula_13 having the value formula_14 in the position formula_15 and zero elsewhere; formula_16 is a formula_17 matrix with one in positions formula_18 and formula_19 and zero elsewhere.
Here is a C++ function using Armadillo (C++ library):
arma::mat duplication_matrix(const int &n) {
  arma::mat out((n*(n+1))/2, n*n, arma::fill::zeros);
  for (int j = 0; j < n; ++j) {
    for (int i = j; i < n; ++i) {
      arma::vec u((n*(n+1))/2, arma::fill::zeros);
      u(j*n+i-((j+1)*j)/2) = 1.0;
      arma::mat T(n,n, arma::fill::zeros);
      T(i,j) = 1.0;
      T(j,i) = 1.0;
      out += u * arma::trans(arma::vectorise(T));
    }
  }
  return out.t();
}
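An equivalent sketch in Python with NumPy (added for illustration; it mirrors the construction above), together with a check of the defining property formula_6:
import numpy as np
def duplication_matrix(n):
    # same construction as the Armadillo code: D_n^T = sum over i >= j of u_ij vec(T_ij)^T
    out = np.zeros((n * (n + 1) // 2, n * n))
    for j in range(n):
        for i in range(j, n):
            u = np.zeros(n * (n + 1) // 2)
            u[j * n + i - ((j + 1) * j) // 2] = 1.0
            T = np.zeros((n, n))
            T[i, j] = 1.0
            T[j, i] = 1.0
            out += np.outer(u, T.flatten(order="F"))
    return out.T
A = np.array([[1.0, 2.0], [2.0, 4.0]])                 # a symmetric 2 x 2 matrix
vech_A = np.array([1.0, 2.0, 4.0])                     # [a, b, d]
assert np.array_equal(duplication_matrix(2) @ vech_A, A.flatten(order="F"))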
Elimination matrix.
An elimination matrix formula_20 is a formula_21 matrix which, for any formula_10 matrix formula_22, transforms formula_5 into formula_4:
formula_23.
By the explicit (constructive) definition given by , the formula_24 by formula_25 elimination matrix formula_20 is given by
formula_26
where formula_27 is a unit vector whose formula_28-th element is one and zeros elsewhere, and formula_29.
Here is a C++ function using Armadillo (C++ library):
arma::mat elimination_matrix(const int &n) {
  arma::mat out((n*(n+1))/2, n*n, arma::fill::zeros);
  for (int j = 0; j < n; ++j) {
    arma::rowvec e_j(n, arma::fill::zeros);
    e_j(j) = 1.0;
    for (int i = j; i < n; ++i) {
      arma::vec u((n*(n+1))/2, arma::fill::zeros);
      u(j*n+i-((j+1)*j)/2) = 1.0;
      arma::rowvec e_i(n, arma::fill::zeros);
      e_i(i) = 1.0;
      out += arma::kron(u, arma::kron(e_j, e_i));
    }
  }
  return out;
}
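Again, an equivalent NumPy sketch (illustrative, mirroring the Armadillo code) with a check of formula_23:
import numpy as np
def elimination_matrix(n):
    # same construction as the Armadillo code: L_n = sum over i >= j of u_ij (e_j^T kron e_i^T)
    out = np.zeros((n * (n + 1) // 2, n * n))
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0
        for i in range(j, n):
            u = np.zeros(n * (n + 1) // 2)
            u[j * n + i - ((j + 1) * j) // 2] = 1.0
            e_i = np.zeros(n)
            e_i[i] = 1.0
            out += np.outer(u, np.kron(e_j, e_i))
    return out
A = np.array([[1.0, 2.0], [3.0, 4.0]])                 # an arbitrary 2 x 2 matrix
vech_A = np.array([1.0, 3.0, 4.0])                     # [a, c, d]
assert np.array_equal(elimination_matrix(2) @ A.flatten(order="F"), vech_A)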
For the formula_30 matrix formula_31, one choice for this transformation is given by
formula_32. | [
{
"math_id": 0,
"text": " D_n "
},
{
"math_id": 1,
"text": "n^2 \\times \\frac{n(n+1)}{2}"
},
{
"math_id": 2,
"text": " n \\times n "
},
{
"math_id": 3,
"text": " A "
},
{
"math_id": 4,
"text": "\\mathrm{vech}(A)"
},
{
"math_id": 5,
"text": "\\mathrm{vec}(A)"
},
{
"math_id": 6,
"text": " D_n \\mathrm{vech}(A) = \\mathrm{vec}(A)"
},
{
"math_id": 7,
"text": "2 \\times 2"
},
{
"math_id": 8,
"text": "A=\\left[\\begin{smallmatrix} a & b \\\\ b & d \\end{smallmatrix}\\right]"
},
{
"math_id": 9,
"text": "D_n \\mathrm{vech}(A) = \\mathrm{vec}(A) \\implies \\begin{bmatrix} 1&0&0 \\\\ 0&1&0 \\\\ 0&1&0 \\\\ 0&0&1 \\end{bmatrix} \\begin{bmatrix} a \\\\ b \\\\ d \\end{bmatrix} = \\begin{bmatrix} a \\\\ b \\\\ b \\\\ d \\end{bmatrix}"
},
{
"math_id": 10,
"text": "n \\times n"
},
{
"math_id": 11,
"text": "D^T_n = \\sum \\limits_{i \\ge j} u_{ij} (\\mathrm{vec}T_{ij})^T"
},
{
"math_id": 12,
"text": " u_{ij} "
},
{
"math_id": 13,
"text": " \\frac{1}{2} n (n+1) "
},
{
"math_id": 14,
"text": "1"
},
{
"math_id": 15,
"text": "(j-1)n+i - \\frac{1}{2}j(j-1)"
},
{
"math_id": 16,
"text": " T_{ij} "
},
{
"math_id": 17,
"text": "n \\times n "
},
{
"math_id": 18,
"text": " (i,j) "
},
{
"math_id": 19,
"text": " (j,i) "
},
{
"math_id": 20,
"text": "L_n"
},
{
"math_id": 21,
"text": "\\frac{n(n+1)}{2} \\times n^2"
},
{
"math_id": 22,
"text": "A"
},
{
"math_id": 23,
"text": "L_n \\mathrm{vec}(A) = \\mathrm{vech}(A)"
},
{
"math_id": 24,
"text": "\\frac{1}{2}n(n+1)"
},
{
"math_id": 25,
"text": "n^2"
},
{
"math_id": 26,
"text": "L_n = \\sum_{i \\geq j} u_{ij} \\mathrm{vec}(E_{ij})^T = \\sum_{i \\geq j} (u_{ij}\\otimes e_j^T \\otimes e_i^T),"
},
{
"math_id": 27,
"text": "e_i"
},
{
"math_id": 28,
"text": "i"
},
{
"math_id": 29,
"text": "E_{ij} = e_ie_j^T"
},
{
"math_id": 30,
"text": "2 \\times 2 "
},
{
"math_id": 31,
"text": "A = \\left[\\begin{smallmatrix} a & b \\\\ c & d \\end{smallmatrix}\\right]"
},
{
"math_id": 32,
"text": "L_n \\mathrm{vec}(A) = \\mathrm{vech}(A) \\implies \\begin{bmatrix} 1&0&0&0 \\\\ 0&1&0&0 \\\\ 0&0&0&1 \\end{bmatrix} \\begin{bmatrix} a \\\\ c \\\\ b \\\\ d \\end{bmatrix} = \\begin{bmatrix} a \\\\ c \\\\ d \\end{bmatrix}"
}
]
| https://en.wikipedia.org/wiki?curid=7136284 |
71363144 | Zeldovich spontaneous wave | A Zeldovich spontaneous wave, also referred to as Zeldovich gradient mechanism, is a reaction wave that propagates spontaneously in a reacting medium with a nonuniform initial temperature distribution when there is no interaction between different fluid elements. The concept was put forward by Yakov Zeldovich in 1980, based on his earlier work with his coworkers. The spontaneous wave is different from the other two conventional combustion waves, namely the subsonic deflagrations and supersonic detonations. The wave, although strictly speaking unrealistic because gasdynamic effects are neglected, is often cited to explain the yet-unsolved problem of deflagration to detonation transition (DDT).
The mechanism behind the spontaneous wave is readily explained by considering a reaction medium at rest with a nonuniform temperature distribution such that the spatial temperature gradients are small or at least it is not sufficiently large (large temperature gradients will evidently lead to interactions between adjacent fluid elements via heat conduction). Corresponding to each fluid element with a definite temperature value, there is an adiabatic induction period, the time it takes to undergo thermal explosion in the absence of any heat loss mechanism. Thus, each fluid element will undergo thermal explosion at a definite time as if it is isolated from the rest of the gas. A sequence of these successive self-ignitions can be identified as some sort of a reaction front and tracked. The spontaneous wave is influenced by the initial condition and is independent of thermal conductivity and the speed of sound.
Description of the spontaneous reaction wave.
Let formula_0 be the initial temperature distribution, which is nontrivial, indicating that chemical reactions at different points in space proceed at different rates. To this distribution, we can associate a function formula_1, where formula_2 is the adiabatic induction period. Now, define in space some surface formula_3; if, for example, formula_4, then for each value of the constant this surface will be parallel to the formula_5-plane. Examine the change of position of this surface with the passage of time according to
formula_6
From this, we can easily extract the direction and the propagation speed of the spontaneous front. The direction of the wave is clearly normal to this surface which is given by formula_7 and the rate of propagation is just the magnitude of inverse of the gradient of formula_2:
formula_8
Note that adiabatic thermal runaways at different places are not causally connected events and therefore formula_9 can assume, in principle, any positive value. By comparing formula_9 with other relevant speeds, such as the deflagration speed formula_10, the sound speed formula_11 and the speed of the Chapman–Jouguet detonation wave formula_12, we can identify different regimes:
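As a rough numerical illustration (the profile and rate law below are assumptions made here, not part of the original analysis), formula_8 can be evaluated directly for a one-dimensional temperature distribution with an Arrhenius-type induction time:
import numpy as np
# assumed setup: linear initial temperature profile and t_ad(T) = A * exp(Ta / T)
A_pre, Ta = 1.0e-6, 1.5e4                      # pre-exponential factor [s], activation temperature [K]
x = np.linspace(0.0, 0.1, 1001)                # position [m]
T = 1200.0 - 500.0 * x                         # initial temperature [K], hottest at x = 0
t_ad = A_pre * np.exp(Ta / T)                  # adiabatic induction period at each point
u_sp = 1.0 / np.abs(np.gradient(t_ad, x))      # spontaneous wave speed u_sp = 1 / |d t_ad / d x|
print(u_sp[:3])                                # the wave starts at the hottest point and moves into colder gas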
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T(x,y,z)"
},
{
"math_id": 1,
"text": "t_{ad}(x,y,z)"
},
{
"math_id": 2,
"text": "t_{ad}"
},
{
"math_id": 3,
"text": "t_{ad}(x,y,z)=\\mathrm{const.}"
},
{
"math_id": 4,
"text": "T=T(x)"
},
{
"math_id": 5,
"text": "yz"
},
{
"math_id": 6,
"text": "t_{ad}(x,y,z)=t."
},
{
"math_id": 7,
"text": "\\nabla t_{ad}/|\\nabla t_{ad}|"
},
{
"math_id": 8,
"text": "\\mathbf{u}_{sp} = \\frac{\\nabla t_{ad}}{|\\nabla t_{ad}|^2}, \\quad u_{sp} = |\\mathbf{u}_{up}|=\\frac{1}{|\\nabla t_{ad}|}."
},
{
"math_id": 9,
"text": "u_{sp}"
},
{
"math_id": 10,
"text": "u_f"
},
{
"math_id": 11,
"text": "c"
},
{
"math_id": 12,
"text": "u_{CJ}"
},
{
"math_id": 13,
"text": "u_{sp}<u_f"
},
{
"math_id": 14,
"text": "t_1"
},
{
"math_id": 15,
"text": "x_{21}"
},
{
"math_id": 16,
"text": "t_2=t_1 + x_{21}/u_{sp}"
},
{
"math_id": 17,
"text": "u_f<u_{sp}\\ll c<u_{CJ}"
},
{
"math_id": 18,
"text": "u_{sp}\\ll c"
},
{
"math_id": 19,
"text": "c\\sim u_{sp}<u_{CJ}"
},
{
"math_id": 20,
"text": "u_{sp}>u_{CJ}"
}
]
| https://en.wikipedia.org/wiki?curid=71363144 |
7136985 | Burnett equations | In continuum mechanics, a branch of mathematics, the Burnett equations are a set of higher-order continuum equations for non-equilibrium flows and the transition regimes where the Navier–Stokes equations do not perform well.
They were derived by the English mathematician D. Burnett.
Series expansion.
Series expansion approach.
The series expansion technique used to derive the Burnett equations involves expanding the distribution function formula_0 in the Boltzmann equation as a power series in the Knudsen number formula_1:
formula_2
Here, formula_3 represents the Maxwell-Boltzmann equilibrium distribution function, dependent on the number density formula_4, macroscopic velocity formula_5, and temperature formula_6. The terms formula_7 etc., are higher-order corrections that account for non-equilibrium effects, with each subsequent term incorporating higher powers of the Knudsen number formula_1.
Derivation.
The first-order term formula_8 in the expansion gives the Navier-Stokes equations, which include terms for viscosity and thermal conductivity. To obtain the Burnett equations, one must retain terms up to second order, corresponding to formula_9. The Burnett equations include additional second-order derivatives of velocity, temperature, and density, representing more subtle effects of non-equilibrium gas dynamics.
The Burnett equations can be expressed as:
formula_10
Here, the "higher-order terms" involve second-order gradients of velocity and temperature, which are absent in the Navier-Stokes equations. These terms become significant in situations with high Knudsen numbers, where the assumptions of the Navier-Stokes framework break down.
Extensions.
The Onsager–Burnett equations (commonly referred to as OBurnett) form a superset of the Navier–Stokes equations and are second-order accurate in the Knudsen number.
Eq. (1)
formula_11
Eq. (2)
formula_12
Derivation.
Starting with the Boltzmann equation
formula_13
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nf"
},
{
"math_id": 1,
"text": "\nKn"
},
{
"math_id": 2,
"text": "\nf(r,c,t) = f^{(0)}(c|n,u,T) \\left[1 + K_n \\phi^{(1)}(c|n,u,T) + K_n^2 \\phi^{(2)}(c|n,u,T) + \\cdots \\right]"
},
{
"math_id": 3,
"text": "\nf^{(0)}(c|n,u,T)"
},
{
"math_id": 4,
"text": "\nn"
},
{
"math_id": 5,
"text": "\nu"
},
{
"math_id": 6,
"text": "\nT"
},
{
"math_id": 7,
"text": "\\phi^{(1)}, \\phi^{(2)},\n"
},
{
"math_id": 8,
"text": "\nf^{(1)}"
},
{
"math_id": 9,
"text": "\n\\phi^{(2)}\n"
},
{
"math_id": 10,
"text": "\n\\mathbf{u}_t + (\\mathbf{u} \\cdot \\nabla)\\mathbf{u} + \\nabla p = \\nabla \\cdot (\\nu \\nabla \\mathbf{u}) + \\text{higher-order terms}\n"
},
{
"math_id": 11,
"text": " \n \\sqrt{\\tau} \\frac{du^s}{ds} - \\frac{9}{8} \\alpha_1 u^* (\\frac{du^*}{ds})^2 = \\frac{\\tau}{u^*} - \\tau_0 -1 + u^* \n"
},
{
"math_id": 12,
"text": "\n\\frac{45}{16} \\sqrt{\\tau} \\frac{d \\tau}{ds} + \\frac{9}{4} \\gamma_1 \\tau (\\frac{du^*}{ds})^2 - \\frac{9}{4} \\Psi u^* \\frac{d \\tau}{ds} \\frac{du^*}{ds} = \\frac{3}{2} (\\tau - \\tau_0) - \\frac{1}{2} (1-u^*)^2 - \\tau_0(1-u^*)\n"
},
{
"math_id": 13,
"text": " \\frac{\\partial{f}}{\\partial{t}} + c_k \\partial{f}{x_k} + F_k \\partial{f}{c_k} = J(f, f_1) "
}
]
| https://en.wikipedia.org/wiki?curid=71367805 |
71369165 | Continuous poset | Partially ordered set
In order theory, a continuous poset is a partially ordered set in which every element is the directed supremum of elements approximating it.
Definitions.
Let formula_0 be two elements of a preordered set formula_1. Then we say that formula_2 approximates formula_3, or that formula_2 is way-below formula_3, if the following two equivalent conditions are satisfied.
For any directed set formula_4 such that formula_5, there is a formula_6 such that formula_7.
For any ideal formula_8 such that formula_9, formula_10.
If formula_2 approximates formula_3, we write formula_11. The approximation relation formula_12 is a transitive relation that is weaker than the original order, also antisymmetric if formula_13 is a partially ordered set, but not necessarily a preorder. It is a preorder if and only if formula_1 satisfies the ascending chain condition.
For any formula_14, let
formula_15
formula_16
Then formula_17 is an upper set, and formula_18 a lower set. If formula_13 is an upper-semilattice, formula_18 is a directed set (that is, formula_19 implies formula_20), and therefore an ideal.
A preordered set formula_1 is called a continuous preordered set if for any formula_14, the subset formula_18 is directed and formula_21.
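For a concrete (if degenerate) illustration added here, the way-below relation can be checked by brute force on a finite lattice; since every finite directed set contains its supremum, formula_12 then coincides with the underlying order. A minimal Python sketch:
from itertools import combinations
from math import gcd
elements = [1, 2, 3, 4, 6, 12]                      # divisors of 12 ordered by divisibility
def leq(a, b):
    return b % a == 0
def sup(subset):                                    # supremum = least common multiple
    s = 1
    for d in subset:
        s = s * d // gcd(s, d)
    return s
def directed(subset):                               # nonempty, every pair has an upper bound inside
    return len(subset) > 0 and all(
        any(leq(a, d) and leq(b, d) for d in subset) for a in subset for b in subset)
def way_below(a, b):                                # brute force over all directed subsets
    for r in range(1, len(elements) + 1):
        for D in combinations(elements, r):
            if directed(D) and leq(b, sup(D)) and not any(leq(a, d) for d in D):
                return False
    return True
assert all(way_below(a, b) == leq(a, b) for a in elements for b in elements)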
Properties.
The interpolation property.
For any two elements formula_0 of a continuous preordered set formula_1, formula_11 if and only if for any directed set formula_4 such that formula_5, there is a formula_6 such that formula_22. From this follows the interpolation property of the continuous preordered set formula_1: for any formula_0 such that formula_11 there is a formula_23 such that formula_24.
Continuous dcpos.
For any two elements formula_0 of a continuous dcpo formula_25, the following two conditions are equivalent.
Using this it can be shown that the following stronger interpolation property is true for continuous dcpos. For any formula_0 such that formula_11 and formula_26, there is a formula_23 such that formula_24 and formula_29.
For a dcpo formula_25, the following conditions are equivalent.
In this case, the actual left adjoint is
formula_31
formula_32
Continuous complete lattices.
For any two elements formula_33 of a complete lattice formula_34, formula_11 if and only if for any subset formula_35 such that formula_36, there is a finite subset formula_37 such that formula_38.
Let formula_34 be a complete lattice. Then the following conditions are equivalent.
A continuous complete lattice is often called a continuous lattice.
Examples.
Lattices of open sets.
For a topological space formula_44, the following conditions are equivalent.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a,b\\in P"
},
{
"math_id": 1,
"text": "(P,\\lesssim)"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "b"
},
{
"math_id": 4,
"text": "D\\subseteq P"
},
{
"math_id": 5,
"text": "b\\lesssim\\sup D"
},
{
"math_id": 6,
"text": "d\\in D"
},
{
"math_id": 7,
"text": "a\\lesssim d"
},
{
"math_id": 8,
"text": "I\\subseteq P"
},
{
"math_id": 9,
"text": "b\\lesssim\\sup I"
},
{
"math_id": 10,
"text": "a\\in I"
},
{
"math_id": 11,
"text": "a\\ll b"
},
{
"math_id": 12,
"text": "\\ll"
},
{
"math_id": 13,
"text": "P"
},
{
"math_id": 14,
"text": "a\\in P"
},
{
"math_id": 15,
"text": "\\mathop\\Uparrow a=\\{b\\in L\\mid a\\ll b\\}"
},
{
"math_id": 16,
"text": "\\mathop\\Downarrow a=\\{b\\in L\\mid b\\ll a\\}"
},
{
"math_id": 17,
"text": "\\mathop\\Uparrow a"
},
{
"math_id": 18,
"text": "\\mathop\\Downarrow a"
},
{
"math_id": 19,
"text": "b,c\\ll a"
},
{
"math_id": 20,
"text": "b\\vee c\\ll a"
},
{
"math_id": 21,
"text": "a=\\sup\\mathop\\Downarrow a"
},
{
"math_id": 22,
"text": "a\\ll d"
},
{
"math_id": 23,
"text": "c\\in P"
},
{
"math_id": 24,
"text": "a\\ll c\\ll b"
},
{
"math_id": 25,
"text": "(P,\\le)"
},
{
"math_id": 26,
"text": "a\\ne b"
},
{
"math_id": 27,
"text": "b\\le\\sup D"
},
{
"math_id": 28,
"text": "a\\ne d"
},
{
"math_id": 29,
"text": "a\\ne c"
},
{
"math_id": 30,
"text": "\\sup \\colon \\operatorname{Ideal}(P)\\to P"
},
{
"math_id": 31,
"text": "{\\Downarrow} \\colon P\\to\\operatorname{Ideal}(P)"
},
{
"math_id": 32,
"text": "\\mathord\\Downarrow\\dashv\\sup"
},
{
"math_id": 33,
"text": "a,b\\in L"
},
{
"math_id": 34,
"text": "L"
},
{
"math_id": 35,
"text": "A\\subseteq L"
},
{
"math_id": 36,
"text": "b\\le\\sup A"
},
{
"math_id": 37,
"text": "F\\subseteq A"
},
{
"math_id": 38,
"text": "a\\le\\sup F"
},
{
"math_id": 39,
"text": "\\sup \\colon \\operatorname{Ideal}(L)\\to L"
},
{
"math_id": 40,
"text": "\\mathcal D"
},
{
"math_id": 41,
"text": "\\textstyle\\inf_{D\\in\\mathcal D}\\sup D=\\sup_{f\\in\\prod\\mathcal D}\\inf_{D\\in\\mathcal D}f(D)"
},
{
"math_id": 42,
"text": "r \\colon \\{0,1\\}^\\kappa\\to\\{0,1\\}^\\kappa"
},
{
"math_id": 43,
"text": "\\{0,1\\}"
},
{
"math_id": 44,
"text": "X"
},
{
"math_id": 45,
"text": "\\operatorname{Open}(X)"
},
{
"math_id": 46,
"text": "\\operatorname{Top}"
},
{
"math_id": 47,
"text": "(-)\\times X\\colon\\operatorname{Top}\\to\\operatorname{Top}"
}
]
| https://en.wikipedia.org/wiki?curid=71369165 |
7136985 | Vectorization (mathematics) | Conversion of a matrix or a tensor to a vector
In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of a "m" × "n" matrix "A", denoted vec("A"), is the "mn" × 1 column vector obtained by stacking the columns of the matrix "A" on top of one another:
formula_0
Here, formula_1 represents the element in the "i"-th row and "j"-th column of "A", and the superscript formula_2 denotes the transpose. Vectorization expresses, through coordinates, the isomorphism formula_3 between these (i.e., of matrices and vectors) as vector spaces.
For example, for the 2×2 matrix formula_4, the vectorization is formula_5.
The connection between the vectorization of "A" and the vectorization of its transpose is given by the commutation matrix.
Compatibility with Kronecker products.
The vectorization is frequently used together with the Kronecker product to express matrix multiplication as a linear transformation on matrices. In particular,
formula_6
for matrices "A", "B", and "C" of dimensions "k"×"l", "l"×"m", and "m"×"n". For example, if formula_7 (the adjoint endomorphism of the Lie algebra gl("n", C) of all "n"×"n" matrices with complex entries), then formula_8, where formula_9 is the "n"×"n" identity matrix.
There are two other useful formulations:
formula_10
More generally, it has been shown that vectorization is a self-adjunction in the monoidal closed structure of any category of matrices.
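A quick numerical check of the identity formula_6 with NumPy (an illustrative snippet; the dimensions are chosen arbitrarily):
import numpy as np
rng = np.random.default_rng(0)
k, l, m, n = 2, 3, 4, 5
A = rng.normal(size=(k, l))
B = rng.normal(size=(l, m))
C = rng.normal(size=(m, n))
def vec(M):
    return M.flatten(order="F")                     # column-major stacking, as in the definition
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))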
Compatibility with Hadamard products.
Vectorization is an algebra homomorphism from the space of "n" × "n" matrices with the Hadamard (entrywise) product to C"n"2 with its Hadamard product:
formula_11
Compatibility with inner products.
Vectorization is a unitary transformation from the space of "n"×"n" matrices with the Frobenius (or Hilbert–Schmidt) inner product to C"n"2:
formula_12
where the superscript † denotes the conjugate transpose.
Vectorization as a linear sum.
The matrix vectorization operation can be written in terms of a linear sum. Let X be an "m" × "n" matrix that we want to vectorize, and let e"i" be the "i"-th canonical basis vector for the "n"-dimensional space, that is formula_13. Let B"i" be a ("mn") × "m" block matrix defined as follows:
formula_14
B"i" consists of "n" block matrices of size "m" × "m", stacked column-wise, and all these matrices are all-zero except for the "i"-th one, which is a "m" × "m" identity matrix I"m".
Then the vectorized version of X can be expressed as follows:
formula_15
Multiplication of X by e"i" extracts the "i"-th column, while multiplication by B"i" puts it into the desired position in the final vector.
Alternatively, the linear sum can be expressed using the Kronecker product:
formula_16
Half-vectorization.
For a symmetric matrix "A", the vector vec("A") contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with the lower triangular portion, that is, the "n"("n" + 1)/2 entries on and below the main diagonal. For such matrices, the half-vectorization is sometimes more useful than the vectorization. The half-vectorization, vech("A"), of a symmetric "n" × "n" matrix "A" is the "n"("n" + 1)/2 × 1 column vector obtained by vectorizing only the lower triangular part of "A":
formula_17
For example, for the 2×2 matrix formula_18, the half-vectorization is formula_19.
There exist unique matrices transforming the half-vectorization of a matrix to its vectorization and vice versa called, respectively, the duplication matrix and the elimination matrix.
Programming language.
Programming languages that implement matrices may have easy means for vectorization.
In Matlab/GNU Octave a matrix codice_0 can be vectorized by codice_1.
GNU Octave also allows vectorization and half-vectorization with codice_2 and codice_3 respectively. Julia has the codice_2 function as well.
In Python NumPy arrays implement the codice_5 method, while in R the desired effect can be achieved via the codice_6 or codice_7 functions. In R, function codice_8 of package 'ks' allows vectorization and function codice_9 implemented in both packages 'ks' and 'sn' allows half-vectorization.
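As a concrete illustration of the Python case, the following sketch computes vec and vech with NumPy (note that NumPy's default flattening is row-major, so column-major order must be requested explicitly; the vech helper defined here is written for this example and is not a built-in NumPy function):
import numpy as np
A = np.array([[1, 2],
              [3, 4]])
vec_A = A.flatten(order="F")   # column-major flatten: array([1, 3, 2, 4])
def vech(S):
    # Half-vectorization: stack the part of S on and below the diagonal, column by column.
    return S.T[np.triu_indices(S.shape[0])]
S = np.array([[1, 2],
              [2, 4]])
vech_S = vech(S)               # array([1, 2, 4])
print(vec_A, vech_S)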
Applications.
Vectorization is used in matrix calculus and its applications in establishing e.g., moments of random vectors and matrices, asymptotics, as well as Jacobian and Hessian matrices.
It is also used in local sensitivity and statistical diagnostics.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{vec}(A) = [a_{1,1}, \\ldots, a_{m,1}, a_{1,2}, \\ldots, a_{m,2}, \\ldots, a_{1,n}, \\ldots, a_{m,n}]^\\mathrm{T}"
},
{
"math_id": 1,
"text": "a_{i,j}"
},
{
"math_id": 2,
"text": "{}^\\mathrm{T}"
},
{
"math_id": 3,
"text": "\\mathbf{R}^{m \\times n} := \\mathbf{R}^m \\otimes \\mathbf{R}^n \\cong \\mathbf{R}^{mn}"
},
{
"math_id": 4,
"text": "A = \\begin{bmatrix} a & b \\\\ c & d \\end{bmatrix}"
},
{
"math_id": 5,
"text": "\\operatorname{vec}(A) = \\begin{bmatrix} a \\\\ c \\\\ b \\\\ d \\end{bmatrix}"
},
{
"math_id": 6,
"text": " \\operatorname{vec}(ABC) = (C^\\mathrm{T}\\otimes A) \\operatorname{vec}(B) "
},
{
"math_id": 7,
"text": " \\operatorname{ad}_A(X) = AX-XA"
},
{
"math_id": 8,
"text": "\\operatorname{vec}(\\operatorname{ad}_A(X)) = (I_n\\otimes A - A^\\mathrm{T} \\otimes I_n ) \\text{vec}(X)"
},
{
"math_id": 9,
"text": "I_n"
},
{
"math_id": 10,
"text": " \\begin{align}\n\\operatorname{vec}(ABC) &= (I_n\\otimes AB)\\operatorname{vec}(C) = (C^\\mathrm{T}B^\\mathrm{T}\\otimes I_k) \\operatorname{vec}(A) \\\\\n\\operatorname{vec}(AB) &= (I_m \\otimes A) \\operatorname{vec}(B) = (B^\\mathrm{T}\\otimes I_k) \\operatorname{vec}(A)\n\\end{align}"
},
{
"math_id": 11,
"text": "\\operatorname{vec}(A \\circ B) = \\operatorname{vec}(A) \\circ \\operatorname{vec}(B) ."
},
{
"math_id": 12,
"text": "\\operatorname{tr}(A^\\dagger B) = \\operatorname{vec}(A)^\\dagger \\operatorname{vec}(B),"
},
{
"math_id": 13,
"text": "\\mathbf{e}_i=\\left[0,\\dots,0,1,0,\\dots,0\\right]^\\mathrm{T}"
},
{
"math_id": 14,
"text": "\n\\mathbf{B}_i = \\begin{bmatrix}\n\\mathbf{0} \\\\\n\\vdots \\\\\n\\mathbf{0} \\\\\n\\mathbf{I}_m \\\\\n\\mathbf{0} \\\\\n\\vdots \\\\\n\\mathbf{0}\n\\end{bmatrix}\n= \\mathbf{e}_i \\otimes \\mathbf{I}_m\n"
},
{
"math_id": 15,
"text": "\\operatorname{vec}(\\mathbf{X}) = \\sum_{i=1}^n \\mathbf{B}_i \\mathbf{X} \\mathbf{e}_i"
},
{
"math_id": 16,
"text": "\\operatorname{vec}(\\mathbf{X}) = \\sum_{i=1}^n \\mathbf{e}_i \\otimes \\mathbf{X} \\mathbf{e}_i"
},
{
"math_id": 17,
"text": " \\operatorname{vech}(A) = [A_{1,1}, \\ldots, A_{n,1}, A_{2,2}, \\ldots, A_{n,2}, \\ldots, A_{n-1,n-1}, A_{n,n-1}, A_{n,n}]^\\mathrm{T}."
},
{
"math_id": 18,
"text": "A = \\begin{bmatrix} a & b \\\\ b & d \\end{bmatrix}"
},
{
"math_id": 19,
"text": "\\operatorname{vech}(A) = \\begin{bmatrix} a \\\\ b \\\\ d \\end{bmatrix}"
}
]
| https://en.wikipedia.org/wiki?curid=7136985 |
71380095 | Twin-width | The twin-width of an undirected graph is a natural number associated with the graph, used to study the parameterized complexity of graph algorithms. Intuitively, it measures how similar the graph is to a cograph, a type of graph that can be reduced to a single vertex by repeatedly merging together "twins", vertices that have the same neighbors. The twin-width is defined from a sequence of repeated mergers where the vertices are not required to be twins, but have nearly equal sets of neighbors.
Definition.
Twin-width is defined for finite simple undirected graphs. These have a finite set of vertices, and a set of edges that are unordered pairs of vertices. The open neighborhood of any vertex is the set of other vertices that it is paired with in edges of the graph; the closed neighborhood is formed from the open neighborhood by including the vertex itself. Two vertices are "true twins" when they have the same closed neighborhood, and "false twins" when they have the same open neighborhood; more generally, both true twins and false twins can be called twins, without qualification.
The cographs have many equivalent definitions, but one of them is that these are the graphs that can be reduced to a single vertex by a process of repeatedly finding any two twin vertices and merging them into a single vertex. For a cograph, this reduction process will always succeed, no matter which choice of twins to merge is made at each step. For a graph that is not a cograph, it will always get stuck in a subgraph with more than two vertices that has no twins.
The definition of twin-width mimics this reduction process. A "contraction sequence", in this context, is a sequence of steps, beginning with the given graph, in which each step replaces a pair of vertices by a single vertex. This produces a sequence of graphs, with edges colored red and black; in the given graph, all edges are assumed to be black. When two vertices are replaced by a single vertex, the neighborhood of the new vertex is the union of the neighborhoods of the replaced vertices. In this new neighborhood, an edge that comes from black edges in the neighborhoods of both vertices remains black; all other edges are colored red.
A contraction sequence is called a formula_0-sequence if, throughout the sequence, every vertex touches at most formula_0 red edges. The twin-width of a graph is the smallest value of formula_0 for which it has a formula_0-sequence.
A dense graph may still have bounded twin-width; for instance, the cographs include all complete graphs. A variation of twin-width, "sparse twin-width", applies to families of graphs rather than to individual graphs. For a family of graphs that is closed under taking induced subgraphs and has bounded twin-width, the following properties are equivalent:
Such a family is said to have bounded sparse twin-width.
The concept of twin-width can be generalized from graphs to various totally ordered structures (including graphs equipped with a total ordering on their vertices), and is in many ways simpler for ordered structures than for unordered graphs. It is also possible to formulate equivalent definitions for other notions of graph width using contraction sequences with different requirements than having bounded degree.
Graphs of bounded twin-width.
Cographs have twin-width zero. In the reduction process for cographs, there will be no red edges: when two vertices are merged, their neighborhoods are equal, so there are no edges coming from only one of the two neighborhoods to be colored red. In any other graph, any contraction sequence will produce some red edges, and the twin-width will be greater than zero.
The path graphs with at most three vertices are cographs, but every larger path graph has twin-width one. For a contraction sequence that repeatedly merges the last two vertices of the path, only the edge incident to the single merged vertex will be red, so this is a 1-sequence. Trees have twin-width at most two, and for some trees this is tight. A 2-contraction sequence for any tree may be found by choosing a root, and then repeatedly merging two leaves that have the same parent or, if this is not possible, merging the deepest leaf into its parent. The only red edges connect leaves to their parents, and when there are two at the same parent they can be merged, keeping the red degree at most two.
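The contraction process can be simulated directly. The following Python sketch (written here for illustration; the graph representation and function names are arbitrary choices, not from any established library) carries out the contraction sequence described above on a path with eight vertices and confirms that the red degree never exceeds one:
def red_degree(red, vertices):
    # Largest number of red edges incident to any remaining vertex.
    return max((sum(1 for e in red if v in e) for v in vertices), default=0)
def contract(black, red, u, v):
    # Merge v into u: the merged vertex keeps a black edge to x only if both
    # u and v had black edges to x; every other incident edge becomes red.
    def nbrs(edges, x):
        return {next(iter(e - {x})) for e in edges if x in e}
    bu, bv = nbrs(black, u), nbrs(black, v)
    ru, rv = nbrs(red, u), nbrs(red, v)
    others = (bu | bv | ru | rv) - {u, v}
    black = {e for e in black if u not in e and v not in e}
    red = {e for e in red if u not in e and v not in e}
    for x in others:
        (black if (x in bu and x in bv) else red).add(frozenset((u, x)))
    return black, red
n = 8
vertices = set(range(n))
black = {frozenset((i, i + 1)) for i in range(n - 1)}  # the path 0 - 1 - ... - 7
red = set()
max_red = 0
for last in range(n - 1, 0, -1):       # repeatedly merge the last two vertices
    black, red = contract(black, red, last - 1, last)
    vertices.discard(last)
    max_red = max(max_red, red_degree(red, vertices))
print(max_red)  # 1, witnessing a 1-sequence for the path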
More generally, the following classes of graphs have bounded twin-width, and a contraction sequence of bounded width can be found for them in polynomial time:
In every hereditary family of graphs of bounded twin-width, it is possible to find a family of total orders for the vertices of its graphs so that the inherited ordering on an induced subgraph is also an ordering in the family, and so that the family is "small" with respect to these orders. This means that, for a total order on formula_3 vertices, the number of graphs in the family consistent with that order is at most singly exponential in formula_3. Conversely, every hereditary family of ordered graphs that is small in this sense has bounded twin-width. It was originally conjectured that every hereditary family of labeled graphs that is small, in the sense that the number of graphs is at most a singly exponential factor times formula_4, has bounded twin-width. However, this conjecture was disproved using a family of induced subgraphs of an infinite Cayley graph that are small as labeled graphs but do not have bounded twin-width.
There exist graphs of unbounded twin-width within the following families of graphs:
In each of these cases, the result follows by a counting argument: there are more graphs of the given type than there can be graphs of bounded twin-width.
Properties.
If a graph has bounded twin-width, then it is possible to find a "versatile tree of contractions". This is a large family of contraction sequences, all of some (larger) bounded width, so that at each step in each sequence there are linearly many disjoint pairs of vertices each of which could be contracted at the next step in the sequence. It follows from this that the number of graphs of bounded twin-width on any set of formula_3 given vertices is larger than formula_4 by only a singly exponential factor, that the graphs of bounded twin-width have an adjacency labelling scheme with only a logarithmic number of bits per vertex, and that they have universal graphs of polynomial size in which each formula_3-vertex graph of bounded twin-width can be found as an induced subgraph.
Algorithms.
The graphs of twin-width at most one can be recognized in polynomial time. However, it is NP-complete to determine whether a given graph has twin-width at most four, and NP-hard to approximate the twin-width with an approximation ratio better than 5/4. Under the exponential time hypothesis, computing the twin-width requires time at least exponential in formula_5, on formula_3-vertex graphs. In practice, it is possible to compute the twin-width of graphs of moderate size using SAT solvers. For most of the known families of graphs of bounded twin-width, it is possible to construct a contraction sequence of bounded width in polynomial time.
Once a contraction sequence has been given or constructed, many different algorithmic problems can be solved using it, in many cases more efficiently than is possible for graphs that do not have bounded twin-width. As detailed below, these include exact parameterized algorithms and approximation algorithms for NP-hard problems, as well as some problems that have classical polynomial time algorithms but can nevertheless be sped up using the assumption of bounded twin-width.
Parameterized algorithms.
An algorithmic problem on graphs having an associated parameter is called fixed-parameter tractable if it has an algorithm that, on graphs with formula_3 vertices and parameter value formula_6, runs in time formula_7 for some constant formula_8 and computable function formula_9. For instance, a running time of formula_10 would be fixed-parameter tractable in this sense. This style of analysis is generally applied to problems that do not have a known polynomial-time algorithm, because otherwise fixed-parameter tractability would be trivial. Many such problems have been shown to be fixed-parameter tractable with twin-width as a parameter, when a contraction sequence of bounded width is given as part of the input. This applies, in particular, to the graph families of bounded twin-width listed above, for which a contraction sequence can be constructed efficiently. However, it is not known how to find a good contraction sequence for an arbitrary graph of low twin-width, when no other structure in the graph is known.
The fixed-parameter tractable problems for graphs of bounded twin-width with given contraction sequences include:
Speedups of classical algorithms.
In graphs of bounded twin-width, it is possible to perform a breadth-first search, on a graph with formula_3 vertices, in time formula_16, even when the graph is dense and has more edges than this time bound.
Approximation algorithms.
Twin-width has also been applied in approximation algorithms. In particular, in the graphs of bounded twin-width, it is possible to find an approximation to the minimum dominating set with bounded approximation ratio. This is in contrast to more general graphs, for which it is NP-hard to obtain an approximation ratio that is better than logarithmic.
The maximum independent set and graph coloring problems can be approximated to within an approximation ratio of formula_17, for every formula_18, in polynomial time on graphs of bounded twin-width. In contrast, without the assumption of bounded twin-width, it is NP-hard to achieve any approximation ratio of this form with formula_19.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "K_5"
},
{
"math_id": 2,
"text": "K_{3,3}"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "n!"
},
{
"math_id": 5,
"text": "n/\\log n"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "O(n^c\\, f(k))"
},
{
"math_id": 8,
"text": "c"
},
{
"math_id": 9,
"text": "f"
},
{
"math_id": 10,
"text": "O(n2^k)"
},
{
"math_id": 11,
"text": "O(k^2d^{2k}n)"
},
{
"math_id": 12,
"text": "(d+2)"
},
{
"math_id": 13,
"text": "\\operatorname{col}_r(G)"
},
{
"math_id": 14,
"text": "k-1"
},
{
"math_id": 15,
"text": "r"
},
{
"math_id": 16,
"text": "O(n\\log n)"
},
{
"math_id": 17,
"text": "n^{\\varepsilon}"
},
{
"math_id": 18,
"text": "\\varepsilon>0"
},
{
"math_id": 19,
"text": "\\varepsilon<1"
}
]
| https://en.wikipedia.org/wiki?curid=71380095 |
71389706 | Pseudo-R-squared | Statistical measure of fit
Pseudo-R-squared values are used when the outcome variable is nominal or ordinal, so that the coefficient of determination R2 cannot be applied as a measure of goodness of fit, and when a likelihood function is used to fit a model.
In linear regression, the squared multiple correlation, R2 is used to assess goodness of fit as it represents the proportion of variance in the criterion that is explained by the predictors.
In logistic regression analysis, there is no agreed upon analogous measure, but there are several competing measures each with limitations.
Four of the most commonly used indices and one less commonly used one are examined in this article: R2L, R2CS, R2N, R2McF, and R2T.
R2L by Cohen.
R2L is given by Cohen:
formula_0
This is the most analogous index to the squared multiple correlations in linear regression. It represents the proportional reduction in the deviance wherein the deviance is treated as a measure of variation analogous but not identical to the variance in linear regression analysis. One limitation of the likelihood ratio R2 is that it is not monotonically related to the odds ratio, meaning that it does not necessarily increase as the odds ratio increases and does not necessarily decrease as the odds ratio decreases.
R2CS by Cox and Snell.
R2CS is an alternative index of goodness of fit related to the R2 value from linear regression. It is given by:
formula_1
where LM and L0 are the likelihoods for the model being fitted and the null model, respectively. The Cox and Snell index corresponds to the standard R2 in case of a linear model with normal error. In certain situations, R2CS may be problematic as its maximum value is formula_2. For example, for logistic regression, the upper bound is formula_3 for a symmetric marginal distribution of events and decreases further for an asymmetric distribution of events.
R2N by Nagelkerke.
R2N, proposed by Nico Nagelkerke in a highly cited Biometrika paper, provides a correction to the Cox and Snell R2 so that the maximum value is equal to 1. Nevertheless, the Cox and Snell and likelihood ratio R2s show greater agreement with each other than either does with the Nagelkerke R2. Of course, this might not be the case for values exceeding 0.75 as the Cox and Snell index is capped at this value. The likelihood ratio R2 is often preferred to the alternatives as it is most analogous to R2 in linear regression, is independent of the base rate (both Cox and Snell and Nagelkerke R2s increase as the proportion of cases increase from 0 to 0.5) and varies between 0 and 1.
R2McF by McFadden.
The pseudo R2 by McFadden (sometimes called likelihood ratio index) is defined as
formula_4
and is preferred over R2CS by Allison. The two expressions R2McF and R2CS are then related respectively by,
formula_5
R2T by Tjur.
Allison prefers R2T, a relatively new measure developed by Tjur. It can be calculated in two steps: first, for each level of the dependent variable, find the mean of the predicted probabilities of an event; second, take the absolute value of the difference between those two means.
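For concreteness, the indices discussed above can be computed from the observed 0/1 outcomes and the fitted probabilities of a binary model. The following Python sketch is illustrative only; the function name is arbitrary, and the intercept-only model (sample proportion) is used as the null model:
import numpy as np
def pseudo_r_squared(y, p):
    # y: 0/1 outcomes; p: fitted probabilities of y = 1 from some model.
    y = np.asarray(y, dtype=float)
    p = np.asarray(p, dtype=float)
    n = y.size
    # Log-likelihoods of the fitted model and of the intercept-only (null) model.
    ll_fit = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    p0 = y.mean()
    ll_null = np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    d_fit, d_null = -2 * ll_fit, -2 * ll_null          # deviances
    r2_L = (d_null - d_fit) / d_null                   # Cohen (likelihood ratio)
    r2_CS = 1 - np.exp(2 * (ll_null - ll_fit) / n)     # Cox and Snell
    r2_N = r2_CS / (1 - np.exp(2 * ll_null / n))       # Nagelkerke
    r2_McF = 1 - ll_fit / ll_null                      # McFadden
    r2_T = p[y == 1].mean() - p[y == 0].mean()         # Tjur
    return {"L": r2_L, "CS": r2_CS, "N": r2_N, "McF": r2_McF, "Tjur": r2_T}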
Interpretation.
A word of caution is in order when interpreting pseudo-R2 statistics. The reason these indices of fit are referred to as "pseudo" R2 is that they do not represent the proportionate reduction in error as the R2 in linear regression does. Linear regression assumes homoscedasticity, that the error variance is the same for all values of the criterion. Logistic regression will always be heteroscedastic – the error variances differ for each value of the predicted score. For each value of the predicted score there would be a different value of the proportionate reduction in error. Therefore, it is inappropriate to think of R2 as a proportionate reduction in error in a universal sense in logistic regression.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R^2_\\text{L} = \\frac{D_\\text{null} - D_\\text{fitted}} {D_\\text{null}} ."
},
{
"math_id": 1,
"text": "\n\\begin{align}\nR^2_\\text{CS} & = 1 - \\left(\\frac{L_0}{L_M}\\right)^{2/n} \\\\[5pt]\n& = 1 - \\exp \\left(\\frac{2}{n}(\\ln(L_0) - \\ln(L_M)) \\right)\n\\end{align}\n"
},
{
"math_id": 2,
"text": "1 - L_0^{2/n}"
},
{
"math_id": 3,
"text": "R^2_\\text{CS}\\leq0.75"
},
{
"math_id": 4,
"text": "R^2_\\text{McF} = 1 - \\frac{\\ln(L_M)}{\\ln(L_0)}, "
},
{
"math_id": 5,
"text": " \\begin{matrix} R^2_\\text{CS} = 1 - \\left(\\dfrac{1}{L_0}\\right)^{\\frac{2(R^2_\\text{McF})}{n}} \\\\ [1.5em] R^2_\\text{McF} = -\\dfrac{n}{2} \\cdot \\dfrac{\\ln( 1 - R^2_\\text{CS} )}{\\ln L_0} \\end{matrix}"
}
]
| https://en.wikipedia.org/wiki?curid=71389706 |
7139118 | Euler–Maruyama method | Method in Itô calculus
In Itô calculus, the Euler–Maruyama method (also simply called the Euler method) is a method for the approximate numerical solution of a stochastic differential equation (SDE). It is an extension of the Euler method for ordinary differential equations to stochastic differential equations named after Leonhard Euler and Gisiro Maruyama. The same generalization cannot be done for any arbitrary deterministic method.
Consider the stochastic differential equation (see Itô calculus)
formula_0
with initial condition "X"0 = "x"0, where "W""t" denotes the Wiener process, and suppose that we wish to solve this SDE on some interval of time [0, "T"]. Then the Euler–Maruyama approximation to the true solution "X" is the Markov chain "Y" defined as follows:
formula_2
formula_3
where
formula_4
The random variables Δ"W""n" are independent and identically distributed normal random variables with expected value zero and variance Δ"t".
Example.
Numerical simulation.
An area that has benefited significantly from SDEs is mathematical biology. As many biological processes are both stochastic and continuous in nature, numerical methods of solving SDEs are highly valuable in the field.
The graphic depicts a stochastic differential equation solved using the Euler-Maruyama method. The deterministic counterpart is shown in blue.
Computer implementation.
The following Python code implements the Euler–Maruyama method and uses it to solve the Ornstein–Uhlenbeck process defined by
formula_5
formula_6
The random numbers for formula_7 are generated using the NumPy mathematics package.
import numpy as np
import matplotlib.pyplot as plt
class Model:
"""Stochastic model constants."""
THETA = 0.7
MU = 1.5
SIGMA = 0.06
def mu(y: float, _t: float) -> float:
"""Implement the Ornstein–Uhlenbeck mu."""
return Model.THETA * (Model.MU - y)
def sigma(_y: float, _t: float) -> float:
"""Implement the Ornstein–Uhlenbeck sigma."""
return Model.SIGMA
def dW(delta_t: float) -> float:
"""Sample a random number at each call."""
return np.random.normal(loc=0.0, scale=np.sqrt(delta_t))
def run_simulation():
""" Return the result of one full simulation."""
T_INIT = 3
T_END = 7
N = 1000 # Compute at 1000 grid points
DT = float(T_END - T_INIT) / N
TS = np.arange(T_INIT, T_END + DT, DT)
assert TS.size == N + 1
Y_INIT = 0
ys = np.zeros(TS.size)
ys[0] = Y_INIT
for i in range(1, TS.size):
t = T_INIT + (i - 1) * DT
y = ys[i - 1]
ys[i] = y + mu(y, t) * DT + sigma(y, t) * dW(DT)
return TS, ys
def plot_simulations(num_sims: int):
""" Plot several simulations in one image."""
for _ in range(num_sims):
plt.plot(*run_simulation())
plt.xlabel("time")
plt.ylabel("y")
plt.show()
if __name__ == "__main__":
NUM_SIMS = 5
plot_simulations(NUM_SIMS)
The following is simply the translation of the above code into the MATLAB (R2019b) programming language:
%% Initialization and Utility
close all;
clear all;
numSims = 5; % display five runs
tBounds = [3 7]; % The bounds of t
N = 1000; % Compute at 1000 grid points
dt = (tBounds(2) - tBounds(1)) / N ;
y_init = 0; % Initial y condition
% Initialize the probability distribution for our
% random variable with mean 0 and stdev of sqrt(dt)
pd = makedist('Normal',0,sqrt(dt));
c = [0.7, 1.5, 0.06]; % Theta, Mu, and Sigma, respectively
ts = linspace(tBounds(1), tBounds(2), N); % From t0-->t1 with N points
ys = zeros(1,N); % 1xN Matrix of zeros
ys(1) = y_init;
%% Computing the Process
for j = 1:numSims
for i = 2:numel(ts)
t = tBounds(1) + (i-1) .* dt;
y = ys(i-1);
mu = c(1) .* (c(2) - y);
sigma = c(3);
dW = random(pd);
ys(i) = y + mu .* dt + sigma .* dW;
end
figure;
hold on;
plot(ts, ys, 'o')
end
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{d} X_t = a(X_t, t) \\, \\mathrm{d} t + b(X_t, t) \\, \\mathrm{d} W_t,"
},
{
"math_id": 1,
"text": "\\Delta t>0"
},
{
"math_id": 2,
"text": "0 = \\tau_{0} < \\tau_{1} < \\cdots < \\tau_{N} = T \\text{ and } \\Delta t = T/N;"
},
{
"math_id": 3,
"text": "\\, Y_{n + 1} = Y_n + a(Y_n, \\tau_n) \\, \\Delta t + b(Y_n, \\tau_n) \\, \\Delta W_n,"
},
{
"math_id": 4,
"text": "\\Delta W_n = W_{\\tau_{n + 1}} - W_{\\tau_n}."
},
{
"math_id": 5,
"text": " dY_t=\\theta \\cdot (\\mu-Y_t) \\, {\\mathrm d}t + \\sigma \\, {\\mathrm d}W_t"
},
{
"math_id": 6,
"text": " Y_0=Y_\\mathrm{init}."
},
{
"math_id": 7,
"text": "{\\mathrm d}W_t"
}
]
| https://en.wikipedia.org/wiki?curid=7139118 |
7139248 | Milstein method | Numerical method for solving stochastic differential equations
In mathematics, the Milstein method is a technique for the approximate numerical solution of a stochastic differential equation. It is named after Grigori N. Milstein who first published it in 1974.
Description.
Consider the autonomous Itō stochastic differential equation:
formula_0
with initial condition formula_1, where formula_2 denotes the Wiener process, and suppose that we wish to solve this SDE on some interval of time formula_3. Then the Milstein approximation to the true solution formula_4 is the Markov chain formula_5 defined as follows: partition the interval formula_3 into formula_6 equal subintervals of width formula_7,
formula_8
set formula_9 and define formula_10 recursively for formula_11 by
formula_12
where formula_13 denotes the derivative of formula_14 with respect to formula_15 and
formula_16
are independent and identically distributed normal random variables with expected value zero and variance formula_17. Then formula_10 approximates formula_18 for formula_19.
Note that when formula_20 (i.e. the diffusion term does not depend on formula_21) this method is equivalent to the Euler–Maruyama method.
The Milstein scheme has both weak and strong order of convergence formula_17 which is superior to the Euler–Maruyama method, which in turn has the same weak order of convergence formula_17 but inferior strong order of convergence formula_22.
Intuitive derivation.
For this derivation, we will only look at geometric Brownian motion (GBM), the stochastic differential equation of which is given by:
formula_23
with real constants formula_24 and formula_25. Using Itō's lemma we get:
formula_26
Thus, the solution to the GBM SDE is:
formula_27
where
formula_28
The numerical solution is presented in the graphic for three different trajectories.
Computer implementation.
The following Python code implements the Milstein method and uses it to solve the SDE describing geometric Brownian motion defined by
formula_29
import numpy as np
import matplotlib.pyplot as plt
class Model:
"""Stochastic model constants."""
mu = 3
sigma = 1
def dW(dt):
"""Random sample normal distribution."""
return np.random.normal(loc=0.0, scale=np.sqrt(dt))
def run_simulation():
""" Return the result of one full simulation."""
# One second and thousand grid points
T_INIT = 0
T_END = 1
N = 1000 # Compute 1000 grid points
DT = float(T_END - T_INIT) / N
TS = np.arange(T_INIT, T_END + DT, DT)
Y_INIT = 1
# Vectors to fill
ys = np.zeros(N + 1)
ys[0] = Y_INIT
for i in range(1, TS.size):
t = (i - 1) * DT
y = ys[i - 1]
dw = dW(DT)
# Sum up terms as in the Milstein method
ys[i] = y + \
Model.mu * y * DT + \
Model.sigma * y * dw + \
(Model.sigma**2 / 2) * y * (dw**2 - DT)
return TS, ys
def plot_simulations(num_sims: int):
"""Plot several simulations in one image."""
for _ in range(num_sims):
plt.plot(*run_simulation())
plt.xlabel("time (s)")
plt.ylabel("y")
plt.grid()
plt.show()
if __name__ == "__main__":
NUM_SIMS = 2
plot_simulations(NUM_SIMS)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{d} X_t = a(X_t) \\, \\mathrm{d} t + b(X_t) \\, \\mathrm{d} W_t"
},
{
"math_id": 1,
"text": "X_{0} = x_{0}"
},
{
"math_id": 2,
"text": "W_{t}"
},
{
"math_id": 3,
"text": "[0,T]"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "Y"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "\\Delta t>0"
},
{
"math_id": 8,
"text": "0 = \\tau_0 < \\tau_1 < \\dots < \\tau_N = T\\text{ with }\\tau_n:=n\\Delta t\\text{ and }\\Delta t = \\frac{T}{N}"
},
{
"math_id": 9,
"text": "Y_0 = x_0;"
},
{
"math_id": 10,
"text": "Y_n"
},
{
"math_id": 11,
"text": "1 \\leq n \\leq N"
},
{
"math_id": 12,
"text": "Y_{n + 1} = Y_n + a(Y_n) \\Delta t + b(Y_n) \\Delta W_n + \\frac{1}{2} b(Y_n) b'(Y_n) \\left( (\\Delta W_n)^2 - \\Delta t \\right)"
},
{
"math_id": 13,
"text": "b'"
},
{
"math_id": 14,
"text": "b(x)"
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "\\Delta W_n = W_{\\tau_{n + 1}} - W_{\\tau_n}"
},
{
"math_id": 17,
"text": "\\Delta t"
},
{
"math_id": 18,
"text": "X_{\\tau_n}"
},
{
"math_id": 19,
"text": "0 \\leq n \\leq N"
},
{
"math_id": 20,
"text": " b'(Y_n) = 0 "
},
{
"math_id": 21,
"text": "X_{t}"
},
{
"math_id": 22,
"text": "\\sqrt{\\Delta t}"
},
{
"math_id": 23,
"text": "\\mathrm{d} X_t = \\mu X \\mathrm{d} t + \\sigma X d W_t"
},
{
"math_id": 24,
"text": "\\mu"
},
{
"math_id": 25,
"text": "\\sigma"
},
{
"math_id": 26,
"text": "\\mathrm{d}\\ln X_t= \\left(\\mu - \\frac{1}{2} \\sigma^2\\right)\\mathrm{d}t+\\sigma\\mathrm{d}W_t"
},
{
"math_id": 27,
"text": "\n\\begin{align}\nX_{t+\\Delta t}&=X_t\\exp\\left\\{\\int_t^{t+\\Delta t}\\left(\\mu-\\frac{1}{2}\\sigma^2\\right)\\mathrm{d}t+\\int_t^{t+\\Delta t}\\sigma\\mathrm{d}W_u\\right\\} \\\\\n&\\approx X_t\\left(1+\\mu\\Delta t-\\frac{1}{2} \\sigma^2\\Delta t+\\sigma\\Delta W_t+\\frac{1}{2}\\sigma^2(\\Delta W_t)^2\\right) \\\\\n&= X_t + a(X_t)\\Delta t+b(X_t)\\Delta W_t+\\frac{1}{2}b(X_t)b'(X_t)((\\Delta W_t)^2-\\Delta t)\n\\end{align}"
},
{
"math_id": 28,
"text": " a(x) = \\mu x, ~b(x) = \\sigma x "
},
{
"math_id": 29,
"text": "\\begin{cases}\ndY_t= \\mu Y \\, {\\mathrm d}t + \\sigma Y \\, {\\mathrm d}W_t \\\\\nY_0=Y_\\text{init}\n\\end{cases}"
}
]
| https://en.wikipedia.org/wiki?curid=7139248 |
7139336 | Runge–Kutta method (SDE) | In mathematics of stochastic systems, the Runge–Kutta method is a technique for the approximate numerical solution of a stochastic differential equation. It is a generalisation of the Runge–Kutta method for ordinary differential equations to stochastic differential equations (SDEs). Importantly, the method does not involve knowing derivatives of the coefficient functions in the SDEs.
Most basic scheme.
Consider the Itō diffusion formula_0 satisfying the following Itō stochastic differential equation
formula_1
with initial condition formula_2, where formula_3 stands for the Wiener process, and suppose that we wish to solve this SDE on some interval of time formula_4. Then the basic Runge–Kutta approximation to the true solution formula_0 is the Markov chain formula_5 defined as follows: partition the interval formula_4 into formula_6 equal subintervals of width formula_7,
formula_8
set formula_9, and define formula_10 recursively for formula_11 by
formula_12
where formula_13 and
formula_14
The random variables formula_15 are independent and identically distributed normal random variables with expected value zero and variance formula_16.
This scheme has strong order 1, meaning that the approximation error of the actual solution at a fixed time scales with the time step formula_16. It also has weak order 1, meaning that the error in the statistics of the solution scales with the time step formula_16. See the references for complete and exact statements.
The functions formula_17 and formula_18 can be time-varying without any complication. The method can be generalized to the case of several coupled equations; the principle is the same but the equations become longer.
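In the spirit of the implementations given in the articles on the Euler–Maruyama and Milstein methods, the basic scheme can be written in a few lines of Python. The following sketch is only an illustration: the function names are arbitrary, and geometric Brownian motion is chosen here as a test problem.
import numpy as np
def runge_kutta_sde(a, b, x0, t_end, n_steps, rng=None):
    # Basic strong order 1 scheme described above for the autonomous Ito SDE
    # dX = a(X) dt + b(X) dW on [0, t_end].
    rng = np.random.default_rng() if rng is None else rng
    delta = t_end / n_steps
    y = np.empty(n_steps + 1)
    y[0] = x0
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(delta))
        y_hat = y[n] + a(y[n]) * delta + b(y[n]) * np.sqrt(delta)
        y[n + 1] = (y[n] + a(y[n]) * delta + b(y[n]) * dW
                    + 0.5 * (b(y_hat) - b(y[n])) * (dW**2 - delta) / np.sqrt(delta))
    return y
mu, sigma = 1.5, 0.3   # test problem: geometric Brownian motion dX = mu X dt + sigma X dW
path = runge_kutta_sde(lambda x: mu * x, lambda x: sigma * x, x0=1.0, t_end=1.0, n_steps=1000)
print(path[-1])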
Variation of the Improved Euler is flexible.
A newer Runge–Kutta scheme, also of strong order 1, straightforwardly reduces to the improved Euler scheme for deterministic ODEs.
Consider the vector stochastic process formula_19 that satisfies the general Ito SDE
formula_20
where drift formula_21 and volatility formula_22 are sufficiently smooth functions of their arguments.
Given time step formula_23, and given the value formula_24, estimate formula_25 by formula_26 for time formula_27 via
formula_28
where formula_29 for normal random formula_30, and formula_31, with each alternative chosen with probability formula_32.
The above describes only one time step.
Repeat this time step formula_33 times in order to integrate the SDE from time formula_34 to formula_35.
The scheme integrates Stratonovich SDEs to formula_36 provided one sets formula_37 throughout (instead of choosing formula_38).
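A scalar Python sketch of this scheme follows; it is only an illustration, with arbitrary names and an Ornstein–Uhlenbeck test problem that is not part of the original description.
import numpy as np
def improved_euler_sde(a, b, x0, t0, t_end, n_steps, stratonovich=False, rng=None):
    # Improved-Euler-type scheme above for dX = a(t, X) dt + b(t, X) dW.
    # With stratonovich=True, S_k = 0 is used and the Stratonovich SDE is integrated.
    rng = np.random.default_rng() if rng is None else rng
    h = (t_end - t0) / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = t0 + k * h
        dW = np.sqrt(h) * rng.standard_normal()
        S = 0.0 if stratonovich else rng.choice([-1.0, 1.0])
        K1 = h * a(t, x[k]) + (dW - S * np.sqrt(h)) * b(t, x[k])
        K2 = h * a(t + h, x[k] + K1) + (dW + S * np.sqrt(h)) * b(t + h, x[k] + K1)
        x[k + 1] = x[k] + 0.5 * (K1 + K2)
    return x
theta, mean, sigma = 0.7, 1.5, 0.06   # Ornstein-Uhlenbeck test problem
path = improved_euler_sde(lambda t, y: theta * (mean - y), lambda t, y: sigma,
                          x0=0.0, t0=0.0, t_end=5.0, n_steps=1000)
print(path[-1])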
Higher order Runge-Kutta schemes.
Higher-order schemes also exist, but become increasingly complex.
Rößler developed many schemes for Ito SDEs,
whereas Komori developed schemes for Stratonovich SDEs. Rackauckas extended these schemes to allow for adaptive-time stepping via Rejection Sampling with Memory (RSwM), resulting in orders of magnitude efficiency increases in practical biological models, along with coefficient optimization for improved stability.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "dX_t = a(X_t) \\, dt + b(X_t) \\, dW_t,"
},
{
"math_id": 2,
"text": "X_0=x_0"
},
{
"math_id": 3,
"text": "W_t"
},
{
"math_id": 4,
"text": "[0,T]"
},
{
"math_id": 5,
"text": "Y"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "\\delta=T/N > 0"
},
{
"math_id": 8,
"text": "0 = \\tau_{0} < \\tau_{1} < \\dots < \\tau_{N} = T;"
},
{
"math_id": 9,
"text": "Y_0 := x_0"
},
{
"math_id": 10,
"text": "Y_n"
},
{
"math_id": 11,
"text": "1\\leq n\\leq N"
},
{
"math_id": 12,
"text": "Y_{n + 1} := Y_{n} + a(Y_{n}) \\delta + b(Y_{n}) \\Delta W_{n} + \\frac{1}{2} \\left( b(\\hat{\\Upsilon}_{n}) - b(Y_{n}) \\right) \\left( (\\Delta W_{n})^{2} - \\delta \\right) \\delta^{-1/2},"
},
{
"math_id": 13,
"text": "\\Delta W_{n} = W_{\\tau_{n + 1}} - W_{\\tau_{n}}"
},
{
"math_id": 14,
"text": "\\hat{\\Upsilon}_{n} = Y_{n} + a(Y_n) \\delta + b(Y_{n}) \\delta^{1/2}."
},
{
"math_id": 15,
"text": "\\Delta W_{n}"
},
{
"math_id": 16,
"text": "\\delta"
},
{
"math_id": 17,
"text": "a"
},
{
"math_id": 18,
"text": "b"
},
{
"math_id": 19,
"text": "\\vec X(t)\\in \\mathbb R^n"
},
{
"math_id": 20,
"text": "\nd\\vec X=\\vec a(t,\\vec X)\\,dt+\\vec b(t,\\vec X)\\,dW,\n"
},
{
"math_id": 21,
"text": "\\vec a"
},
{
"math_id": 22,
"text": "\\vec b"
},
{
"math_id": 23,
"text": "h"
},
{
"math_id": 24,
"text": "\\vec X(t_k) = \\vec X_k"
},
{
"math_id": 25,
"text": "\\vec X(t_{k+1})"
},
{
"math_id": 26,
"text": "\\vec X_{k+1}"
},
{
"math_id": 27,
"text": "t_{k+1}=t_k+h"
},
{
"math_id": 28,
"text": "\n\\begin{array}{l}\n\\vec K_1=h\\vec a(t_k,\\vec X_k)+(\\Delta W_k-S_k\\sqrt h)\\vec b(t_k,\\vec X_k),\n\\\\\n\\vec K_2=h\\vec a(t_{k+1},\\vec X_k+\\vec K_1)+(\\Delta W_k+S_k\\sqrt h)\\vec b(t_{k+1},\\vec X_k+\\vec K_1),\n\\\\\n\\vec X_{k+1}=\\vec X_k+\\frac12(\\vec K_1+\\vec K_2),\n\\end{array}\n"
},
{
"math_id": 29,
"text": "\\Delta W_k=\\sqrt hZ_k"
},
{
"math_id": 30,
"text": "Z_k\\sim N(0,1)"
},
{
"math_id": 31,
"text": "S_k=\\pm1"
},
{
"math_id": 32,
"text": "1/2"
},
{
"math_id": 33,
"text": "(t_m-t_0)/h"
},
{
"math_id": 34,
"text": "t=t_0"
},
{
"math_id": 35,
"text": "t=t_m"
},
{
"math_id": 36,
"text": "O(h)"
},
{
"math_id": 37,
"text": "S_k=0"
},
{
"math_id": 38,
"text": "\\pm 1"
}
]
| https://en.wikipedia.org/wiki?curid=7139336 |
71397443 | Minimal residual method | Computational method
The Minimal Residual Method or MINRES is a Krylov subspace method for the iterative solution of symmetric linear equation systems. It was proposed by mathematicians Christopher Conway Paige and Michael Alan Saunders in 1975.
In contrast to the popular CG method, the MINRES method does not assume that the matrix is positive definite, only the symmetry of the matrix is mandatory.
GMRES vs. MINRES.
The GMRES method is essentially a generalization of MINRES for arbitrary matrices. Both minimize the 2-norm of the residual and do the same calculations in exact arithmetic when the matrix is symmetric. MINRES is a short-recurrence method with a constant memory requirement, whereas GMRES requires storing the whole Krylov space, so its memory requirement is roughly proportional to the number of iterations. On the other hand, GMRES tends to suffer less from loss of orthogonality.
Properties of the MINRES method.
The MINRES method iteratively calculates an approximate solution of a linear system of equations of the form
formula_0
where formula_1 is a symmetric matrix and formula_2 a vector.
For this, the norm of the residual formula_3 in a formula_4-dimensional Krylov subspace
formula_5
is minimized. Here formula_6 is an initial value (often zero) and formula_7.
More precisely, we define the approximate solutions formula_8 through
formula_9
where formula_10 is the standard Euclidean norm on formula_11.
Because of the symmetry of formula_12, unlike in the GMRES method, it is possible to carry out this minimization process recursively, storing only two previous steps (short recurrence). This saves memory.
MINRES algorithm.
Note: The MINRES method is more complicated than the algebraically equivalent Conjugate Residual (CR) method, so the CR method is presented below as a substitute. It differs from MINRES in that in MINRES the columns of a basis of the Krylov space (denoted below by formula_13) are orthogonalized, whereas in CR their images (denoted below by formula_14) are orthogonalized via the Lanczos recursion. More efficient and preconditioned variants with fewer AXPY operations exist; compare with the Conjugate residual method article.
First, choose formula_6 arbitrarily and compute
formula_15
Then we iterate for formula_16; the steps of each iteration are as in the implementation given below.
Convergence rate of the MINRES method.
In the case of positive definite matrices, the convergence rate of the MINRES method can be estimated in a way similar to that of the CG method. In contrast to the CG method, however, the estimation does not apply to the errors of the iterates, but to the residual. The following applies:
formula_17
where formula_18 is the condition number of matrix formula_12. Because formula_12 is normal, we have
formula_19
where formula_20 and formula_21 are maximal and minimal eigenvalues of formula_12, respectively.
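In practice an existing library implementation can also be used. Assuming SciPy is available, the following short Python sketch (the test matrix is an arbitrary random choice) applies scipy.sparse.linalg.minres to a symmetric indefinite system and prints the final residual norm:
import numpy as np
from scipy.sparse.linalg import minres
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M + M.T                        # symmetric, in general not positive definite
b = rng.standard_normal(50)
x, info = minres(A, b)             # info == 0 signals successful convergence
print(info, np.linalg.norm(b - A @ x))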
Implementation in GNU Octave / MATLAB.
function [x, r] = minres(A, b, x0, maxit, tol)
x = x0;
r = b - A * x0;
p0 = r;
s0 = A * p0;
p1 = p0;
s1 = s0;
for iter = 1:maxit
p2 = p1; p1 = p0;
s2 = s1; s1 = s0;
alpha = r'*s1 / (s1'*s1);
x = x + alpha * p1;
r = r - alpha * s1;
if (r'*r < tol^2)
break
end
p0 = s1;
s0 = A * s1;
beta1 = s0'*s1 / (s1'*s1);
p0 = p0 - beta1 * p1;
s0 = s0 - beta1 * s1;
if iter > 1
beta2 = s0'*s2 / (s2'*s2);
p0 = p0 - beta2 * p2;
s0 = s0 - beta2 * s2;
end
end
end | [
{
"math_id": 0,
"text": "Ax = b,"
},
{
"math_id": 1,
"text": "A\\in\\mathbb{R}^{n\\times n}"
},
{
"math_id": 2,
"text": "b\\in\\mathbb{R}^n"
},
{
"math_id": 3,
"text": "r(x) := b - Ax"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "V_k = x_0 + \\operatorname{span}\\{r_0, Ar_0\\ldots,A^{k-1}r_0\\}"
},
{
"math_id": 6,
"text": "x_0\\in\\mathbb{R}^n"
},
{
"math_id": 7,
"text": "r_0 := r(x_0)"
},
{
"math_id": 8,
"text": "x_k"
},
{
"math_id": 9,
"text": "x_k := \\mathrm{argmin}_{x\\in V_k} \\|r(x)\\|,"
},
{
"math_id": 10,
"text": "\\|\\cdot\\|"
},
{
"math_id": 11,
"text": "\\mathbb{R}^n"
},
{
"math_id": 12,
"text": "A"
},
{
"math_id": 13,
"text": "p_k"
},
{
"math_id": 14,
"text": "s_k"
},
{
"math_id": 15,
"text": "\\begin{align}\nr_0 &= b - A x_0 \\\\\np_0 &= r_0 \\\\\ns_0 &= A p_0\n\\end{align}"
},
{
"math_id": 16,
"text": "k=1,2,\\dots"
},
{
"math_id": 17,
"text": "\\|r_k\\| \\le 2\\left(\\frac{\\sqrt{\\kappa(A)}-1}{\\sqrt{\\kappa(A)}+1}\\right)^k\\|r_{0}\\|,"
},
{
"math_id": 18,
"text": "\\kappa(A)"
},
{
"math_id": 19,
"text": "\\kappa(A) = \\frac{\\left|\\lambda_\\text{max}(A)\\right|}{\\left|\\lambda_\\text{min}(A)\\right|},"
},
{
"math_id": 20,
"text": "\\lambda_\\text{max}(A)"
},
{
"math_id": 21,
"text": "\\lambda_\\text{min}(A) "
}
]
| https://en.wikipedia.org/wiki?curid=71397443 |
71397452 | Helios Dust Instrumentation | The Helios 1 and 2 spacecraft each carried two dust instruments to characterize the Zodiacal dust cloud inside the Earth’s orbit down to spacecraft positions 0.3 AU from the sun. The Zodiacal light instrument measured the brightness of light scattered by interplanetary dust along the line of sight. The in situ Micrometeoroid analyzer recorded impacts of meteoroids onto the sensitive detector surface and characterized their composition. The instruments delivered radial profiles of their measured data. Comet or meteoroid streams, and even interstellar dust were identified in the data.
Overview.
The two Helios spacecraft were the result of a joint venture of West Germany's space agency DLR and NASA. The spacecraft were built in Germany and launched from Cape Canaveral Air Force Station, Florida. Helios 1 was launched in December 1974 onto an elliptic orbit between 1 and 0.31 AU. Helios 2 followed in January 1976 and reached 0.29 AU perihelion distance. The orbital periods were about six months. The Helios spacecraft were spinning with the spin axis perpendicular to the ecliptic plane. The Helios 1 spin axis pointed to ecliptic north, whereas the Helios 2 orientation was inverted and the spin axis pointed to ecliptic south. The despun high gain antenna beam always pointed to Earth. Because of the orbit, the distance between the spacecraft and Earth varied between a few million and 300 million km, and the data transmission rate varied accordingly. Twice per Helios orbit the spacecraft was in conjunction (in front of or behind the Sun) and no data transmission was possible for a few weeks. Helios 1 delivered scientific data for ten years and Helios 2 for five years.
The Zodiacal light instrument.
The primary goal of the Zodiacal light instrument on Helios was to determine the three-dimensional spatial distribution of interplanetary dust.
To this end, from all along its orbit, Helios performed precise zodiacal light measurements covering a substantial part of the sky. These partial sky maps, because of the rotation of Helios, consisted of a band 1° wide at ecliptic latitude β = 16° with 32 sectors 5.62°, 11.25° and 22.5° long, a similar band 2° wide at ecliptic latitude β = 31°, and a field of 3° diameter at the ecliptic pole. All fields were in the south for Helios 1 and in the north for Helios 2. The width of the sectors was chosen to be smallest for the brightest regions of zodiacal light.
This map was realized by three small (36 mm aperture) photometers, P15, P30, and P90, one for each ecliptic latitude. A stepping motor switched the observation - with or without polarization - between the wavelength bands 360 ± 30 nm, 420 ± 40 nm and 540 ± 70 nm (close to the UBV system), or to dark current and calibration measurements.
Each of the 36 resulting different brightness maps represents an average over 512 Helios rotations, leading to a cycle of total length 5.2 hours, which is continually repeated.
The sensors were photomultipliers EMR 541 N operating in photon pulse counting mode.
Throughout their mission the Helios space probes were exposed to full sunlight, which exceeds the typical zodiacal light intensity by a factor of 10^12 to 10^13. For accurate (1%) measurements, stray light suppression by a factor of 10^15 was required; this was the main design goal to be met. It was achieved in three steps.
The Zodiacal light instrument was developed at the Max Planck Institute for Astronomy in Heidelberg by Christoph Leinert and colleagues and built by Dornier systems.
The Micrometeoroid analyzer.
The goal of the Micrometeoroid Analyzer was 1. to determine the spatial distribution of the dust flux in the inner planetary system, and 2. to search for variations of the compositional and physical properties of micrometeoroids.
The instrument consisted of two impact ionization time-of-flight mass spectrometers and was developed by PI Eberhard Grün, Principal Engineer Peter Gammelin, and colleagues at the Max Planck Institute for Nuclear Physics in Heidelberg. Each sensor (Ecliptic sensor and South sensor) was a 1 m long and 0.15 m diameter tube with two grids and a venetian blind type impact target in front, several more grids, a 0.8 m long field-free drift tube and an electron multiplier in the inside. Micrometeoroids hitting the venetian blind type impact target generate an impact plasma. Electrons are collected by the positively biased grid in front of the target while positive ions are drawn inward by a negatively biased grid behind the target. Part of the ions reach the time-lag focusing region from which they fly through the field-free drift tube at -200 V potential. Ions of different masses reach the electron multiplier at different times and generate a mass spectrum at the multiplier output. Impact signals are recorded by charge-sensitive preamplifiers attached to the electron grid in front and the ion grid behind the target. From these signals together with the mass spectrum the mass and energy of the dust particle and the composition of the impact plasma are obtained.
The South sensor was shielded by the spacecraft rim from direct sunlight, whereas the ecliptic sensor was directly exposed to the intense solar radiation (up to 13 kW/m2). Therefore, the interior of the sensor was protected by a 0.3 micron thick aluminized parylene film which was attached to the first entrance grid. In order to study the effect of micrometeoroids penetrating the film, extensive dust accelerator studies with various materials were performed. It was shown that the penetration limit of the Helios film depends strongly on the density of meteoroids. Impact experiments with a lab version of the Helios micrometeoroid sensor were performed using several materials at the accelerators at the Max Planck Institute for Nuclear Physics in Heidelberg and at the Ames Research Center, ARC, in Moffett Field. The projectile materials included iron (Fe), quartz, glass, aluminium (Al), aluminium oxide (Al2O3), polystyrene, and kaolin. The mass resolution of the mass spectra of the Helios sensors was low (formula_0), i.e. only ions of atomic mass 10 u could be separated from ions of mass 11 u.
These mass spectra served as reference for the spectra obtained in space. Spectra were recorded from 10 u to 70 u. The mean calibration spectra are presented in a three phase diagram: low masses (10 to 30 u), medium masses (30 to 50 u), and high masses (50 to 70 u).
Micrometeoroid data.
During ten orbits about the Sun from 1974 to 1980 the Helios 1 micrometeoroid analyzer transmitted data of 235 dust impacts to Earth. Since the onboard data storage capability was limited and the data transmission rate varied strongly with the distance between spacecraft and Earth, not all data recorded by the sensors were received on Earth. The effective measuring time ranged from ~30% at perihelion to ~75% at 1 AU distance. Many noise events caused by solar wind plasma and photo electrons were recorded by the sensors as well. Only events with a coincidence time of 12 microseconds between positive and negative signals and, mainly, a measured mass spectrum following the initial trigger were considered dust impacts. Quantities determined for each impact are: the time and position, the azimuth of the sensor viewing at the time of impact, the total positive charge of the impact signal, the rise-time of the charge signal (a proxy for the impact speed) and a complete mass spectrum. The micrometeoroid instrument on Helios 2 was much noisier and recorded only a handful of impacts that did not provide additional information.
Results.
The Zodiacal light carries information on those regions of interplanetary space along the line of sight which contribute significantly to its observed brightness. For Helios this covers the range of 0.09 to about 2 astronomical units.
Spatial distribution.
Radial dependencies.
The zodiacal light instrument observed a strong increase of the zodiacal light brightness inward of the Earth's orbit. The brightness was more than a factor of 10 higher at a spacecraft position of 0.3 AU than at 1 AU. This brightness increase corresponds to an interplanetary dust density increase of the form formula_1. This strong increase requires that there is a source of interplanetary dust inside the Earth's orbit. It was suggested that collisional fragmentation of bigger meteoroids generates the dust observed in the zodiacal light.
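A short numerical check (a sketch, using the common approximation that for a dust number density falling as r^-ν the zodiacal light brightness observed from heliocentric distance r scales roughly as r^-(ν+1)) connects the density law to the observed brightness increase:
# Ratios between 0.3 AU and 1 AU implied by a density law N(r) ~ r^-1.3 (illustrative check).
nu = 1.3
r = 0.3
density_ratio = r**(-nu)              # about 4.8: dust roughly five times denser at 0.3 AU
brightness_ratio = r**(-(nu + 1.0))   # about 16: consistent with the more than tenfold brightness increase
print(round(density_ratio, 1), round(brightness_ratio, 1))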
The radial flux of micrometeoroids recorded by Helios increased inward by a factor of 5 to 10, depending on the particle mass in the range from 10^-17 kg to 10^-13 kg. This information, together with the position and azimuth measurements, was used in the first dynamical model of the interplanetary dust cloud; the zodiacal light intensities observed by the Helios Zodiacal light instrument were also included in this model. The Helios data defined the core, the inclined, and the eccentric populations of this model.
Plane of symmetry.
From the difference between the measured zodiacal light brightness during the inbound and outbound parts of the orbit, and between right and left of the Sun, the plane of symmetry of the interplanetary dust cloud was determined. With its ascending node of 87 ± 5° and inclination of 3.0 ± 0.3° it lies between the invariable plane of the Solar System and the plane of the solar equator.
Orbital distribution.
Of the 235 impacts in total, 152 were recorded by the South sensor and 83 by the Ecliptic sensor. This excess of impacts on the South sensor consisted mostly of small impact (charge) signals, but there was also some excess of big impacts. From the azimuth values of the Ecliptic sensor impacts it was concluded that these micrometeoroids moved on orbits of low eccentricity, e < 0.4, whereas the South sensor impactors moved mostly on more eccentric orbits. There was even an excess of outward compared to inward trajectories, like the beta-meteoroids which were observed earlier by the Pioneer 8 and 9 dust instruments.
Optical, physical, and chemical properties.
The measurements of zodiacal light color - essentially constant along the Helios orbit - and of polarization - showing a decrease closer toward the Sun - also contain information on the properties of interplanetary dust particles.
On the basis of the penetration studies with the Helios film, the excess of impacts on the South sensor was interpreted to be due to low-density meteoroids (formula_2 < 1000 kg/m3) that were prevented by the entrance film from entering the Ecliptic sensor.
Helios mass spectra range from those with dominant low masses up to 30 u that are compatible with silicates to those with dominant high masses between 50 and 60 u of iron and molecular ion types. The spectra display no clustering of single minerals. The continuous transition from low to high ion masses indicates that individual grains are a mixture of various minerals and carbonaceous compounds.
Cometary and interstellar dust streams.
The Helios zodiacal light measurements show excellent stability. This allows the detection of local brightness excesses if they are crossed by the Helios field of view, as happened for comet West and for the Quadrantid meteor shower. Repeatability of about 0.2% from orbit to orbit sufficed to detect the dust ring along the orbit of Venus.
Inspection of the Helios micrometeoroid data showed a clustering of impacts in the same region of space on different Helios orbits. A search with the Interplanetary Meteoroid Environment for eXploration (IMEX) dust streams in space model identified the trails of comets 45P/Honda-Mrkos-Pajdušáková and 72P/Denning-Fujikawa that Helios traversed multiple times during the first ten orbits around the Sun.
After the discovery by the Ulysses spacecraft of interstellar dust passing through the planetary system, interstellar dust particles were also found in the Helios micrometeoroid data. Based on the spacecraft position, the azimuth, and the impact charge, 27 impactors are compatible with an interstellar source. The Helios measurements comprise the interstellar dust measurements made closest to the Sun.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R=\\cfrac{M}{\\Delta M}\\sim 10"
},
{
"math_id": 1,
"text": " N(r) \\sim r^{-1.3}"
},
{
"math_id": 2,
"text": " \\rho"
}
]
| https://en.wikipedia.org/wiki?curid=71397452 |
71398482 | Madhav V. Nori | Indian mathematician
Madhav Vithal Nori is an Indian mathematician. In 1980 he received the INSA Medal for Young Scientists.
Career.
Nori was awarded his PhD in mathematics in 1981 from the University of Mumbai. He works in the fields of algebraic geometry and commutative algebra. His research interests focus on algebraic cycles, K-theory, Hodge theory, Galois theory, and their interactions. Nori received the INSA Medal for Young Scientists in 1980 and is an elected Fellow of the Indian Academy of Sciences, Bangalore.
The fundamental group scheme.
Under the direction of Conjeerveram S. Seshadri, Nori proved the existence of the fundamental group scheme formula_0 during his PhD work, using the theory of essentially finite vector bundles that he defined. The fundamental group scheme is also known as the Nori fundamental group scheme, taking its name from its creator, and is often also denoted formula_1, where formula_2 stands for Nori. A special family of vector bundles is called Nori-semistable vector bundles in Nori's honor, as he had the first intuition of their existence and properties. His construction has since been further generalized; for instance, a proof of the existence of the fundamental group scheme for schemes defined over Dedekind schemes has been provided by Marco Antei, Michel Emsalem and Carlo Gasbarri.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi_1(X,x)"
},
{
"math_id": 1,
"text": "\\pi^N(X,x)"
},
{
"math_id": 2,
"text": "N"
}
]
| https://en.wikipedia.org/wiki?curid=71398482 |
71400884 | Nori-semistable vector bundle | Type of vector bundle
In mathematics, a Nori semistable vector bundle is a particular type of vector bundle whose definition was first implicitly suggested by Madhav V. Nori as one of the main ingredients for the construction of the fundamental group scheme. The definition given by Nori was, of course, not called "Nori semistable", and it also differed from the one used nowadays. The category of Nori semistable vector bundles contains the Tannakian category of essentially finite vector bundles, whose naturally associated group scheme is the fundamental group scheme formula_0.
Definition.
Let formula_1 be a scheme over a field formula_2 and formula_3 a vector bundle on formula_1. It is said that formula_3 is "Nori semistable" if for any smooth and proper curve formula_4 over formula_5 and any morphism formula_6 the pull back formula_7 is semistable of degree 0.
Difference with Nori's original definition.
Nori semistable vector bundles were called "semistable" by Nori, causing confusion with the already existing notion of semistable vector bundles. More importantly, Nori only required that the restriction of formula_3 to any curve in formula_1 be semistable of degree 0. Thus, for instance, in positive characteristic a morphism formula_8 such as the Frobenius morphism was not included in Nori's original definition. The importance of including it is that the above definition makes the category of Nori semistable vector bundles Tannakian, and the group scheme associated to it is the formula_9-fundamental group scheme formula_10. Nori's original definition, in contrast, gave rise only to an abelian category, not to a Tannakian one.
{
"math_id": 0,
"text": "\\pi_1(X,x)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "C"
},
{
"math_id": 5,
"text": "\\bar k"
},
{
"math_id": 6,
"text": "j:C\\to X"
},
{
"math_id": 7,
"text": "j^*(V)"
},
{
"math_id": 8,
"text": "j"
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "\\pi^S(X,x)"
}
]
| https://en.wikipedia.org/wiki?curid=71400884 |
7140311 | International Fisher effect | The international Fisher effect (sometimes referred to as Fisher's open hypothesis) is a hypothesis in international finance that suggests differences in nominal interest rates reflect expected changes in the spot exchange rate between countries. The hypothesis specifically states that a spot exchange rate is expected to change equally in the opposite direction of the interest rate differential; thus, the currency of the country with the higher nominal interest rate is expected to depreciate against the currency of the country with the lower nominal interest rate, as higher nominal interest rates reflect an expectation of inflation.
Derivation of the International Fisher effect.
The International Fisher effect is an extension of the Fisher effect hypothesized by American economist Irving Fisher. The Fisher effect states that a change in a country's expected inflation rate will result in a proportionate change in the country's interest rate
formula_0
where
formula_1 is the nominal interest rate
formula_2 is the real interest rate
formula_3 is the expected inflation rate
This may be arranged as follows
formula_4
When the inflation rate is low, the term formula_5 will be negligible. This suggests that the expected inflation rate is approximately equal to the difference between the nominal and real interest rates in any given country
formula_6
Let us assume that the real interest rate is equal across two countries (the US and Germany for example) due to capital mobility, such that formula_7. Then substituting the approximate relationship above into the relative purchasing power parity formula results in the formal equation for the International Fisher effect
formula_8
where formula_9 refers to the spot exchange rate. This relationship tells us that the rate of change in the exchange rate between two countries is approximately equal to the difference in those countries' interest rates.
Relation to interest rate parity.
Combining the international Fisher effect with uncovered interest rate parity yields the following equation:
formula_10
where
formula_11 is the expected future spot exchange rate
formula_12 is the spot exchange rate
Combining the International Fisher effect with covered interest rate parity yields the equation for the unbiasedness hypothesis, where the forward exchange rate is an unbiased predictor of the future spot exchange rate:
formula_13
where
formula_14 is the forward exchange rate.
Example.
Suppose the current spot exchange rate between the United States and the United Kingdom is 1.4339 GBP/USD. Also suppose the current interest rates are 5 percent in the U.S. and 7 percent in the U.K. What is the expected spot exchange rate 12 months from now according to the international Fisher effect? The effect estimates future exchange rates based on the relationship between nominal interest rates. Multiplying the current spot exchange rate by one plus the nominal annual U.S. interest rate and dividing by one plus the nominal annual U.K. interest rate yields the estimate of the spot exchange rate 12 months from now:
formula_15
To check this example, use the formal or rearranged expressions of the international Fisher effect on the given interest rates:
formula_16
formula_17
The expected percentage change in the exchange rate is a depreciation of 1.87% for the GBP (it now only costs $1.4071 to purchase 1 GBP rather than $1.4339), which is consistent with the expectation that the value of the currency in the country with a higher interest rate will depreciate.
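The arithmetic above can be reproduced with a few lines of code. The following Python sketch is illustrative only; the function and variable names are invented for this example and do not come from any finance library:
```python
# Sketch: expected spot rate under the international Fisher effect.
# Rates are annual decimals; the spot rate is quoted as USD per GBP.

def expected_spot(spot, i_domestic, i_foreign):
    """Expected spot rate after one period: spot * (1 + i_d) / (1 + i_f)."""
    return spot * (1 + i_domestic) / (1 + i_foreign)

spot = 1.4339            # current spot rate (USD per GBP)
i_us, i_uk = 0.05, 0.07  # nominal annual interest rates

future = expected_spot(spot, i_us, i_uk)
change = (i_us - i_uk) / (1 + i_uk)

print(round(future, 4))        # 1.4071
print(round(change * 100, 2))  # -1.87, the expected percentage change
```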
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(1+i) = (1+r) \\times (1+E[\\pi])"
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "E[\\pi]"
},
{
"math_id": 4,
"text": "i + 1 = 1 + E[\\pi] + r + r E[\\pi]"
},
{
"math_id": 5,
"text": "r E[\\pi]"
},
{
"math_id": 6,
"text": "E[\\pi] \\approx i - r"
},
{
"math_id": 7,
"text": "r_\\$ = r_\\euro"
},
{
"math_id": 8,
"text": "\\frac{\\Delta S(\\$/\\euro)}{S(\\$/\\euro)} = \\frac{i_\\$ - i_\\euro}{1 + i_\\euro} \\approx i_\\$ - i_\\euro"
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "E(e) = \\frac {E(S_{t+k})} {S_t} - 1 = \\frac {(i_\\$ - i_\\euro)} {(1 + i_\\euro)}"
},
{
"math_id": 11,
"text": "E(S_{t+k})"
},
{
"math_id": 12,
"text": "S_t"
},
{
"math_id": 13,
"text": "\\frac {F_{t,T}} {S_t} - 1 = \\frac {(i_\\$ - i_\\euro)} {(1 + i_\\euro)} = E(e)"
},
{
"math_id": 14,
"text": "F_{t,T}"
},
{
"math_id": 15,
"text": "\\$1.4339 \\times \\frac {(1 + 5\\%)} {(1 + 7\\%)} = \\$1.4071"
},
{
"math_id": 16,
"text": "E(e) = \\frac {(5\\% - 7\\%)} {(1 + 7\\%)} = -0.018692 = -1.87\\%"
},
{
"math_id": 17,
"text": "E(e) = \\frac {(1 + 5\\%)} {(1 + 7\\%)} - 1 = -0.018692 = -1.87\\%"
}
]
| https://en.wikipedia.org/wiki?curid=7140311 |
71404791 | Thomas Bloom | British mathematician
Thomas F. Bloom is a mathematician, who is a Royal Society University Research Fellow at the University of Oxford. He works in arithmetic combinatorics and analytic number theory.
Education and career.
Bloom completed his undergraduate degree in Mathematics and Philosophy at Merton College, Oxford. He then went on to do his PhD in mathematics at the University of Bristol under the supervision of Trevor Wooley. After finishing his PhD, he was a Heilbronn Research Fellow at the University of Bristol. In 2018, he became a postdoctoral research fellow at the University of Cambridge with Timothy Gowers. In 2021, he joined the University of Oxford as a Research Fellow.
Research.
In July 2020, Bloom and Sisask proved that any set of natural numbers "A" such that formula_0 diverges must contain arithmetic progressions of length 3. This is the first non-trivial case of a conjecture of Erdős postulating that any such set must in fact contain arbitrarily long arithmetic progressions.
In November 2020, in joint work with James Maynard, he improved the best-known bound for square-difference-free sets, showing that a set formula_1 with no square difference has size at most formula_2 for some formula_3.
In December 2021, he proved that any set formula_4 of positive upper density contains a finite subset formula_5 such that formula_6. This answered a question of Erdős and Graham.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{n \\in A} \\frac{1}{n}"
},
{
"math_id": 1,
"text": "A \\subset [N]"
},
{
"math_id": 2,
"text": "\\frac{N}{(\\log N)^{c\\log \\log\\log N}}"
},
{
"math_id": 3,
"text": "c>0"
},
{
"math_id": 4,
"text": "A \\subset \\mathbb{N}"
},
{
"math_id": 5,
"text": "S \\subset A"
},
{
"math_id": 6,
"text": "\\sum_{n \\in S} \\frac{1}{n}=1"
}
]
| https://en.wikipedia.org/wiki?curid=71404791 |
714050 | Vieta's formulas | Relating coefficients and roots of a polynomial
In mathematics, Vieta's formulas relate the coefficients of a polynomial to sums and products of its roots. They are named after François Viète (more commonly referred to by the Latinised form of his name, "Franciscus Vieta").
Basic formulas.
Any general polynomial of degree "n"
formula_0
(with the coefficients being real or complex numbers and "a""n" ≠ 0) has "n" (not necessarily distinct) complex roots "r"1, "r"2, ..., "r""n" by the fundamental theorem of algebra. Vieta's formulas relate the polynomial coefficients to signed sums of products of the roots "r"1, "r"2, ..., "r""n" as follows:
Vieta's formulas can equivalently be written as
formula_1
for "k"
1, 2, ..., "n" (the indices "i""k" are sorted in increasing order to ensure each product of "k" roots is used exactly once).
The left-hand sides of Vieta's formulas are the elementary symmetric polynomials of the roots.
Vieta's system of equations can be solved by Newton's method through an explicit simple iterative formula, the Durand–Kerner method.
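As a rough illustration of that numerical approach, the sketch below implements a basic simultaneous Durand–Kerner iteration in Python. The starting guesses, iteration count, and sample cubic are conventional illustrative choices, not anything prescribed by the text:
```python
# Sketch of the Durand-Kerner (Weierstrass) iteration for a monic polynomial.
from functools import reduce
from operator import mul

def durand_kerner(coeffs, iterations=100):
    """Approximate all roots of x^n + coeffs[0]*x^(n-1) + ... + coeffs[-1]."""
    n = len(coeffs)
    cs = [1] + list(coeffs)                      # full coefficient list, monic

    def p(x):
        return sum(c * x ** (n - k) for k, c in enumerate(cs))

    # conventional starting guesses: powers of a non-real complex number
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iterations):
        roots = [x - p(x) / reduce(mul, (x - y for y in roots if y is not x), 1)
                 for x in roots]
    return roots

# Example: x^3 - 6x^2 + 11x - 6 has roots 1, 2 and 3.
approx = durand_kerner([-6, 11, -6])
print(sorted(round(r.real, 6) for r in approx))   # approximately [1.0, 2.0, 3.0]
```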
Generalization to rings.
Vieta's formulas are frequently used with polynomials with coefficients in any integral domain R. Then, the quotients formula_2 belong to the field of fractions of R (and possibly are in R itself if formula_3 happens to be invertible in R) and the roots formula_4 are taken in an algebraically closed extension. Typically, R is the ring of the integers, the field of fractions is the field of the rational numbers and the algebraically closed field is the field of the complex numbers.
Vieta's formulas are then useful because they provide relations between the roots without having to compute them.
For polynomials over a commutative ring that is not an integral domain, Vieta's formulas are only valid when formula_3 is not a zero-divisor and formula_5 factors as formula_6. For example, in the ring of the integers modulo 8, the quadratic polynomial formula_7 has four roots: 1, 3, 5, and 7. Vieta's formulas are not true if, say, formula_8 and formula_9, because formula_10. However, formula_5 does factor as formula_11 and also as formula_12, and Vieta's formulas hold if we set either formula_8 and formula_13 or formula_14 and formula_15.
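The modular example above can be checked by brute force; the following Python sketch (the helper name is purely illustrative) finds the four roots and tests which root pairs satisfy Vieta's formulas modulo 8:
```python
# Sketch: brute-force check of the mod-8 example.
MOD = 8

roots = [x for x in range(MOD) if (x * x - 1) % MOD == 0]
print(roots)  # [1, 3, 5, 7]

def vieta_holds(r1, r2):
    """Do r1, r2 satisfy Vieta's formulas for x^2 + 0x - 1 modulo 8?"""
    sum_ok = (r1 + r2) % MOD == 0          # -(r1 + r2) = coefficient of x = 0
    prod_ok = (r1 * r2) % MOD == MOD - 1   # r1 * r2 = constant term = -1 = 7 mod 8
    return sum_ok and prod_ok

print(vieta_holds(1, 3))  # False: (x - 1)(x - 3) is not x^2 - 1 mod 8
print(vieta_holds(1, 7))  # True
print(vieta_holds(3, 5))  # True
```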
Example.
Vieta's formulas applied to quadratic and cubic polynomials:
The roots formula_16 of the quadratic polynomial formula_17 satisfy
formula_18
The first of these equations can be used to find the minimum (or maximum) of "P".
The roots formula_19 of the cubic polynomial formula_20 satisfy
formula_21
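These relations are easy to verify numerically. The sketch below uses NumPy to compute the roots of an example cubic and checks the three identities; the particular coefficients are illustrative:
```python
# Sketch: verifying Vieta's formulas for an example cubic with NumPy.
import numpy as np

a, b, c, d = 2.0, -3.0, -11.0, 6.0         # P(x) = 2x^3 - 3x^2 - 11x + 6
r1, r2, r3 = np.roots([a, b, c, d])        # roots are 3, -2 and 1/2

print(np.isclose(r1 + r2 + r3, -b / a))            # True
print(np.isclose(r1*r2 + r1*r3 + r2*r3, c / a))    # True
print(np.isclose(r1 * r2 * r3, -d / a))            # True
```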
Proof.
Direct proof.
Vieta's formulas can be proved by expanding the equality
formula_22
(which is true since formula_23 are all the roots of this polynomial), multiplying the factors on the right-hand side, and identifying the coefficients of each power of formula_24
Formally, if one expands formula_25 the terms are precisely formula_26 where formula_27 is either 0 or 1, accordingly as whether formula_4 is included in the product or not, and "k" is the number of formula_4 that are included, so the total number of factors in the product is "n" (counting formula_28 with multiplicity "k") – as there are "n" binary choices (include formula_4 or "x"), there are formula_29 terms – geometrically, these can be understood as the vertices of a hypercube. Grouping these terms by degree yields the elementary symmetric polynomials in formula_4 – for formula_28, all distinct "k"-fold products of formula_30
As an example, consider the quadratic
formula_31
Comparing identical powers of formula_32, we find formula_33, formula_34 and formula_35, with which we can for example identify formula_36 and formula_37, which are Vieta's formulas for formula_38.
Proof by mathematical induction.
Vieta's formulas can also be proven by induction as shown below.
Inductive hypothesis:
Let formula_39 be a polynomial of degree formula_40, with complex roots formula_41 and complex coefficients formula_42 where formula_43. Then the inductive hypothesis is that
formula_44
Base case, formula_45 (quadratic):
Let formula_46 be the coefficients of the quadratic and formula_47 be the constant term. Similarly, let formula_48 be the roots of the quadratic:
formula_49
Expand the right side using the distributive property:
formula_50
Collect like terms:
formula_51
Apply the distributive property again:
formula_52
The inductive hypothesis has now been proven true for formula_53.
Induction step:
Assuming the inductive hypothesis holds true for all formula_54, it must be shown to hold for formula_55.
formula_56
By the factor theorem, formula_57 can be factored out of formula_58 leaving a 0 remainder. Note that the roots of the polynomial in the square brackets are formula_59:
formula_60
Factor out formula_61, the leading coefficient of formula_5, from the polynomial in the square brackets:
formula_62
For simplicity's sake, allow the coefficients and constant of the polynomial to be denoted as formula_63:
formula_64
Using the inductive hypothesis, the polynomial in the square brackets can be rewritten as:
formula_65
Using the distributive property:
formula_66
After expanding and collecting like terms:
formula_67
The inductive hypothesis holds true for formula_68, therefore it must be true formula_69
Conclusion:
formula_70
Dividing both sides by formula_71 proves Vieta's formulas.
History.
As reflected in the name, the formulas were discovered by the 16th-century French mathematician François Viète, for the case of positive roots.
In the opinion of the 18th-century British mathematician Charles Hutton, as quoted by Funkhouser, the general principle (not restricted to positive real roots) was first understood by the 17th-century French mathematician Albert Girard:
...[Girard was] the first person who understood the general doctrine of the formation of the coefficients of the powers from the sum of the roots and their products. He was the first who discovered the rules for summing the powers of the roots of any equation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P(x) = a_n x^n + a_{n-1}x^{n-1} + \\cdots + a_1 x + a_0"
},
{
"math_id": 1,
"text": "\\sum_{1\\le i_1 < i_2 < \\cdots < i_k\\le n} \\left(\\prod_{j = 1}^k r_{i_j}\\right)=(-1)^k\\frac{a_{n-k}}{a_n}"
},
{
"math_id": 2,
"text": "a_i/a_n"
},
{
"math_id": 3,
"text": "a_n"
},
{
"math_id": 4,
"text": "r_i"
},
{
"math_id": 5,
"text": "P(x)"
},
{
"math_id": 6,
"text": "a_n(x-r_1)(x-r_2)\\dots(x-r_n)"
},
{
"math_id": 7,
"text": "P(x) = x^2-1"
},
{
"math_id": 8,
"text": "r_1=1"
},
{
"math_id": 9,
"text": "r_2=3"
},
{
"math_id": 10,
"text": "P(x)\\neq (x-1)(x-3)"
},
{
"math_id": 11,
"text": "(x-1)(x-7)"
},
{
"math_id": 12,
"text": "(x-3)(x-5)"
},
{
"math_id": 13,
"text": "r_2=7"
},
{
"math_id": 14,
"text": "r_1=3"
},
{
"math_id": 15,
"text": "r_2=5"
},
{
"math_id": 16,
"text": "r_1, r_2"
},
{
"math_id": 17,
"text": "P(x) = ax^2 + bx + c"
},
{
"math_id": 18,
"text": " r_1 + r_2 = -\\frac{b}{a}, \\quad r_1 r_2 = \\frac{c}{a}."
},
{
"math_id": 19,
"text": "r_1, r_2, r_3"
},
{
"math_id": 20,
"text": "P(x) = ax^3 + bx^2 + cx + d"
},
{
"math_id": 21,
"text": " r_1 + r_2 + r_3 = -\\frac{b}{a}, \\quad r_1 r_2 + r_1 r_3 + r_2 r_3 = \\frac{c}{a}, \\quad r_1 r_2 r_3 = -\\frac{d}{a}."
},
{
"math_id": 22,
"text": "a_n x^n + a_{n-1}x^{n-1} +\\cdots + a_1 x+ a_0 = a_n (x-r_1) (x-r_2) \\cdots (x-r_n)"
},
{
"math_id": 23,
"text": "r_1, r_2, \\dots, r_n"
},
{
"math_id": 24,
"text": "x."
},
{
"math_id": 25,
"text": "(x-r_1) (x-r_2) \\cdots (x-r_n),"
},
{
"math_id": 26,
"text": "(-1)^{n-k}r_1^{b_1}\\cdots r_n^{b_n} x^k,"
},
{
"math_id": 27,
"text": "b_i"
},
{
"math_id": 28,
"text": "x^k"
},
{
"math_id": 29,
"text": "2^n"
},
{
"math_id": 30,
"text": "r_i."
},
{
"math_id": 31,
"text": "f(x) = a_2x^2 + a_1x + a_0 = a_2(x - r_1)(x - r_2) = a_2(x^2 - x(r_1 + r_2) + r_1 r_2)."
},
{
"math_id": 32,
"text": "x"
},
{
"math_id": 33,
"text": "a_2=a_2"
},
{
"math_id": 34,
"text": "a_1=-a_2 (r_1+r_2) "
},
{
"math_id": 35,
"text": " a_0 = a_2 (r_1r_2) "
},
{
"math_id": 36,
"text": " r_1+r_2 = - a_1/a_2 "
},
{
"math_id": 37,
"text": " r_1r_2 = a_0/a_2 "
},
{
"math_id": 38,
"text": "n=2"
},
{
"math_id": 39,
"text": "{P(x)}"
},
{
"math_id": 40,
"text": "n"
},
{
"math_id": 41,
"text": "{r_1},{r_2},{\\dots},{r_n}"
},
{
"math_id": 42,
"text": "a_0,a_1,\\dots,a_n"
},
{
"math_id": 43,
"text": "{ a_n} \\neq 0"
},
{
"math_id": 44,
"text": "{P(x)} = {a_n}{x^n}+{{a_{n-1}}{x^{n-1}}}+{\\cdots}+{{a_{1}}{x}}+{{a}_{0}} =\n{{a_n}{x^{n}}}-{a_n}{({r_1}+{r_2}+{\\cdots}+{r_n}){x^{n-1}}}+{\\cdots}+\n{{(-1)^{n}}{ (a_n)}{({r_1}{r_2}{\\cdots}{r_n})}}"
},
{
"math_id": 45,
"text": "n = 2\n"
},
{
"math_id": 46,
"text": "{a_2},{a_1}"
},
{
"math_id": 47,
"text": "a_0\n"
},
{
"math_id": 48,
"text": "{r_1},{r_2}"
},
{
"math_id": 49,
"text": "{a_2 x^2}+{a_1 x} + a_0 = {a_2}{(x-r_1)(x-r_2)}"
},
{
"math_id": 50,
"text": "{a_2 x^2}+{a_1 x} + a_0 = {a_2}{({x^2}-{r_1x}-{r_2x}+{r_1}{r_2})}"
},
{
"math_id": 51,
"text": "{a_2 x^2}+{a_1 x} + a_0 = {a_2}{({x^2}-{({r_1}+{r_2}){x}}+{r_1}{r_2})}"
},
{
"math_id": 52,
"text": "{a_2 x^2}+{a_1 x} + a_0 = {{a_2}{x^2}-{{a_2}({r_1}+{r_2}){x}}+{a_2}{({r_1}{r_2})}}"
},
{
"math_id": 53,
"text": "n = 2"
},
{
"math_id": 54,
"text": "n\\geqslant 2"
},
{
"math_id": 55,
"text": "n+1\n"
},
{
"math_id": 56,
"text": "{P(x)} = {a_{n+1}}{x^{n+1}}+{{a_{n}}{x^{n}}}+{\\cdots}+{{a_{1}}{x}}+{{a}_{0}}"
},
{
"math_id": 57,
"text": "{(x-r_{n+1})}"
},
{
"math_id": 58,
"text": "P(x)\n"
},
{
"math_id": 59,
"text": "r_1,r_2,\\cdots,r_n"
},
{
"math_id": 60,
"text": "{P(x)} = {(x-r_{n+1})} {[{\\frac{{a_ {n+ 1}}{x^ {n+1}}+{{a_{n}}{x^{n}}}+{\\cdots}+{{a_{1}}{x}}+{{a}_{0}}}{x- r_{n +1}}}]}"
},
{
"math_id": 61,
"text": "a_{n+1}"
},
{
"math_id": 62,
"text": "{P(x)} ={(a_{n+{1}})}{(x-r_{n+1})}\n{[{\\frac{{x^ {n+1}}+\n{\\frac{{a_{n}} {x^{n}}}{(a_{n+{1}})}}+{\\cdots}+{\\frac {a_{1}}{(a_{n+{1}})} {x}}+\n{{\\frac{a_0}{{(a_{n+{1}})}}}}} \n{x- r_{n +1}}}]}"
},
{
"math_id": 63,
"text": "\\zeta"
},
{
"math_id": 64,
"text": "P(x) = {(a_ {n+1})}{(x-r_ {n+1})}{[{x^n}+{\\zeta_{n-1}x^{n-1}}+{\\cdots}+{\\zeta_0}]}"
},
{
"math_id": 65,
"text": "P(x) = {(a_ {n+1})} {(x-r_ {n+1})} {[{{x^{n}}}-{({r_1}+{r_2}+{\\cdots}+{r_n}){x^{n-1}}}+{\\cdots}+\n{{(-1)^{n}}{({r_1}{r_2}{\\cdots}{r_n})}}]}"
},
{
"math_id": 66,
"text": "P(x) = {(a_ {n+1})}{({x} {[{{x^{n}}}-{({r_1}+{r_2}+{\\cdots}+{r_n}){x^{n-1}}}+{\\cdots}+\n{{(-1)^{n}}{({r_1}{r_2}{\\cdots}{r_n})}}]} {- r_ {n+1}} {[{{x^{n}}}-{({r_1}+{r_2}+{\\cdots}+{r_n}){x^{n-1}}}+{\\cdots}+\n{{(-1)^{n}}{({r_1}{r_2}{\\cdots}{r_n})}}]} )}"
},
{
"math_id": 67,
"text": "\\begin{align}\n{P(x)} = {{a_{n+1}}{x^{n+1}}}-{a_{n+1}}{({r_1}+{r_2}+{\\cdots}+{r_n}+{r_{n+1}}){x^{n}}}+{\\cdots}+\n{{(-1)^{n+1}}{({r_1}{r_2}{\\cdots}{r_n}{r_{n+1}})}} \\\\\n\n\\end{align}"
},
{
"math_id": 68,
"text": "n+1"
},
{
"math_id": 69,
"text": "\\forall n \\in \\mathbb{N}"
},
{
"math_id": 70,
"text": "{a_ n}{x^n}+{{a_{n-1}}{x^{n-1}}}+{\\cdots}+{{a_{1}}{x}}+{{a}_{0}} =\n{{a_n}{x^{n}}}-{a_n}{({r_1}+{r_2}+{\\cdots}+{r_n}){x^{n-1}}}+{\\cdots}+\n{{(-1)^{n}}{({r_1}{r_2}{\\cdots}{r_n})}}"
},
{
"math_id": 71,
"text": "a_{n}"
}
]
| https://en.wikipedia.org/wiki?curid=714050 |
71410654 | Thom's second isotopy lemma | In mathematics, especially in differential topology, Thom's second isotopy lemma is a family version of Thom's first isotopy lemma; i.e., it states a family of maps between Whitney stratified spaces is locally trivial when it is a Thom mapping. Like the first isotopy lemma, the lemma was introduced by René Thom.
A sketch of the proof, as well as a simplified proof, can be found in the literature. Like the first isotopy lemma, the lemma also holds for the stratification with Bekka's condition (C), which is weaker than Whitney's condition (B).
Thom mapping.
Let formula_0 be a smooth map between smooth manifolds and formula_1 submanifolds such that formula_2 both have differential of constant rank. Then Thom's condition formula_3 is said to hold if, for each sequence formula_4 in "X" converging to a point "y" in "Y" and such that formula_5 converges to a plane formula_6 in the Grassmannian, we have formula_7
Let formula_8 be Whitney stratified closed subsets and formula_9 maps to some smooth manifold "Z" such that formula_10 is a map over "Z"; i.e., formula_11 and formula_12. Then formula_13 is called a Thom mapping if the following conditions hold:
The maps formula_14 are proper.
formula_15 is a submersion on each stratum of formula_16.
For each stratum "X" of "S", formula_17 lies in a stratum "Y" of formula_16, and formula_18 is a submersion.
The condition formula_3 holds for each pair of strata of formula_19.
Then Thom's second isotopy lemma says that a Thom mapping is locally trivial over "Z"; i.e., each point "z" of "Z" has a neighborhood "U" with homeomorphisms formula_20 over "U" such that formula_21.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f : M \\to N"
},
{
"math_id": 1,
"text": "X, Y \\subset M"
},
{
"math_id": 2,
"text": "f|_X, f|_Y"
},
{
"math_id": 3,
"text": "(a_f)"
},
{
"math_id": 4,
"text": "x_i"
},
{
"math_id": 5,
"text": "\\operatorname{ker}(d(f|_{X})_{x_i})"
},
{
"math_id": 6,
"text": "\\tau"
},
{
"math_id": 7,
"text": "\\operatorname{ker}(d(f|_Y)_y) \\subset \\tau."
},
{
"math_id": 8,
"text": "S \\subset M, S' \\subset N"
},
{
"math_id": 9,
"text": "p : S \\to Z, q : S' \\to Z"
},
{
"math_id": 10,
"text": "f : S \\to S'"
},
{
"math_id": 11,
"text": "f(S) \\subset S'"
},
{
"math_id": 12,
"text": "q \\circ f|_S = p"
},
{
"math_id": 13,
"text": "f"
},
{
"math_id": 14,
"text": "f|_S, q"
},
{
"math_id": 15,
"text": "q"
},
{
"math_id": 16,
"text": "S'"
},
{
"math_id": 17,
"text": "f(X)"
},
{
"math_id": 18,
"text": "f : X \\to Y"
},
{
"math_id": 19,
"text": "S"
},
{
"math_id": 20,
"text": "h_1 : p^{-1}(z) \\times U \\to p^{-1}(U), h_2 : q^{-1}(z) \\times U \\to q^{-1}(U)"
},
{
"math_id": 21,
"text": "f \\circ h_1 = h_2 \\circ (f|_{p^{-1}(z)} \\times \\operatorname{id})"
}
]
| https://en.wikipedia.org/wiki?curid=71410654 |
71415201 | Job 2 | Job 2 is the second chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter belongs to the prologue of the book,comprising Job 1:1–2:13.
Text.
The original text is written in Hebrew language. This chapter is divided into 13 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
Within the structure of the book, chapters 1 and 2 are grouped as "the Prologue" with the following outline:
The whole section precedes the following parts of the book:
The Prologue consists of five scenes in prose form (1:1–5; 1:6–12; 1:13–22; 2:1–6; 2:7–13 (3:1)) — alternating between earth and heaven — which introduce the main characters and the theological issue to be explored.
Second conversation (2:1–6).
The passage describes the conversation in the second heavenly court which is very similar to the first one. From verse 1 to the middle of verse 3, the narrative practically repeats Job 1:7–8, except for the addition of three Hebrew words at the end of verse 2:1 (, "lə-hiṯ-yaṣ-ṣêḇ ‘al- YHWH", translated: "to present himself before YHWH") and the difference in the Hebrew word used for "from where" (, "mê-’a-yin", in 1:7; , "’ê miz-zeh", in 2:2). It is indicated that after the series of calamities on his possession and children, Job continues to "cling to" his integrity, basically maintaining all his commendable personal qualities. YHWH states that he has been "incited" to "ruin" Job "without any reason", acknowledging that YHWH is accountable and responsible, but mainly also inviting the Adversary to concede that Job passed the test. The Adversary responds that the test did not go far enough, using the phrase "skin for skin" () to make the exchange equal by including all that a man would give up to save his own skin. YHWH permits the Adversary to proceed with the second test, to touch ("harm" or "strike") Job's "flesh and bone" but not Job's "life". Thereafter God will not speak again until chapter 38.
"The Lord said to the Adversary, "Have you considered My servant Job, that there is none like him on the earth, a blameless and an upright man, who fears God and avoids evil? He still holds fast his integrity, although you moved Me against him, to destroy him without cause.""
"The Lord said to the Adversary, “Very well, he is in your hand, but spare his life.”"
Verse 6.
This verse shows that YHWH is sovereign over the Adversary by putting limits on how far the action against Job may go.
Affliction of Job and the Arrival of Counselors (2:7–13).
The first part of the section (verses 7–8) describes the second attack by the Adversary on Job, which adds a negative aspect, by afflicting the physical pain of 'ghastly sores', to the removal of the positives in Job's life with the first attack. The words of Job's wife elicit a spoken response from Job about the second attack (verses 9–10). The arrival of Job's three friends, their mourning and silence, leaving Job to speak first, sets the stage for the subsequent poetic dialogues (chapters 3 to 42).
"Therefore, the Adversary went out from the presence of the Lord, and he afflicted Job with severe sores from the sole of his foot to the top of his head."
"Job took a piece of broken pottery to scrape himself, and he sat in ashes in misery."
"Then his wife said to him, "Do you still hold fast your integrity? Curse God and die.""
Verse 9.
The words of Job's wife can be interpreted as suggesting a "theological method of committing suicide", that is, urging Job 'to put him out of misery by doing the one forbidden thing ("cursing God") that will ensure his immediate destruction and end his endless agony" according to the traditional 'doctrine of (divine) retribution'.
The Greek Septuagint has a longer reading with the phrase "when a long time had passed" in the beginning of the verse and the speech of Job's wife: "How long will you hold out, saying, 'Behold, I wait yet a little while, expecting the hope of my deliverance?' for behold, your memorial is abolished from the earth, even your sons and daughters, the pangs and pains of my womb which I bore in vain with sorrows, and you yourself sit down to spend the night in the open air among the corruption of worms, and I am a wanderer and a servant from place to place and house to house, waiting for the setting sun, that I may rest from my labors and pains that now beset me, but say some word against the Lord and die."
"And they sat with him on the ground seven days and seven nights, and no one spoke a word to him, for they saw that his suffering was very great."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=71415201 |
71415712 | Job 6 | Job 6 is the sixth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –.
Text.
The original text is written in Hebrew language. This chapter is divided into 27 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008).
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The structure of the book is as follows:
Within the structure, chapter 6 is grouped into the Dialogue section with the following outline:
The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapters 6 and 7 record Job's response after the first speech of Eliphaz (in chapters 4 and 5), which can be divided into two main sections:
The pattern of first speaking to the friends and then turning to God is typical of Job throughout the dialogue.
In chapter 6, the introduction (verse 1) and a sketch or outline of Job's complaint (verses 2–7) are followed by Job's request (verses 8–13) and his rebuke of the friends' failure to care for him (verses 14–23), then concluded with a challenge addressed to the friends (verses 24–30). The main purpose of chapter 6 is "to point out that the friend's explanation of Job's current plight in the light of tradition is insensitive and amounts to deception".
Job's outline of complaints and requests (6:1–13).
Job's response (from the verb in verse 1) might not necessarily answer every matter raised by Eliphaz. First, Job requests that his 'angst and suffering' be taken seriously, that is, that both be properly weighed (an intensive expression) together to demonstrate their excessiveness against what is right (verses 2–3), fitting with the call for vindication in verse 29. Secondly, with the metaphors of arrows aiming at him and the description of donkeys and oxen to be fed (verses 4–6), Job believes that God is in total control, even as Job is still crying out for an answer. Lastly, Job seems to view Eliphaz's words as bland, tasteless, and missing the point of Job's anguish, like "tasteless food without salt" (verse 7). In verses 8–13 Job states to his friends that he longs for God to finish his life, but in his petition he keeps his faith that God is the one in control; Job does not reduce the size of God's power nor deny God and His words.
[Job said:] "2"Oh that my grief were throughly weighed, and my calamity laid in the balances together!"
3"For now it would be heavier than the sand of the sea: therefore my words are swallowed up.""
Job rebukes and challenges his friends (6:14–30).
In this section Job criticizes his friends, from whom he hoped to receive support but who failed to provide it. Job alludes to Eliphaz's words to let the fear of God be Job's ground of confidence (Job 4:6) and turns it around by saying that Eliphaz's speech is actually abandoning the fear of God. In verse 21, Job addresses all friends (using the plural word for "you", although until now only Eliphaz has spoken) that they have seen his situation and are afraid – perhaps afraid that it might also happen to them or that it would challenge their core belief in retribution. Therefore, Job challenges them to teach or correct him, if they can, by giving him explanation, not condemnation (verses 24–30). Job maintains that he is a person of integrity and asks his friends twice to "turn" ("repent" or "change in direction") or reconsider their thought process. Verse 30 contains two rhetorical questions that answer "no" to the issue raised by the Adversary in Job 1:9, whether Job would fear God for nothing or whether Job's faith is based on self-interest.
[Job said:] "Is there iniquity in my tongue?"
"cannot my taste discern perverse things?"
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=71415712 |
71415714 | Job 13 | 13th chapter in the biblical Book of Job part of the Old Testament
Job 13 is the thirteenth chapter of the Book of Job in the Hebrew Bible or the Old Testament of the Christian Bible. The book is anonymous; most scholars believe it was written around 6th century BCE. This chapter records the speech of Job, which belongs to the Dialogue section of the book, comprising –.
Text.
The original text is written in Hebrew language. This chapter is divided into 28 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text, which includes the Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q100 (4QJobb; 50–1 BCE) with extant verse 4 and 4Q101 (4QpaleoJobc; 250–150 BCE) with extant verses 18–27.
There is also a translation into Koine Greek known as the Septuagint, made in the last few centuries BC; some extant ancient manuscripts of this version include Codex Vaticanus (B; formula_0B; 4th century), Codex Sinaiticus (S; BHK: formula_0S; 4th century), and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The structure of the book is as follows:
Within the structure, chapter 13 is grouped into the Dialogue section with the following outline:
The Dialogue section is composed in the format of poetry with distinctive syntax and grammar. Chapters 12 to 14 contain Job's closing speech of the first round, where he directly addresses his friends (12:2–3; 13:2, 4–12).
Job addresses his friends (13:1–19).
Verse 1 opens with Job summing up his speech in chapter 12 before he addresses his friends in verses 2–12, contrasting Job's stance ("but I", verse 3) and his friends' ("but you", verse 4). Job calls for silence from his friends (verses 5, 13) as he wants to 'boldly pursue truth as he comes before God'. Although Job was afraid to approach God (verses 13b–14, also verse 21), he would press for litigation, knowing the risk and yet holding on to the hope of vindication (as in chapter 14).
[Job said:] "Though He slay me, yet will I trust in Him,"
"but I will defend my own ways before Him."
Job addresses God (13:20–28).
At verse 20, Job switches his address to God, who can give and withhold a solution to his problems. Verses 20–27 can be classified as a lament, outlining what Job wants God to address: the number of his sins that would warrant the extent of the punishments he has received. The closing remark is an imagery of a person without dignity, rotting away or destroyed by moths.
[Job said:] "For You write bitter things against me"
"and make me inherit the iniquities of my youth."
Verse 26.
Job acknowledges that he committed sins in his youth (or 'youthful years'; cf. Psalm 25:7), but he had doubtless confessed them before and now wonders if his suffering is the long-delayed punishment for those past sins, which God has recorded and remembered. In Job 31:35, Job will use the same metaphor that he writes and signs his confession and places his case in God's hands.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=71415714 |
714163 | Cross-correlation | Covariance and correlation
In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a "sliding dot product" or "sliding inner-product". It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology. The cross-correlation is similar in nature to the convolution of two functions. In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero, and its size will be the signal energy.
In probability and statistics, the term "cross-correlations" refers to the correlations between the entries of two random vectors formula_1 and formula_2, while the "correlations" of a random vector formula_1 are the correlations between the entries of formula_1 itself, those forming the correlation matrix of formula_1. If each of formula_1 and formula_2 is a scalar random variable which is realized repeatedly in a time series, then the correlations of the various temporal instances of formula_1 are known as "autocorrelations" of formula_1, and the cross-correlations of formula_1 with formula_2 across time are temporal cross-correlations. In probability and statistics, the definition of correlation always includes a standardising factor in such a way that correlations have values between −1 and +1.
If formula_3 and formula_4 are two independent random variables with probability density functions formula_5 and formula_6, respectively, then the probability density of the difference formula_7 is formally given by the cross-correlation (in the signal-processing sense) formula_0; however, this terminology is not used in probability and statistics. In contrast, the convolution formula_8 (equivalent to the cross-correlation of formula_9 and formula_10) gives the probability density function of the sum formula_11.
Cross-correlation of deterministic signals.
For continuous functions formula_5 and formula_6, the cross-correlation is defined as:
formula_12
which is equivalent to
formula_13
where formula_14 denotes the complex conjugate of formula_9, and formula_15 is called "displacement" or "lag." For highly-correlated formula_5 and formula_6 which have a maximum cross-correlation at a particular formula_15, a feature in formula_5 at formula_16 also occurs later in formula_6 at formula_17, hence formula_6 could be described to "lag" formula_5 by formula_15.
If formula_5 and formula_6 are both continuous periodic functions of period formula_18, the integration from formula_19 to formula_20 is replaced by integration over any interval formula_21 of length formula_18:
formula_22
which is equivalent to
formula_23
Similarly, for discrete functions, the cross-correlation is defined as:
formula_24
which is equivalent to:
formula_25
For finite discrete functions formula_26, the (circular) cross-correlation is defined as:
formula_27
which is equivalent to:
formula_28
For finite discrete functions formula_29, formula_30, the kernel cross-correlation is defined as:
formula_31
where formula_32 is a vector of kernel functions formula_33 and formula_34 is an affine transform.
Specifically, formula_35 can be circular translation transform, rotation transform, or scale transform, etc. The kernel cross-correlation extends cross-correlation from linear space to kernel space. Cross-correlation is equivariant to translation; kernel cross-correlation is equivariant to any affine transforms, including translation, rotation, and scale, etc.
Explanation.
As an example, consider two real valued functions formula_5 and formula_6 differing only by an unknown shift along the x-axis. One can use the cross-correlation to find how much formula_6 must be shifted along the x-axis to make it identical to formula_5. The formula essentially slides the formula_6 function along the x-axis, calculating the integral of their product at each position. When the functions match, the value of formula_36 is maximized. This is because when peaks (positive areas) are aligned, they make a large contribution to the integral. Similarly, when troughs (negative areas) align, they also make a positive contribution to the integral because the product of two negative numbers is positive.
With complex-valued functions formula_5 and formula_6, taking the conjugate of formula_5 ensures that aligned peaks (or aligned troughs) with imaginary components will contribute positively to the integral.
In econometrics, lagged cross-correlation is sometimes referred to as cross-autocorrelation.
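As a concrete illustration of the sliding-product idea, the following Python sketch embeds a short template in a noisy synthetic signal and locates it with NumPy's discrete cross-correlation; the signal, template, and noise level are invented for the example:
```python
# Sketch: locating a short, known feature in a longer noisy signal by
# sliding cross-correlation (synthetic data, fixed random seed).
import numpy as np

rng = np.random.default_rng(0)
feature = np.array([0.0, 1.0, 2.0, 1.0, 0.0])   # known template
signal = rng.normal(0.0, 0.1, 100)               # background noise
signal[40:45] += feature                          # embed the feature at index 40

# 'valid' mode slides the template along the signal; the peak of the
# cross-correlation marks the offset where the two align best.
xcorr = np.correlate(signal, feature, mode="valid")
print(int(np.argmax(xcorr)))                      # expected: 40
```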
Cross-correlation of random vectors.
Definition.
For random vectors formula_37 and formula_38, each containing random elements whose expected value and variance exist, the cross-correlation matrix of formula_1 and formula_2 is defined by
formula_39
and has dimensions formula_40. Written component-wise:
formula_41
The random vectors formula_1 and formula_2 need not have the same dimension, and either might be a scalar value.
Here formula_42 is the expectation value.
Example.
For example, if formula_43 and formula_44 are random vectors, then formula_45 is a formula_46 matrix whose formula_47-th entry is formula_48.
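A sample-based estimate of such a matrix is straightforward to compute. The following Python sketch draws synthetic samples (the particular distributions are arbitrary) and averages the outer products to approximate the cross-correlation matrix:
```python
# Sketch: estimating the cross-correlation matrix R_XY = E[X Y^T] from
# samples; the distributions chosen here are arbitrary examples.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 100_000
X = rng.normal(size=(3, n_samples))              # X = (X1, X2, X3)
Y = np.vstack([X[0] + X[1],                      # Y1 depends on X
               rng.normal(size=n_samples)])      # Y2 is independent noise

R_XY = (X @ Y.T) / n_samples     # sample average of the outer products
print(R_XY.shape)                # (3, 2): entry (i, j) approximates E[X_i Y_j]
print(np.round(R_XY, 2))
```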
Definition for complex random vectors.
If formula_49 and formula_50 are complex random vectors, each containing random variables whose expected value and variance exist, the cross-correlation matrix of formula_51 and formula_52 is defined by
formula_53
where formula_54 denotes Hermitian transposition.
Cross-correlation of stochastic processes.
In time series analysis and statistics, the cross-correlation of a pair of random processes is the correlation between values of the processes at different times, as a function of the two times. Let formula_55 be a pair of random processes, and formula_16 be any point in time (formula_16 may be an integer for a discrete-time process or a real number for a continuous-time process). Then formula_56 is the value (or realization) produced by a given run of the process at time formula_16.
Cross-correlation function.
Suppose that the process has means formula_57 and formula_58 and variances formula_59 and formula_60 at time formula_16, for each formula_16. Then the definition of the cross-correlation between times formula_61 and formula_62 is
formula_63
where formula_42 is the expected value operator. Note that this expression may not be defined.
Cross-covariance function.
Subtracting the mean before multiplication yields the cross-covariance between times formula_61 and formula_62:
formula_64
Note that this expression is not well-defined for all time series or processes, because the mean or variance may not exist.
Definition for wide-sense stationary stochastic process.
Let formula_55 represent a pair of stochastic processes that are jointly wide-sense stationary. Then the cross-covariance function and the cross-correlation function are given as follows.
Cross-correlation function.
formula_65 or equivalently formula_66
Cross-covariance function.
formula_67 or equivalently formula_68
where formula_69 and formula_70 are the mean and standard deviation of the process formula_71, which are constant over time due to stationarity; and similarly for formula_72, respectively. formula_73 indicates the expected value. That the cross-covariance and cross-correlation are independent of formula_16 is precisely the additional information (beyond being individually wide-sense stationary) conveyed by the requirement that formula_55 are "jointly" wide-sense stationary.
The cross-correlation of a pair of jointly wide sense stationary stochastic processes can be estimated by averaging the product of samples measured from one process and samples measured from the other (and its time shifts). The samples included in the average can be an arbitrary subset of all the samples in the signal (e.g., samples within a finite time window or a sub-sampling of one of the signals). For a large number of samples, the average converges to the true cross-correlation.
Normalization.
It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the cross-correlation function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "cross-correlation" and "cross-covariance" are used interchangeably.
The definition of the normalized cross-correlation of a stochastic process is
formula_74
If the function formula_75 is well-defined, its value must lie in the range formula_76, with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.
For jointly wide-sense stationary stochastic processes, the definition is
formula_77
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations.
Properties.
Symmetry property.
For jointly wide-sense stationary stochastic processes, the cross-correlation function has the following symmetry property:
formula_78
Respectively for jointly WSS processes:
formula_79
Time delay analysis.
Cross-correlations are useful for determining the time delay between two signals, e.g., for determining time delays for the propagation of acoustic signals across a microphone array. After calculating the cross-correlation between the two signals, the maximum (or minimum if the signals are negatively correlated) of the cross-correlation function indicates the point in time where the signals are best aligned; i.e., the time delay between the two signals is determined by the argument of the maximum, or arg max of the cross-correlation, as in
formula_80
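A minimal sketch of this procedure in Python, using synthetic signals and an arbitrary sample delay, is shown below; the lag grid follows NumPy's 'full' correlation convention:
```python
# Sketch: estimating the delay between two signals from the arg max of
# their full cross-correlation (synthetic data, fixed random seed).
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=500)
delay = 12
y = np.roll(x, delay) + rng.normal(0.0, 0.05, size=500)   # y lags x by 12 samples

xcorr = np.correlate(y, x, mode="full")        # lags run from -(N-1) to N-1
lags = np.arange(-len(x) + 1, len(x))
print(lags[np.argmax(xcorr)])                  # expected: 12
```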
Terminology in image processing.
Zero-normalized cross-correlation (ZNCC).
For image-processing applications in which the brightness of the image and template can vary due to lighting and exposure conditions, the images can be first normalized. This is typically done at every step by subtracting the mean and dividing by the standard deviation. That is, the cross-correlation of a template formula_81 with a subimage formula_82 is
formula_83
where formula_84 is the number of pixels in formula_81 and formula_82,
formula_85 is the average of formula_5 and formula_86 is the standard deviation of formula_5.
In functional analysis terms, this can be thought of as the dot product of two normalized vectors. That is, if
formula_87
and
formula_88
then the above sum is equal to
formula_89
where formula_90 is the inner product and formula_91 is the "L"² norm. Cauchy–Schwarz then implies that ZNCC has a range of formula_92.
Thus, if formula_5 and formula_16 are real matrices, their normalized cross-correlation equals the cosine of the angle between the unit vectors formula_93 and formula_18, being thus formula_94 if and only if formula_93 equals formula_18 multiplied by a positive scalar.
Normalized correlation is one of the methods used for template matching, a process used for finding instances of a pattern or object within an image. It is also the 2-dimensional version of Pearson product-moment correlation coefficient.
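The following Python sketch implements the ZNCC of a template with a same-sized patch along the lines of the formula above; the inputs are assumed to be equal-shaped arrays with non-zero standard deviation, and the test arrays are invented for the example:
```python
# Sketch: zero-normalized cross-correlation of a template with an equal-sized
# patch. Inputs are assumed to be 2-D arrays of the same shape with non-zero
# standard deviation.
import numpy as np

def zncc(patch, template):
    f = (patch - patch.mean()) / patch.std()
    t = (template - template.mean()) / template.std()
    return float(np.mean(f * t))   # lies in [-1, 1]

template = np.array([[0.0, 1.0, 0.0],
                     [1.0, 4.0, 1.0],
                     [0.0, 1.0, 0.0]])

print(zncc(2.0 * template + 5.0, template))  # ~ 1.0: brightness/contrast invariant
print(zncc(-template, template))             # ~ -1.0
```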
Normalized cross-correlation (NCC).
NCC is similar to ZNCC with the only difference of not subtracting the local mean value of intensities:
formula_95
Nonlinear systems.
Caution must be applied when using cross correlation for nonlinear systems. In certain circumstances, which depend on the properties of the input, cross correlation between the input and output of a system with nonlinear dynamics can be completely blind to certain nonlinear effects. This problem arises because some quadratic moments can equal zero and this can incorrectly suggest that there is little "correlation" (in the sense of statistical dependence) between two signals, when in fact the two signals are strongly related by nonlinear dynamics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f \\star g"
},
{
"math_id": 1,
"text": "\\mathbf{X}"
},
{
"math_id": 2,
"text": "\\mathbf{Y}"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "g"
},
{
"math_id": 7,
"text": "Y - X"
},
{
"math_id": 8,
"text": "f * g"
},
{
"math_id": 9,
"text": "f(t)"
},
{
"math_id": 10,
"text": "g(-t)"
},
{
"math_id": 11,
"text": "X + Y"
},
{
"math_id": 12,
"text": "(f \\star g)(\\tau)\\ \\triangleq \\int_{-\\infty}^{\\infty} \\overline{f(t)} g(t+\\tau)\\,dt"
},
{
"math_id": 13,
"text": "(f \\star g)(\\tau)\\ \\triangleq \\int_{-\\infty}^{\\infty} \\overline{f(t-\\tau)} g(t)\\,dt"
},
{
"math_id": 14,
"text": "\\overline{f(t)}"
},
{
"math_id": 15,
"text": "\\tau"
},
{
"math_id": 16,
"text": "t"
},
{
"math_id": 17,
"text": "t+\\tau"
},
{
"math_id": 18,
"text": "T"
},
{
"math_id": 19,
"text": "-\\infty"
},
{
"math_id": 20,
"text": "\\infty"
},
{
"math_id": 21,
"text": "[t_0,t_0+T]"
},
{
"math_id": 22,
"text": "(f \\star g)(\\tau)\\ \\triangleq \\int_{t_0}^{t_0+T} \\overline{f(t)} g(t + \\tau)\\,dt"
},
{
"math_id": 23,
"text": "(f \\star g)(\\tau)\\ \\triangleq \\int_{t_0}^{t_0+T} \\overline{f(t-\\tau)} g(t)\\,dt"
},
{
"math_id": 24,
"text": "(f \\star g)[n]\\ \\triangleq \\sum_{m=-\\infty}^{\\infty} \\overline{f[m]} g[m+n]"
},
{
"math_id": 25,
"text": "(f \\star g)[n]\\ \\triangleq \\sum_{m=-\\infty}^{\\infty} \\overline{f[m - n]} g[m]"
},
{
"math_id": 26,
"text": "f,g\\in\\mathbb{C}^N"
},
{
"math_id": 27,
"text": "(f \\star g)[n]\\ \\triangleq \\sum_{m=0}^{N-1} \\overline{f[m]} g[(m+n)_{\\text{mod}~N}]"
},
{
"math_id": 28,
"text": "(f \\star g)[n]\\ \\triangleq \\sum_{m=0}^{N-1} \\overline{f[(m-n)_{\\text{mod}~N}]} g[m]"
},
{
"math_id": 29,
"text": "f\\in\\mathbb{C}^N"
},
{
"math_id": 30,
"text": "g\\in\\mathbb{C}^M"
},
{
"math_id": 31,
"text": "(f \\star g)[n]\\ \\triangleq \\sum_{m=0}^{N-1} \\overline{f[m]} K_g[(m+n)_{\\text{mod}~N}]"
},
{
"math_id": 32,
"text": "K_g = [k(g, T_0(g)), k(g, T_1(g)), \\dots, k(g, T_{N-1}(g))]"
},
{
"math_id": 33,
"text": "k(\\cdot, \\cdot)\\colon \\mathbb{C}^M \\times \\mathbb{C}^M \\to \\mathbb{R}"
},
{
"math_id": 34,
"text": "T_i(\\cdot)\\colon \\mathbb{C}^M \\to \\mathbb{C}^M"
},
{
"math_id": 35,
"text": "T_i(\\cdot)"
},
{
"math_id": 36,
"text": "(f\\star g)"
},
{
"math_id": 37,
"text": "\\mathbf{X} = (X_1,\\ldots,X_m)"
},
{
"math_id": 38,
"text": "\\mathbf{Y} = (Y_1,\\ldots,Y_n)"
},
{
"math_id": 39,
"text": "\\operatorname{R}_{\\mathbf{X}\\mathbf{Y}} \\triangleq\\ \\operatorname{E}\\left[\\mathbf{X} \\mathbf{Y}\\right]"
},
{
"math_id": 40,
"text": "m \\times n"
},
{
"math_id": 41,
"text": "\\operatorname{R}_{\\mathbf{X}\\mathbf{Y}} =\n\\begin{bmatrix}\n\\operatorname{E}[X_1 Y_1] & \\operatorname{E}[X_1 Y_2] & \\cdots & \\operatorname{E}[X_1 Y_n] \\\\ \\\\\n\\operatorname{E}[X_2 Y_1] & \\operatorname{E}[X_2 Y_2] & \\cdots & \\operatorname{E}[X_2 Y_n] \\\\ \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\ \\\\\n\\operatorname{E}[X_m Y_1] & \\operatorname{E}[X_m Y_2] & \\cdots & \\operatorname{E}[X_m Y_n]\n\\end{bmatrix}\n"
},
{
"math_id": 42,
"text": "\\operatorname{E}"
},
{
"math_id": 43,
"text": "\\mathbf{X} = \\left( X_1,X_2,X_3 \\right)"
},
{
"math_id": 44,
"text": "\\mathbf{Y} = \\left( Y_1,Y_2 \\right)"
},
{
"math_id": 45,
"text": "\\operatorname{R}_{\\mathbf{X}\\mathbf{Y}}"
},
{
"math_id": 46,
"text": "3 \\times 2"
},
{
"math_id": 47,
"text": "(i,j)"
},
{
"math_id": 48,
"text": "\\operatorname{E}[X_i Y_j]"
},
{
"math_id": 49,
"text": "\\mathbf{Z} = (Z_1,\\ldots,Z_m)"
},
{
"math_id": 50,
"text": "\\mathbf{W} = (W_1,\\ldots,W_n)"
},
{
"math_id": 51,
"text": "\\mathbf{Z}"
},
{
"math_id": 52,
"text": "\\mathbf{W}"
},
{
"math_id": 53,
"text": "\\operatorname{R}_{\\mathbf{Z}\\mathbf{W}} \\triangleq\\ \\operatorname{E}[\\mathbf{Z} \\mathbf{W}^{\\rm H}]"
},
{
"math_id": 54,
"text": "{}^{\\rm H}"
},
{
"math_id": 55,
"text": "(X_t, Y_t)"
},
{
"math_id": 56,
"text": "X_t"
},
{
"math_id": 57,
"text": "\\mu_X(t)"
},
{
"math_id": 58,
"text": "\\mu_Y(t)"
},
{
"math_id": 59,
"text": "\\sigma_X^2(t)"
},
{
"math_id": 60,
"text": "\\sigma_Y^2(t)"
},
{
"math_id": 61,
"text": "t_1"
},
{
"math_id": 62,
"text": "t_2"
},
{
"math_id": 63,
"text": "\\operatorname{R}_{XY}(t_1, t_2) \\triangleq\\ \\operatorname{E}\\left[X_{t_1} \\overline{Y_{t_2}}\\right]"
},
{
"math_id": 64,
"text": "\\operatorname{K}_{XY}(t_1, t_2) \\triangleq\\ \\operatorname{E}\\left[\\left(X_{t_1} - \\mu_X(t_1)\\right)\\overline{(Y_{t_2} - \\mu_Y(t_2))}\\right]"
},
{
"math_id": 65,
"text": "\\operatorname{R}_{XY}(\\tau) \\triangleq\\ \\operatorname{E}\\left[X_t \\overline{Y_{t+\\tau}}\\right]"
},
{
"math_id": 66,
"text": "\\operatorname{R}_{XY}(\\tau) = \\operatorname{E}\\left[X_{t-\\tau} \\overline{Y_{t}}\\right]"
},
{
"math_id": 67,
"text": "\\operatorname{K}_{XY}(\\tau) \\triangleq\\ \\operatorname{E}\\left[\\left(X_t - \\mu_X\\right)\\overline{\\left(Y_{t+\\tau} - \\mu_Y\\right)}\\right]"
},
{
"math_id": 68,
"text": "\\operatorname{K}_{XY}(\\tau) = \\operatorname{E}\\left[\\left(X_{t-\\tau} - \\mu_X\\right)\\overline{\\left(Y_{t} - \\mu_Y\\right)}\\right]"
},
{
"math_id": 69,
"text": "\\mu_X"
},
{
"math_id": 70,
"text": "\\sigma_X"
},
{
"math_id": 71,
"text": "(X_t)"
},
{
"math_id": 72,
"text": "(Y_t)"
},
{
"math_id": 73,
"text": "\\operatorname{E}[\\ ]"
},
{
"math_id": 74,
"text": "\n \\rho_{XX}(t_1, t_2) =\n \\frac{\\operatorname{K}_{XX}(t_1, t_2)}{\\sigma_X(t_1)\\sigma_X(t_2)} =\n \\frac{\\operatorname{E}\\left[\\left(X_{t_1} - \\mu_{t_1}\\right)\\overline{\\left(X_{t_2} - \\mu_{t_2}\\right)}\\right]}{\\sigma_X(t_1)\\sigma_X(t_2)}\n"
},
{
"math_id": 75,
"text": "\\rho_{XX}"
},
{
"math_id": 76,
"text": "[-1,1]"
},
{
"math_id": 77,
"text": "\n \\rho_{XY}(\\tau) =\n \\frac{\\operatorname{K}_{XY}(\\tau)}{\\sigma_X \\sigma_Y} =\n \\frac{\\operatorname{E}\\left[\\left(X_t - \\mu_X\\right) \\overline{\\left(Y_{t+\\tau} - \\mu_Y\\right)}\\right]}{\\sigma_X \\sigma_Y}\n"
},
{
"math_id": 78,
"text": "\\operatorname{R}_{XY}(t_1, t_2) = \\overline{\\operatorname{R}_{YX}(t_2, t_1)}"
},
{
"math_id": 79,
"text": "\\operatorname{R}_{XY}(\\tau) = \\overline{\\operatorname{R}_{YX}(-\\tau)}"
},
{
"math_id": 80,
"text": "\\tau_\\mathrm{delay}=\\underset{t \\in \\mathbb{R}}{\\operatorname{arg\\,max}}((f \\star g)(t))"
},
{
"math_id": 81,
"text": "t(x,y)"
},
{
"math_id": 82,
"text": "f(x,y)"
},
{
"math_id": 83,
"text": "\\frac{1}{n} \\sum_{x,y}\\frac{1}{\\sigma_f \\sigma_t}\\left(f(x,y) - \\mu_f \\right)\\left(t(x,y) - \\mu_t \\right)"
},
{
"math_id": 84,
"text": "n"
},
{
"math_id": 85,
"text": "\\mu_f"
},
{
"math_id": 86,
"text": "\\sigma_f"
},
{
"math_id": 87,
"text": "F(x,y) = f(x,y) - \\mu_f"
},
{
"math_id": 88,
"text": "T(x,y) = t(x,y) - \\mu_t"
},
{
"math_id": 89,
"text": "\\left\\langle\\frac{F}{\\|F\\|},\\frac{T}{\\|T\\|}\\right\\rangle"
},
{
"math_id": 90,
"text": "\\langle\\cdot,\\cdot\\rangle"
},
{
"math_id": 91,
"text": "\\|\\cdot\\|"
},
{
"math_id": 92,
"text": "[-1, 1]"
},
{
"math_id": 93,
"text": "F"
},
{
"math_id": 94,
"text": "1"
},
{
"math_id": 95,
"text": "\\frac{1}{n} \\sum_{x,y}\\frac{1}{\\sigma_f \\sigma_t} f(x,y) t(x,y)"
}
]
| https://en.wikipedia.org/wiki?curid=714163 |
71420582 | Alfvén surface | Boundary between solar corona and wind
The Alfvén surface is the boundary separating a star's corona from the stellar wind defined as where the coronal plasma's Alfvén speed and the large-scale stellar wind speed are equal. It is named after Hannes Alfvén, and is also called Alfvén critical surface, Alfvén point, or Alfvén radius. In 2018, the Parker Solar Probe became the first spacecraft that crossed Alfvén surface of the Sun.
Definition.
Stars do not have a solid surface. However, they have a superheated atmosphere, made of solar material bound to the star by gravity and magnetic forces. The stellar corona extends far beyond the solar surface, or photosphere, and is considered the outer boundary of the star. It marks the transition to the solar wind, which moves through the planetary system. This limit is defined by the distance at which disturbances in the solar wind cannot propagate back to the solar surface. Those disturbances cannot propagate back towards a star if the outbound solar wind speed exceeds Mach one, the speed of 'sound' as defined for the solar wind. This distance forms an irregular 'surface' around a star that is called the Alfvén surface. It can also be described as the point where gravity and magnetic fields are too weak to contain the heat and pressure that push the material away from a star. This is the point where the solar atmosphere ends and where the solar wind begins.
Adhikari, Zank, & Zhao (2019) define the Alfvén surface as:
the location at which the large-scale bulk solar wind speed formula_0 and the Alfvén speed formula_1 are equal, and thus it separates sub-Alfvénic coronal flow |formula_0|≪|formula_1| from super-Alfvénic solar wind flow |formula_0|≫|formula_1|
DeForest, Howard, & McComas (2014) define the Alfvén surface as:
a natural boundary that marks the causal disconnection of individual packets of plasma and magnetic flux from the Sun itself. The Alfvén surface is the locus where the radial motion of the accelerating solar wind passes the radial Alfvén speed, and therefore any displacement of material cannot carry information back down into the corona. It is thus the natural outer boundary of the solar corona, and the inner boundary of interplanetary space.
The Alfvén surface separates the sub- and super-Alfvénic regimes of the stellar wind, which influence the structure of any magnetosphere/ionosphere around an orbiting planet in the system. Characterization of the Alfvén surface can serve as an inner boundary of the habitable zone of the star. The Alfvén surface is nominally found at 10–30 stellar radii.
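For orientation, the defining comparison between the wind speed and the Alfvén speed can be sketched numerically. The Python snippet below uses the standard expression for the Alfvén speed of a proton plasma; the field strength, density, and wind speed are order-of-magnitude illustrative values, not measurements:
```python
# Sketch: Alfvén speed and Alfvén Mach number for a proton plasma, using
# order-of-magnitude illustrative values (not measured solar-wind data).
import math

MU_0 = 4e-7 * math.pi     # vacuum permeability, N/A^2
M_PROTON = 1.6726e-27     # proton mass, kg

def alfven_speed(B_tesla, n_per_m3):
    """V_A = B / sqrt(mu_0 * rho) with rho = n * m_proton."""
    rho = n_per_m3 * M_PROTON
    return B_tesla / math.sqrt(MU_0 * rho)

B = 500e-9   # magnetic field strength, tesla (illustrative)
n = 3e8      # proton number density, per m^3 (illustrative)
U = 200e3    # bulk wind speed, m/s (illustrative)

V_A = alfven_speed(B, n)
print(f"V_A = {V_A / 1e3:.0f} km/s, Alfven Mach number U/V_A = {U / V_A:.2f}")
# A Mach number below 1 means the flow is sub-Alfvenic, i.e. inside the surface.
```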
Research.
Researchers were unsure exactly where the Alfvén critical surface of the Sun lay. Based on remote images of the corona, estimates had put it somewhere between 10 and 20 solar radii from the surface of the Sun. On April 28, 2021, during its eighth flyby of the Sun, NASA's Parker Solar Probe (PSP) encountered the specific magnetic and particle conditions at 18.8 solar radii that indicated that it penetrated the Alfvén surface; the probe measured the solar wind plasma environment with its FIELDS and SWEAP instruments. This event was described by NASA as "touching the Sun". During the flyby, Parker Solar Probe passed into and out of the corona several times. This proved the predictions that the Alfvén critical surface is not shaped like a smooth ball, but has spikes and valleys that wrinkle its surface.
At 09:33 UT on 28 April 2021 Parker Solar Probe entered the magnetized atmosphere of the Sun above the photosphere, crossing below the Alfvén critical surface for five hours into plasma in causal contact with the Sun with an Alfvén Mach number of 0.79 and magnetic pressure dominating both ion and electron pressure. Magnetic mapping suggests the region was a steady flow emerging on rapidly expanding coronal magnetic field lines lying above a pseudostreamer. The sub-Alfvénic nature of the flow may be due to suppressed magnetic reconnection at the base of the pseudostreamer, as evidenced by unusually low densities in this region and the magnetic mapping.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U"
},
{
"math_id": 1,
"text": "V_{\\text{A}}"
}
]
| https://en.wikipedia.org/wiki?curid=71420582 |
71428073 | Elementary modes | Elementary modes may be considered minimal realizable flow patterns through a biochemical network that can sustain a steady state. This means that elementary modes cannot be decomposed further into simpler pathways. All possible flows through a network can be constructed from linear combinations of the elementary modes.
The set of elementary modes for a given network is unique (up to an arbitrary scaling factor). Given the fundamental nature of elementary modes in relation to uniqueness and non-decomposability, the term `pathway' can be defined as an elementary mode. Note that the set of elementary modes will change as the set of expressed enzymes change during transitions from one cell state to another. Mathematically, the set of elementary modes is defined as the set of flux vectors, formula_0, that satisfy the steady state condition,
formula_1
where formula_2 is the stoichiometry matrix, formula_3 is the vector of rates, formula_4 the vector of steady state floating (or internal) species and formula_5, the vector of system parameters.
An important condition is that the rate of each irreversible reaction must be non-negative, formula_6.
A more formal definition is given by:
An elementary mode, formula_7, is defined as a vector of fluxes, formula_8, such that the three conditions listed in the following criteria are satisfied:
1. The vector satisfies the steady state condition; that is, the net rate of change of every internal species under the mode's fluxes is zero.
2. Every irreversible reaction carries a non-negative flux in the vector.
3. The vector is non-decomposable; that is, it cannot be expressed as a combination of other flux vectors that sustain a steady state using only the reactions already carrying non-zero flux in the mode.
Example.
Consider a simple branched pathway with all three steps irreversible. Such a pathway will admit two elementary modes, indicated by the thickened (or red) reaction lines in the accompanying figure.
Because both formula_11 and formula_12 are irreversible, an elementary mode lying on both these reactions is not possible, since it would mean one reaction going against its thermodynamic direction. Each mode in this system satisfies the three conditions described above. The first condition is steady state, that is, for each mode formula_7, it has to be true that formula_9.
Algebraically the two modes are given by:
formula_13
By substituting each of these vectors into formula_14, it is easy to show that condition one is satisfied. For condition two we must ensure that all reactions that are irreversible have positive entries in the corresponding elements of the elementary modes. Since all three reactions in the branch are irreversible and all entries in the elementary modes are positive, condition two is satisfied.
Finally, to satisfy condition three, we must ask whether we can decompose the two elementary modes into other paths that can sustain a steady state while using the same non-zero entries in the elementary mode. In this example, it is impossible to decompose the elementary modes any further without disrupting the ability to sustain a steady state. Therefore, with all three conditions satisfied, we can conclude that the two vectors shown above are elementary modes.
All possible flows through a network can be constructed from linear combinations of the elementary modes, that is:
formula_15
such that the entire space of flows through a network can be described. formula_16 must be greater than or equal to zero to ensure that irreversible steps aren't inadvertently made to go in the reverse direction. For example, the following is a possible steady-state flow in the branched pathway.
formula_17
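The reconstruction above can be reproduced with the same single-row stoichiometry assumption used in the previous sketch:

```python
import numpy as np

e1 = np.array([1, 1, 0])
e2 = np.array([1, 0, 1])
N = np.array([[1, -1, -1]])

# Non-negative combination with lambda_1 = 2.5 and lambda_2 = 0.5
v = 2.5 * e1 + 0.5 * e2
print(v)                       # [3.  2.5 0.5], matching the flow above
print(np.allclose(N @ v, 0))   # True: the combination still satisfies the steady-state condition
```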
If one of the outflow steps in the simple branched pathway is made reversible, an additional elementary mode becomes available, representing the flow between the two outflow branches. An additional mode emerges because, with only the first two modes, it is impossible to represent a flow between the two branches because the scaling factor, formula_16, cannot be negative (which would be required to reverse the flow).
Definition of a Pathway.
The Wikipedia page Metabolic pathway defines a pathway as "a metabolic pathway is a linked series of chemical reactions occurring within a cell". This means that any sequence of reactions can be labeled a metabolic pathway. However, as metabolism was being uncovered, groups of reactions were assigned specific labels, such as glycolysis, Krebs Cycle, or Serine biosynthesis. Often the categorization was based on common chemistry or identification of an input and output. For example, serine biosynthesis starts at 3-phosphoglycerate and ends at serine. This is a somewhat ad hoc means for defining pathways, particularly when pathways are dynamic structures, changing as environmental conditions result in changes in gene expression. For example, the Krebs cycle is often not cyclic as depicted in textbooks. In E. coli and other bacteria, it is only cyclic during aerobic growth on acetate or fatty acids. Instead, under anaerobiosis, its enzymes function as two distinct biosynthetic pathways producing succinyl-CoA and α-ketoglutarate.
It has therefore been proposed to define a pathway as either a single elementary mode or some combination of elementary modes. The added advantage is that the set of elementary modes is unique and non-decomposable into simpler pathways. A single elementary mode can therefore be thought of as an elementary pathway. Note that the set of elementary modes will change as the set of expressed enzymes changes during transitions from one cell state to another.
Elementary modes, therefore, provide an unambiguous definition of a pathway.
Comment on Condition Three.
Condition three relates to the non-decomposability of an elementary mode and is partly what makes elementary modes interesting. The two other important features as indicated before are pathway uniqueness and thermodynamic plausibility. Decomposition implies that it is possible to represent a mode as a combination of two or more other modes. For example, a mode formula_18 might be composed from two other modes, formula_19 and formula_20:
formula_21
If a mode can be decomposed, does it mean that the mode is not an elementary mode? Condition three provides a rule to determine whether a decomposition means that a given mode is an elementary mode or not. If it is only possible to decompose a given mode by introducing enzymes that are not used in the mode, then the mode is elementary. That is, is there more than one way to generate a pathway (i.e., something that can sustain a steady state) with the enzymes currently used in the mode? If so, then the mode is not elementary. To illustrate this subtle condition, consider the pathway shown below.
This pathway represents a stylized rendition of glycolysis. Steps three and six are reversible and correspond to triose phosphate isomerase and glycerol 3-phosphate dehydrogenase, respectively.
The network has four elementary flux modes, which are shown in the figure below.
The elementary flux mode vectors are shown below:
formula_22
Note that it is possible to have negative entries in the set of elementary modes because they will correspond to the reversible steps. Of interest is the observation that the fourth vector, formula_23 (where formula_24 represents the transpose) can be formed from the sum of the first and second vectors. This suggests that the fourth vector is not an elementary mode.
formula_25
However, this decomposition only works because we have introduced a new enzyme, formula_26 (triose phosphate isomerase), which is not used in formula_27. It is, in fact, impossible to decompose formula_27 into pathways that can sustain a steady state with only the five steps, formula_28, used in the elementary mode. We conclude therefore that formula_27 is an elementary mode.
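The decomposition argument can be verified directly from the flux vectors listed above; the sketch below simply checks that the sum of the first two modes reproduces the fourth while bringing in a reaction (the third entry) that the fourth mode itself does not use.

```python
import numpy as np

e1 = np.array([0, 0, 1, 1, 1, -1])
e2 = np.array([1, 1, -1, 0, 0, 2])
e4 = np.array([1, 1, 0, 1, 1, 1])

print(np.array_equal(e1 + e2, e4))                    # True: e4 = e1 + e2

steps_e4 = set((np.flatnonzero(e4) + 1).tolist())     # {1, 2, 4, 5, 6}
steps_decomp = set((np.flatnonzero(e1) + 1).tolist()) | set((np.flatnonzero(e2) + 1).tolist())
print(steps_decomp - steps_e4)                        # {3}: the decomposition needs an extra step
```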
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbf{v}"
},
{
"math_id": 1,
"text": " \\mathbf{N}\\ \\mathbf{v} (\\mathbf{x}, \\mathbf{p}) = 0"
},
{
"math_id": 2,
"text": "\\mathbf{N}"
},
{
"math_id": 3,
"text": "\\mathbf{v}"
},
{
"math_id": 4,
"text": " \\mathbf{x}"
},
{
"math_id": 5,
"text": "\\mathbf{p}"
},
{
"math_id": 6,
"text": "\\mathbf{v}_{irr} \\geq 0"
},
{
"math_id": 7,
"text": "\\mathbf{v}_i"
},
{
"math_id": 8,
"text": "v_1, v_2, \\ldots"
},
{
"math_id": 9,
"text": "\\mathbf{N} \\mathbf{v}_i = 0"
},
{
"math_id": 10,
"text": "v_i \\geq 0"
},
{
"math_id": 11,
"text": " v_2"
},
{
"math_id": 12,
"text": " v_3 "
},
{
"math_id": 13,
"text": "\n\\mathbf{v}_1 = \n\\begin{bmatrix}\n1 \\\\\n1 \\\\\n0\n\\end{bmatrix} \\quad\n\\mathbf{v}_2 =\n\\begin{bmatrix}\n1 \\\\\n0 \\\\\n1\n\\end{bmatrix}\n"
},
{
"math_id": 14,
"text": "\\mathbf{N} \\mathbf{v}_i = 0 "
},
{
"math_id": 15,
"text": "\n\\begin{align}\n\\mathbf{v} &= \\sum \\lambda_i \\mathbf{v}_i \\\\ \n\\mbox{where}& \\\\ \n\\lambda &\\geq 0 \\\\ \n\\end{align}\n"
},
{
"math_id": 16,
"text": "\\lambda_i"
},
{
"math_id": 17,
"text": " v = 2.5 \\begin{bmatrix} 1 \\\\ 1 \\\\ 0\\end{bmatrix} + 0.5 \\begin{bmatrix} 1 \\\\ 0 \\\\ 1 \\end{bmatrix} = \\begin{bmatrix} 3.0 \\\\ 2.5 \\\\ 0.5 \\end{bmatrix} "
},
{
"math_id": 18,
"text": "\\mathbf{e}_1"
},
{
"math_id": 19,
"text": "\\mathbf{e}_2"
},
{
"math_id": 20,
"text": "\\mathbf{e}_3"
},
{
"math_id": 21,
"text": " \\mathbf{e}_1 = \\lambda_1 \\mathbf{e}_2 + \\lambda_2 \\mathbf{e}_3 "
},
{
"math_id": 22,
"text": "\n\\begin{array}{l}\n\\qquad\\ \\mathbf{e}_1 \\quad \\ \\ \\mathbf{e}_2 \\quad \\mathbf{e}_3 \\quad \\mathbf{e}_4 \\\\\n\\begin{bmatrix}\n& \\phantom{-}0 & \\phantom{-}1 & \\phantom{-}1 & \\phantom{-}1 \\\\\n& \\phantom{-}0 & \\phantom{-}1 & \\phantom{-}1 & \\phantom{-}1 \\\\\n& \\phantom{-}1 & -1 & \\phantom{-}1 & \\phantom{-}0 \\\\\n& \\phantom{-}1 & \\phantom{-}0 & \\phantom{-}2 & \\phantom{-}1 \\\\\n& \\phantom{-}1 & \\phantom{-}0 & \\phantom{-}2 & \\phantom{-}1 \\\\\n& -1 & \\phantom{-}2 & \\phantom{-}0 & \\phantom{-}1 \\\\\n\\end{bmatrix} \\\\\n\\end{array}\n"
},
{
"math_id": 23,
"text": " e_4 = [ 1\\ 1\\ 0\\ 1\\ 1\\ 1\\ ]^T"
},
{
"math_id": 24,
"text": "T"
},
{
"math_id": 25,
"text": "\n \\mathbf{e}_4 \n\\begin{bmatrix}\n\\phantom{-}1 \\phantom{-} \\\\\n\\phantom{-}1 \\phantom{-} \\\\\n\\phantom{-}0 \\phantom{-} \\\\\n\\phantom{-}1 \\phantom{-}\\\\\n\\phantom{-}1 \\phantom{-}\\\\\n\\phantom{-}1 \\phantom{-}\\\\\n\\end{bmatrix} =\n\n\\mathbf{e}_1 \n\\begin{bmatrix}\n\\phantom{-}0 \\phantom{-} \\\\\n\\phantom{-}0 \\phantom{-}\\\\\n\\phantom{-}1 \\phantom{-}\\\\\n\\phantom{-}1 \\phantom{-}\\\\\n\\phantom{-}1 \\phantom{-}\\\\\n-1 \\phantom{-}\\\\\n\\end{bmatrix} +\n\n\\mathbf{e}_2 \n\\begin{bmatrix}\n\\phantom{-}1 \\phantom{-}\\\\\n\\phantom{-}1 \\phantom{-}\\\\\n -1 \\phantom{-}\\\\\n\\phantom{-}0 \\phantom{-}\\\\\n\\phantom{-}0 \\phantom{-}\\\\\n\\phantom{-}2 \\phantom{-}\\\\\n\\end{bmatrix}\n"
},
{
"math_id": 26,
"text": "E_4"
},
{
"math_id": 27,
"text": "\\mathbf{e}_4"
},
{
"math_id": 28,
"text": "e_1, e_2, e_4, e_5\\ \\mbox{and}\\ e_6"
}
]
| https://en.wikipedia.org/wiki?curid=71428073 |
7143 | Code-division multiple access | Channel access method used by various radio communication technologies
Code-division multiple access (CDMA) is a channel access method used by various radio communication technologies. CDMA is an example of multiple access, where several transmitters can send information simultaneously over a single communication channel. This allows several users to share a band of frequencies (see bandwidth). To permit this without undue interference between the users, CDMA employs spread spectrum technology and a special coding scheme (where each transmitter is assigned a code).
CDMA optimizes the use of available bandwidth as it transmits over the entire frequency range and does not limit the user's frequency range.
It is used as the access method in many mobile phone standards. IS-95, also called "cdmaOne", and its 3G evolution CDMA2000, are often simply referred to as "CDMA", but UMTS, the 3G standard used by GSM carriers, also uses "wideband CDMA", or W-CDMA, as well as TD-CDMA and TD-SCDMA, as its radio technologies. Many carriers (such as AT&T, UScellular and Verizon) shut down 3G CDMA-based networks in 2022 and 2024, rendering handsets supporting only those protocols unusable for calls, even to 911.
It can also be used as a channel or medium access technology (like ALOHA, for example), or as a permanent pilot/signalling channel that allows users to synchronize their local oscillators to a common system frequency and thereby also continuously estimate the channel parameters.
In these schemes, the message is modulated onto a longer spreading sequence consisting of several chips (0s and 1s). Due to their very advantageous auto- and cross-correlation characteristics, these spreading sequences have also been used for radar applications for many decades, where they are called Barker codes (with a very short sequence length of typically 8 to 32).
For space-based communication applications, CDMA has been used for many decades due to the large path loss and Doppler shift caused by satellite motion. CDMA is often used with binary phase-shift keying (BPSK) in its simplest form, but can be combined with any modulation scheme, such as (in advanced cases) quadrature amplitude modulation (QAM) or orthogonal frequency-division multiplexing (OFDM), which typically makes it very robust and efficient (and equips it with accurate ranging capabilities, which are difficult to obtain without CDMA). Other schemes use subcarriers based on binary offset carrier modulation (BOC modulation), which is inspired by Manchester codes and enables a larger gap between the virtual center frequency and the subcarriers, which is not the case for OFDM subcarriers.
History.
The technology of code-division multiple access channels has long been known.
United States.
In the US, one of the earliest descriptions of CDMA can be found in the summary report of Project Hartwell on "The Security of Overseas Transport", which was a summer research project carried out at the Massachusetts Institute of Technology from June to August 1950. Further research in the context of jamming and anti-jamming was carried out in 1952 at Lincoln Lab.
Soviet Union.
In the Soviet Union (USSR), the first work devoted to this subject was published in 1935 by Dmitry Ageev. It was shown that through the use of linear methods, there are three types of signal separation: frequency, time and compensatory. The technology of CDMA was used in 1957, when the young military radio engineer Leonid Kupriyanovich in Moscow made an experimental model of a wearable automatic mobile phone, called LK-1 by him, with a base station. LK-1 had a weight of 3 kg, a 20–30 km operating distance, and 20–30 hours of battery life. The base station, as described by the author, could serve several customers. In 1958, Kupriyanovich made a new experimental "pocket" model of mobile phone, which weighed 0.5 kg. To serve more customers, Kupriyanovich proposed a device which he called a "correlator." In 1958, the USSR also started the development of the "Altai" national civil mobile phone service for cars, based on the Soviet MRT-1327 standard. The phone system weighed . It was placed in the trunk of the vehicles of high-ranking officials and used a standard handset in the passenger compartment. The main developers of the Altai system were VNIIS (Voronezh Science Research Institute of Communications) and GSPI (State Specialized Project Institute). In 1963 this service started in Moscow, and in 1970 Altai service was used in 30 USSR cities.
Steps in CDMA modulation.
CDMA is a spread-spectrum multiple-access technique. A spread-spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is a pseudo-random code in the time domain that has a narrow ambiguity function in the frequency domain, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined by bitwise XOR (exclusive OR) with the faster code. The figure shows how a spread-spectrum signal is generated. The data signal with pulse duration of formula_0 (symbol period) is XORed with the code signal with pulse duration of formula_1 (chip period). (Note: bandwidth is proportional to formula_2, where formula_3 = bit time.) Therefore, the bandwidth of the data signal is formula_4 and the bandwidth of the spread spectrum signal is formula_5. Since formula_1 is much smaller than formula_0, the bandwidth of the spread-spectrum signal is much larger than the bandwidth of the original signal. The ratio formula_6 is called the spreading factor or processing gain and determines to a certain extent the upper limit of the total number of users supported simultaneously by a base station.
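As a rough illustration of the spreading step (not drawn from any specific standard), the following sketch XORs each data bit with an arbitrary chip sequence whose length equals the spreading factor, then recovers the data by despreading and a majority vote:

```python
data = [1, 0, 1]
chips = [1, 0, 1, 1, 0, 0, 1, 0]        # arbitrary example code; spreading factor Tb/Tc = 8

# Spreading: each data bit is held for Tb while being XORed with the faster chip stream.
spread = [bit ^ chip for bit in data for chip in chips]

# Despreading: XOR with the same chips again and decide each bit by majority vote.
recovered = []
for i in range(len(data)):
    segment = spread[i * len(chips):(i + 1) * len(chips)]
    votes = sum(c ^ chip for c, chip in zip(segment, chips))
    recovered.append(1 if votes > len(chips) // 2 else 0)

print(recovered == data)                # True
```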
Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is made by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference.
An analogy to the problem of multiple access is a room (channel) in which people wish to talk to each other simultaneously. To avoid confusion, people could take turns speaking (time division), speak at different pitches (frequency division), or speak in different languages (code division). CDMA is analogous to the last example where people speaking the same language can understand each other, but other languages are perceived as noise and rejected. Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy the same channel, but only users associated with a particular code can communicate.
In general, CDMA belongs to two basic categories: synchronous (orthogonal codes) and asynchronous (pseudorandom codes).
Code-division multiplexing (synchronous CDMA).
The digital modulation method is analogous to those used in simple radio transceivers. In the analog case, a low-frequency data signal is time-multiplied with a high-frequency pure sine-wave carrier and transmitted. This is effectively a frequency convolution (Wiener–Khinchin theorem) of the two signals, resulting in a carrier with narrow sidebands. In the digital case, the sinusoidal carrier is replaced by Walsh functions. These are binary square waves that form a complete orthonormal set. The data signal is also binary and the time multiplication is achieved with a simple XOR function. This is usually a Gilbert cell mixer in the circuitry.
Synchronous CDMA exploits mathematical properties of orthogonality between vectors representing the data strings. For example, the binary string "1011" is represented by the vector (1, 0, 1, 1). Vectors can be multiplied by taking their dot product, by summing the products of their respective components (for example, if u = ("a", "b") and v = ("c", "d"), then their dot product u·v = "ac" + "bd"). If the dot product is zero, the two vectors are said to be "orthogonal" to each other. Some properties of the dot product aid understanding of how W-CDMA works. If vectors a and b are orthogonal, then formula_7 and:
formula_8
formula_9
formula_10
formula_11
Each user in synchronous CDMA uses a code orthogonal to the others' codes to modulate their signal. An example of 4 mutually orthogonal digital signals is shown in the figure below. Orthogonal codes have a cross-correlation equal to zero; in other words, they do not interfere with each other. In the case of IS-95, 64-bit Walsh codes are used to encode the signal to separate different users. Since each of the 64 Walsh codes is orthogonal to all the others, the signals are channelized into 64 orthogonal signals. The following example demonstrates how each user's signal can be encoded and decoded.
Example.
Start with a set of vectors that are mutually orthogonal. (Although mutual orthogonality is the only condition, these vectors are usually constructed for ease of decoding, for example columns or rows from Walsh matrices.) An example of orthogonal functions is shown in the adjacent picture. These vectors will be assigned to individual users and are called the "code", "chip code", or "chipping code". In the interest of brevity, the rest of this example uses codes v with only two bits.
Each user is associated with a different code, say v. A 1 bit is represented by transmitting a positive code v, and a 0 bit is represented by a negative code −v. For example, if v = ("v"0, "v"1) = (1, −1) and the data that the user wishes to transmit is (1, 0, 1, 1), then the transmitted symbols would be
(v, −v, v, v) = ("v"0, "v"1, −"v"0, −"v"1, "v"0, "v"1, "v"0, "v"1) = (1, −1, −1, 1, 1, −1, 1, −1).
For the purposes of this article, we call this constructed vector the "transmitted vector".
Each sender has a different, unique vector v chosen from that set, but the construction method of the transmitted vector is identical.
Now, due to physical properties of interference, if two signals at a point are in phase, they add to give twice the amplitude of each signal, but if they are out of phase, they subtract and give a signal that is the difference of the amplitudes. Digitally, this behaviour can be modelled by the addition of the transmission vectors, component by component.
If sender0 has code (1, −1) and data (1, 0, 1, 1), and sender1 has code (1, 1) and data (0, 0, 1, 1), and both senders transmit simultaneously, then this table describes the coding steps:
Because signal0 and signal1 are transmitted at the same time into the air, they add to produce the raw signal
(1, −1, −1, 1, 1, −1, 1, −1) + (−1, −1, −1, −1, 1, 1, 1, 1) = (0, −2, −2, 0, 2, 0, 2, 0).
This raw signal is called an interference pattern. The receiver then extracts an intelligible signal for any known sender by combining the sender's code with the interference pattern. The following table explains how this works and shows that the signals do not interfere with one another:
Further, after decoding, all values greater than 0 are interpreted as 1, while all values less than zero are interpreted as 0. For example, after decoding, data0 is (2, −2, 2, 2), but the receiver interprets this as (1, 0, 1, 1). Values of exactly 0 mean that the sender did not transmit any data, as in the following example:
Assume signal0 = (1, −1, −1, 1, 1, −1, 1, −1) is transmitted alone. The following table shows the decode at the receiver:
When the receiver attempts to decode the signal using sender1's code, the data is all zeros; therefore the cross-correlation is equal to zero and it is clear that sender1 did not transmit any data.
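The arithmetic of this example can be reproduced in a few lines; the sketch below uses exactly the codes and data of the tables above and is only meant to mirror the hand calculation.

```python
import numpy as np

def encode(code, bits):
    # 1 -> +code, 0 -> -code, concatenated bit by bit
    return np.concatenate([code if b else -code for b in bits])

def decode(code, signal):
    # correlate each bit period of the received signal with the sender's code
    return signal.reshape(-1, len(code)) @ code

code0, data0 = np.array([1, -1]), [1, 0, 1, 1]
code1, data1 = np.array([1,  1]), [0, 0, 1, 1]

received = encode(code0, data0) + encode(code1, data1)
print(received)                              # [ 0 -2 -2  0  2  0  2  0]
print(decode(code0, received))               # [ 2 -2  2  2]  -> read as (1, 0, 1, 1)
print(decode(code1, received))               # [-2 -2  2  2]  -> read as (0, 0, 1, 1)
print(decode(code1, encode(code0, data0)))   # [0 0 0 0]: sender1 transmitted nothing
```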
Asynchronous CDMA.
When mobile-to-base links cannot be precisely coordinated, particularly due to the mobility of the handsets, a different approach is required. Since it is not mathematically possible to create signature sequences that are both orthogonal for arbitrarily random starting points and which make full use of the code space, unique "pseudo-random" or "pseudo-noise" sequences called spreading sequences are used in "asynchronous" CDMA systems. A spreading sequence is a binary sequence that appears random but can be reproduced in a deterministic manner by intended receivers. These spreading sequences are used to encode and decode a user's signal in asynchronous CDMA in the same manner as the orthogonal codes in synchronous CDMA (shown in the example above). These spreading sequences are statistically uncorrelated, and the sum of a large number of spreading sequences results in "multiple access interference" (MAI) that is approximated by a Gaussian noise process (following the central limit theorem in statistics). Gold codes are an example of a spreading sequence suitable for this purpose, as there is low correlation between the codes. If all of the users are received with the same power level, then the variance (e.g., the noise power) of the MAI increases in direct proportion to the number of users. In other words, unlike synchronous CDMA, the signals of other users will appear as noise to the signal of interest and interfere slightly with the desired signal in proportion to the number of users.
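The statement that the MAI variance grows in direct proportion to the number of equal-power users can be illustrated with a small Monte Carlo sketch; the random ±1 sequences and the length of 64 chips below are arbitrary choices, not a specific standard.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 64                                        # arbitrary spreading-sequence length

def mai_variance(num_interferers, trials=2000):
    desired = rng.choice([-1, 1], size=L)     # code the receiver correlates against
    outputs = []
    for _ in range(trials):
        interference = sum(rng.choice([-1, 1], size=L) for _ in range(num_interferers))
        outputs.append(desired @ interference)  # correlator output due to MAI alone
    return np.var(outputs)

for k in (1, 2, 4, 8):
    print(k, round(mai_variance(k)))          # roughly k * L, i.e. proportional to k
```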
All forms of CDMA use the spread-spectrum spreading factor to allow receivers to partially discriminate against unwanted signals. Signals encoded with the specified spreading sequences are received, while signals with different sequences (or the same sequences but different timing offsets) appear as wideband noise reduced by the spreading factor.
Since each user generates MAI, controlling the signal strength is an important issue with CDMA transmitters. A CDM (synchronous CDMA), TDMA, or FDMA receiver can in theory completely reject arbitrarily strong signals using different codes, time slots or frequency channels due to the orthogonality of these systems. This is not true for asynchronous CDMA; rejection of unwanted signals is only partial. If any or all of the unwanted signals are much stronger than the desired signal, they will overwhelm it. This leads to a general requirement in any asynchronous CDMA system to approximately match the various signal power levels as seen at the receiver. In CDMA cellular, the base station uses a fast closed-loop power-control scheme to tightly control each mobile's transmit power.
In 2019, schemes to precisely estimate the required length of the codes as a function of Doppler and delay characteristics were developed. Soon after, machine-learning-based techniques that generate sequences of a desired length and spreading properties were published as well. These are highly competitive with the classic Gold and Welch sequences, but they are not generated by linear-feedback shift registers and have to be stored in lookup tables.
Advantages of asynchronous CDMA over other techniques.
Efficient practical utilization of the fixed frequency spectrum.
In theory CDMA, TDMA and FDMA have exactly the same spectral efficiency, but, in practice, each has its own challenges – power control in the case of CDMA, timing in the case of TDMA, and frequency generation/filtering in the case of FDMA.
TDMA systems must carefully synchronize the transmission times of all the users to ensure that they are received in the correct time slot and do not cause interference. Since this cannot be perfectly controlled in a mobile environment, each time slot must have a guard time, which reduces the probability that users will interfere, but decreases the spectral efficiency.
Similarly, FDMA systems must use a guard band between adjacent channels, due to the unpredictable Doppler shift of the signal spectrum because of user mobility. The guard bands will reduce the probability that adjacent channels will interfere, but decrease the utilization of the spectrum.
Flexible allocation of resources.
Asynchronous CDMA offers a key advantage in the flexible allocation of resources i.e. allocation of spreading sequences to active users. In the case of CDM (synchronous CDMA), TDMA, and FDMA the number of simultaneous orthogonal codes, time slots, and frequency slots respectively are fixed, hence the capacity in terms of the number of simultaneous users is limited. There are a fixed number of orthogonal codes, time slots or frequency bands that can be allocated for CDM, TDMA, and FDMA systems, which remain underutilized due to the bursty nature of telephony and packetized data transmissions. There is no strict limit to the number of users that can be supported in an asynchronous CDMA system, only a practical limit governed by the desired bit error probability since the SIR (signal-to-interference ratio) varies inversely with the number of users. In a bursty traffic environment like mobile telephony, the advantage afforded by asynchronous CDMA is that the performance (bit error rate) is allowed to fluctuate randomly, with an average value determined by the number of users times the percentage of utilization. Suppose there are 2"N" users that only talk half of the time, then 2"N" users can be accommodated with the same "average" bit error probability as "N" users that talk all of the time. The key difference here is that the bit error probability for "N" users talking all of the time is constant, whereas it is a "random" quantity (with the same mean) for 2"N" users talking half of the time.
In other words, asynchronous CDMA is ideally suited to a mobile network where large numbers of transmitters each generate a relatively small amount of traffic at irregular intervals. CDM (synchronous CDMA), TDMA, and FDMA systems cannot recover the underutilized resources inherent to bursty traffic due to the fixed number of orthogonal codes, time slots or frequency channels that can be assigned to individual transmitters. For instance, if there are "N" time slots in a TDMA system and 2"N" users that talk half of the time, then half of the time there will be more than "N" users needing to use more than "N" time slots. Furthermore, it would require significant overhead to continually allocate and deallocate the orthogonal-code, time-slot or frequency-channel resources. By comparison, asynchronous CDMA transmitters simply send when they have something to say and go off the air when they do not, keeping the same signature sequence as long as they are connected to the system.
Spread-spectrum characteristics of CDMA.
Most modulation schemes try to minimize the bandwidth of the transmitted signal since bandwidth is a limited resource. However, spread-spectrum techniques use a transmission bandwidth that is several orders of magnitude greater than the minimum required signal bandwidth. One of the initial reasons for doing this was military applications, including guidance and communication systems. These systems were designed using spread spectrum because of its security and resistance to jamming. Asynchronous CDMA has some level of privacy built in because the signal is spread using a pseudo-random code; this code makes the spread-spectrum signals appear random or have noise-like properties. A receiver cannot demodulate this transmission without knowledge of the pseudo-random sequence used to encode the data. CDMA is also resistant to jamming. A jamming signal only has a finite amount of power available to jam the signal. The jammer can either spread its energy over the entire bandwidth of the signal or jam only part of the entire signal.
CDMA can also effectively reject narrow-band interference. Since narrow-band interference affects only a small portion of the spread-spectrum signal, it can easily be removed through notch filtering without much loss of information. Convolution encoding and interleaving can be used to assist in recovering this lost data. CDMA signals are also resistant to multipath fading. Since the spread-spectrum signal occupies a large bandwidth, only a small portion of this will undergo fading due to multipath at any given time. Like the narrow-band interference, this will result in only a small loss of data and can be overcome.
Another reason CDMA is resistant to multipath interference is because the delayed versions of the transmitted pseudo-random codes will have poor correlation with the original pseudo-random code, and will thus appear as another user, which is ignored at the receiver. In other words, as long as the multipath channel induces at least one chip of delay, the multipath signals will arrive at the receiver such that they are shifted in time by at least one chip from the intended signal. The correlation properties of the pseudo-random codes are such that this slight delay causes the multipath to appear uncorrelated with the intended signal, and it is thus ignored.
Some CDMA devices use a rake receiver, which exploits multipath delay components to improve the performance of the system. A rake receiver combines the information from several correlators, each one tuned to a different path delay, producing a stronger version of the signal than a simple receiver with a single correlation tuned to the path delay of the strongest signal.
Frequency reuse is the ability to reuse the same radio channel frequency at other cell sites within a cellular system. In the FDMA and TDMA systems, frequency planning is an important consideration. The frequencies used in different cells must be planned carefully to ensure signals from different cells do not interfere with each other. In a CDMA system, the same frequency can be used in every cell, because channelization is done using the pseudo-random codes. Reusing the same frequency in every cell eliminates the need for frequency planning in a CDMA system; however, planning of the different pseudo-random sequences must be done to ensure that the received signal from one cell does not correlate with the signal from a nearby cell.
Since adjacent cells use the same frequencies, CDMA systems have the ability to perform soft hand-offs. Soft hand-offs allow the mobile telephone to communicate simultaneously with two or more cells. The best signal quality is selected until the hand-off is complete. This is different from hard hand-offs utilized in other cellular systems. In a hard-hand-off situation, as the mobile telephone approaches a hand-off, signal strength may vary abruptly. In contrast, CDMA systems use the soft hand-off, which is undetectable and provides a more reliable and higher-quality signal.
Collaborative CDMA.
A novel collaborative multi-user transmission and detection scheme called collaborative CDMA has been investigated for the uplink that exploits the differences between users' fading channel signatures to increase the user capacity well beyond the spreading length in the MAI-limited environment. The authors show that it is possible to achieve this increase at a low complexity and high bit error rate performance in flat fading channels, which is a major research challenge for overloaded CDMA systems. In this approach, instead of using one sequence per user as in conventional CDMA, the authors group a small number of users to share the same spreading sequence and enable group spreading and despreading operations. The new collaborative multi-user receiver consists of two stages: group multi-user detection (MUD) stage to suppress the MAI between the groups and a low-complexity maximum-likelihood detection stage to recover jointly the co-spread users' data using minimal Euclidean-distance measure and users' channel-gain coefficients. An enhanced CDMA version known as interleave-division multiple access (IDMA) uses the orthogonal interleaving as the only means of user separation in place of signature sequence used in CDMA system.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_b"
},
{
"math_id": 1,
"text": "T_c"
},
{
"math_id": 2,
"text": "1/T"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "1/T_b"
},
{
"math_id": 5,
"text": "1/T_c"
},
{
"math_id": 6,
"text": "T_b/T_c"
},
{
"math_id": 7,
"text": "\\mathbf{a}\\cdot\\mathbf{b} = 0"
},
{
"math_id": 8,
"text": "\n\\mathbf{a}\\cdot(\\mathbf{a} + \\mathbf{b}) = \\|\\mathbf{a}\\|^2,\\ \\text{since}\\ \\mathbf{a}\\cdot\\mathbf{a} + \\mathbf{a}\\cdot\\mathbf{b} = \\|\\mathbf{a}\\|^2 + 0, \n"
},
{
"math_id": 9,
"text": "\n\\mathbf{a}\\cdot(-\\mathbf{a} + \\mathbf{b}) = -\\|\\mathbf{a}\\|^2,\\ \\text{since}\\ {-\\mathbf{a}}\\cdot\\mathbf{a} + \\mathbf{a}\\cdot\\mathbf{b} = -\\|\\mathbf{a}\\|^2 + 0,\n"
},
{
"math_id": 10,
"text": "\n\\mathbf{b}\\cdot(\\mathbf{a} + \\mathbf{b}) = \\|\\mathbf{b}\\|^2,\\ \\text{since}\\ \\mathbf{b}\\cdot\\mathbf{a} + \\mathbf{b}\\cdot\\mathbf{b} = 0 + \\|\\mathbf{b}\\|^2,\n"
},
{
"math_id": 11,
"text": "\n\\mathbf{b}\\cdot(\\mathbf{a} - \\mathbf{b}) = -\\|\\mathbf{b}\\|^2,\\ \\text{since}\\ \\mathbf{b}\\cdot\\mathbf{a} - \\mathbf{b}\\cdot\\mathbf{b} = 0 - \\|\\mathbf{b}\\|^2.\n"
}
]
| https://en.wikipedia.org/wiki?curid=7143 |
7143378 | Motion field | In computer vision, the motion field is an ideal representation of motion in three-dimensional space (3D) as it is projected onto a camera image. Given a simplified camera model, each point formula_0 in the image is the projection of some point in the 3D scene but the position of the projection of a fixed point in space can vary with time. The motion field can formally be defined as the time derivative of the image position of all image points given that they correspond to fixed 3D points. This means that the motion field can be represented as a function which maps image coordinates to a 2-dimensional vector. The motion field is an ideal description of the projected 3D motion in the sense that it can be formally defined but in practice it is normally only possible to determine an approximation of the motion field from the image data.
Introduction.
A camera model maps each point formula_1 in 3D space to a 2D image point formula_0 according to some mapping functions formula_2:
formula_3
Assuming that the scene depicted by the camera is dynamic (it consists of objects moving relative to each other, objects which deform, and possibly a camera which moves relative to the scene), a fixed point in 3D space is mapped to varying points in the image. Differentiating the previous expression with respect to time gives
formula_4
Here
formula_5
is the motion field and the vector u is dependent both on the image position formula_0 as well as on the time "t". Similarly,
formula_6
is the motion of the corresponding 3D point and its relation to the motion field is given by
formula_7
where formula_8 is the image position dependent formula_9 matrix
formula_10
This relation implies that the motion field, at a specific image point, is invariant to 3D motions which lies in the null space of formula_8. For example, in the case of a pinhole camera all 3D motion components which are directed to or from the camera focal point cannot be detected in the motion field.
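A small numerical sketch of this remark, assuming a simple pinhole model with focal length f in which the mapping is (y1, y2) = f (x1/x3, x2/x3): the Jacobian of this mapping plays the role of M, and a 3D velocity directed toward or away from the focal point lies in its null space.

```python
import numpy as np

f = 1.0                                    # arbitrary focal length
x = np.array([0.3, -0.2, 2.0])             # arbitrary scene point in front of the camera

# Jacobian of (y1, y2) = f * (x1/x3, x2/x3) with respect to (x1, x2, x3)
M = f * np.array([
    [1.0 / x[2], 0.0, -x[0] / x[2] ** 2],
    [0.0, 1.0 / x[2], -x[1] / x[2] ** 2],
])

v_radial = x / np.linalg.norm(x)           # motion along the line through the focal point
v_lateral = np.array([1.0, 0.0, 0.0])      # motion parallel to the image plane

print(M @ v_radial)                        # ~[0, 0]: invisible in the motion field
print(M @ v_lateral)                       # [0.5, 0]: visible image motion
```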
Special cases.
The motion field formula_11 is defined as:
formula_12
where
formula_13.
where formula_14 is the position of a point in the scene, formula_15 is the velocity of that point relative to the camera, formula_16 is the translational component of the camera's motion, and formula_17 is the angular velocity of the camera. In the expression for the motion field, Z denotes the depth of the point (its coordinate along the optical axis), V_z is the component of formula_15 along the optical axis, and f is the focal length of the camera.
Relation to optical flow.
The motion field is an ideal construction, based on the idea that it is possible to determine the motion of each image point, and above it is described how this 2D motion is related to 3D motion. In practice, however, the true motion field can only be approximated based on measurements on image data. The problem is that in most cases each image point has an individual motion which therefore has to be locally measured by means of a neighborhood operation on the image data. As a consequence, the correct motion field cannot be determined for certain types of neighborhood, and instead an approximation, often referred to as the optical flow, has to be used. For example, a neighborhood which has a constant intensity may correspond to a non-zero motion field, but the optical flow is zero since no local image motion can be measured. Similarly, a neighborhood which is intrinsically 1-dimensional (for example, an edge or line) can correspond to an arbitrary motion field, but the optical flow can only capture the normal component of the motion field. There are also other effects, such as image noise, 3D occlusion, and temporal aliasing, which are inherent to any method for measuring optical flow and cause the resulting optical flow to deviate from the true motion field.
In short, the motion field cannot be correctly measured for all image points, and the optical flow is an approximation of the motion field. There are several different ways to compute the optical flow, based on different criteria for how the flow estimation should be made.
{
"math_id": 0,
"text": " (y_{1}, y_{2}) "
},
{
"math_id": 1,
"text": " (x_{1}, x_{2}, x_{3}) "
},
{
"math_id": 2,
"text": " m_{1}, m_{2} "
},
{
"math_id": 3,
"text": " \\begin{pmatrix} y_{1} \\\\ y_{2} \\end{pmatrix} = \\begin{pmatrix} m_{1}(x_{1}, x_{2}, x_{3}) \\\\ m_{2}(x_{1}, x_{2}, x_{3}) \\end{pmatrix} "
},
{
"math_id": 4,
"text": " \\begin{pmatrix} \\frac{d y_{1}}{d t} \\\\[2mm] \\frac{d y_{2}}{d t} \\end{pmatrix} = \\begin{pmatrix} \\frac{d m_{1}(x_{1}, x_{2}, x_{3})}{d t} \\\\[2mm] \\frac{d m_{2}(x_{1}, x_{2}, x_{3})}{d t} \\end{pmatrix} = \\begin{pmatrix} \\frac{d m_{1}}{d x_{1}} & \\frac{d m_{1}}{d x_{2}} & \\frac{d m_{1}}{d x_{3}} \\\\[2mm] \\frac{d m_{2}}{d x_{1}} & \\frac{d m_{2}}{d x_{2}} & \\frac{d m_{2}}{d x_{3}} \\end{pmatrix} \\, \\begin{pmatrix} \\frac{d x_{1}}{d t} \\\\[2mm] \\frac{d x_{2}}{d t} \\\\[2mm] \\frac{d x_{3}}{d t} \\end{pmatrix} "
},
{
"math_id": 5,
"text": " \\mathbf{u} = \\begin{pmatrix} \\frac{d y_{1}}{d t} \\\\[2mm] \\frac{d y_{2}}{d t} \\end{pmatrix} "
},
{
"math_id": 6,
"text": " \\mathbf{x'} = \\begin{pmatrix} \\frac{d x_{1}}{d t} \\\\[2mm] \\frac{d x_{2}}{d t} \\\\[2mm] \\frac{d x_{3}}{d t} \\end{pmatrix} "
},
{
"math_id": 7,
"text": " \\mathbf{u} = \\mathbf{M} \\, \\mathbf{x}' "
},
{
"math_id": 8,
"text": " \\mathbf{M} "
},
{
"math_id": 9,
"text": " 2 \\times 3 "
},
{
"math_id": 10,
"text": " \\mathbf{M} = \\begin{pmatrix} \\frac{d m_{1}}{d x_{1}} & \\frac{d m_{1}}{d x_{2}} & \\frac{d m_{1}}{d x_{3}} \\\\[2mm] \\frac{d m_{2}}{d x_{1}} & \\frac{d m_{2}}{d x_{2}} & \\frac{d m_{2}}{d x_{3}} \\end{pmatrix} "
},
{
"math_id": 11,
"text": "\\mathbf{v}"
},
{
"math_id": 12,
"text": "\\mathbf{v} = f\\frac{Z\\mathbf{V} - V_z\\mathbf{P}}{Z^2}"
},
{
"math_id": 13,
"text": "\\mathbf{V}=-\\mathbf{T}-\\mathbf{\\omega}\\times\\mathbf{P}"
},
{
"math_id": 14,
"text": "\\mathbf{P}"
},
{
"math_id": 15,
"text": "\\mathbf{V}"
},
{
"math_id": 16,
"text": "\\mathbf{T}"
},
{
"math_id": 17,
"text": "\\mathbf{\\omega}"
}
]
| https://en.wikipedia.org/wiki?curid=7143378 |
71435 | Universal Turing machine | Type of Turing machine
In computer science, a universal Turing machine (UTM) is a Turing machine capable of computing any computable sequence, as described by Alan Turing in his seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem". Common sense might say that a universal machine is impossible, but Turing proves that it is possible. He suggested that we may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions, which will be called "m-configurations". He then described the operation of such a machine, as described below, and argued:
<templatestyles src="Template:Blockquote/styles.css" />It is my contention that these operations include all those which are used in the computation of a number.
Alan Turing introduced the idea of such a machine in 1936–1937.
Introduction.
Davis makes a persuasive argument that Turing's conception of what is now known as "the stored-program computer", of placing the "action table"—the instructions for the machine—in the same "memory" as the input data, strongly influenced John von Neumann's conception of the first American discrete-symbol (as opposed to analog) computer—the EDVAC. Davis quotes "Time" magazine to this effect, that "everyone who taps at a keyboard ... is working on an incarnation of a Turing machine", and that "John von Neumann [built] on the work of Alan Turing".
Davis makes a case that Turing's Automatic Computing Engine (ACE) computer "anticipated" the notions of microprogramming (microcode) and RISC processors. Knuth cites Turing's work on the ACE computer as designing "hardware to facilitate subroutine linkage"; Davis also references this work as Turing's use of a hardware "stack".
As the Turing machine was encouraging the construction of computers, the UTM was encouraging the development of the fledgling computer sciences. An early, if not the first, assembler was proposed "by a young hot-shot programmer" for the EDVAC. Von Neumann's "first serious program ... [was] to simply sort data efficiently". Knuth observes that the subroutine return embedded in the program itself rather than in special registers is attributable to von Neumann and Goldstine. Knuth furthermore states that
<templatestyles src="Template:Blockquote/styles.css" />The first interpretive routine may be said to be the "Universal Turing Machine" ... Interpretive routines in the conventional sense were mentioned by John Mauchly in his lectures at the Moore School in 1946 ... Turing took part in this development also; interpretive systems for the Pilot ACE computer were written under his direction.
Davis briefly mentions operating systems and compilers as outcomes of the notion of program-as-data.
Mathematical theory.
With this encoding of action tables as strings, it becomes possible, in principle, for Turing machines to answer questions about the behaviour of other Turing machines. Most of these questions, however, are undecidable, meaning that the function in question cannot be calculated mechanically. For instance, the problem of determining whether an arbitrary Turing machine will halt on a particular input, or on all inputs, known as the Halting problem, was shown to be, in general, undecidable in Turing's original paper. Rice's theorem shows that any non-trivial question about the output of a Turing machine is undecidable.
A universal Turing machine can calculate any recursive function, decide any recursive language, and accept any recursively enumerable language. According to the Church–Turing thesis, the problems solvable by a universal Turing machine are exactly those problems solvable by an "algorithm" or an "effective method of computation", for any reasonable definition of those terms. For these reasons, a universal Turing machine serves as a standard against which to compare computational systems, and a system that can simulate a universal Turing machine is called Turing complete.
An abstract version of the universal Turing machine is the universal function, a computable function which can be used to calculate any other computable function. The UTM theorem proves the existence of such a function.
Efficiency.
Without loss of generality, the input of a Turing machine can be assumed to be in the alphabet {0, 1}; any other finite alphabet can be encoded over {0, 1}. The behavior of a Turing machine "M" is determined by its transition function. This function can be easily encoded as a string over the alphabet {0, 1} as well. The size of the alphabet of "M", the number of tapes it has, and the size of the state space can be deduced from the transition function's table. The distinguished states and symbols can be identified by their position, e.g. the first two states can by convention be the start and stop states. Consequently, every Turing machine can be encoded as a string over the alphabet {0, 1}. Additionally, we adopt the convention that every invalid encoding maps to a trivial Turing machine that immediately halts, and that every Turing machine can have an infinite number of encodings by padding the encoding with an arbitrary number of (say) 1's at the end, just like comments work in a programming language. It should be no surprise that we can achieve this encoding given the existence of a Gödel number and computational equivalence between Turing machines and μ-recursive functions. Similarly, our construction associates to every binary string "α" a Turing machine "Mα".
Starting from the above encoding, in 1966 F. C. Hennie and R. E. Stearns showed that, given a Turing machine "Mα" that halts on input "x" within "N" steps, there exists a multi-tape universal Turing machine that halts on inputs "α", "x" (given on different tapes) in "CN" log "N" steps, where "C" is a machine-specific constant that does not depend on the length of the input "x", but does depend on "M"'s alphabet size, number of tapes, and number of states. Effectively this is an formula_0 simulation, using Donald Knuth's Big O notation. The corresponding result for space-complexity rather than time-complexity is that we can simulate in a way that uses at most "CN" cells at any stage of the computation, an formula_1 simulation.
Smallest machines.
When Alan Turing came up with the idea of a universal machine he had in mind the simplest computing model powerful enough to calculate all possible functions that can be calculated. Claude Shannon first explicitly posed the question of finding the smallest possible universal Turing machine in 1956. He showed that two symbols were sufficient so long as enough states were used (or vice versa), and that it was always possible to exchange states for symbols. He also showed that no universal Turing machine of one state could exist.
Marvin Minsky discovered a 7-state 4-symbol universal Turing machine in 1962 using 2-tag systems. Other small universal Turing machines have since been found by Yurii Rogozhin and others by extending this approach of tag system simulation. If we denote by ("m", "n") the class of UTMs with "m" states and "n" symbols the following tuples have been found: (15, 2), (9, 3), (6, 4), (5, 5), (4, 6), (3, 9), and (2, 18). Rogozhin's (4, 6) machine uses only 22 instructions, and no standard UTM of lesser descriptional complexity is known.
However, generalizing the standard Turing machine model admits even smaller UTMs. One such generalization is to allow an infinitely repeated word on one or both sides of the Turing machine input, thus extending the definition of universality and known as "semi-weak" or "weak" universality, respectively. Small weakly universal Turing machines that simulate the Rule 110 cellular automaton have been given for the (6, 2), (3, 3), and (2, 4) state-symbol pairs. The proof of universality for Wolfram's 2-state 3-symbol Turing machine further extends the notion of weak universality by allowing certain non-periodic initial configurations. Other variants on the standard Turing machine model that yield small UTMs include machines with multiple tapes or tapes of multiple dimension, and machines coupled with a finite automaton.
Machines with no internal states.
If multiple heads are allowed on a Turing machine then no internal states are required; as "states" can be encoded in the tape. For example, consider a tape with 6 colours: 0, 1, 2, 0A, 1A, 2A. Consider a tape such as 0,0,1,2,2A,0,2,1 where a 3-headed Turing machine is situated over the triple (2,2A,0). The rules then convert any triple to another triple and move the 3-heads left or right. For example, the rules might convert (2,2A,0) to (2,1,0) and move the head left. Thus in this example, the machine acts like a 3-colour Turing machine with internal states A and B (represented by no letter). The case for a 2-headed Turing machine is very similar. Thus a 2-headed Turing machine can be Universal with 6 colours. It is not known what the smallest number of colours needed for a multi-headed Turing machine is or if a 2-colour Universal Turing machine is possible with multiple heads. It also means that rewrite rules are Turing complete since the triple rules are equivalent to rewrite rules. Extending the tape to two dimensions with a head sampling a letter and its 8 neighbours, only 2 colours are needed, as for example, a colour can be encoded in a vertical triple pattern such as 110.
Also, if the distance between the two heads is variable (the tape has "slack" between the heads), then it can simulate any Post tag system, some of which are universal.
Example of coding.
For those who would undertake the challenge of designing a UTM exactly as Turing specified see the article by Davies in . Davies corrects the errors in the original and shows what a sample run would look like. He successfully ran a (somewhat simplified) simulation.
The following example is taken from . For more about this example, see Turing machine examples.
Turing used seven symbols { A, C, D, R, L, N, ; } to encode each 5-tuple; as described in the article Turing machine, his 5-tuples are only of types N1, N2, and N3. The number of each "m‑configuration" (instruction, state) is represented by "D" followed by a unary string of A's, e.g. "q3" = DAAA. In a similar manner, he encodes the symbols blank as "D", the symbol "0" as "DC", the symbol "1" as DCC, etc. The symbols "R", "L", and "N" remain as is.
After encoding each 5-tuple is then "assembled" into a string in order as shown in the following table:
Finally, the codes for all four 5-tuples are strung together into a code started by ";" and separated by ";" i.e.:
<templatestyles src="Block indent/styles.css"/>;DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA
This code he placed on alternate squares—the "F-squares" – leaving the "E-squares" (those liable to erasure) empty. The final assembly of the code on the tape for the U-machine consists of placing two special symbols ("e") one after the other, then the code separated out on alternate squares, and lastly the double-colon symbol "::" (blanks shown here with "." for clarity):
<templatestyles src="Block indent/styles.css"/>ee.;.D.A.D.D.C.R.D.A.A.;.D.A.A.D.D.R.D.A.A.A.;.D.A.A.A.D.D.C.C.R.D.A.A.A.A.;.D.A.A.A.A.D.D.R.D.A.::...
The U-machine's action-table (state-transition table) is responsible for decoding the symbols. Turing's action table keeps track of its place with markers "u", "v", "x", "y", "z" by placing them in "E-squares" to the right of "the marked symbol" – for example, to mark the current instruction, "z" is placed to the right of ";", while "x" keeps the place with respect to the current "m‑configuration" DAA. The U-machine's action table will shuttle these symbols around (erasing them and placing them in different locations) as the computation progresses:
<templatestyles src="Block indent/styles.css"/>ee.; .D.A.D.D.C.R.D.A.A. ; zD.A.AxD.D.R.D.A.A.A.;.D.A.A.A.D.D.C.C.R.D.A.A.A.A.;.D.A.A.A.A.D.D.R.D.A.::...
Turing's action-table for his U-machine is very involved.
Roger Penrose provides examples of ways to encode instructions for the Universal machine using only binary symbols { 0, 1 }, or { blank, mark | }. Penrose goes further and writes out his entire U-machine code. He asserts that it truly is a U-machine code, an enormous number that spans almost 2 full pages of 1's and 0's.
Asperti and Ricciotti described a multi-tape UTM defined by composing elementary machines with very simple semantics, rather than explicitly giving its full action table. This approach was sufficiently modular to allow them to formally prove the correctness of the machine in the Matita proof assistant.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Footnotes.
<templatestyles src="Reflist/styles.css" />
Other works cited.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{O}\\left ( N \\log {N}\\right )"
},
{
"math_id": 1,
"text": "\\mathcal{O}(N)"
}
]
| https://en.wikipedia.org/wiki?curid=71435 |
71453228 | Deformation index | Parameter used in engineering
The deformation index is a parameter that specifies the mode of control under which time-varying deformation or loading processes occur in a solid. It is useful for evaluating the interaction of elastic stiffness with viscoelastic or fatigue behavior.
If deformation is maintained constant while load is varied, the process is said to be deformation controlled. Similarly, if load is held constant while deformation is varied, the process is said to be load controlled. Between the extremes of deformation and load control, there is a spectrum of intermediate modes of control including energy control.
For example, between two rubber compounds with the same viscoelastic behavior but different stiffnesses, which compound is preferred for a given application? In a strain-controlled application, the lower-stiffness rubber would operate at smaller stress and therefore produce less viscous heating. But in a stress-controlled application, the higher-stiffness rubber would operate at smaller strains, thereby producing less viscous heating. In an energy-controlled application, the two compounds might give the same amount of viscous heating. The right selection for minimizing viscous heating therefore depends on the mode of control.
Definition.
Futamura's deformation index formula_0 can be defined as follows. formula_1 is the parameter whose value is controlled (i.e., held constant). formula_2 is Young's modulus of linear elasticity. formula_3 is the strain. formula_4 is the stress.
formula_5
Particular choices of formula_0 yield particular modes of control and determine the units of formula_1. For formula_6, we get strain control: formula_7. For formula_8, we get energy control: formula_9. For formula_10, we get stress control: formula_11.
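A quick numerical check of these special cases, with arbitrary values chosen for the modulus and the strain:

```python
E = 10.0e6          # Young's modulus, arbitrary value (Pa)
eps = 0.02          # strain, arbitrary value
sigma = E * eps     # corresponding stress for a linear-elastic material

def controlled_parameter(n):
    # p = eps * E**(n/2) = sigma * E**(n/2 - 1)
    return eps * E ** (n / 2.0)

print(controlled_parameter(0) == eps)                    # n = 0: p is the strain
print(abs(controlled_parameter(1) ** 2 / 2.0
          - 0.5 * eps ** 2 * E) < 1e-9)                  # n = 1: p**2/2 is the strain energy density
print(controlled_parameter(2) == sigma)                  # n = 2: p is the stress
```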
History.
The parameter was originally proposed by Shingo Futamura, who won the Melvin Mooney Distinguished Technology Award in recognition of this development. Futamura was concerned with predicting how viscoelastic dissipation is affected by changes in compound stiffness. Later, he extended the applicability of the approach to simplify finite element calculations of the coupling of thermal and mechanical behavior in a tire. William Mars adapted Futamura's concept for application in fatigue analysis.
Analogy to polytropic process.
Given that the deformation index may be written in a similar algebraic form, it may be said that the deformation index is in a certain sense analogous to the polytropic index for a polytropic process.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "\\epsilon"
},
{
"math_id": 4,
"text": "\\sigma"
},
{
"math_id": 5,
"text": "p= \\epsilon E^{ \\frac{n}{2}}= \\sigma E^{ \\frac{n}{2}-1}"
},
{
"math_id": 6,
"text": "n=0"
},
{
"math_id": 7,
"text": "p= \\epsilon = \\sigma E^{ -1}"
},
{
"math_id": 8,
"text": "n=1"
},
{
"math_id": 9,
"text": "w = \\frac{1}{2} p^2 = \\frac{1}{2} \\epsilon^2 E= \\frac{1}{2} \\sigma^2 E^{ -1}"
},
{
"math_id": 10,
"text": "n=2"
},
{
"math_id": 11,
"text": "p= \\epsilon E= \\sigma "
}
]
| https://en.wikipedia.org/wiki?curid=71453228 |
714601 | Dalitz plot | The Dalitz plot is a two-dimensional plot often used in particle physics to represent the relative frequency of various (kinematically distinct) manners in which the products of certain (otherwise similar) three-body decays may move apart.
The phase-space of a decay of a pseudoscalar into three spin-0 particles can be completely described using two variables. In a traditional Dalitz plot, the axes of the plot are the squares of the invariant masses of two pairs of the decay products. (For example, if particle A decays to particles 1, 2, and 3, a Dalitz plot for this decay could plot the squared invariant mass of the (1,2) pair on the x-axis and that of the (2,3) pair on the y-axis.) If there are no angular correlations between the decay products then the distribution of these variables is flat. However symmetries may impose certain restrictions on the distribution. Furthermore, three-body decays are often dominated by resonant processes, in which the particle decays into two decay products, with one of those decay products immediately decaying into two additional decay products. In this case, the Dalitz plot will show a non-uniform distribution, with a peak around the mass of the resonant decay. In this way, the Dalitz plot provides an excellent tool for studying the dynamics of three-body decays.
Dalitz plots play a central role in the discovery of new particles in current high-energy physics experiments, including Higgs boson research, and are tools in exploratory efforts that might open avenues beyond the Standard Model.
R.H. Dalitz introduced this technique in 1953 to study decays of K mesons (which at that time were still referred to as "tau-mesons"). It can be adapted to the analysis of four-body decays as well. A specific form of a four-particle Dalitz plot (for non-relativistic kinematics), which is based on a tetrahedral coordinate system, was first applied to study the few-body dynamics in atomic four-body fragmentation processes.
Square Dalitz plot.
Modeling of the common representation of the Dalitz plot can be complicated due to its nontrivial shape. One can, however, introduce kinematic variables such that the Dalitz plot acquires a rectangular (or square) shape:
formula_0;
formula_1;
where formula_2 is the invariant mass of particles 1 and 2 in a given decay event; formula_3 and formula_4 are its maximal and minimal kinematically allowed values, while formula_5 is the angle between particles 1 and 3 in the rest frame of particles 1 and 2. This technique is commonly called "Square Dalitz plot" (SDP).
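As a minimal illustration, the following Python sketch (not part of the original article; the numerical inputs are hypothetical) maps a point of the conventional Dalitz plot to the square Dalitz plot coordinates defined above.

```python
import math

def square_dalitz_coords(m12, m12_min, m12_max, theta12):
    """Map (m(1,2), theta(1,2)) to the square Dalitz plot variables (m', theta').

    m12              -- invariant mass of particles 1 and 2 for the event
    m12_min, m12_max -- kinematically allowed limits of m(1,2)
    theta12          -- helicity angle in radians, in [0, pi]
    Both returned coordinates lie in the unit interval [0, 1].
    """
    x = 2.0 * (m12 - m12_min) / (m12_max - m12_min) - 1.0   # rescaled to [-1, 1]
    return math.acos(x) / math.pi, theta12 / math.pi

# Hypothetical values, for illustration only.
print(square_dalitz_coords(m12=1.2, m12_min=0.6, m12_max=1.9, theta12=2.1))
```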
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " m'(1,2) = \\frac{1}{\\pi} \\arccos\\left(2 * \\frac{m(1,2)-m(1,2)_{min}}{m(1,2)_{max}-m(1,2)_{min}} -1\\right) "
},
{
"math_id": 1,
"text": " \\theta'(1,2) = \\frac{1}{\\pi} \\theta(1,2) "
},
{
"math_id": 2,
"text": "m(1,2)"
},
{
"math_id": 3,
"text": "m(1,2)_{max}"
},
{
"math_id": 4,
"text": "m(1,2)_{min}"
},
{
"math_id": 5,
"text": " \\theta(1,2) "
}
]
| https://en.wikipedia.org/wiki?curid=714601 |
71462156 | Protactinium(IV) bromide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Protactinium(IV) bromide is an inorganic compound. It is an actinide halide, composed of protactinium and bromine. It is radioactive and has the chemical formula PaBr4. Its appearance as brown crystals may be due to the brown color of bromine. Its crystal structure is tetragonal. Protactinium(IV) bromide sublimes in a vacuum at 400 °C. The protactinium(IV) halide closest in structure to protactinium(IV) bromide is protactinium(IV) chloride.
Preparation.
Protactinium(IV) bromide can be prepared by reacting protactinium(V) bromide with hydrogen gas or aluminium:
formula_0
formula_1
Properties.
Protactinium(IV) bromide reacts with antimony trioxide to form protactinium oxybromide (PaOBr2):
formula_2
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ 3 PaBr_5 + Al \\ \\xrightarrow{400-450^oC}\\ 3 PaBr_4 + AlBr_3 }"
},
{
"math_id": 1,
"text": "\\mathsf{ 2 PaBr_5 + H_2 \\ \\xrightarrow{800^oC}\\ 2 PaBr_4 + 2 HBr }"
},
{
"math_id": 2,
"text": "\\mathsf{ 3 PaBr_4 + Sb_2O_3 \\ \\xrightarrow{150-200^oC}\\ 3 PaOBr_2 + 2 SbBr_3 }"
}
]
| https://en.wikipedia.org/wiki?curid=71462156 |
71465792 | Pycnonuclear fusion | Type of nuclear fusion that occurs at high densities & low temperatures
Pycnonuclear fusion () is a type of nuclear fusion reaction which occurs due to zero-point oscillations of nuclei around their equilibrium point bound in their crystal lattice. In quantum physics, the phenomenon can be interpreted as overlap of the wave functions of neighboring ions, with the reaction rate proportional to the overlapping amplitude. Under the conditions of above-threshold ionization, the reactions of neutronization and pycnonuclear fusion can lead to the creation of absolutely stable environments in superdense substances.
The term "pycnonuclear" was coined by A.G.W. Cameron in 1959, but research showing the possibility of nuclear fusion in extremely dense and cold compositions was published by W. A. Wildhack in 1940.
Astrophysics.
Pycnonuclear reactions can occur anywhere and in any matter, but under standard conditions the speed of the reaction is exceedingly low, so they play no significant role outside of extremely dense, neutron-rich and free-electron-rich environments, such as the inner crust of a neutron star. A feature of pycnonuclear reactions is that the rate of the reaction is directly proportional to the density of the space in which the reaction occurs, but is almost fully independent of the temperature of the environment.
Pycnonuclear reactions are observed in neutron stars and white dwarfs, with some evidence that they also occur in lab-generated deuterium-tritium plasma. Some speculations also relate the fact that Jupiter emits more radiation than it receives from the Sun to pycnonuclear reactions or cold fusion.
White dwarfs.
In white dwarfs, the core of the star is cold; under these conditions, if treated "classically," the nuclei arranged in a crystal lattice are in their ground state. The zero-point oscillations of nuclei in the crystal lattice, with energy formula_0 at Gamow's peak equal to formula_1, can overcome the Coulomb barrier, actuating pycnonuclear reactions. A semi-analytical model indicates that in white dwarfs, a thermonuclear runaway can occur at ages much younger than that of the universe, as the heat released by pycnonuclear reactions in the cores of white dwarfs exceeds the luminosity of the white dwarfs, allowing C-burning to occur, which catalyzes the formation of type Ia supernovas in accreting white dwarfs whose mass is equal to the Chandrasekhar mass.
Some studies indicate that the contribution of pycnonuclear reactions towards instability of white dwarfs is only significant in carbon white dwarfs, while in oxygen white dwarfs such instability is caused mostly by electron capture. Other authors, however, disagree that pycnonuclear reactions can act as major long-term heating sources for "massive" (1.25 M☉) white dwarfs, as their density would not suffice for a high rate of pycnonuclear reactions.
While most studies indicate that at the end of their lifecycle, white dwarfs slowly decay into black dwarfs, where pycnonuclear reactions slowly turn their cores into <chem>^56Fe</chem>, according to some models a collapse of black dwarfs is possible: M.E. Caplan (2020) theorizes that the most massive black dwarfs (1.25 M☉), due to their declining electron fraction resulting from <chem>^56Fe</chem> production, will exceed the Chandrasekhar limit in the very far future, speculating that their lifetime and delay time can stretch up to 10^1100 years.
Neutron stars.
As neutron stars undergo accretion, the density in the crust increases, passing the electron capture threshold. Once the electron capture threshold (formula_2 g cm−3) is exceeded, the formation of light nuclei through double electron capture (<chem>^40Mg + 2e^- -> ^34Ne + 6n + 2\nu_e</chem>) becomes possible, forming light neon nuclei and free neutrons, which further increases the density of the crust. As the density increases, the crystal lattices of neutron-rich nuclei are forced closer together due to gravitational collapse of accreting material, and at the point where the nuclei are pushed so close together that their zero-point oscillations allow them to break through the Coulomb barrier, fusion occurs. While the main site of pycnonuclear fusion within neutron stars is the inner crust, pycnonuclear reactions between light nuclei can occur even in the plasma ocean. Since the density of the core of neutron stars is approximated to be formula_3 g cm−3, pycnonuclear reactions play a large role at such extreme densities, as demonstrated by Haensel & Zdunik, who showed that at densities of formula_4 g cm−3 they serve as a major heat source. In the fusion processes of the inner crust, the burning of neutron-rich nuclei (<chem>^{34}Ne + ^{34}Ne -> ^68Ca</chem>) releases a lot of heat, allowing pycnonuclear fusion to act as a major energy source, possibly even acting as an energy reservoir for gamma-ray bursts.
Further studies have established that most magnetars are found at densities of formula_5g cm−3, indicating that pycnonuclear reactions along with subsequent electron capture reactions could serve as major heat sources.
Triple-alpha reaction.
In Wolf–Rayet stars, the triple-alpha reaction is facilitated by the low-energy <chem>^8Be</chem> resonance. However, in neutron stars the temperature in the core is so low that the triple-alpha reactions can occur via the pycnonuclear pathway.
Mathematical model.
As the density increases, the Gamow peak increases in height and shifts towards lower energy, while the potential barriers are depressed. If the potential barriers are depressed by the amount of formula_0, the Gamow peak is shifted across the origin, making the reactions density-dependent, as the Gamow peak energy is much larger than the thermal energy. The material becomes a degenerate gas at such densities. Harrison proposed that models fully independent of temperature be called "cryonuclear".
Pycnonuclear reactions can proceed in two ways: direct (<chem>^{34}Ne + ^{34}Ne</chem> or <chem>^{40}Mg + ^{40}Mg</chem>) or through a chain of electron capture reactions (<chem>^25N + ^40Mg</chem>).
Uncertainties.
There is currently no coherent consensus on the "rate" of pycnonuclear reactions. There are currently a lot of uncertainties to consider when modelling the rate of pycnonuclear reactions, especially in spaces with high numbers of free particles. The primary focus of current research is on the effects of crystal lattice deformation and the presence of free neutrons on the reaction rate. Every time fusion occurs, nuclei are removed from the crystal lattice, creating a defect. The difficulty of approximating this model lies in the fact that the further changes occurring to the lattice and the effect of various deformations on the rate are thus far unknown. Since neighbouring lattices can affect the rate of reaction too, neglect of such deformations could lead to major discrepancies. Another confounding variable would be the presence of free neutrons in the crusts of neutron stars. The presence of free neutrons could potentially affect the Coulomb barrier, making it either taller or thicker. A study published by D.G. Yakovlev in 2006 has shown that the rate calculation of the "first" pycnonuclear fusion of two <chem>^{34}Ne</chem> nuclei in the crust of a neutron star can have an uncertainty of up to "seven" orders of magnitude. In this study, Yakovlev also highlighted the uncertainty in the threshold of pycnonuclear fusion (e.g., at what density it starts), giving the approximate density required for the start of pycnonuclear fusion as formula_6 g cm−3, arriving at a similar conclusion as Haensel and Zdunik. According to Haensel and Zdunik, additional uncertainty of rate calculations in neutron stars can also be due to uneven distribution of the crustal heating, which can impact the thermal states of neutron stars before and after accretion.
In white dwarfs and neutron stars, the nuclear reaction rates are affected not only by pycnonuclear processes but also by the plasma screening of the Coulomb interaction. The Ukrainian Electrodynamic Research Laboratory "Proton-21" established that by forming a thin electron plasma layer on the surface of the target material, and thus forcing the self-compression of the target material at low temperatures, they could stimulate the process of pycnonuclear fusion. The startup of the process was due to the self-contracting plasma "scanning" the entire volume of the target material, screening the Coulomb field.
Screening, Quantum Diffusion & Nuclear Fusion Regimes.
Before delving into the mathematical model, it is important to understand that pycnonuclear fusion, in its essence, occurs due to two main effects: the overlap of the wave functions of neighboring nuclei caused by their zero-point oscillations, and quantum tunnelling through the Coulomb barrier.
Both of these effects are heavily affected by screening. The term screening is generally used by nuclear physicists when referring to plasmas of particularly high density. In order for pycnonuclear fusion to occur, the two particles must overcome the electrostatic repulsion between them; the energy required for this is called the Coulomb barrier. Due to the presence of other charged particles (mainly electrons) next to the reacting pair, these particles exert a shielding effect: the electrons create an electron cloud around the positively charged ions, effectively reducing the electrostatic repulsion between them and lowering the Coulomb barrier. This phenomenon of shielding is referred to as "screening", and in cases where it is particularly strong, it is called "strong screening". Consequently, in cases where the plasma has a strong screening effect, the rate of pycnonuclear fusion is substantially enhanced.
Quantum tunnelling is the foundation of the quantum physical approach to pycnonuclear fusion. It is closely intertwined with the screening effect, as the "transmission coefficient" formula_7 depends on the height of the potential barrier, the mass of the particles, and their relative velocity (since the total energy of the system depends on the kinetic energy). From this follows that the transmission coefficient is very sensitive to the effects of screening. Thus, the effect of screening not only contributes to the reduction of the potential barrier that allows for "classical" fusion to occur via the overlap of the wave functions of the zero-point oscillations of the particles, but also to the increase of the transmission coefficient, both of which increase the rate of pycnonuclear fusion.
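The following Python sketch is only a rough illustration of the direction of the screening effect, not the detailed pycnonuclear rate calculation discussed below: it evaluates the standard WKB (Gamow) exponent for Coulomb-barrier penetration and shows that raising the effective energy of the reacting pair, used here as a crude stand-in for a lowered (screened) Coulomb barrier, reduces the exponent and therefore increases the transmission coefficient. All numerical inputs are hypothetical.

```python
import math

E2_OVER_4PIEPS0 = 2.307e-28      # e^2 / (4*pi*eps0) in J*m (about 1.44 eV*nm)
HBAR = 1.054571817e-34           # reduced Planck constant, J*s
AMU = 1.66053906660e-27          # atomic mass unit, kg

def gamow_exponent(Z1, Z2, A_reduced, energy_J):
    """WKB (Gamow) exponent G for s-wave Coulomb-barrier penetration;
    the transmission coefficient scales roughly as exp(-G)."""
    mu = A_reduced * AMU                               # reduced mass
    v = math.sqrt(2.0 * energy_J / mu)                 # relative velocity
    eta = Z1 * Z2 * E2_OVER_4PIEPS0 / (HBAR * v)       # Sommerfeld parameter
    return 2.0 * math.pi * eta

E0 = 1.0e-16   # hypothetical zero-point oscillation energy, J
for boost in (0.0, 0.2, 0.5):   # fractional energy boost mimicking screening
    # Z1 = Z2 = 10 and reduced mass number 17 correspond to a pair of neon-like nuclei.
    G = gamow_exponent(Z1=10, Z2=10, A_reduced=17.0, energy_J=E0 * (1.0 + boost))
    print(f"effective energy x{1 + boost:.1f}: Gamow exponent ~ {G:.0f} (T ~ exp(-G))")
```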
On top of the other jargon related to pycnonuclear fusion, the papers also introduce various regimes that define the rate of pycnonuclear fusion. Specifically, they identify the zero-temperature, intermediate, and thermally-enhanced regimes as the main ones.
One-Component Plasma (OCP).
The pioneers of the derivation of the rate of pycnonuclear fusion in a one-component plasma (OCP) were Edwin Salpeter and David Van Horn, whose article was published in 1969. Their approach used a semiclassical method to solve the Schrödinger equation using the Wentzel-Kramers-Brillouin (WKB) approximation and Wigner-Seitz (WS) spheres. Their model is heavily simplified, and while primitive, it is required to understand other approaches which largely build on the work of Salpeter & Van Horn. They employed the WS spheres to simplify the OCP into regions containing one ion each, with the ions situated on the vertices of a BCC crystal lattice. Then, using the WKB approximation, they resolved the effect of quantum tunnelling on the fusing nuclei. Extrapolating this to the entire lattice allowed them to arrive at their formula for the rate of pycnonuclear fusion:
formula_8
where formula_9 is the density of the plasma, formula_10 is the mean molecular weight per electron (atomic nucleus), formula_11 is a constant equal to formula_12 and serves as a conversion factor from atomic mass units to grams, and formula_13 represents the thermal average of the pairwise reaction probability.
However, the main shortcoming of the method proposed by Salpeter & Van Horn is that they neglected the "dynamic model" of the lattice. This was improved upon by Schramm and Koonin in 1990. In their model, they found that the dynamic model cannot be neglected, but that it is possible for the effects caused by the lattice dynamics to cancel out.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_0"
},
{
"math_id": 1,
"text": "E_0 \\thicksim \\hbar w"
},
{
"math_id": 2,
"text": "\\rho = 1.455 * 10^{12}"
},
{
"math_id": 3,
"text": "3*10^{14}"
},
{
"math_id": 4,
"text": "\\rho = 10^{12} - 10^{13}"
},
{
"math_id": 5,
"text": "\\rho = 10^{10}-10^{11}"
},
{
"math_id": 6,
"text": "\\rho_{pyc} \\thickapprox 10^{12} - 10^{13} "
},
{
"math_id": 7,
"text": "T\n"
},
{
"math_id": 8,
"text": "P = {8 \\over 2}{\\rho \\over \\mu_AH} \\langle p \\rangle_{A_v}"
},
{
"math_id": 9,
"text": "\\rho"
},
{
"math_id": 10,
"text": "\\mu_A"
},
{
"math_id": 11,
"text": "H"
},
{
"math_id": 12,
"text": "1.66044*10^{-24}"
},
{
"math_id": 13,
"text": "\\langle p \\rangle_{A_v}"
}
]
| https://en.wikipedia.org/wiki?curid=71465792 |
71468 | Joule per mole | SI derived unit for energy per amount of material
The joule per mole (symbol: J·mol−1 or J/mol) is the unit of energy per amount of substance in the International System of Units (SI), such that energy is measured in joules, and the amount of substance is measured in moles.
It is also an SI derived unit of molar thermodynamic energy defined as the energy equal to one joule in one mole of substance. For example, the Gibbs free energy of a compound in the area of thermochemistry is often quantified in units of kilojoules per mole (symbol: kJ·mol−1 or kJ/mol), with 1 kilojoule = 1000 joules.
Physical quantities measured in J·mol−1 usually describe quantities of energy transferred during phase transformations or chemical reactions. Division by the number of moles facilitates comparison between processes involving different quantities of material and between similar processes involving different types of materials. The precise meaning of such a quantity is dependent on the context (what substances are involved, circumstances, etc.), but the unit of measurement is used specifically to describe certain phenomena, such as molar energy in thermodynamics.
Since 1 mole = 6.02214076×10^23 particles (atoms, molecules, ions etc.), 1 joule per mole is equal to 1 joule divided by 6.02214076×10^23 particles, ≈1.660539×10^−24 joule per particle. This very small amount of energy is often expressed in terms of an even larger unit such as the kJ·mol−1, because of the typical order of magnitude for energy changes in chemical processes. For example, heats of fusion and vaporization are usually of the order of 10 kJ·mol−1, bond energies are of the order of 100 kJ·mol−1, and ionization energies of the order of 1000 kJ·mol−1. For this reason, it is common within the field of chemistry to quantify the enthalpy of reaction in units of kJ·mol−1.
Other units sometimes used to describe reaction energetics are kilocalories per mole (kcal·mol−1), electron volts per particle (eV), and wavenumbers in inverse centimeters (cm−1). 1 kJ·mol−1 is approximately equal to 1.04×10^−2 eV per particle, 0.239 kcal·mol−1, or 83.6 cm−1. At room temperature (25 °C, or 298.15 K) 1 kJ·mol−1 is approximately equal to 0.4034 formula_0.
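These conversion factors follow directly from the defining constants, as the following Python sketch (illustrative only) verifies.

```python
# Defining constants (exact values in the SI)
N_A = 6.02214076e23         # Avogadro constant, 1/mol
E_CHARGE = 1.602176634e-19  # elementary charge, C (also J per eV)
K_B = 1.380649e-23          # Boltzmann constant, J/K
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
C_CM = 2.99792458e10        # speed of light in cm/s (for wavenumbers in cm^-1)
CAL = 4.184                 # J per thermochemical calorie

per_particle = 1.0e3 / N_A                    # 1 kJ/mol expressed in J per particle
print("eV per particle:", per_particle / E_CHARGE)            # ~1.04e-2
print("kcal/mol       :", 1.0 / CAL)                          # ~0.239
print("wavenumber, cm^-1:", per_particle / (H_PLANCK * C_CM)) # ~83.6
print("multiples of kT at 298.15 K:", per_particle / (K_B * 298.15))  # ~0.4034
```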
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k_B T"
}
]
| https://en.wikipedia.org/wiki?curid=71468 |
71470682 | Reverse-search algorithm | Reverse-search algorithms are a class of algorithms for generating all objects of a given size, from certain classes of combinatorial objects. In many cases, these methods allow the objects to be generated in polynomial time per object, using only enough memory to store a constant number of objects (polynomial space). (Generally, however, they are not classed as polynomial-time algorithms, because the number of objects they generate is exponential.) They work by organizing the objects to be generated into a spanning tree of their state space, and then performing a depth-first search of this tree.
Reverse-search algorithms were introduced by David Avis and Komei Fukuda in 1991, for problems of generating the vertices of convex polytopes and the cells of arrangements of hyperplanes. They were formalized more broadly by Avis and Fukuda in 1996.
Principles.
A reverse-search algorithm generates the combinatorial objects in a state space, an implicit graph whose vertices are the objects to be listed and whose edges represent certain "local moves" connecting pairs of objects, typically by making small changes to their structure. It finds each object using a depth-first search in a rooted spanning tree of this state space, described by the following information: the root object of the spanning tree, a subroutine that lists the neighbors of any given object in the state space, and a subroutine that returns the parent of any non-root object in the spanning tree.
From this information it is possible to find the children of any given node in the tree, reversing the links given by the parent subroutine: they are simply the neighbors whose parent is the given node. It is these reversed links to child nodes that the algorithm searches.
A classical depth-first search of this spanning tree would traverse the tree recursively, starting from the root, at each node listing all of the children and making a recursive call for each one. Unlike a depth-first search of a graph with cycles, it is not necessary to maintain the set of already-visited nodes to avoid repeated visits; such repetition is not possible in a tree. However, this recursive algorithm may still require a large amount of memory for its call stack, in cases when the tree is very deep. Instead, reverse search traverses the spanning tree in the same order while only storing two objects: the current object of the traversal, and the previously traversed object. Initially, the current object is set to the root of the tree, and there is no previous object. From this information, it is possible to determine the next step of the traversal by the following case analysis: if the previous object is the parent of the current object (or if the traversal has just started at the root), the next step looks for the first child of the current object, that is, the first neighbor whose parent is the current object; if the previous object is instead a child of the current object, the next step looks for the next child following the previous object in the ordering of the neighbors. In either case, if such a child exists, the traversal descends to it and outputs it; otherwise it moves back up to the parent of the current object, and it terminates when the current object is the root and no further children remain.
This algorithm involves listing the neighbors of an object once for each step in the search. However, if there are formula_0 objects to be listed, then the search performs formula_1 steps, so the number of times it generates neighbors of objects is within a factor of two of the number of times the recursive depth-first search would do the same thing.
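A minimal Python sketch of this traversal is given below (an illustration, not taken from the original papers). It assumes two user-supplied subroutines, neighbors and parent, as described above, and is demonstrated on a toy state space of length-3 bit strings connected by single-bit flips, with the parent rule of clearing the first 1 bit.

```python
def reverse_search(root, neighbors, parent):
    """Yield every node of the spanning tree defined by `parent`, storing only
    the current and previous objects of the traversal."""
    def next_child(v, after):
        # First neighbor of v whose parent is v, optionally starting just past `after`.
        seen_after = after is None
        for w in neighbors(v):
            if not seen_after:
                if w == after:
                    seen_after = True
                continue
            if parent(w) == v:
                return w
        return None

    current, previous = root, None
    yield root
    while True:
        came_from_child = previous is not None and parent(previous) == current
        child = next_child(current, previous if came_from_child else None)
        if child is not None:
            yield child
            current, previous = child, current           # descend into the child
        elif parent(current) is not None:
            current, previous = parent(current), current  # climb back to the parent
        else:
            return                                        # back at the root: done

# Toy state space: bit strings of length 3, neighbors differ in one bit,
# parent = clear the first 1 bit, root = '000'.
def neighbors(s):
    return [s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:] for i in range(len(s))]

def parent(s):
    if '1' not in s:
        return None
    i = s.index('1')
    return s[:i] + '0' + s[i + 1:]

print(sorted(reverse_search('000', neighbors, parent)))   # all 8 strings, each once
```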
Applications.
Examples of the problems to which reverse search has been applied include the following combinatorial generation problems:
If a formula_2-dimensional convex polytope is defined as an intersection of half-spaces, then its vertices can be described as the points of intersection of formula_2 or more hyperplanes bounding the halfspaces; it is a simple polytope if no vertex is the intersection of more than formula_2 of these hyperplanes. The vertex enumeration problem is the problem of listing all of these vertices. The edges of the polytope connect pairs of vertices that have formula_3 hyperplanes in common, so the vertices and edges form a state space in which each vertex has formula_2 neighbors. The simplex algorithm from the theory of linear programming finds a vertex maximizing a given linear function of the coordinates, by walking from vertex to vertex, choosing at each step a vertex with a greater value of the function; there are several standard choices of "pivot rule" that specify more precisely which vertex to choose. Any such pivot rule can be interpreted as defining the parent function of a spanning tree of the polytope, whose root is the optimal vertex. Applying reverse search to this data generates all vertices of the polytope. A similar algorithm can also enumerate all bases of a linear program, without requiring that it defines a polytope that is simple.
A hyperplane arrangement decomposes Euclidean space into cells, each described by a "sign vector" that describes whether its points belong to one of the hyperplanes (sign 0), are on one side of the hyperplane (sign +1), or are on the other side (sign −1). The cells form a connected state space under local moves that change a single sign by one unit, and it is possible to check that this operation produces a valid cell by solving a linear programming feasibility problem. A spanning tree can be constructed for any choice of root cell by defining a parent operator that makes the first possible change that would bring the sign vector closer to that of the root. Using reverse search for this state space and parent operator produces an algorithm for listing all cells in polynomial time per cell.
The triangulations of a planar point set are connected by "flip" moves that remove one diagonal from a triangulation and replace it by another. If the Delaunay triangulation is chosen as the root, then every triangulation can be flipped to the Delaunay triangulation by steps in which the triangulation of some subset of four points is replaced by its Delaunay triangulation. Choosing the first Delaunay flip as the parent of each triangulation, and applying reverse search, produces an algorithm for listing all triangulations in polynomial time per triangulation.
The connected subgraphs, and connected induced subgraphs, of a given connected graph form a state space whose local moves are the addition or removal of a single edge or vertex of the graph, respectively. A spanning tree of this state space can be obtained by adding the first edge or vertex (in some ordering of the edges or vertices) whose addition produces another connected subgraph; its root is the whole graph. Applying reverse search to this state space and parent operator produces an algorithm for listing all connected subgraphs in polynomial time per subgraph.
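As an illustration of the parent rule just described (not taken from the sources, and using sorted vertex order as the "some ordering"), the following self-contained Python sketch lists the connected induced subgraphs of a small graph; for brevity it walks the spanning tree with an explicit stack rather than the constant-memory traversal sketched earlier.

```python
def induces_connected(adj, S):
    """True when the nonempty vertex set S induces a connected subgraph of adj."""
    S = set(S)
    if not S:
        return False
    stack = [next(iter(S))]
    seen = {stack[0]}
    while stack:
        for w in adj[stack.pop()]:
            if w in S and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == S

def parent(adj, S):
    """Add the first vertex (sorted order) whose addition keeps the subgraph connected."""
    if len(S) == len(adj):
        return None                        # the root is the whole graph
    for v in sorted(adj):
        if v not in S and induces_connected(adj, S | {v}):
            return frozenset(S | {v})

def connected_induced_subgraphs(adj):
    """Walk the spanning tree defined by `parent`; children are single-vertex
    deletions whose parent is the current set."""
    stack, out = [frozenset(adj)], []
    while stack:
        S = stack.pop()
        out.append(S)
        for v in S:
            T = frozenset(S - {v})
            if induces_connected(adj, T) and parent(adj, T) == S:
                stack.append(T)
    return out

adj = {0: [1], 1: [0, 2], 2: [1]}          # the path graph 0 - 1 - 2
print(sorted(tuple(sorted(S)) for S in connected_induced_subgraphs(adj)))
# -> [(0,), (0, 1), (0, 1, 2), (1,), (1, 2), (2,)]
```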
Other applications include algorithms for generating the following structures:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "2N-1"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "d-1"
}
]
| https://en.wikipedia.org/wiki?curid=71470682 |
714710 | Weber (unit) | SI derived unit of magnetic flux
<templatestyles src="Template:Infobox/styles-images.css" />
In physics, the weber (symbol: Wb) is the unit of magnetic flux in the International System of Units (SI). The unit is derived (through Faraday's law of induction) from the relationship 1 Wb = 1 V⋅s (volt-second). A magnetic flux density of 1 Wb/m2 (one weber per square metre) is one tesla.
The weber is named after the German physicist Wilhelm Eduard Weber (1804–1891).
Definition.
The weber may be defined in terms of Faraday's law, which relates a changing magnetic flux through a loop to the electric field around the loop. A change in flux of one weber per second will induce an electromotive force of one volt (produce an electric potential difference of one volt across two open-circuited terminals).
Officially, the weber is defined as the magnetic flux that, linking a circuit of one turn, would produce in it an electromotive force of one volt if the flux were reduced to zero at a uniform rate in one second.
That is:
formula_0
One weber is also the total magnetic flux across a surface of one square meter perpendicular to a magnetic flux density of one tesla; that is,
formula_1
Expressed only in SI base units, 1 weber is:
formula_2
The weber is used in the definition of the henry as 1 weber per ampere, and consequently can be expressed as the product of those units:
formula_3
The weber is commonly expressed in a multitude of other units:
formula_4
where Ω is ohm, C is coulomb, J is joule, and N is newton.
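These identities amount to bookkeeping of SI base-unit exponents, which the following short Python sketch (illustrative only) checks.

```python
def combine(*units):
    """Multiply units given as {base: exponent} dictionaries."""
    out = {}
    for u in units:
        for base, e in u.items():
            out[base] = out.get(base, 0) + e
    return {b: e for b, e in out.items() if e != 0}

def power(u, n):
    return {b: e * n for b, e in u.items()}

KG, M, S, A = {"kg": 1}, {"m": 1}, {"s": 1}, {"A": 1}

VOLT    = combine(KG, power(M, 2), power(S, -3), power(A, -1))
TESLA   = combine(KG, power(S, -2), power(A, -1))
HENRY   = combine(KG, power(M, 2), power(S, -2), power(A, -2))
JOULE   = combine(KG, power(M, 2), power(S, -2))
OHM     = combine(KG, power(M, 2), power(S, -3), power(A, -2))
COULOMB = combine(A, S)
WEBER   = combine(KG, power(M, 2), power(S, -2), power(A, -1))

assert combine(VOLT, S) == WEBER                 # V*s
assert combine(TESLA, power(M, 2)) == WEBER      # T*m^2
assert combine(HENRY, A) == WEBER                # H*A
assert combine(JOULE, power(A, -1)) == WEBER     # J/A
assert combine(OHM, COULOMB) == WEBER            # ohm*C
print("all weber identities check out")
```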
The weber is named after Wilhelm Eduard Weber. As with every SI unit named for a person, its symbol starts with an upper case letter (Wb), but when written in full, it follows the rules for capitalisation of a common noun; i.e., "weber" becomes capitalised at the beginning of a sentence and in titles but is otherwise in lower case.
History.
In 1861, the British Association for the Advancement of Science (known as "The BA") established a committee under William Thomson (later Lord Kelvin) to study electrical units. In a February 1902 manuscript, with handwritten notes of Oliver Heaviside, Giovanni Giorgi proposed a set of rational units of electromagnetism including the weber, noting that "the product of the volt into the second has been called the "weber" by the B. A."
The International Electrotechnical Commission began work on terminology in 1909 and established Technical Committee 1 in 1911, its oldest established committee, "to sanction the terms and definitions used in the different electrotechnical fields and to determine the equivalence of the terms used in the different languages."
<templatestyles src="Template:Blockquote/styles.css" />
In 1930, TC1 decided that the magnetic field strength (H) is of a different nature from the magnetic flux density (B), and took up the question of naming the units for these fields and related quantities, among them the integral of magnetic flux density.
In 1935, TC 1 recommended names for several electrical units, including the weber for the practical unit of magnetic flux (and the maxwell for the CGS unit).
<templatestyles src="Template:Blockquote/styles.css" />
Also in 1935, TC1 passed responsibility for "electric and magnetic magnitudes and units" to the new TC24. This "led eventually to the universal adoption of the Giorgi system, which unified electromagnetic units with the MKS dimensional system of units, the whole now known simply as the SI system ()."
In 1938, TC24 "recommended as a connecting link [from mechanical to electrical units] the permeability of free space with the value of "μ"0 = 4π×10^−7 H/m". This group also recognized that any one of the practical units already in use (ohm, ampere, volt, henry, farad, coulomb, and weber) could equally serve as the fourth fundamental unit. "After consultation, the ampere was adopted as the fourth unit of the Giorgi system in Paris in 1950."
Multiples.
Like other SI units, the weber can be modified by adding a prefix that multiplies it by a power of 10.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Wb} = \\mathrm{V}{\\cdot}\\mathrm{s}."
},
{
"math_id": 1,
"text": "\\mathrm{Wb} = \\mathrm{T}{\\cdot}\\mathrm{m}^2."
},
{
"math_id": 2,
"text": "\\mathrm{Wb} = \\dfrac{\\mathrm{kg}{\\cdot}\\mathrm{m}^2}{\\mathrm{s}^2{\\cdot}\\mathrm{A}}."
},
{
"math_id": 3,
"text": "\\mathrm{Wb} = \\mathrm{H}{\\cdot}\\mathrm{A}."
},
{
"math_id": 4,
"text": "\\mathrm{Wb} \n=\\Omega {\\cdot} \\text{C}\n=\\dfrac{\\mathrm{J}}{\\mathrm{A}}\n=\\dfrac{\\mathrm{N}{\\cdot}\\mathrm{m}}{\\mathrm{A}},\n"
}
]
| https://en.wikipedia.org/wiki?curid=714710 |
7147157 | Absolutely integrable function | In mathematics, an absolutely integrable function is a function whose absolute value is integrable, meaning that the integral of the absolute value over the whole domain is finite.
For a real-valued function, since
formula_0
where
formula_1
both formula_2 and formula_3 must be finite. In Lebesgue integration, this is exactly the requirement for any measurable function "f" to be considered integrable, with the integral then equaling formula_4, so that in fact "absolutely integrable" means the same thing as "Lebesgue integrable" for measurable functions.
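The decomposition into positive and negative parts can be checked numerically; the following Python sketch (an illustration added here, with an arbitrary sample function on a finite interval) confirms that the integral of the absolute value equals the sum of the integrals of the positive and negative parts.

```python
import numpy as np

# Numerical check of |f| = f+ + f- and hence of the equality of the integrals.
x, dx = np.linspace(-10.0, 10.0, 200001, retstep=True)
f = np.exp(-x**2) * np.sin(3.0 * x)     # arbitrary real-valued sample function

f_plus = np.maximum(f, 0.0)             # positive part
f_minus = np.maximum(-f, 0.0)           # negative part

assert np.allclose(np.abs(f), f_plus + f_minus)
print(np.sum(np.abs(f)) * dx, (np.sum(f_plus) + np.sum(f_minus)) * dx)  # equal
```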
The same thing goes for a complex-valued function. Let us define
formula_5
formula_6
formula_7
formula_8
where formula_9 and formula_10 are the real and imaginary parts of formula_11. Then
formula_12
so
formula_13
This shows that the sum of the four integrals (in the middle) is finite if and only if the integral of the absolute value is finite, and the function is Lebesgue integrable only if all the four integrals are finite. So having a finite integral of the absolute value is equivalent to the conditions for the function to be "Lebesgue integrable". | [
{
"math_id": 0,
"text": "\\int |f(x)| \\, dx = \\int f^+(x) \\, dx + \\int f^-(x) \\, dx"
},
{
"math_id": 1,
"text": "f^+(x) = \\max (f(x),0), \\ \\ \\ f^-(x) = \\max(-f(x),0),"
},
{
"math_id": 2,
"text": "\\int f^+(x) \\, dx"
},
{
"math_id": 3,
"text": "\\int f^-(x) \\, dx"
},
{
"math_id": 4,
"text": "\\int f^+(x) \\, dx - \\int f^-(x) \\, dx"
},
{
"math_id": 5,
"text": "f^+(x) = \\max(\\Re f(x),0)"
},
{
"math_id": 6,
"text": "f^-(x) = \\max(-\\Re f(x),0)"
},
{
"math_id": 7,
"text": "f^{+i}(x) = \\max(\\Im f(x),0)"
},
{
"math_id": 8,
"text": "f^{-i}(x) = \\max(-\\Im f(x),0)"
},
{
"math_id": 9,
"text": "\\Re f(x)"
},
{
"math_id": 10,
"text": "\\Im f(x)"
},
{
"math_id": 11,
"text": "f(x)"
},
{
"math_id": 12,
"text": "|f(x)| \\le f^+(x) + f^-(x) + f^{+i}(x) + f^{-i}(x) \\le \\sqrt{2}\\,|f(x)|"
},
{
"math_id": 13,
"text": "\\int |f(x)| \\, dx \\le \\int f^+(x) \\, dx + \\int f^-(x) \\, dx + \\int f^{+i}(x) \\, dx + \\int f^{-i}(x) \\, dx \\le \\sqrt{2}\\int|f(x)| \\, dx"
}
]
| https://en.wikipedia.org/wiki?curid=7147157 |
7147287 | Aspherical space | In topology, a branch of mathematics, an aspherical space is a topological space with all homotopy groups formula_0 equal to 0 when formula_1.
If one works with CW complexes, one can reformulate this condition: an aspherical CW complex is a CW complex whose universal cover is contractible. Indeed, by Whitehead's theorem, contractibility of the universal cover is equivalent to its asphericality, and it is an application of the exact sequence of a fibration that the higher homotopy groups of a space and of its universal cover are the same. (By the same argument, if "E" is a path-connected space and formula_2 is any covering map, then "E" is aspherical if and only if "B" is aspherical.)
Each aspherical space "X" is, by definition, an Eilenberg–MacLane space of type formula_3, where formula_4 is the fundamental group of "X". Also directly from the definition, an aspherical space is a classifying space for its fundamental group (considered to be a topological group when endowed with the discrete topology).
Symplectically aspherical manifolds.
In the context of symplectic manifolds, the meaning of "aspherical" is a little bit different. Specifically, we say that a symplectic manifold (M,ω) is symplectically aspherical if and only if
formula_6
for every continuous mapping
formula_7
where formula_8 denotes the first Chern class of an almost complex structure which is compatible with ω.
By Stokes' theorem, we see that symplectic manifolds which are aspherical are also symplectically aspherical manifolds. However, there do exist symplectically aspherical manifolds which are not aspherical spaces.
Some references drop the requirement on "c"1 in their definition of "symplectically aspherical." However, it is more common for symplectic manifolds satisfying only this weaker condition to be called "weakly exact." | [
{
"math_id": 0,
"text": "\\pi_n(X)"
},
{
"math_id": 1,
"text": "n\\not = 1"
},
{
"math_id": 2,
"text": "p\\colon E \\to B"
},
{
"math_id": 3,
"text": "K(G,1)"
},
{
"math_id": 4,
"text": "G = \\pi_1(X)"
},
{
"math_id": 5,
"text": "\\Gamma\\backslash G/K"
},
{
"math_id": 6,
"text": "\\int_{S^2}f^*\\omega=\\langle c_1(TM),f_*[S^2]\\rangle=0"
},
{
"math_id": 7,
"text": "f\\colon S^2 \\to M,"
},
{
"math_id": 8,
"text": "c_1(TM)"
}
]
| https://en.wikipedia.org/wiki?curid=7147287 |
71478 | Exotic matter | Physics term for multiple concepts
There are several proposed types of exotic matter:
Negative mass.
Negative mass would possess some strange properties, such as accelerating in the direction opposite of applied force. Despite being inconsistent with the expected behavior of "normal" matter, negative mass is mathematically consistent and introduces no violation of conservation of momentum or energy. It is used in certain speculative theories, such as on the construction of artificial wormholes and the Alcubierre drive. The closest known real representative of such exotic matter is the region of pseudo-negative-pressure density produced by the Casimir effect.
Complex mass.
A hypothetical particle with complex rest mass would always travel faster than the speed of light. Such particles are called tachyons. There is no confirmed existence of tachyons.
formula_0
If the rest mass formula_1 is complex (imaginary), then the denominator must also be complex, because the total energy is an observable and thus must be real. Therefore, the quantity under the square root must be negative, which can only happen if "v" is greater than "c". As noted by Gregory Benford "et al.," special relativity implies that tachyons, if they existed, could be used to communicate backwards in time (see tachyonic antitelephone). Because time travel is considered to be non-physical, tachyons are believed by physicists either not to exist, or else to be incapable of interacting with normal matter.
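The sign argument can be made concrete with complex arithmetic. In the following Python sketch (illustrative only, in units where "c" = 1 and with an arbitrary imaginary rest mass), the energy formula yields a real value only when "v" exceeds "c".

```python
import cmath

C = 1.0                       # units with c = 1
m = 1j * 2.0                  # purely imaginary rest mass (tachyonic case)

def total_energy(m, v):
    return m * C**2 / cmath.sqrt(1.0 - v**2 / C**2)

print(total_energy(m, 2.0))   # v > c : the energy comes out real (zero imaginary part)
print(total_energy(m, 0.5))   # v < c : the energy would be purely imaginary
```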
In quantum field theory, complex mass would induce tachyon condensation.
Materials at high pressure.
At high pressure, materials such as sodium chloride (NaCl) in the presence of an excess of either chlorine or sodium were transformed into compounds "forbidden" by classical chemistry, such as Na3Cl and NaCl3. Quantum mechanical calculations predict the possibility of other compounds, such as NaCl7, Na3Cl2 and Na2Cl. The materials are thermodynamically stable at high pressures. Such compounds may exist in natural environments that exist at high pressure, such as the deep ocean or inside planetary cores. The materials have potentially useful properties. For instance, Na3Cl is a two-dimensional metal, made of layers of pure sodium and salt that can conduct electricity. The salt layers act as insulators while the sodium layers act as conductors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E = \\frac{m\\cdot c^2}{\\sqrt{1 - \\frac{\\left|\\mathbf{v}\\right|^2}{c^2}}}"
},
{
"math_id": 1,
"text": "m"
}
]
| https://en.wikipedia.org/wiki?curid=71478 |
71478579 | NLTS conjecture | Quantum computational theorem on problem complexity
In quantum information theory, the no low-energy trivial state (NLTS) conjecture is a precursor to a quantum PCP theorem (qPCP) and posits the existence of families of Hamiltonians with all low-energy states of non-trivial complexity. It was formulated by Michael Freedman and Matthew Hastings in 2013. An NLTS proof would be a consequence of one aspect of qPCP problems – the inability to certify an approximation of local Hamiltonians via NP completeness. In other words, an NLTS proof would be one consequence of the QMA complexity of qPCP problems. On a high level, if proved, NLTS would be one property of the non-Newtonian complexity of quantum computation. NLTS and qPCP conjectures posit the near-infinite complexity involved in predicting the outcome of quantum systems with many interacting states. These calculations of complexity would have implications for quantum computing such as the stability of entangled states at higher temperatures, and the occurrence of entanglement in natural systems. There is currently a proof of NLTS conjecture published in preprint.
NLTS property.
The NLTS property is the underlying set of constraints that forms the basis for the NLTS conjecture.
Definitions.
Local hamiltonians.
A "k"-local Hamiltonian (quantum mechanics) formula_0 is a Hermitian matrix acting on "n" qubits which can be represented as the sum of formula_1 Hamiltonian terms acting upon at most formula_2 qubits each:
formula_3
The general "k"-local Hamiltonian problem is, given a "k"-local Hamiltonian formula_0, to find the smallest eigenvalue formula_4 of formula_0. formula_4 is also called the ground-state energy of the Hamiltonian.
The "family of local Hamiltonians" thus arises out of the "k"-local problem. Kliesch states the following as a definition for local Hamiltonians in the context of NLTS:
Let "I" ⊂ N be an index set. A family of local Hamiltonians is a set of Hamiltonians {"H"("n")}, "n" ∈ "I", where each "H"("n") is defined on "n" finite-dimensional subsystems (in the following taken to be qubits), that are of the form
formula_5
where each "H""m"("n") acts non-trivially on "O"(1) qubits. Another constraint is the operator norm of "H""m"("n") is bounded by a constant independent of "n" and each qubit is only involved in a constant number of terms "H""m"("n").
Topological order.
In physics, "topological order" is a kind of order in the zero-temperature phase of matter (also known as quantum matter). In the context of NLTS, Kliesch states: "a family of local gapped Hamiltonians is called "topologically ordered" if any ground states cannot be prepared from a product state by a constant-depth circuit".
NLTS property.
Kliesch defines the NLTS property thus:
Let "I" be an infinite set of system sizes. A family of local Hamiltonians {"H"("n")}, "n" ∈ "I" has the NLTS property if there exists "ε" > 0 and a function "f" : N → N such that
NLTS conjecture.
There exists a family of local Hamiltonians with the NLTS property.
Quantum PCP conjecture.
Proving the NLTS conjecture is an obstacle for resolving the qPCP conjecture, an even harder theorem to prove. The qPCP conjecture is a quantum analogue of the classical PCP theorem. The classical PCP theorem states that for satisfiability problems like 3SAT, it is NP-hard to approximate the maximal number of clauses that can be simultaneously satisfied, the classical analogue of estimating the ground-state energy of a Hamiltonian system. In layman's terms, classical PCP describes the near-infinite complexity involved in predicting the outcome of a system with many resolving states, such as a water bath full of hundreds of magnets. qPCP increases the complexity by trying to solve PCP for quantum states. Though it hasn't been proven yet, a positive proof of qPCP would imply that quantum entanglement in Gibbs states could remain stable at higher-energy states above absolute zero.
NLETS proof.
NLTS on its own is difficult to prove, though a simpler no low-error trivial states (NLETS) theorem has been proven, and that proof is a precursor for NLTS.
NLETS is defined as:
Let "k" > 1 be some integer, and {"H""n"}"n" ∈ N be a family of "k"-local Hamiltonians. {"H""n"}"n" ∈ N is NLETS if there exists a constant "ε" > 0 such that any "ε"-impostor family "F" = {"ρ""n"}"n" ∈ N of {"H""n"}"n" ∈ N is non-trivial.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "H = \\sum_{i=1}^m H_i."
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": "H^{(n)} = \\sum_n H_m^{(n)},"
}
]
| https://en.wikipedia.org/wiki?curid=71478579 |
7147956 | Differential calculus over commutative algebras | In mathematics the differential calculus over commutative algebras is a part of commutative algebra based on the observation that most concepts known from classical differential calculus can be formulated in purely algebraic terms. Instances of this are:
formula_11
where the bracket formula_12 is defined as the commutator
formula_13
Denoting the set of formula_14th order linear differential operators from an formula_5-module formula_15 to an formula_5-module formula_16 with formula_17 we obtain a bi-functor with values in the category of formula_5-modules. Other natural concepts of calculus such as jet spaces, differential forms are then obtained as representing objects of the functors formula_18 and related functors.
Seen from this point of view calculus may in fact be understood as the theory of these functors and their representing objects.
Replacing the real numbers formula_1 with any commutative ring, and the algebra formula_19 with any commutative algebra the above said remains meaningful, hence differential calculus can be developed for arbitrary commutative algebras. Many of these concepts are widely used in algebraic geometry, differential geometry and secondary calculus. Moreover, the theory generalizes naturally to the setting of graded commutative algebra, allowing for a natural foundation of calculus on supermanifolds, graded manifolds and associated concepts like the Berezin integral.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "\\R"
},
{
"math_id": 2,
"text": "A = C^\\infty (M),"
},
{
"math_id": 3,
"text": "A,"
},
{
"math_id": 4,
"text": "\\Gamma"
},
{
"math_id": 5,
"text": "A"
},
{
"math_id": 6,
"text": "E\\rightarrow M"
},
{
"math_id": 7,
"text": "F \\rightarrow M"
},
{
"math_id": 8,
"text": "\\Delta : \\Gamma (E) \\to \\Gamma (F)"
},
{
"math_id": 9,
"text": "k + 1"
},
{
"math_id": 10,
"text": "f_0, \\ldots, f_k \\in A"
},
{
"math_id": 11,
"text": "\\left[f_k \\left[f_{k-1} \\left[\\cdots\\left[f_0, \\Delta\\right] \\cdots \\right]\\right]\\right] = 0"
},
{
"math_id": 12,
"text": "[f, \\Delta] : \\Gamma(E)\\to \\Gamma(F)"
},
{
"math_id": 13,
"text": "[f,\\Delta](s) = \\Delta(f \\cdot s) - f \\cdot \\Delta(s)."
},
{
"math_id": 14,
"text": "k"
},
{
"math_id": 15,
"text": "P"
},
{
"math_id": 16,
"text": "Q"
},
{
"math_id": 17,
"text": "\\mathrm{Diff}_k(P, Q)"
},
{
"math_id": 18,
"text": "\\mathrm{Diff}_k"
},
{
"math_id": 19,
"text": "C^\\infty(M)"
}
]
| https://en.wikipedia.org/wiki?curid=7147956 |
71480557 | Convergence proof techniques | Convergence proof techniques are canonical components of mathematical proofs that sequences or functions converge to a finite limit when the argument tends to infinity.
There are many types of series and modes of convergence requiring different techniques. Below are some of the more common examples. This article is intended as an introduction aimed to help practitioners explore appropriate techniques. The links below give details of necessary conditions and generalizations to more abstract settings. The convergence of series is already covered in the article on convergence tests.
Convergence in R"n".
It is common to want to prove convergence of a sequence formula_0 or function formula_1, where formula_2 and formula_3 refer to the natural numbers and the real numbers, and convergence is with respect to the Euclidean norm, formula_4.
Useful approaches for this are as follows.
First principles.
The analytic definition of convergence of formula_5 to a limit formula_6 is that for all formula_7 there exists a formula_8 such that for all formula_9, formula_10. The most basic proof technique is to find such a formula_8 and prove the required inequality. If the value of formula_6 is not known in advance, the techniques below may be useful.
Contraction mappings.
In many cases, the function whose convergence is of interest has the form formula_11 for some transformation formula_12. For example, formula_12 could map formula_13 to formula_14 for some conformable matrix formula_15. Alternatively, formula_12 may be an element-wise operation, such as replacing each element of formula_13 by the square root of its magnitude.
In such cases, if the problem satisfies the conditions of Banach fixed-point theorem (the domain is a non-empty complete metric space) then it is sufficient to prove that formula_16 for some constant formula_17 which is fixed for all formula_18 and formula_19. Such a formula_12 is called a contraction mapping. The composition of two contraction mappings is a contraction mapping, so if formula_20, then it is sufficient to show that formula_21 and formula_22 are both contraction mappings.
Example.
Famous examples of the use of this approach include the affine iteration formula_23, where formula_24 is a constant vector: when the matrix formula_15 has norm less than one, this map is a contraction and the iteration converges to the fixed point formula_25.
Non-expansion mappings.
If both above inequalities are weak (formula_26), the mapping is a non-expansion mapping.
It is not sufficient for formula_12 to be a non-expansion mapping. For example, formula_27 is a non-expansion mapping, but the sequence does not converge.
However, the composition of a contraction mapping and a non-expansion mapping (or vice versa) is a contraction mapping.
Contraction mappings on limited domains.
If formula_12 is not a contraction mapping on its entire domain, but it is on its codomain (the image of the domain), that is also sufficient for convergence.
This also applies for decompositions.
For example, consider formula_28. The function formula_29 is not a contraction mapping, but it is on the restricted domain formula_30, which is the codomain of formula_31 for real arguments. Since formula_31 is a non-expansion mapping, this implies formula_12 is a contraction mapping.
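A short Python sketch (illustrative only) of this example: iterating the map from an arbitrary starting point converges to its unique fixed point.

```python
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    """Iterate x <- T(x) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

T = lambda x: math.cos(math.sin(x))
x_star = fixed_point(T, x0=123.4)     # any real starting point works
print(x_star, T(x_star))              # the two agree: x* = cos(sin(x*))
```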
Convergent subsequences.
Every bounded sequence in formula_32 has a convergent subsequence, by the Bolzano–Weierstrass theorem. If these all have the same limit, then the original sequence converges to that limit. If it can be shown that all of the subsequences of formula_5 have the same limit, such as by showing that there is a unique fixed point of the transformation formula_12, then the initial sequence must also converge to that limit.
Monotonicity (Lyapunov functions).
Every bounded monotonic sequence in formula_32 converges to a limit.
This approach can also be applied to sequences that are not monotonic. Instead, it is possible to define a function formula_33 such that formula_34 is monotonic in formula_35. If formula_36 satisfies the conditions to be a Lyapunov function then formula_5 is convergent. Lyapunov's theorem is normally stated for ordinary differential equations, but can also be applied to sequences of iterates by replacing derivatives with discrete differences.
The basic requirements on formula_36 are that formula_37 for formula_38 and formula_39 (for differential equations, the corresponding requirement is formula_40 for formula_41), that formula_42 for formula_43, and that formula_44 goes to infinity as formula_45 goes to infinity (that is, formula_36 is radially unbounded).
In many cases, a Lyapunov function of the form formula_46 can be found, although more complex forms are also used.
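A minimal Python illustration (not from the source): the sequence defined by x(k+1) = −0.8·x(k) is not monotonic, since it alternates in sign, but V(x) = x² is a Lyapunov function for it and decreases monotonically to zero.

```python
# x oscillates in sign, yet V(x) = x**2 strictly decreases toward 0.
x = 1.0
for k in range(10):
    print(k, round(x, 6), round(x * x, 6))   # k, x_k, V(x_k)
    x = -0.8 * x
```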
For delay differential equations, a similar approach applies with Lyapunov functions replaced by Lyapunov functionals also called Lyapunov-Krasovskii functionals.
If the inequality in the condition 1 is weak, LaSalle's invariance principle may be used.
Convergence of sequences of functions.
To consider the convergence of sequences of functions, it is necessary to define a distance between functions to replace the Euclidean norm. These often include convergence in an integral norm such as formula_47, for which formula_48 is required; pointwise convergence, in which formula_49 for every point of the domain; and uniform convergence, formula_51, in which the largest pointwise deviation of formula_52 from the limit must tend to zero.
Convergence of random variables.
Random variables are more complicated than simple elements of formula_53. (Formally, a random variable is a mapping formula_54 from an event space formula_55 to a value space formula_36. The value space may be formula_53, such as the roll of a dice, and such a random variable is often spoken of informally as being in formula_53, but convergence of sequence of random variables corresponds to convergence of the sequence of "functions", or the "distributions", rather than the sequence of "values".)
There are multiple types of convergence for a sequence of random variables formula_56, depending on how the "distance" between functions is measured; these include convergence in distribution, convergence in probability, almost sure convergence, and convergence in mean.
Each has its own proof techniques, which are beyond the current scope of this article.
Topological convergence.
For all of the above techniques, some form of the basic analytic definition of convergence above applies. However, topology has its own definition of convergence. For example, in a non-Hausdorff space, it is possible for a sequence to converge to multiple different limits.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f:\\mathbb{N}\\rightarrow \\mathbb{R}^n"
},
{
"math_id": 1,
"text": "f:\\mathbb{R}\\rightarrow \\mathbb{R}^n"
},
{
"math_id": 2,
"text": "\\mathbb{N}"
},
{
"math_id": 3,
"text": "\\mathbb{R}"
},
{
"math_id": 4,
"text": "||\\cdot||_2"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "f_{\\infty}"
},
{
"math_id": 7,
"text": "\\epsilon"
},
{
"math_id": 8,
"text": "k_0"
},
{
"math_id": 9,
"text": "k > k_0"
},
{
"math_id": 10,
"text": "\\|f(k) - f_{\\infty}\\| < \\epsilon"
},
{
"math_id": 11,
"text": "f(k+1) = T(f(k))"
},
{
"math_id": 12,
"text": "T"
},
{
"math_id": 13,
"text": "f(k)"
},
{
"math_id": 14,
"text": "f(k+1)=A f(k)"
},
{
"math_id": 15,
"text": "A"
},
{
"math_id": 16,
"text": "\\|T(x) - T(y)\\| < \\|k(x - y)\\|"
},
{
"math_id": 17,
"text": "|k| < 1"
},
{
"math_id": 18,
"text": "x"
},
{
"math_id": 19,
"text": "y"
},
{
"math_id": 20,
"text": "T = T_1 \\circ T_2"
},
{
"math_id": 21,
"text": "T_1"
},
{
"math_id": 22,
"text": "T_2"
},
{
"math_id": 23,
"text": "T(x) = Ax + B"
},
{
"math_id": 24,
"text": "B"
},
{
"math_id": 25,
"text": "(I-A)^{-1}B"
},
{
"math_id": 26,
"text": "k \\le 1"
},
{
"math_id": 27,
"text": "T(x) = -x"
},
{
"math_id": 28,
"text": "T(x) = \\cos(\\sin(x))"
},
{
"math_id": 29,
"text": "\\cos"
},
{
"math_id": 30,
"text": "[-1, 1]"
},
{
"math_id": 31,
"text": "\\sin"
},
{
"math_id": 32,
"text": "\\mathbb R^n"
},
{
"math_id": 33,
"text": "V:\\mathbb{R}^n\\rightarrow \\mathbb{R}"
},
{
"math_id": 34,
"text": "V(f(n))"
},
{
"math_id": 35,
"text": "n"
},
{
"math_id": 36,
"text": "V"
},
{
"math_id": 37,
"text": "V(f(n+1)) - V(f(n)) < 0"
},
{
"math_id": 38,
"text": "f(n) \\ne 0"
},
{
"math_id": 39,
"text": "V(0) = 0"
},
{
"math_id": 40,
"text": "\\dot{V}(x) < 0"
},
{
"math_id": 41,
"text": "x \\ne 0"
},
{
"math_id": 42,
"text": "V(x) > 0"
},
{
"math_id": 43,
"text": "x\\ne 0"
},
{
"math_id": 44,
"text": "V(x)"
},
{
"math_id": 45,
"text": "\\|x\\|"
},
{
"math_id": 46,
"text": "V(x) = x^T A x"
},
{
"math_id": 47,
"text": "\\|g\\|_f = \\int_{x \\in A} \\|g(x)\\| dx"
},
{
"math_id": 48,
"text": "||f(n)-f_\\infty||_f \\rightarrow 0"
},
{
"math_id": 49,
"text": "f_n(x) \\rightarrow f_\\infty(x)"
},
{
"math_id": 50,
"text": "f(x)"
},
{
"math_id": 51,
"text": "\\lim_{n\\to\\infty}\\,\\sup\\{\\,\\left|f_n(x)-f_\\infty(x)\\right| : x \\in A \\,\\}=0,"
},
{
"math_id": 52,
"text": "f_n"
},
{
"math_id": 53,
"text": "\\mathbb{R}^n"
},
{
"math_id": 54,
"text": "x:\\Omega\\rightarrow V"
},
{
"math_id": 55,
"text": "\\Omega"
},
{
"math_id": 56,
"text": "x_n:\\Omega\\rightarrow V"
}
]
| https://en.wikipedia.org/wiki?curid=71480557 |
71482441 | Dust astronomy | Branch of astronomy
Dust astronomy is a subfield of astronomy that uses the information contained in individual cosmic dust particles ranging from their dynamical state to its isotopic, elemental, molecular, and mineralogical composition in order to obtain information on the astronomical objects occurring in outer space. Dust astronomy overlaps with the fields of Planetary science, Cosmochemistry, and Astrobiology.
Eberhard Grün et al. stated in the 2002 Kuiper prize lecture "Dust particles, like photons, carry information from remote sites in space and time. From knowledge of the dust particles' birthplace and their bulk properties, we can learn about the remote environment out of which the particles were formed. This approach is called Dust Astronomy which is carried out by means of a dust telescope on a dust observatory in space".
History.
Early observations.
Three phenomena that relate (we know today) to cosmic dust were noticed by humans for millennia: Zodiacal light, comets, and meteors (cf. Historical comet observations in China). Early astronomers were interested in understanding these phenomena.
Zodiacal light or "false dawn" can be seen in the western sky after the evening twilight has disappeared, or in the eastern sky just before the morning twilight appears. This phenomenon was investigated by the astronomer Giovanni Domenico Cassini in 1683. He explained Zodiacal light by interplanetary matter (dust) around the Sun according to Hugo Fechtig, Christoph Leinert, and Otto E. Berg in the book "Interplanetary Dust".
In the past, unexpected appearances of comets were seen as bad omens that signaled disaster and upheaval, as described in the Observational history of comets. However, in 1705, Edmond Halley used Isaac Newton's laws of motion to analyze several earlier cometary sightings. He observed that the comets of 1531, 1607, and 1682 had very similar orbital elements, and he theorized that they were all the same comet. Halley predicted that this comet would return in 1758-59, but he died before it did. The comet, now known as Halley's Comet and officially designated 1P/Halley, ultimately did return on schedule.
A meteor, or "shooting star" is a streak of light caused by a meteoroid entering the Earth's atmosphere at a speed of several tens of kilometers per second, at an altitude of about 100 km. At this speed the meteoroid heats up and leaves a trail of excited atoms and ions which emit light as they de-excite. In some cultures, meteors were thought to be an atmospheric phenomenon, like lightning. While only a few meteors can typically be seen in one hour on a moonless night, during certain times of the year, meteor showers with over 100 meteors per hour can be observed. Italian astronomer Giovanni Schiaparelli concluded in 1866 that the Perseid meteors were fragments of Comet Swift–Tuttle, based on their orbital similarities.
The physical relation between the three disparate phenomena was demonstrated by the American astronomer Fred Lawrence Whipple, who in the 1950s proposed the "icy conglomerate" model of comet composition. This model could explain how comets release meteoroids and dust, which in turn feed and maintain the Zodiacal dust cloud.
Compositional analyses of extraterrestrial material.
For a long time, the only extraterrestrial materials accessible for study were meteorites that had been collected on the Earth's surface. Meteorites are considered solid fragments from other astronomical objects such as planets, asteroids, comets, or moons. Most meteorites are chondrite meteorites, named for the small, round particles they contain.
Carbonaceous chondrites are especially primitive; they have retained many of their chemical properties since they accreted 4.6 billion years ago.
Other meteorites have been modified by either melting or planetary differentiation of the parent body. Analyzing the composition of meteorites provides a glimpse into the formation and evolution of the Solar System. Therefore, meteorite analyses have been the cornerstone of cosmochemistry.
The first extraterrestrial samples – other than meteorites – were 380 kg of lunar samples brought back in the seventies by the Apollo missions; at about the same time, 300 g were returned by the uncrewed "Luna" spacecraft. More recently, in 2020, "Chang'e 5" collected 1.7 kg of lunar material. From the isotopic, elemental, molecular, and mineralogical compositions, important conclusions were drawn about, for example, the origin of the Moon, such as the giant-impact hypothesis.
Thousands of grains were collected during the flyby of comet 81P/Wild by "Stardust", which returned the samples to Earth in 2006. Their analysis provided insight into the early Solar System.
Also some probable interstellar grains were collected during interplanetary cruise of Stardust and were returned by the same mission.
Asteroids and meteorites have been linked via their Asteroid spectral types and similarities in the visible and near-infrared, which implies that asteroids and meteorites derived from the same parent bodies.
The first asteroid samples were collected by the JAXA "Hayabusa" missions. "Hayabusa" encountered asteroid 25143 Itokawa in November 2005, picked up 10 to 100 micron sized particles from the surface, and returned them to Earth in June 2010. The "Hayabusa 2" mission collected about 5 g of surface and sub-surface material from asteroid 162173 Ryugu, a primitive C-type asteroid, and returned it in 2020.
Sample return missions are very expensive and can address only a small number of astronomical objects. Therefore, less expensive methods to collect and analyse extraterrestrial materials have been sought. Cosmic dust surviving atmospheric entry can be collected by high-flying (~20 km altitude) aircraft. Donald E. Brownlee reliably identified the extraterrestrial nature of such collected dust particles by their chondritic composition. A large portion of the collected particles may have a cometary origin, while others come from asteroids. These stratospheric dust samples can be requested for further research from a catalogue that provides SEM photos together with their EDS spectra.
Methods.
Since the beginning of the space age the study of space dust has expanded rapidly. Freed from peering through narrow infrared windows in the atmosphere, infrared astronomy has mapped out cold and dark dust clouds throughout the universe. In addition, in situ detection and analysis of cosmic dust has come into the focus of space agencies (cf. Space dust measurement).
In situ dust analyzers.
Numerous spacecraft have detected micron-sized cosmic dust particles across the planetary system. Some of these spacecraft had dust composition analyzers that used impact ionization to determine the composition of ions generated from the cosmic dust particle.
The very first dust composition analyzer, the Helios Micrometeoroid Analyzer, already searched for variations in the compositional and physical properties of micrometeoroids. The spectra did not show any clustering around single minerals. The continuous transition from low to high ion masses indicates that individual grains are mixtures of various minerals and carbonaceous compounds.
The more advanced dust mass analyzers on the 1986 comet Halley missions Vega 1, Vega 2, and Giotto recorded an abundance of small particles. In addition to silicates, many of these particles were rich in light elements such as H, C, N, and O. This indicates that Halley dust is even more primitive than carbonaceous chondrites.
The identification of organic constituents suggests that the majority of the particles consist of a predominantly chondritic core with a refractory organic mantle.
The Cassini Cosmic Dust Analyzer (CDA) analyzed dust throughout its interplanetary cruise to Saturn and within the Saturn system. During Cassini's flyby of Jupiter CDA detected several hundred dust impacts within 100 million km from Jupiter. The spectra of these particles revealed sodium chloride (NaCl) as the major particle constituent, along with sulphurous and potassium-bearing components that demonstrated their relation to Jupiter's volcanic moon, Io.
Saturn's E ring particles consist predominantly of water ice
but in the vicinity of Saturn's moon Enceladus CDA found mostly salt-rich ice particles that were ejected by active ice geysers on the surface of this moon. This finding led to the belief that an underground salt-water ocean is the source for all matter observed in the plumes.
At large distance from Saturn CDA identified and analyzed interstellar grains passing through the Saturn system. These analyses suggested magnesium-rich grains of silicate and oxide composition, some with iron inclusions.
The detection of electric dust charges by CDA provided means for contact-free detection and analysis of dust grains in space.
This discovery led to the development of a trajectory sensor that allows us to determine the trajectory of a charged dust particle prior to impact onto an impact target.
Such a dust trajectory sensor can be combined with an aerogel dust collector in order to form an active dust collector, or with a large-area dust composition analyzer in order to form a dust telescope. With these capabilities CDA can be considered a prototype dust telescope.
Dust telescopes.
In situ methods of dust astronomy like dust composition analyzers aim for the exploitation of the cosmochemical information contained in individual cosmic dust particles.
Less costly than sample return missions are rendezvous missions to a comet or asteroid, such as the Rosetta space probe to comet 67P/Churyumov–Gerasimenko. Rosetta characterized collected comet dust with sophisticated instruments: the dust detector "GIADA", the high-resolution secondary ion mass spectrometer "COSIMA", the atomic force microscope "MIDAS", and the mass spectrometers of "ROSINA".
Several large-area dust composition analyzers and dust telescopes are in preparation in order to study astronomical objects or interplanetary dust from comets and asteroids and interstellar dust.
The Surface Dust Analyser (SUDA) on board the Europa Clipper mission will map the composition of Europa's surface and search for cryovolcanic plumes. The instrument is capable of identifying biosignatures and other complex molecules in ice ejecta.
The DESTINY+ Dust Analyzer (DDA) will fly on the Japanese-German space mission DESTINY+ to asteroid 3200 Phaethon.
Phaethon is the parent object of the December Geminids meteor stream.
DDA will study Phaethon's dust environment during the encounter and will analyze interstellar and interplanetary dust during the cruise to Phaethon.
The Interstellar Dust Experiment (IDEX) will fly on the Interstellar Mapping and Acceleration Probe (IMAP) at the Sun–Earth L1 Lagrange point. IDEX will provide the mass distribution and elemental composition of interstellar and interplanetary dust particles.
Sources of cosmic dust.
The ultimate sources of cosmic dust are stars, in which the elements of which stardust is composed are produced by fusion of hydrogen and helium or by explosive nucleosynthesis in supernovae. This stardust from various stellar sources is mixed in the interstellar medium and thermally processed in star-forming regions. Solar System objects like comets and asteroids contain this material in more or less processed form. Geologically active satellites like Io or Enceladus emit dust that condensed out of vapor from the molten interior of these planetary bodies.
Stars.
After the Big Bang only the chemical elements hydrogen, helium, and lithium existed. All other elements that can be found in cosmic dust have been formed in stars and supernovae.
Therefore, the ultimate sources of dust are stars. Elements from carbon (atomic number Z = 6) to plutonium (Z = 94) are produced by nucleosynthesis in stellar cores and in Supernova explosions. Stellar nucleosynthesis in the most massive stars creates many elements, with the abundance peak at iron (Z = 26) and nickel (Z = 28).
Stellar evolution depends strongly on the mass of the star. Star masses range from ~0.1 to ~100 solar masses. Their lifetimes range from 10^6 years for the biggest stars to 10^12 years for the smallest stars. Towards the end of their lives, mature stars may expand into red giants with dense stellar winds that form circumstellar envelopes in which molecules and dust particles can form. More massive stars shed their outer shells while their cores collapse into neutron stars or black holes. The elemental, isotopic, and mineralogical composition of all this stardust reflects the composition of the outer shell of the corresponding parent star.
Already in 1860 Angelo Secchi identified carbon stars as a separate class of stars. Carbon stars are characterized by their dominant spectral Swan bands from the molecule C2 and their ruby red colour caused by soot-like substances. Also silicon carbide has been observed in the outflows of carbon stars.
Since the advent of infrared astronomy dust in stellar outflows became observable. Bands at 10 and 18 microns wavelength were observed around many late-type giant stars indicating the presence of silicate dust in circumstellar envelopes. Oxides of the metals Al, Mg, Fe and others are suspected to be emitted from oxygen-rich stars.
Dust is observed in supernova remnants like the Crab Nebula and in contemporary supernova explosions. These observations indicate that most dust in the interstellar medium is created by supernovae.
Traces of star dust have been found in presolar grains contained in meteorites. Star dust grains are identified by their unique isotopic composition that is different from that in the Solar System's matter as well as from the galactic average. Presolar grains formed within outflowing and cooling gases from earlier presolar stars and have an isotopic composition unique to that parent star. These isotopic signatures are often fingerprints of very specific astrophysical nuclear reactions that took place within the parent star.
Unusual isotopic signatures of neon and xenon have been found in extraterrestrial diamond grains and silicon carbide grains. The silicon isotopes within the SiC grains have isotopic ratios like those expected in red-giant stars.
Some presolar grains are composed primarily of calcium-44 (44Ca), which is presumably the remains of the extinct radionuclide titanium-44 (44Ti), an isotope that was formed in abundance in Type II supernovae.
Interstellar medium and star formation regions.
The interstellar medium is a melting pot of gas and dust emitted from stars. The composition of the interstellar medium is the result of nucleosynthesis in stars since the Big Bang and is represented by the abundance of the chemical elements. It consists of three phases: (1) dense, cold, and dusty dark nebulae, (2) diffuse clouds, and (3) hot coronal gas. Dark nebulae are molecular clouds that contain molecular hydrogen and other molecules that have formed in the gas phase and on dust grain surfaces. Any gas atom or molecule that hits a cold dust grain will be adsorbed; it may recombine with other adsorbed atoms or molecules or with molecules of the dust grain, or it may simply remain deposited on the grain surface. Diffuse clouds are warm, neutral, or ionized envelopes of molecular clouds. Both are observable in the galactic disk. Hot coronal gas is heated by supernova explosions and energetic stellar winds. This environment is destructive for molecules and small dust particles and extends into the galactic corona.
In the Milky Way cold dark nebulae are concentrated in spiral arms and around the Galactic Center. Dark nebulae are dark because naked interstellar dust, or dust covered with condensed gases, absorbs visible light by extinction and re-emits infrared and submillimetre radiation. Infrared emission from the dust cools the clouds down to 10 to 20 K. The largest dark nebulae are giant molecular clouds that contain 10 thousand to 10 million solar masses and are 5 to 200 parsecs (pc) in size. The smallest are Bok globules of a few to 50 solar masses and ~1 pc across.
When a dense cloud becomes cold enough and the gas pressure is insufficient to support it, the cloud will undergo gravitational collapse and fragment into smaller clouds of about stellar mass. Such star formation will result in a gravitationally bound open cluster of stars or an unbound stellar association. In each collapsing cloud gas and dust are drawn inward toward the center of gravity. The heat generated by the collapse in a protostellar cloud will heat up an accretion disk that feeds the central protostar. The most massive stars evolve quickly into luminous O and B stars that ultimately disperse the surrounding gas and dust by radiation pressure and strong stellar winds into the diffuse interstellar medium.
Solar-mass stars take more time and develop a protoplanetary disk consisting of gas and dust with strong radial density and temperature gradients, with the highest values close to the central protostar. At temperatures below 1300 K fine-grained minerals condense from the hot gas, like the calcium–aluminium-rich inclusions found in carbonaceous chondrite meteorites. There is another important temperature limit in the protoplanetary disk at ~150 K, the snow line, outside of which it is cold enough for volatile compounds such as water, ammonia, methane, carbon dioxide, carbon monoxide, and nitrogen to condense into solid ice grains.
Inside the snow line the terrestrial planets formed; outside it the gas giants and their icy moons formed.
In the protoplanetary disk dust and gas evolve to planets in three phases.
In the first phase micron-sized dust is carried by the gas and collisions between dust particles occur by Brownian motion at low speed. Through ballistic agglomeration dust (and ice) grains grow to cm-sized aggregates.
In the second phase cm-sized pebbles grow to km-sized planetesimals (cf. the section on Dust accretion). This phase comprises the formation of chondrules in the region of the terrestrial planets; theories of chondrule formation include solar nebula lightning, nebular shocks, and meteoroid collisions. In this phase dust decouples from the gas and moves on Kepler orbits around the central protostar, slowly settling near the middle plane of the disk. In this dense layer particles can grow by gravitational instability and streaming instability to km-sized planetesimals.
The third phase is the runaway accretion of planetesimals by self-gravitation to form planetary embryos that eventually merge into planets.
During this planet formation stage the central star becomes a T Tauri star, which is powered by gravitational energy released as the star contracts until hydrogen fusion begins. T Tauri stars have extremely powerful stellar winds that clear the remaining gas and dust from the protoplanetary disk, and the growth of planetary objects stops.
Local interstellar medium.
The Sun is located 8,300 pc from the center of the galaxy on the inner edge of the Orion Arm within the diffuse Local Interstellar Cloud (LIC) of the Local Bubble. The Local Bubble was created by supernovae explosions in the nearest (~130 pc) star formation region of the Scorpius–Centaurus association. Several partially ionized warm "clouds" of interstellar gas are located within a few parsecs of the Sun. Their hydrogen density is about 5 times higher than that of the Local Bubble.
For the last several tens of thousands of years the Sun has passed through the LIC, but within a few thousand years it will enter the nearby G cloud.
Interstellar dust grains smaller than 10 microns couple to the LIC gas via the interstellar magnetic field over a scale length <1 pc.
The LIC is a warm tenuous partially ionized cloud ("T" ≈ 7000 K, "n"H + "n"H+ ≈ 0.3 cm−3) surrounding the Solar System.
It streams at ≈ 26 km/s around the Solar System.
The heliopause, which separates the interstellar medium from the heliosphere, is 100 to 150 AU from the Sun in the upstream direction. Only neutral atoms and dust particles >0.1 micron can penetrate the heliopause and enter the heliosphere.
The Ulysses instruments GAS and DUST discovered flows of interstellar helium and interstellar dust particles passing through the inner Solar System.
Both flow directions in the ecliptic coordinate system are very similar at ecliptic longitude "l" ≈ 74°, ecliptic latitude "b" ≈ -5°. Ulysses monitored the dust flow over 16 years and found a strong variation with the solar cycle that is due to the variations in the interplanetary magnetic field which followed the 22-year solar dynamo cycle.
The first compositional analyses of interstellar dust particles are available from the "Cassini" Cosmic Dust Analyzer and the interstellar dust collection by the Stardust mission. The moderate resolution spectra of interstellar dust suggest magnesium-rich grains of silicate and oxide composition, some with iron inclusions.
Future high mass resolution dust telescope analyses will provide a sharper view on the composition of interstellar dust.
Analyses of samples from the Stardust mission identified seven probable interstellar grains; their detailed investigation is ongoing.
Future collections with an active dust collector may improve the quality and quantity of interstellar dust collections.
Trans-Neptunian objects and comets.
Trans-Neptunian objects, TNOs, are small Solar System bodies and dwarf planets that orbit the Sun at greater average distances than Neptune's orbit at 30 AU. They include Kuiper belt and scattered disc objects and Oort cloud comets. These icy planetesimals and dwarf planets orbit the Sun inside and beyond the heliosphere in the interstellar medium at distances out to ~100,000 AU.
In order to explain the number of observed short period comets Fernández proposed a comet belt outside Neptune's orbit that led to the subsequent discovery of many TNOs and, especially, Kuiper belt objects.
The Kuiper belt extends from Neptune's orbit at 30 AU out to ~55 AU. The most massive classical Kuiper belt objects have semi-major axes between 39 AU and 48 AU, corresponding to the 2:3 and 1:2 resonances with Neptune. The Kuiper belt is thought to consist of planetesimals and dwarf planets from the original protoplanetary disc whose orbits have been strongly influenced by Jupiter and Neptune. Mutual collisions in today's Kuiper belt generate dust that has been observed by the Venetia Burney Student Dust Counter on the New Horizons space probe.
By the action of Poynting-Robertson drag and planetary scattering this dust can reach the inner planetary system within 10^7 to 10^8 years.
The sparsely populated scattered disk extends beyond the Kuiper belt out to ~100 AU.
Scattered disk objects are still close enough to Neptune to be perturbed by Neptune's gravitation. This interaction can send them outward into the Oort cloud or inward into the Centaur population.
The scattered disc is believed to be the source region of the centaurs and the short-period comets observed in the inner planetary system.
The hypothesized Oort cloud is thought to be a spherical cloud of icy bodies extending from outside the Kuiper belt and the scattered disk to halfway to the nearest star.
During planet formation interactions of protoplanetary disk objects with the already developed Jupiter and Neptune resulted in the scattered disc and the Oort cloud.
While the Sun was in its birth cluster it may have shared comets from the outskirts of the protoplanetary discs of other stars.
In the scattering processes during planet formation many planetesimals may have become unbound from solar gravitation and become interstellar objects, just like ʻOumuamua, the first interstellar object detected passing through the Solar System.
From the Oort cloud long-period comets are disturbed towards the Sun by gravitational perturbations caused by passing stars. Long-period comets have highly eccentric orbits and periods ranging from 200 years to millions of years and their orbital inclination is roughly isotropic.
Most comets (several thousands) observed by ground-based observers or automated observatories (e.g. Pan-STARRS) or by near-Earth spacecraft (e.g. SOHO) are long-period comets that had only one apparition.
Comet Halley and other Halley type comets (HTCs) have periods of 20 to 200 years and inclinations from 0 to 180 degrees. HTCs are believed to derive from long-period comets.
Once a Kuiper belt or scattered disk object is scattered by Neptune into an orbit with a perihelion distance well inside Neptune's orbit its orbit becomes unstable because it will eventually cross the orbits of one or more of the giant planets. Such objects are called Centaurs. Centaur orbits have dynamic lifetimes of only a few million years.
Some centaur orbits will evolve into Jupiter-crossing orbits and become Jupiter family comets, or collide with the Sun or a planet, or they may be ejected into interstellar space.
Centaurs like 2060 Chiron and 29P/Schwassmann-Wachmann display comet-like dust comas.
During their inward migration the top layers (~100 m) of the comet's surface heat up and lose much of the volatile ices (CO, N2). CO2 ice sublimates at about Jupiter's distance (e.g. 29P/Schwassmann-Wachmann).
Most periodic comets are Jupiter-family comets (JFCs) that have orbital periods less than 20 years and aphelia close to Jupiter. JFCs originate from Centaurs. Inside 3 AU from the Sun water ice sublimation becomes the dominant driver of activity, but other volatile ices like CO2 ice also play an important role in cometary activity. The sublimated gases carry micron-sized dust grains to form an observable coma and tail during the perihelion passage. Infrared observations show that many JFCs exhibit a debris trail of up to cm-sized particles along the comet's orbit.
When the Earth passes through a comet trail a meteor shower is observed.
The dynamical lifetime of JFCs is a few 10^5 years before they are eliminated from the Solar System by Jupiter or collide with a planet or the Sun. However, their active lifetimes are ~10 times shorter because volatile ices vanish from the upper surface layers. They may reawaken again, e.g. when their orbits become much closer to the Sun. Comet Encke is such a case: its orbit is decoupled from Jupiter, and its aphelion distance is only 4.1 AU. It must have been dormant for a long time until it reached its present orbit.
As of 2022 eight comets have been visited by spacecraft with remote sensing and fields and particles instrumentation but only for comets 1P/Halley, 81P/Wild 2 and 67P/Churyumov–Gerasimenko additional compositional analyses were obtained from dust composition analyzers.
Close range measurements of dust from 1P/Comet Halley by the PIA and PUMA dust analyzers onboard the Giotto and Vega spacecraft showed that dust particles had mostly chondritic composition but were rich in light elements such as H, C, N and O.
The Stardust cometary samples were a mix of different components that included presolar grains like SiC grains and high temperature solar nebula condensates like calcium–aluminium-rich inclusions (CAIs) found in primitive meteorites.
The COSIMA dust composition analyzers on board Rosetta mission measured the D/H ratio in cometary organics and found that it is between the value on Earth and that in solar-like protostellar regions.
The ROSINA gas analyser on Rosetta found that sublimating ice particles are emitted from the active areas on the nucleus.
Rosetta observations found that 67P/Churyumov–Gerasimenko has a density of only 540 kg/m^3, much less than any solid material or water ice; therefore, this cometary material is highly porous (~70%). Most of the sub-mm dust particles collected by Rosetta instruments consisted of aggregates of smaller micrometer-sized subunits that may themselves be aggregates of ~100 nm particles.
The temperature at a cometary surface is generally near the local blackbody temperature, which suggests the existence of an inactive dust mantle covering large parts of the surface of the nucleus. Therefore, sublimation of ices from the cometary surface and the consequent emission of the embedded dust is not a simple process: the heat from solar illumination has to reach the lower-lying ices, and the cohesive dust mantle has to be broken. This process has been observed in lab simulations.
Large outbursts of gas and dust caused by landslides and even explosions have been observed by "Rosetta" during its rendezvous with 67P/Churyumov–Gerasimenko.
Subsurface supervolatile ices reside at depths much larger than 10 m below the surface. When the solar heat wave reaches this depth it may cause runaway sublimation and subsequent disintegration of the whole nucleus, as in the case of 73P/Schwassmann-Wachmann. In September 1995, this comet began to disintegrate and to release fragments and large amounts of debris and dust along its orbit.
Other processes leading to splitting of comets are tidal stresses and spin-up disruption of the nucleus. Cometary splitting is a rather common phenomenon at a rate of ~1 per 100 years per comet. This large rate suggests that splitting may be an important destructive process for cometary nuclei and the generation of cometary debris.
Asteroids.
Asteroids are remnants of the protoplanetary disc in a region where gravitational perturbations by Jupiter prevented the accretion of planetesimals into planets.
The orbit distribution of asteroids is controlled by Jupiter. The greatest concentration of asteroids, the main-belt asteroids, have semimajor axes between 2.06 and 3.27 AU, where the strong 4:1 and 2:1 orbital resonances with Jupiter (the Kirkwood gaps) lie. Their orbits have eccentricities less than 0.33 and inclinations below 30°.
Near Jupiter's orbit there are three specific dynamical groups of asteroids. The Trojans share the orbit of Jupiter; they are divided into the Greeks at L4 (ahead of Jupiter) and the Trojans at L5 (trailing Jupiter). The Hilda asteroids are a dynamical group beyond the asteroid belt but within Jupiter's orbit, in a 3:2 orbital resonance with Jupiter.
Inside the asteroid belt are the Earth-crossing asteroids, whose orbits pass close to that of Earth.
Sizes of asteroids range from the large dwarf planet Ceres at ~1000 km diameter down to m-sized objects, below which they are called meteoroids or dust. The size distribution of asteroids smaller than ~100 km in size follows the steady state collisional fragmentation distribution of Dohnanyi.
Most asteroids formed inside the snow line from mostly chondritic planetesimals and protoplanets over 4.54 billion years ago. Once these protoplanets reached sizes of several hundred km, heating by radioactivity, impacts, and gravitational pressure melted parts of the protoplanets and planetary differentiation set in. Heavier elements (iron and nickel) sank to the center, whereas lighter elements (stony materials) rose to the surface. Further collisions in the asteroid belt destroyed such parent objects and left fragments of very different composition and spectral type in emission, color, and albedo. C-type asteroids are the most common variety (~75%) of known asteroids. They are volatile-rich and have very low albedo because their composition includes a large amount of carbon. Reddish M-type asteroids are considered to be remnant cores of early protoplanets, while S-type asteroids (17%) of moderate albedo are fragments of the siliceous crust. These asteroid types are the parents of the respective meteorite classes.
Recently, active asteroids have been observed that eject dust and produce transient, comet-like comae and tails. Potential causes of activity are sublimation of asteroidal ice, impact ejection, rotational instabilities, electrostatic repulsion, and thermal fracture.
In the early 1970s Pioneer 10 and 11 traversed the asteroid belt en route to Jupiter and Saturn. The dust instruments on board, both the penetration detectors and the zodiacal light instruments, did not find an enhanced dust density in the asteroid belt.
In 1983 the Infrared Astronomical Satellite (IRAS) mapped the infrared sky brightness, and several Solar System dust bands were found in the data. These dust bands were interpreted as debris produced by recent collisional disruptions of main-belt asteroids. Detailed analysis of candidate asteroids traced the bands to collisions in the Veritas asteroid family at 3.17 AU, in the Koronis family at 2.86 AU about 8 Myr ago, and in the Karin cluster, which formed about 5.7 Myr ago from a collision of progenitor asteroids.
In the early 1990s the Galileo space probe took the first photos of the asteroids 951 Gaspra and 243 Ida.
As of 2022 15 asteroids have been visited by spacecraft with three sample-return missions:
The S-type asteroid 25143 Itokawa was visited by Hayabusa in 2005, which returned its sample in 2010;
the C-type asteroid 162173 Ryugu was visited by Hayabusa2 in 2018, which returned its sample in 2020; and
the C-type asteroid 101955 Bennu was visited by OSIRIS-REx in 2018, with sample return planned for 2023.
Sample analyses confirmed and refined their meteorite connections.
Small Solar System bodies and dust.
Small Solar System objects in interplanetary space range from sub-micrometer-sized dust particles to km-sized comets and asteroids. Fluxes of the smallest interplanetary objects have been determined from lunar microcrater counts and spacecraft measurements
and meteor and NEO observations. Currently, small Solar System bodies at 1 AU are in a destructive collisional regime. Meteoroids at Earth distance have a mean mutual collision speed of ~20 km/s. At that speed meteoroids can catastrophically disrupt objects more than 10 times bigger and generate numerous smaller fragments.
Dohnanyi demonstrated that asteroids of <100 km diameter have reached a collisional steady state, which means that in each mass interval the number of asteroids destroyed by collisions equals the number of same-mass fragments generated by collisions from bigger asteroids. This is the case for a cumulative mass distribution F ~ m^-0.837. At 1 AU meteoroids bigger than 1 mm in size are in a collisional steady state. The significant excess of smaller meteoroids is due to the input from comets. Models of the interplanetary dust environment of the Earth result in 80-90% cometary dust vs. only 10-20% asteroidal dust.
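As an illustration of this steady-state distribution, the short Python sketch below evaluates the relative cumulative number of meteoroids for a few masses; the reference mass is an arbitrary assumption used only for normalization, not a value from the text.

# Minimal sketch: cumulative number of meteoroids in a Dohnanyi-type collisional
# steady state, F(>m) proportional to m^(-0.837), normalized to a reference mass.
def relative_cumulative_number(mass_kg, reference_mass_kg=1e-6):
    # reference_mass_kg is an arbitrary normalization (assumed), not a measured value
    return (mass_kg / reference_mass_kg) ** -0.837

for m in (1e-9, 1e-6, 1e-3, 1.0):
    print(m, relative_cumulative_number(m))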
The shortage of dust particles <1 micron is due to the rapid dispersion by the Poynting-Robertson effect and by direct radiation pressure.
In planetary systems collisions also play an important role in generating dust particles. A good example is the ring system of Jupiter. This ring system was discovered by the Voyager 1 space probe and later studied in detail by the "Galileo" orbiter. It was best seen when the spacecraft was in Jupiter's shadow looking back toward the Sun. Jupiter's ring system is composed of three parts: an outermost gossamer ring, a flat main ring, and an innermost donut-shaped halo, which are related to the small inner moons Thebe, Amalthea, Adrastea, and Metis. Bombardment of the moons by interplanetary dust causes the erosion of these satellites and other smaller unseen bodies. The eroded mass is mostly in the form of micron-sized ejecta particles that escape the gravitation of their source moon and that are seen in the rings.
Due to the low escape speeds of one to a few tens of m/s, most ejecta particles can leave the gravitation of the satellite and feed the Jupiter rings.
Measurements by the Galileo dust detector during its passage through the gossamer ring found that the dust particles detected in the ring have sizes of 0.5 to 2.5 microns, with only the biggest particles visible in the camera images.
Besides Jovian gravity and Poynting-Robertson drag, micron-sized particles become electrically charged in the energetic Jovian magnetosphere and hence feel the Lorentz force of the powerful magnetic field of Jupiter. All these forces shape the appearance of the rings. In particular, the orbital inclinations of particles in the inner halo are excited by the electromagnetic interaction, forcing them to plunge into the Jovian atmosphere.
Even the much bigger Galilean moons are surrounded by ejecta dust clouds a few thousand km thick, as observed by the "Galileo" dust detector. Around the Earth's Moon the Lunar Dust Experiment (LDEX) on the LADEE mission mapped the dust cloud from 20 to 100 km altitude and found ejecta speeds from 100 m/s to a few km/s; only a tiny fraction of the ejecta escape the gravitation of the Moon.
Other planets with satellites also display a variety of dust ring phenomena. In the massive and dense main rings of Saturn, ice particles aggregate into cm-sized and bigger bodies that are continually forming and disintegrating by jostling and tidal forces. Just outside Saturn's main rings is the F ring, which is shepherded by a pair of moons, Prometheus and Pandora, that interact gravitationally with the ring and act as sinks and donors of dust. Beyond the extended E ring, which is fed by cryovolcanism on Enceladus, lies the Phoebe ring, which is fed by meteoroid ejecta from Phoebe and shares its retrograde motion. Uranus and Neptune also have complex ring systems. Besides the narrow main rings of Uranus that are shepherded by satellites there are broad dusty rings. The rings of Neptune consist of narrow and broad dust rings that interact with the inner moons. Even Mars is suspected to have dust rings originating from its moons Phobos and Deimos, but up to now the Mars rings have escaped detection.
Even the Earth is developing a human-made space debris belt of defunct artificial satellites and abandoned launch vehicles. Collisions between these objects could cause a collisional cascade, called Kessler syndrome, in which each collision generates more space debris that increases the likelihood of further collisions.
Volcanoes and geysers.
Venus, Earth, and Mars display signs of ancient or current volcanism. All these planets have a solid crust and a fluid mantle that is heated by internal heat from the planet's formation and the decay of radioactive isotopes. The most explosive volcanic eruptions observed on Earth have plumes of gas and ash up to 40 km height; but no volcanic dust escapes the atmosphere or even the gravitational attraction (Hill sphere) of the Earth. Similar conclusions can be drawn for the suspected active volcanism on Venus.
In smaller planetary bodies heat loss through the surface is larger, and hence the internal heat may not drive active volcanism at the present time. Therefore, it came as a surprise when the twin probes "Voyager 1" and "Voyager 2" flew through the Jovian system in 1979 and photographed plumes of several volcanoes on Jupiter's moon Io. Only weeks before the flyby, Peale, Cassen, and Reynolds (1979)
predicted that Io's interior must experience significant tidal heating caused by its orbital resonance with neighbouring moons Europa and Ganymede. Temperature measurements in hotspots by the Galileo spacecraft showed that basaltic magma drives the volcanism on Io.
Umbrella-shaped plumes of volatiles like sulfur, sulfur dioxide, and other pyroclasts are ejected skyward from some of Io's volcanoes. E.g. Io's volcano Tvashtar Paterae erupts material more than 300 kilometres above the surface.
The ejection speed at the vent is up to 1 km/s, which is much below the escape speed from Io of 2.5 km/s; therefore, none of this visible dust escapes Io's gravity.
Most of the plume material falls back to the surface as sulphur and sulphur dioxide frost, and pyroclasts.
However, in 1992 during its Jupiter flyby the dust detector on the "Ulysses" mission detected streams of 10 nm-sized dust particles emanating from the Jupiter direction.
Subsequent measurements by the "Galileo" dust detector within the magnetosphere of Jupiter analysed the periodic dust streams and identified Io as source.
Nanometer-sized dust particles that are emitted by Io's volcanoes become electrically charged in the Io plasma torus and feel the strong magnetic field of Jupiter. Positively charged dust particles between 10 and 100 nm radius escape Io's and even Jupiter's gravity and enter interplanetary space.
During the flyby of the Cassini mission of Jupiter the Cosmic Dust Analyzer (CDA) onboard chemically analysed these stream particles and found sodium chloride as well as sulphur and potassium bearing components,
that have also been found by spectroscopic analyses of Io's atmosphere.
Saturn's tenuous E ring was discovered by observations from Earth distance at times of Saturn's ring plane crossings. It has a maximum density at ~4 Saturn radii, formula_0, which coincides with the orbit of Enceladus. Spacecraft observations by "Voyager 1" and "2", and "Cassini" confirmed these observations. The E ring extends between the orbits of Mimas at 3 formula_0 and Titan at 20 formula_0.
The E ring consists of many tiny (micron and sub-micron) particles of water ice with silicates, carbon dioxide, ammonia, and other impurities.
Cassini observations demonstrated that Enceladus and the E ring are genetically related.
During Cassini's close flyby of Enceladus several instruments including the Cosmic Dust Analyzer observed fountains (geysers) of water vapour and micron-sized ice particles in Enceladus' south polar region.
CDA analyses of sodium-salt-rich ice grains in the plumes suggest that the grains formed from a liquid water reservoir that is in contact with rock.
The mechanism that drives and sustains the eruptions is thought to be tidal heating caused by the orbital resonance with Dione that excites Enceladus' orbital eccentricity. The ice grains escaping Enceladus' fountains feed and maintain Saturn's E ring.
Similar water vapor plumes were observed by the Hubble Space Telescope above the south polar region of Europa, one of Jupiter's Galilean moons.
NASA's future Europa Clipper mission (planned launch date 2024) with its Surface Dust Analyser (SUDA) will analyse small solid particles ejected from Europa by meteoroid impacts and ice particles in potential plumes.
During the "Voyager 2" flyby of Neptune in 1989 active dark plumes were observed on the surface of its moon Triton. These plumes are thought to consist of dust and ice particles carried by invisible nitrogen gas jets.
Cosmic dust dynamics.
The dynamics of dust particles in space are affected by various forces that determine their trajectories and orbits. These forces depend on the position of the dust particle with respect to massive bodies and on the environmental conditions.
Gravity.
In interplanetary space a major force is due to solar gravity, which attracts planets and dust particles alike:
formula_1 where "F""G" is the force, "M" = "M"☉ is the solar mass, "m" is the mass of the interacting object, "r" is the distance between the centers of the masses, and "G" is the gravitational constant.
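For illustration, this relation can be evaluated numerically for a single dust grain; the grain radius and density used in the Python sketch below are assumed example values, not measured properties.

# Minimal sketch: solar gravitational force on a spherical dust grain.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

def grain_mass(radius_m, density_kg_m3):
    # mass of a compact spherical grain (assumed shape and density)
    return 4.0 / 3.0 * math.pi * radius_m**3 * density_kg_m3

def solar_gravity_newton(mass_kg, r_m):
    # F_G = G * M_sun * m / r^2
    return G * M_SUN * mass_kg / r_m**2

m_grain = grain_mass(1e-6, 2500.0)          # 1-micron grain, assumed density 2500 kg/m^3
print(solar_gravity_newton(m_grain, AU))    # force in newtons at 1 AU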
Planets and small Solar System bodies including interplanetary dust follow Kepler orbits (ellipses, parabolas, or hyperbolas) around the Sun with their barycenter in the foci. The orbits are characterised by the six orbital elements: semimajor axis (a), eccentricity (e), inclination (i), longitude of the ascending node, argument of periapsis, and true anomaly.
Although small, planets exert a gravitational force on distant objects. If this force is regular and periodic, such an orbital resonance can stabilize or destabilize the orbits of planetary objects. Examples are the Kirkwood gaps in the asteroid belt, which are caused by Jupiter resonances, and the structure of the Kuiper belt, which is caused by Neptune resonances.
Close encounters with a planet can occur when the perihelion formula_2 of the small body's orbit is closer to the Sun and the aphelion formula_3 is further from the Sun than the perturbing planet. This is the necessary condition for orbit scattering to occur; it defines the scattering zone of a planet. In this case a small body or a dust particle can undergo a major orbit perturbation. However, the Tisserand's parameter of the old and the new orbit remains approximately the same.
For a small body with semimajor axis a, orbital eccentricity e, and orbital inclination i, and a perturbing planet with semimajor axis formula_4 the Tisserand's parameter is
formula_5.
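A short numerical sketch of this parameter follows; it evaluates the standard form T = a_P/a + 2 cos(i) sqrt((a/a_P)(1 - e^2)), and the orbital elements used are illustrative values loosely resembling a Jupiter-family comet, not those of a specific object.

# Minimal sketch: Tisserand's parameter of a small body with respect to a planet,
# T = a_P/a + 2*cos(i)*sqrt((a/a_P)*(1 - e^2)).
import math

def tisserand(a_au, e, incl_deg, a_planet_au):
    incl = math.radians(incl_deg)
    return a_planet_au / a_au + 2.0 * math.cos(incl) * math.sqrt((a_au / a_planet_au) * (1.0 - e**2))

A_JUPITER = 5.2                                   # semimajor axis of Jupiter, AU
print(tisserand(3.5, 0.6, 10.0, A_JUPITER))       # ~2.8, in the range typical of Jupiter-family comets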
Two families of small Solar System bodies lie outside the scattering zones of the giant planets and are remnants of the primordial protoplanetary disc around the Sun: asteroids and the Kuiper belt objects. The Kuiper belt is approx. 100 times more massive than the asteroid belt and is part of the trans-Neptunian objects (TNOs). The other part of TNOs is the scattered disk with objects having orbits in the scattering zone of Neptune. At high eccentricities (or high inclinations) the scattering zones of neighboring planets overlap. Therefore, scattered disk objects can evolve into Centaurs and, eventually, into Jupiter-family comets. Inside the Jupiter scattering disk is the Zodiacal cloud consisting of interplanetary dust that originates from comets and asteroids. Also dust particles from the Kuiper belt find the scattering passage to the inner planetary system.
Inside the Hill sphere of a planet its gravity dominates the gravity of the sun. All planetary moons and rings are located well inside the Hill sphere and orbit the corresponding planet. Gravitational interactions between such satellites can be seen, e.g., in the stable 1:2:4 orbital resonance of Jupiter's moons Ganymede, Europa and Io.
Also subdivisions and structures within the rings of Saturn are caused by resonances with satellites. E.g. the gap between the inner B Ring and the outer A Ring has been cleared by a 2:1 resonance with the moon Mimas.
Also some narrow discrete rings of Saturn, Uranus, and Neptune like Saturn's F ring are shaped and held in place by the gravity of one or two shepherd moons.
Solar radiation pressure effects.
Solar radiation exerts the repulsive radiation pressure force "F""R" on meteoroids and interplanetary dust particles:
formula_7
where formula_8 is the solar luminosity or formula_9is the solar irradiance at heliocentric distance r, formula_10 is the radiation pressure coefficient of the particle, formula_11 is the cross section (for spherical particles formula_12 with particle radius formula_13), formula_14 is the speed of light.
The radiation pressure coefficient, formula_10, depends on optical properties of the particle like absorption, reflection, and light scattering integrated over all wavelengths of the solar spectrum. It can be calculated by using e.g. Mie theory, the discrete dipole approximation, or even microwave analog experiments.
Solar radiation pressure reduces the effective force of gravity on a dust particle and is characterized by the dimensionless parameter formula_15, the ratio of the radiation pressure force formula_16 to the force of gravity formula_17 on the particle:
formula_18
where formula_19 is the density and formula_13 is the size (the radius) of the dust grain.
Cometary particles with formula_15 > 0.1 already have significantly different heliocentric orbits than their parent comet and show up in the dust tail.
Dust particles released from a comet (with eccentricity formula_20) near its perihelion will leave the Solar System on hyperbolic orbits if their beta values exceed formula_21.
Even particles with formula_22 that are released from an asteroid on a circular orbit around the Sun will leave the Solar System on an unbound parabolic orbit.
Small dust particles with formula_23 are called formula_6-meteoroids; they feel a net repulsive force from the Sun.
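The sketch below evaluates this force ratio for spherical grains, written out as beta = 3 L_sun Q_pr / (16 pi G M_sun c rho s); the radiation pressure coefficient and grain density are assumed values, so the numbers are only indicative.

# Minimal sketch: ratio of solar radiation pressure to solar gravity for a
# spherical grain; both forces scale as 1/r^2, so beta is independent of
# heliocentric distance.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
L_SUN = 3.828e26     # solar luminosity, W
C_LIGHT = 2.998e8    # speed of light, m/s

def beta(radius_m, density_kg_m3, q_pr=1.0):
    # q_pr ~ 1 is an assumed radiation pressure coefficient for micron-sized grains
    return 3.0 * L_SUN * q_pr / (16.0 * math.pi * G * M_SUN * C_LIGHT * density_kg_m3 * radius_m)

for s in (0.1e-6, 0.5e-6, 1e-6, 10e-6):     # grain radii in metres (illustrative)
    print(s, beta(s, 2500.0))               # assumed grain density 2500 kg/m^3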
The trajectories of interstellar dust, which are initially parallel upon entering the Solar System, depend on the particles' formula_15-ratio. Particles with formula_24 are predominantly attracted by solar gravity; their trajectories are bent towards the Sun. The closer they pass by the Sun, the faster the particles are accelerated, and the stronger they deviate from their initial direction. The trajectories of these particles cross behind the Sun, increasing the dust density there; this is referred to as gravitational focusing. Interstellar dust particles with formula_25 are predominantly repulsed by solar radiation pressure. They cannot approach the Sun below a certain distance that depends on how large their formula_6 is. This region that is free of interstellar dust is paraboloidal in shape; it is referred to as the formula_6-cone. At the outer edge of the formula_6-cone the dust density is enhanced.
The solar radiation pressure force on a particle orbiting the Sun acts not only radially but, because of the finite speed of light, there is also a small force component opposite to the particle's orbital motion. This Poynting–Robertson drag causes the particle to lose angular momentum and, hence, to spiral inward to the Sun. The time, formula_26 in years, for a particle with a force ratio, formula_15, to spiral from an initially circular orbit with radius, formula_27 in AU, is
formula_28
Centimeter-sized particles with formula_15 ~10^-4 starting from a circular orbit at Earth distance take about 4 million years to spiral into the Sun. This example demonstrates that all dust smaller than ~1 cm in size must have entered the inner planetary system recently in the form of cometary, asteroidal, or interstellar dust; no dust is left there from the times of planetary formation.
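This behaviour can be reproduced with the commonly quoted approximation t ≈ 400 r^2/beta years for the spiral-in time from an initially circular orbit (r in AU); the short sketch below recovers the centimetre-grain example above.

# Minimal sketch: Poynting-Robertson spiral-in time from an initially circular
# orbit, using the common approximation t[yr] ~ 400 * r[AU]^2 / beta.
def pr_spiral_time_years(r_au, beta):
    return 400.0 * r_au**2 / beta

print(pr_spiral_time_years(1.0, 1e-4))   # cm-sized grain at 1 AU: ~4 million years
print(pr_spiral_time_years(1.0, 0.2))    # micron-sized grain at 1 AU: ~2000 years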
Dust charging and electromagnetic interactions.
Dust particles in most space environments are exposed to electric charging currents. Dominant processes are collection of electrons and ions from the ambient plasma, the photoelectric effect from UV radiation, and secondary electron emission from energetic ion or electron radiation.
Collection of electrons and ions from the ambient thermal plasma leads to net negative charging because the thermal electron speed is much higher than the ion speed. In contrast to charging in a plasma, photo emission of electrons from the particle by UV radiation leads to positive charging. The impact of energetic ions or electrons with energies >100 eV onto the particle may generate more than one secondary electron and, hence, lead to a positive charging current. The secondary electron yields are dependent on the type and energy of the energetic particle and the particle material.
The balance of all charging currents leads to the equilibrium surface potential of the particle.
The electric charge, "Q", of a dust particle of radius "s" at a surface potential, "U", in space is
formula_29
where "ε"0 is the permittivity of vacuum.
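A short sketch of this spherical-capacitor relation follows; the grain radius, surface potential, and density are example values chosen to be broadly consistent with the interplanetary conditions described below, not measured quantities.

# Minimal sketch: charge Q = 4*pi*eps0*s*U of a spherical grain and its
# charge-to-mass ratio, which controls how strongly it feels the Lorentz force.
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C

def grain_charge(radius_m, potential_v):
    return 4.0 * math.pi * EPS0 * radius_m * potential_v

def charge_to_mass(radius_m, potential_v, density_kg_m3):
    q = grain_charge(radius_m, potential_v)
    m = 4.0 / 3.0 * math.pi * radius_m**3 * density_kg_m3
    return q / m      # C/kg; scales as 1/radius^2, so small grains are affected most

q = grain_charge(1e-6, 5.0)                      # 1-micron grain at an assumed +5 V
print(q, q / E_CHARGE)                           # ~5.6e-16 C, a few thousand elementary charges
print(charge_to_mass(1e-6, 5.0, 2500.0))         # assumed density 2500 kg/m^3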
A dust particle of charge Q moving with a velocity v in an electric field E and a magnetic field B experiences the Lorentz force of
formula_30
In SI units, B is measured in teslas (T).
The surface potential of a dust particle, and hence its charge, depends on the detailed properties of the ambient environment. For example, an interplanetary dust particle at 1 AU from the Sun is surrounded by solar wind plasma of ~10 eV energy and a density of typically formula_31 protons and electrons per m^3. The photoelectron flux is typically formula_32 electrons per m^2 and, hence, much larger than the plasma currents. This condition leads to a surface potential of ≈+3 V.
Actual measurements of dust charges by Cassini CDA resulted in a surface potential formula_33 +2 to +7 V.
Since both the solar wind plasma density and the solar UV flux scale with heliocentric distance formula_34 the surface potential of interplanetary dust, formula_33 +5 V, is also typical for other distances from the Sun.
The interplanetary magnetic field is the component of the solar magnetic field that is dragged out from the solar corona by the solar wind. The slow wind (≈ ) is confined to the equatorial regions, while fast wind (≈) is seen over the poles. The rotation of the Sun twists the dipolar magnetic field and corresponding current sheet into an Archimedean spiral. This heliospheric current sheet has a shape similar to a swirled ballerina skirt, and changes in shape through the solar cycle as the Sun's magnetic field reverses about every 11 years. A charged dust particle feels the Lorentz force of the interplanetary magnetic field that passes by at solar wind speed.
At 1 AU from the Sun the average solar wind speed is 450 km/s and the magnetic field strength formula_35 T = 5 nT.
For submicron-sized dust particles this force becomes significant and for particles < 0.1 microns it exceeds solar gravity and the radiation pressure force. For example, interstellar dust particles of ~0.3 microns in size that pass through the heliosphere are either focused or defocused with respect to the solar magnetic equator. A typical measure for how strongly a dust particle is affected by the Lorentz force is its charge-to-mass ratio, formula_36. Because the charge of a particle increases linearly with its size, whereas its mass and volume increase with the cube of its size, small particles typically have a much higher charge-to-mass ratio than large particles and are more strongly affected by the Lorentz force. Nevertheless, interstellar dust particles of all sizes are focused or defocused as long as they are charged. This focusing and defocusing is strongest during and close to the respective solar minimum, which for the defocusing occurred in the years surrounding, for example, 1996 and 2019, and for the focusing occurred in the years surrounding, for example, 1986 and 2008. The current phase of the solar magnetic cycle corresponds to the defocusing of interstellar dust away from the ecliptic plane, which is unfavourable for detecting and measuring interstellar dust. The next focusing phase of the solar magnetic cycle, which is best suited for interstellar dust measurements within the solar system, will occur in the 2030s. Because these phases occur every 22 years, the following focusing phase will be in the 2050s.
Very different conditions exist in planetary magnetospheres. An extreme case is the magnetosphere of Jupiter where the volcanically active moon Io is a strong source of plasma at 6 formula_37, where formula_37 = km is the radius of Jupiter. At this distance is the peak of the plasma density ( m−3) and the plasma energy has a strong minimum at ~1 eV. Outside this distance the plasma energy rises sharply to 80 eV at 8 formula_37. The resulting dust surface potentials range from -30 V in the cold plasma between 4 and 6 formula_37 and +3 V elsewhere.
Jupiter's magnetic field is mostly a dipole, with the magnetic axis tilted by ~10° to Jupiter's rotation axis.
Out to about 10 formula_37 from Jupiter the magnetic field and the plasma co-rotate with the planet. At Io's distance the co-rotating magnetic field passes by Io at a speed of 17 km/s and the magnetic field strength formula_35 T = 2000 nT.
Positively charged dust particles from Io in the size (radius) range from 9 to ~120 nanometers are picked up by the strong magnetic field and accelerated out of the Jovian system at speeds up to 350 km/s. For smaller particles the Lorentz force dominates and they gyrate around the magnetic field lines just like ions and electrons do.
In Saturn's magnetosphere the active moon Enceladus at 4 formula_38 (formula_38 = km is Saturn's radius) is a source of oxygen and water ions at a density of m−3 and an energy of 5 eV. Dust particles are charged to a surface potential between -1 and -2 V. Outside 4 formula_38 the ion energy increases to 100 eV and the resulting surface potential rises to +5 V.
Measurements by "Cassini" CDA observed this switch of the dust potential directly.
In the partially ionized local interstellar medium the plasma density is about to m−3 and the thermal energy 0.6 eV. The photoelectron flux of carbon or silicate particles from the average galactic UV radiation is electrons per m2. The resultant surface potential of the dust particles is ~+0.5 V. In the hot but tenuous plasma of the Local Bubble (density m−3, energy 100 eV) dust will be charged to +5 to +10 V surface potential.
In the local interstellar medium a magnetic field strength of ~0.5 nT has been measured by the Voyager spacecraft. In such a magnetic field a charged micron sized dust particle has a gyroradius < 1 pc.
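This gyroradius estimate can be checked with a short sketch; the grain size, density, surface potential, and relative speed below are assumed values representative of the conditions described above, so the result is only an order-of-magnitude check.

# Minimal sketch: gyroradius r_g = m*v / (Q*B) of a charged grain in the
# local interstellar magnetic field.
import math

EPS0 = 8.854e-12     # vacuum permittivity, F/m
PC = 3.086e16        # metres per parsec

def gyroradius_pc(radius_m, density_kg_m3, potential_v, speed_m_s, b_tesla):
    q = 4.0 * math.pi * EPS0 * radius_m * potential_v          # grain charge
    m = 4.0 / 3.0 * math.pi * radius_m**3 * density_kg_m3      # grain mass
    return m * speed_m_s / (q * b_tesla) / PC

# ~micron-sized grain at +0.5 V, 26 km/s relative flow, 0.5 nT field (assumed values)
print(gyroradius_pc(0.5e-6, 2500.0, 0.5, 26e3, 0.5e-9))        # well below 1 pc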
Cosmic dust processes.
Cosmic dust particles in space are affected by various effects that change their physical and chemical properties.
Dust accretion.
Dust accretion describes the processes of dust agglomeration from nanometer-sized dust, evolving into pebbles several centimeters wide, and eventually coalescing into kilometer-sized planetesimals and full-fledged planets.
Nanometer-sized solid condensates originate within circumstellar envelopes or Supernova ejecta, forming the nuclei of dust particles scattered across the universe. These particles integrate into the ambient interstellar medium (ISM). Despite constituting only ~1% of the gas mass density in the ISM, dust particles become intertwined with surrounding gas clouds through friction. The frictional drag scale, "l""drag" signifies the distance a dust particle of mass "m""d" traverses to accumulate an equivalent mass of interstellar gas (primarily hydrogen):
formula_39 where
"A""d" refers to the particle’s cross section, "n""H" is the local gas density, and "m""H" = formula_40 kg is the atomic mass of hydrogen.
In the low-density (formula_41 H atoms per formula_42) diffuse interstellar medium, dust particles up to micron size couple with gas clouds within a frictional scale of less than 1 pc.
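A minimal sketch of this coupling length, as defined above, is shown below; the grain size, grain density, and gas densities are placeholder assumptions, so the printed values only illustrate how the scale shortens for smaller grains and denser gas.

# Minimal sketch: gas-drag coupling length l_drag = m_d / (A_d * n_H * m_H),
# the path over which a grain sweeps up roughly its own mass in hydrogen.
import math

M_H = 1.67e-27       # mass of a hydrogen atom, kg
PC = 3.086e16        # metres per parsec

def drag_length_pc(radius_m, grain_density_kg_m3, n_h_per_m3):
    m_d = 4.0 / 3.0 * math.pi * radius_m**3 * grain_density_kg_m3   # grain mass
    a_d = math.pi * radius_m**2                                     # geometric cross section
    return m_d / (a_d * n_h_per_m3 * M_H) / PC

# 0.1-micron grain in gas of 1e7 and 1e9 hydrogen atoms per m^3 (assumed densities)
print(drag_length_pc(0.1e-6, 2500.0, 1e7))
print(drag_length_pc(0.1e-6, 2500.0, 1e9))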
Within the denser, colder interstellar medium found in molecular clouds ("n""H" = formula_43), the growth of grains occurs through the accretion of gas-phase elements, leading to an augmentation in dust mass. Predominant components of icy mantles include H2O, NH3, CO2, CO, CH3OH, OCS, and functional groups of complex organic molecules.
These dust formations act as shields for molecular gases within dense clouds, safeguarding them against dissociation caused by ultraviolet radiation. The visible darkness of these ice mantles contributes to the characteristic appearance of dense clouds, often referred to as dark clouds.
The most condensed areas within molecular clouds initiate gravitational collapse, carrying dust along and giving rise to star-forming regions. These condensations evolve into rotating gas spheres, eventually forming protostars.
As a result of the conservation of angular momentum, the collapsing nebula spins faster and flattens into a protoplanetary disk spanning tens to hundreds of astronomical units (AU) in diameter. Throughout the collapse, the cloud's density escalates towards the center, leading to increased temperatures due to gravitational contraction.
In a protoplanetary disk, both gas and dust densities increase by over a factor of 1000 during collapse according to a model by Hayashi et al., (1985). This model draws parallels to the current Solar System, utilizing the combined planetary mass to estimate the total mass required for their formation. The hot central protostar heats the surrounding dust disk so that, inside the frost line, the condensed ices sublimate, leaving the carbonaceous, silicate, and iron cores of the dust. Outside the frost line icy dust particles form comets and icy planetesimals. Within the disk, the motion of bodies smaller than 1 km is governed more by gas drag than by gravity. Thermal Brownian motion prompts collisions among sub-micron and micron-sized dust particles, while larger particles collide due to radial and transverse velocities induced by non-Keplerian gas rotation. Laboratory experiments spanning the entire parameter spectrum have studied the consequences of mutual dust collisions. These experiments consistently demonstrate that micron-sized dust grains can grow into millimeter-sized aggregates. Outside the frost line icy aggregates can directly grow to comet or icy planetesimal sizes.
Inside the frost line siliceous particles encounter a bouncing barrier. This bouncing barrier ensures that a significant portion of the dust population remains small. Bodies measuring centimeters and larger sizes can accumulate these smaller particles, reaching sizes of around 100 meters within a million years.
The velocities and interactions among planetesimals, the building blocks of planets, play a crucial role in their evolution. Runaway growth occurs when larger planetesimals consume smaller ones within their gravitational pull, eventually leading to the formation of protoplanets.
Collisions.
Collisions among dust particles or bigger meteoroids are the dominant process in space that changes the mass of meteoroids or destroys them, generating new and smaller fragments that contribute to the population of meteoroids and dust. The typical collision speed of meteoroids in interplanetary space at 1 AU from the Sun is ~20 km/s. At that speed the kinetic energy of a meteoroid is much higher than its heat of vaporization. Therefore, when such a projectile of mass formula_44 hits a much bigger target object, the projectile and a corresponding part of the target mass vaporize and even become ionized, and an impact crater is excavated in the target body by the shock waves released by the impact. The excavated mass formula_45 is
formula_46
where the cratering efficiency factor formula_47 scales with the kinetic energy of the projectile. For impact craters on the moon and on asteroids formula_48.
Thereby, impact craters erode the target body or meteoroids in space.
A target meteoroid of mass formula_49 is catastrophically disrupted if the mass of the largest fragment remaining is smaller than approx. half of the target mass or
formula_50
where formula_44 is the mass of the projectile and the disruption threshold is
formula_51 for rocky material and formula_52 for porous material.
Rocky material represents asteroids and porous material represents comets. Cometary material is porous from nucleus size down to the micron-sized fractal dust it emits.
The collisional lifetime formula_53 of a dust particle in interplanetary space can be determined if the flux of interplanetary dust is known. This flux formula_54 at 1 AU has been derived from lunar microcrater analyses.
formula_55
where formula_56 is the scattering cross section
(formula_57, with particle radius formula_13) in an isotropic flux.
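If the cumulative flux of projectiles able to disrupt a given grain is known, the collisional lifetime follows as the reciprocal of that flux multiplied by the scattering cross section. The sketch below implements this relation, taking the cross section as 4*pi*s^2 for an isotropic flux; the flux value is a hypothetical placeholder, not a number taken from the lunar microcrater data.

# Minimal sketch: collisional lifetime as 1 / (projectile flux * cross section),
# with the scattering cross section taken as 4*pi*s^2 for an isotropic flux.
import math

SECONDS_PER_YEAR = 3.156e7

def collisional_lifetime_years(radius_m, disruptive_flux_per_m2_s):
    # disruptive_flux_per_m2_s is a hypothetical cumulative flux of projectiles
    # massive enough to destroy the grain (assumed value below)
    sigma = 4.0 * math.pi * radius_m**2
    return 1.0 / (disruptive_flux_per_m2_s * sigma) / SECONDS_PER_YEAR

print(collisional_lifetime_years(100e-6, 1e-6))   # 100-micron grain, assumed flux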
Models of the interplanetary dust cloud require that the lifetimes of interplanetary dust particles are longer than those for rock material and, hence, support the result that at 1 AU ~80% of the interplanetary dust is of cometary origin and only ~20% of asteroidal origin.
Collisional fragmentation leads to a net loss of interplanetary dust particles more massive than ~ kg and a net gain of less massive interplanetary dust particles. Comets are believed to replenish the losses of big interplanetary dust.
Sublimation.
Early infrared observations of the solar corona during an eclipse indicated a dust-free zone inside ~5 solar radii (0.025 AU) from the sun. Outside of this dust-free zone interplanetary dust consisting of silicates and carbonaceous material will sublimate at temperatures up to 2000 K.
Solar System dust particles are not only small solid particles of meteoritic composition but also particles that contain substances that are liquid or gaseous at terrestrial conditions. Comets carry and release grains containing volatiles in the ice phase into the inner solar system. Rosetta instruments detected, besides the dominant water (H2O) molecules, also carbon dioxide (CO2), a great variety of CH-, CHN-, CHS-, CHO-, CHO2- and CHNO-bearing saturated and unsaturated species, and the aromatic compound toluene (CH3–C6H5).
During Cassini's crossing through Saturn's E ring the Cosmic Dust Analyzer (CDA) found that it consists predominantly of water ice, with minor contributions of silicates, carbon dioxide, ammonia, and hydrocarbons.
Analyses of the surface compositions of Pluto and Charon by the New Horizons spacecraft detected a mix of solid nitrogen (N2), methane (CH4), carbon monoxide (CO), ethane (C2H6), and an additional component that imparts color.
Ice particles in the inner planetary system have very short lifetimes. Absorbed solar radiation heats the particle and part of the energy is reradiated back to space and the other part is used to transform the ices into gas that escapes.
formula_58
where formula_59 is the solar irradiance at 1 AU, formula_60 and formula_61 are the albedos of the ice in the visible and infrared between 10 and 20 μm wavelength, respectively, formula_62 the heliocentric distance, formula_63 is the Stefan-Boltzmann constant, formula_64 the temperature, formula_65 the production rate of gas, and formula_66 the latent heat of vaporization. formula_65 of the ice is deduced from the measured vapour pressure of the subliming ices.
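A sketch of how this balance determines the temperature of a free-subliming water-ice grain is given below. The vapour-pressure fit, the albedos and the latent heat used here are assumptions introduced for the illustration, not values taken from the sources cited in this article.
<syntaxhighlight lang="python">
import math

# Sketch of the grain energy balance
#   G_SC * (1 - A0) / r^2 = sigma * (1 - A1) * T^4 + Z(T) * L(T)
# for a free-subliming water-ice particle.  The vapour-pressure fit, the
# albedos and the latent heat are assumptions made for this illustration.

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
G_SC = 1361.0         # solar irradiance at 1 AU, W m^-2
K_B = 1.381e-23       # Boltzmann constant, J K^-1
M_H2O = 2.99e-26      # mass of a water molecule, kg
L_SUB = 2.8e6         # latent heat of sublimation of water ice, J kg^-1 (assumed)
A0, A1 = 0.05, 0.05   # visible and infrared albedos (assumed)

def vapour_pressure(T):
    """Assumed vapour pressure of water ice in Pa (Marti-Mauersberger-type fit)."""
    return 10.0 ** (12.537 - 2663.5 / T)

def gas_production(T):
    """Hertz-Knudsen mass flux Z(T), kg m^-2 s^-1."""
    return vapour_pressure(T) * math.sqrt(M_H2O / (2.0 * math.pi * K_B * T))

def energy_imbalance(T, r_au):
    absorbed = G_SC * (1.0 - A0) / r_au ** 2
    emitted = SIGMA * (1.0 - A1) * T ** 4
    return absorbed - emitted - gas_production(T) * L_SUB

def equilibrium_temperature(r_au, lo=50.0, hi=600.0):
    """Bisection on T; assumes a single sign change of the imbalance in [lo, hi]."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if energy_imbalance(mid, r_au) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    for r in (1.0, 3.0, 10.0):
        print(f"r = {r:4.1f} AU  ->  T ~ {equilibrium_temperature(r):5.0f} K")
</syntaxhighlight>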
At different heliocentric distances interplanetary dust particles have different icy constituents.
Sputtering.
Sputtering, in addition to meteoroid bombardment, is a significant process involved in space weathering, which alters the physical characteristics of dust particles present in space. When energetic atoms or ions from the surrounding plasma collide with a solid particle in space, atoms or ions are emitted from the particle. The sputter yield denotes the average number of atoms expelled from the target per incident atom or ion. The sputter yield primarily depends on the energy and mass of the incident particles, as well as the mass of the target atoms. Within the interplanetary medium the solar wind plasma primarily consists of electrons, protons and alpha particles, possessing kinetic energies ranging from 0.5 to 10 keV, corresponding to solar wind speeds of 400 to 800 km/s at a distance of 1 AU. When compared to impact erosion on the lunar surface, sputtering erosion becomes negligible on scales larger than 1 micron.
In the outer Solar System ices are the dominant surface materials of meteoroids and dust. In addition, the magnetospheres of the giant planets contain heavy ions, like sulphur or oxygen, that have a high sputter yield for icy surfaces. For example, the lifetime due to sputtering of micron-sized dust particles in Saturn's E ring is a few hundred years. During this time the dust particles lose >90% of their mass and spiral from their source at Enceladus (at 4 Saturn radii, formula_38) to the orbit of Titan at 20 formula_38.
The sputtering environment within interstellar clouds is relatively harmless. Charged interstellar dust grains interact with the gas through the magnetic field, and the temperatures are moderate, typically below 10,000 K. The primary areas where sputter erosion occurs in the interstellar medium are at the collision interface between randomly moving clouds, reaching speeds of a few hundred kilometers per second, and in supernova shocks. On average, the lifetimes of carbonaceous grains in the interstellar medium have been calculated to be approximately formula_67 years, while silicate grains have a lifespan of approximately
formula_68 years.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_{S}"
},
{
"math_id": 1,
"text": "F_G = G \\frac{M m}{r^2}, "
},
{
"math_id": 2,
"text": "q = (1 - e)a"
},
{
"math_id": 3,
"text": "Q = (1 + e)a"
},
{
"math_id": 4,
"text": "a_P"
},
{
"math_id": 5,
"text": "T_P\\ = \\frac{a_P}{a} + 2\\cos i\\sqrt{\\frac{a}{a_P} (1-e^2)}"
},
{
"math_id": 6,
"text": "\\beta"
},
{
"math_id": 7,
"text": "\nF_R = {{L_\\odot Q_{PR} A} \\over {4 \\pi r^2 c}}, \n"
},
{
"math_id": 8,
"text": "{L_\\odot}"
},
{
"math_id": 9,
"text": "L_\\odot \\over {4 \\pi r^2}"
},
{
"math_id": 10,
"text": "Q_{\\rm PR} "
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": "A =\\pi s^2"
},
{
"math_id": 13,
"text": "s"
},
{
"math_id": 14,
"text": "c"
},
{
"math_id": 15,
"text": " \\beta "
},
{
"math_id": 16,
"text": "F_R"
},
{
"math_id": 17,
"text": "F_G"
},
{
"math_id": 18,
"text": "\n\\beta = { F_{\\rm r} \\over F_{\\rm g} } \n= { 3 L_\\odot Q_{\\rm PR} \\over { 16 \\pi GMc \\rho s } } = 5.7 \\times 10^{-4} {Q_{\\rm PR} \\over { \\rho s }}\n"
},
{
"math_id": 19,
"text": " \\rho "
},
{
"math_id": 20,
"text": "e_{c}"
},
{
"math_id": 21,
"text": " \\beta = 0.5 (1 - e_c) "
},
{
"math_id": 22,
"text": " \\beta = 0.5"
},
{
"math_id": 23,
"text": " \\beta > 1"
},
{
"math_id": 24,
"text": " \\beta<1 "
},
{
"math_id": 25,
"text": " \\beta>1 "
},
{
"math_id": 26,
"text": "T_{PR}"
},
{
"math_id": 27,
"text": "a"
},
{
"math_id": 28,
"text": "\nT_{PR,circ} = 400 \\times {a^{2} \\over {\\beta }}\n"
},
{
"math_id": 29,
"text": " Q = {4 \\pi \\varepsilon_0} {U}{s}, "
},
{
"math_id": 30,
"text": "\\mathbf{F_L} = Q\\,(\\mathbf{E} + \\mathbf{v} \\times \\mathbf{B})"
},
{
"math_id": 31,
"text": "{5\\times 10^6}"
},
{
"math_id": 32,
"text": "{3\\times 10^{16}}"
},
{
"math_id": 33,
"text": "{U \\approx}"
},
{
"math_id": 34,
"text": "r^{-2}"
},
{
"math_id": 35,
"text": "{B\\approx}"
},
{
"math_id": 36,
"text": "Q/m"
},
{
"math_id": 37,
"text": "R_J"
},
{
"math_id": 38,
"text": "R_S"
},
{
"math_id": 39,
"text": " l_{drag} =\\cfrac{m_{d}}{A_d n_H m_H} "
},
{
"math_id": 40,
"text": "1.67\\times 10^{-27}"
},
{
"math_id": 41,
"text": "{10^{5} - 10^{8}}"
},
{
"math_id": 42,
"text": "m^{3}"
},
{
"math_id": 43,
"text": "{10^{8} - 10^{12} m^{-3}}"
},
{
"math_id": 44,
"text": " m_p"
},
{
"math_id": 45,
"text": " m_e"
},
{
"math_id": 46,
"text": "\nm_e \\approx \\Gamma_1 m_p\n"
},
{
"math_id": 47,
"text": " \\Gamma_1 "
},
{
"math_id": 48,
"text": "\\Gamma_1 \\approx 2000"
},
{
"math_id": 49,
"text": " m_T"
},
{
"math_id": 50,
"text": "\nm_T \\approx \\Gamma_2 m_p\n"
},
{
"math_id": 51,
"text": " \\Gamma_2 \\approx 10^6"
},
{
"math_id": 52,
"text": " \\Gamma_2 \\approx 3000 "
},
{
"math_id": 53,
"text": "T_C"
},
{
"math_id": 54,
"text": "F(m)"
},
{
"math_id": 55,
"text": "\nT_C = {1 \\over {F(m/ \\Gamma_2) A_p}}\n"
},
{
"math_id": 56,
"text": "A_p"
},
{
"math_id": 57,
"text": "A_p \\approx 4 \\pi s^{2}"
},
{
"math_id": 58,
"text": "\nG_{SC} (1-A_0){r^2} = \\sigma (1-A_1) {T^4} +Z(T) L(T)\n"
},
{
"math_id": 59,
"text": "G_{SC}"
},
{
"math_id": 60,
"text": "A_0"
},
{
"math_id": 61,
"text": "A_1"
},
{
"math_id": 62,
"text": "r"
},
{
"math_id": 63,
"text": "\\sigma"
},
{
"math_id": 64,
"text": "T"
},
{
"math_id": 65,
"text": "Z(T)"
},
{
"math_id": 66,
"text": "L(T)"
},
{
"math_id": 67,
"text": "{4\\times 10^{8}}"
},
{
"math_id": 68,
"text": "{2\\times 10^{8}}"
}
]
| https://en.wikipedia.org/wiki?curid=71482441 |
71482910 | Papoulis-Marks-Cheung Approach | Theorem in sampling theory
The Papoulis-Marks-Cheung approach is a theorem in multidimensional Shannon sampling theory that shows that the sampling density of a two-dimensional bandlimited function can be reduced to the support of the Fourier transform of the function. Applying a multidimensional generalization of a theorem by Athanasios Papoulis, the approach was first proposed by Robert J. Marks II and Kwang Fai Cheung. The approach has been called "elegant," "remarkably" closed, and "interesting."
The Theorem.
The two-dimensional Fourier transform, or frequency spectrum, of a function formula_0 is formula_1 where formula_2 is the spatial frequency corresponding to formula_3, and the second frequency variable is likewise the spatial frequency corresponding to formula_4. When formula_3 and formula_4 are lengths, spatial frequency has units of cycles per unit length.
Prelee and Neuhoff describe the Papoulis-Marks-Cheung approach as follows.
<templatestyles src="Template:Blockquote/styles.css" />"Marks and Cheung focused on images with a given spectral support region and an initial base sampling lattice such that the induced spectral replicas of this support region do not overlap. They then showed that cosets of some sublattice could be removed from the base lattice until the sampling density was minimal … or approached minimal ... [This] allows the sampling rate to be reduced until it equals or approaches … [a] minimum." In this context, the limit is the area of the support of the spectrum."
In deriving their result, Marks and Cheung relied on Papoulis' generalized sampling expansion.
Explanation.
The Papoulis-Marks-Cheung approach is best explained by example. Consider the half circle shown in Figure 1, lying in the right half plane. A signal's spectrum, formula_5, is zero outside the half circle. Inside the half circle, the spectrum is arbitrary but well behaved. The half circle, with unit radius, has an area of formula_6 (cycles per unit length) squared.
According to the Papoulis-Marks-Cheung approach, the sampling density for the image formula_0 can be reduced to formula_7 samples per unit area. The approach also shows how this reduction can be achieved.
To the right in Figure 1 is pictured a rectangular replication of the half circle which occurs when the two-dimensional function is sampled at spatial locations shown in Figure 2.
This replication is a consequence of the multidimensional sampling theorem that shows that the sampling of a two-dimensional signal in the spatial formula_8 domain results in spectrum replication in the Fourier domain. If the uniform sampling density were lower, the replications would overlap and an attempt at reconstruction of the original function would result in image aliasing. The sampling density to achieve this is equal to the area of the rectangular lattice cell of the spectrum replication. The corresponding area of the rectangle used in the replication is equal to formula_9 (cycles per unit length) squared. As confirmed by Figure 2, the sampling density required to achieve the spectral replication is therefore formula_9 samples per unit area. The Papoulis-Marks-Cheung approach says that this sampling density can be reduced to the area of the half circle, namely from formula_9 to formula_7 samples per unit area.
To see how this reduction happens, consider Figure 3 where the formula_10 rectangular lattice cell is divided into formula_11 identical squares. Note that two of these squares lie totally in an area where the spectral replication is identically zero. These squares are shaded light green. Think of each of formula_11 squares as spectra of formula_11 different two-dimensional signals. All of the samples for the signals corresponding to the light green areas are zero and do not have to be considered. The area of the two green squares is formula_12. Since the samples corresponding to these squares do not have to be considered (they are all zero), the overall sampling density is reduced from formula_9 samples per unit area to formula_13 samples per unit area.
The corresponding reduction in sampling density is shown in Figure 4 where the red dots are locations where samples need not be taken. A single cell containing one red dot is shown shaded. The area of the cell is formula_14 units. The sampling density is therefore, as also seen from the areas of two green squares in Figure 3, reduced by formula_15 samples per unit area.
Extension.
In the previous example, the squares in Figure 3 can be made arbitrarily small and increased in number so that, asymptotically, all of the area where the spectrum is identically zero can be covered. Thus, the sampling density can be reduced to the support of the spectrum, i.e., to the area where the spectrum is not identically zero.
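This limiting bookkeeping can be sketched numerically. In the snippet below (a sketch, not code from the cited papers) the baseline spectral cell is subdivided into squares of side 1/n, every square lying entirely outside the half-circle support is treated as removable, and the remaining sampling density tends to the support area π/2 as n grows; the grid sizes are arbitrary illustrative choices.
<syntaxhighlight lang="python">
import math

# Sketch of the limiting argument: the 1 x 2 spectral cell is divided into
# squares of side 1/n, every square lying entirely outside the half-circle
# support is treated as removable, and the sampling density tends to the
# support area pi/2.  The grid sizes below are arbitrary illustrative choices.

def reduced_density(n):
    """Density left after removing squares fully outside the half-circle support."""
    side = 1.0 / n
    removed = 0
    for i in range(n):             # columns: u_x in [i*side, (i+1)*side]
        x_min = i * side
        for j in range(2 * n):     # rows: u_y in [-1 + j*side, -1 + (j+1)*side]
            y_lo = -1.0 + j * side
            y_hi = y_lo + side
            y_min = 0.0 if y_lo < 0.0 < y_hi else min(abs(y_lo), abs(y_hi))
            if math.hypot(x_min, y_min) > 1.0:   # square entirely outside support
                removed += 1
    return 2.0 - removed * side * side

if __name__ == "__main__":
    for n in (8, 32, 128, 512):
        print(f"n = {n:3d}: density = {reduced_density(n):.4f}"
              f"   (support area = {math.pi / 2:.4f})")
</syntaxhighlight>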
The Papoulis-Marks-Cheung approach can straightforwardly be generalized to higher dimensions. Also, replication geometry need not be rectangular but can be any shape that will tile the entire formula_8 plane such as parallelograms and hexagons.
A more detailed mathematical description of the Papoulis-Marks-Cheung approach is available in the original paper by Marks and Cheung and their derivative work.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x,y)"
},
{
"math_id": 1,
"text": "F(u_x,y_y) = \\iint\\limits_{x,y} f(x,y){\\rm e}^{-i2\\pi(xu_x+yu_y)}\\operatorname{d}\\!x\\operatorname{d}\\!y"
},
{
"math_id": 2,
"text": "u_x"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "F(u_x,u_y)"
},
{
"math_id": 6,
"text": "\\pi / 2 = 1.5708"
},
{
"math_id": 7,
"text": "1.5708"
},
{
"math_id": 8,
"text": "(x,y)"
},
{
"math_id": 9,
"text": "2"
},
{
"math_id": 10,
"text": "1\\times 2"
},
{
"math_id": 11,
"text": "32"
},
{
"math_id": 12,
"text": "2/32 = 1/16 = 0.0625"
},
{
"math_id": 13,
"text": "2-0.0625=1.9375"
},
{
"math_id": 14,
"text": "16"
},
{
"math_id": 15,
"text": "1/16"
}
]
| https://en.wikipedia.org/wiki?curid=71482910 |
7148302 | Ptolemy's inequality | In Euclidean geometry, Ptolemy's inequality relates the six distances determined by four points in the plane or in a higher-dimensional space. It states that, for any four points A, B, C, and D, the following inequality holds:
formula_0
It is named after the Greek astronomer and mathematician Ptolemy.
The four points can be ordered in any of three distinct ways (counting reversals as not distinct) to form three different quadrilaterals, for each of which the sum of the products of opposite sides is at least as large as the product of the diagonals. Thus, the three product terms in the inequality can be additively permuted to put any one of them on the right side of the inequality, so the three products of opposite sides or of diagonals of any one of the quadrilaterals must obey the triangle inequality.
As a special case, Ptolemy's theorem states that the inequality becomes an equality when the four points lie in cyclic order on a circle.
The other case of equality occurs when the four points are collinear in order. The inequality does not generalize from Euclidean spaces to arbitrary metric spaces. The spaces where it remains valid are called the "Ptolemaic spaces"; they include the inner product spaces, Hadamard spaces, and shortest path distances on Ptolemaic graphs.
Assumptions and derivation.
Ptolemy's inequality is often stated for a special case, in which the four points are the vertices of a convex quadrilateral, given in cyclic order. However, the theorem applies more generally to any four points; it is not required that the quadrilateral they form be convex, simple, or even planar.
For points in the plane, Ptolemy's inequality can be derived from the triangle inequality by an inversion centered at one of the four points. Alternatively, it can be derived by interpreting the four points as complex numbers, using the complex number identity:
formula_1
to construct a triangle whose side lengths are the products of sides of the given quadrilateral, and applying the triangle inequality to this triangle. One can also view the points as belonging to the complex projective line, express the inequality in the form that the absolute values of two cross-ratios of the points sum to at least one, and deduce this from the fact that the cross-ratios themselves add to exactly one.
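The identity-based argument can be checked numerically by treating points in the plane as complex numbers, as in the short sketch below; the random sampling and the chosen circle points are only illustrative.
<syntaxhighlight lang="python">
import cmath
import random

# Numerical check of Ptolemy's inequality for planar points viewed as complex
# numbers; it rests on the identity (A-B)(C-D) + (A-D)(B-C) = (A-C)(B-D) and
# the triangle inequality.  The random points and circle angles are arbitrary.

def ptolemy_holds(a, b, c, d, tol=1e-12):
    lhs = abs(a - b) * abs(c - d) + abs(a - d) * abs(b - c)
    rhs = abs(a - c) * abs(b - d)
    return lhs + tol >= rhs

random.seed(0)
for _ in range(10_000):
    pts = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
    assert ptolemy_holds(*pts)

# Equality case (Ptolemy's theorem): four points in cyclic order on a circle.
a, b, c, d = (cmath.exp(1j * t) for t in (0.3, 1.1, 2.5, 4.0))
print(abs(a - b) * abs(c - d) + abs(a - d) * abs(b - c),
      abs(a - c) * abs(b - d))   # the two printed values agree
</syntaxhighlight>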
A proof of the inequality for points in three-dimensional space can be reduced to the planar case, by observing that for any non-planar quadrilateral, it is possible to rotate one of the points around the diagonal until the quadrilateral becomes planar, increasing the other diagonal's length and keeping the other five distances constant. In spaces of higher dimension than three, any four points lie in a three-dimensional subspace, and the same three-dimensional proof can be used.
Four concyclic points.
For four points in order around a circle, Ptolemy's inequality becomes an equality, known as Ptolemy's theorem:
formula_2
In the inversion-based proof of Ptolemy's inequality, transforming four co-circular points by an inversion centered at one of them causes the other three to become collinear, so the triangle equality for these three points (from which Ptolemy's inequality may be derived) also becomes an equality. For any other four points, Ptolemy's inequality is strict.
In three dimensions.
Four non-coplanar points A, B, C, and D in 3D form a tetrahedron. In this case, the strict inequality holds:
formula_3.
In general metric spaces.
Ptolemy's inequality holds more generally in any inner product space, and whenever it is true for a real normed vector space, that space must be an inner product space.
For other types of metric space, the inequality may or may not be valid. A space in which it holds is called "Ptolemaic". For instance, consider the four-vertex cycle graph, shown in the figure, with all edge lengths equal to 1. The sum of the products of opposite sides is 2. However, diagonally opposite vertices are at distance 2 from each other, so the product of the diagonals is 4, bigger than the sum of products of sides. Therefore, the shortest path distances in this graph are not Ptolemaic. The graphs in which the distances obey Ptolemy's inequality are called the Ptolemaic graphs and have a restricted structure compared to arbitrary graphs; in particular, they disallow induced cycles of length greater than three, such as the one shown.
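The failure of the inequality for this graph can be verified directly from the shortest-path distances of the 4-cycle, as in the following small sketch.
<syntaxhighlight lang="python">
# Shortest-path distances on the 4-cycle with unit edge lengths: the sum of
# the products of opposite sides (2) is smaller than the product of the
# diagonals (4), so these distances are not Ptolemaic.

def c4_distance(u, v):
    k = abs(u - v) % 4
    return min(k, 4 - k)      # shorter way around the cycle

a, b, c, d = 0, 1, 2, 3       # vertices in cyclic order
sides = c4_distance(a, b) * c4_distance(c, d) + c4_distance(b, c) * c4_distance(d, a)
diagonals = c4_distance(a, c) * c4_distance(b, d)
print(sides, diagonals, sides >= diagonals)   # 2 4 False
</syntaxhighlight>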
The Ptolemaic spaces include all CAT(0) spaces and in particular all Hadamard spaces. If a complete Riemannian manifold is Ptolemaic, it is necessarily a Hadamard space.
Inner product spaces.
Suppose that formula_4 is a norm on a vector space formula_5 Then this norm satisfies Ptolemy's inequality:
formula_6
if and only if there exists an inner product formula_7 on formula_8 such that formula_9 for all vectors formula_10 Another necessary and sufficient condition for there to exist such an inner product is for the norm to satisfy the parallelogram law:
formula_11
If this is the case then this inner product will be unique and it can be defined in terms of the norm by using the polarization identity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\overline{AB}\\cdot \\overline{CD}+\\overline{BC}\\cdot \\overline{DA} \\ge \\overline{AC}\\cdot \\overline{BD}."
},
{
"math_id": 1,
"text": "(A-B)(C-D)+(A-D)(B-C)=(A-C)(B-D)"
},
{
"math_id": 2,
"text": "\\overline{AB}\\cdot \\overline{CD}+\\overline{AD}\\cdot\\overline{BC} = \\overline{AC}\\cdot \\overline{BD}."
},
{
"math_id": 3,
"text": "\\overline{AB}\\cdot \\overline{CD}+\\overline{BC}\\cdot \\overline{DA} > \\overline{AC}\\cdot \\overline{BD}"
},
{
"math_id": 4,
"text": "\\|\\cdot\\|"
},
{
"math_id": 5,
"text": "X."
},
{
"math_id": 6,
"text": "\\|x - y\\| \\, \\|z\\| ~+~ \\|y - z\\| \\, \\|x\\| ~\\geq~ \\|x - z\\| \\, \\|y\\| \\qquad \\text{ for all vectors } x, y, z."
},
{
"math_id": 7,
"text": "\\langle \\cdot, \\cdot \\rangle"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "\\|x\\|^2 = \\langle x,\\ x\\rangle"
},
{
"math_id": 10,
"text": "x \\in X."
},
{
"math_id": 11,
"text": "\\|x+y\\|^2 ~+~ \\|x-y\\|^2 ~=~ 2\\|x\\|^2 + 2\\|y\\|^2 \\qquad \\text{ for all vectors } x, y."
}
]
| https://en.wikipedia.org/wiki?curid=7148302 |
7148738 | Molecular Hamiltonian | Hamiltonian operator for molecules
In atomic, molecular, and optical physics and quantum chemistry, the molecular Hamiltonian is the Hamiltonian operator representing the energy of the electrons and nuclei in a molecule. This operator and the associated Schrödinger equation play a central role in computational chemistry and physics for computing properties of molecules and aggregates of molecules, such as thermal conductivity, specific heat, electrical conductivity, optical, and magnetic properties, and reactivity.
The elementary parts of a molecule are the nuclei, characterized by their atomic numbers, "Z", and the electrons, which have negative elementary charge, −"e". Their interaction gives a nuclear charge of "Z" + "q", where "q" = −"eN", with "N" equal to the number of electrons. Electrons and nuclei are, to a very good approximation, point charges and point masses. The molecular Hamiltonian is a sum of several terms: its major terms are the kinetic energies of the electrons and the Coulomb (electrostatic) interactions between the two kinds of charged particles. The Hamiltonian that contains only the kinetic energies of electrons and nuclei, and the Coulomb interactions between them, is known as the Coulomb Hamiltonian. From it are missing a number of small terms, most of which are due to electronic and nuclear spin.
Although it is generally assumed that the solution of the time-independent Schrödinger equation associated with the Coulomb Hamiltonian will predict most properties of the molecule, including its shape (three-dimensional structure), calculations based on the full Coulomb Hamiltonian are very rare. The main reason is that its Schrödinger equation is very difficult to solve. Applications are restricted to small systems like the hydrogen molecule.
Almost all calculations of molecular wavefunctions are based on the separation of the Coulomb Hamiltonian first devised by Born and Oppenheimer. The nuclear kinetic energy terms are omitted from the Coulomb Hamiltonian and one considers the remaining Hamiltonian as a Hamiltonian of electrons only. The stationary nuclei enter the problem only as generators of an electric potential in which the electrons move in a quantum mechanical way. Within this framework the molecular Hamiltonian has been simplified to the so-called clamped nucleus Hamiltonian, also called electronic Hamiltonian, that acts only on functions of the electronic coordinates.
Once the Schrödinger equation of the clamped nucleus Hamiltonian has been solved for a sufficient number of constellations of the nuclei, an appropriate eigenvalue (usually the lowest) can be seen as a function of the nuclear coordinates, which leads to a potential energy surface. In practical calculations the surface is usually fitted in terms of some analytic functions. In the second step of the Born–Oppenheimer approximation the part of the full Coulomb Hamiltonian that depends on the electrons is replaced by the potential energy surface. This converts the total molecular Hamiltonian into another Hamiltonian that acts only on the nuclear coordinates. In the case of a breakdown of the Born–Oppenheimer approximation—which occurs when energies of different electronic states are close—the neighboring potential energy surfaces are needed, see this article for more details on this.
The nuclear motion Schrödinger equation can be solved in a space-fixed (laboratory) frame, but then the translational and rotational (external) energies are not accounted for. Only the (internal) atomic vibrations enter the problem. Further, for molecules larger than triatomic ones, it is quite common to introduce the harmonic approximation, which approximates the potential energy surface as a quadratic function of the atomic displacements. This gives the harmonic nuclear motion Hamiltonian. Making the harmonic approximation, we can convert the Hamiltonian into a sum of uncoupled one-dimensional harmonic oscillator Hamiltonians. The one-dimensional harmonic oscillator is one of the few systems that allows an exact solution of the Schrödinger equation.
Alternatively, the nuclear motion (rovibrational) Schrödinger equation can be solved in a special frame (an Eckart frame) that rotates and translates with the molecule. Formulated with respect to this body-fixed frame the Hamiltonian accounts for rotation, translation and vibration of the nuclei. Since Watson introduced in 1968 an important simplification to this Hamiltonian, it is often referred to as Watson's nuclear motion Hamiltonian, but it is also known as the Eckart Hamiltonian.
Coulomb Hamiltonian.
The algebraic form of many observables—i.e., Hermitian operators representing observable quantities—is obtained by the following quantization rules: write the classical form of the observable in Hamilton form, as a function of the momenta p and the positions q with respect to an inertial frame, and then replace every momentum p by formula_0 and interpret every position coordinate as a multiplicative operator. Here formula_1 is the nabla operator, the vector operator of first derivatives with respect to the corresponding position.
Classically the electrons and nuclei in a molecule have kinetic energy of the form "p"2/(2 "m") and
interact via Coulomb interactions, which are inversely proportional to the distance "r""ij"
between particle "i" and "j".
formula_2
In this expression r"i" stands for the coordinate vector of any particle (electron or nucleus), but from here on we will reserve capital R to represent the nuclear coordinate, and lower case r for the electrons of the system. The coordinates can be taken to be expressed with respect to any Cartesian frame centered anywhere in space, because distance, being an inner product, is invariant under rotation of the frame and, being the norm of a difference vector, distance is invariant under translation of the frame as well.
By quantizing the classical energy in Hamilton form one obtains a molecular Hamilton operator that is often referred to as the Coulomb Hamiltonian. This Hamiltonian is a sum of five terms. They are: the kinetic energy operator of the nuclei, formula_3; the kinetic energy operator of the electrons, formula_4; the Coulomb attraction between the electrons and the nuclei, formula_5; the Coulomb repulsion between the electrons, formula_6; and the Coulomb repulsion between the nuclei, formula_7.
Here "M"i is the mass of nucleus "i", "Z""i" is the atomic number of nucleus "i", and "m"e is the mass of the electron. The Laplace operator of particle "i" is:formula_8. Since the kinetic energy operator is an inner product, it is invariant under rotation of the Cartesian frame with respect to which "x""i", "y""i", and "z""i" are expressed.
Small terms.
In the 1920s much spectroscopic evidence made it clear that the Coulomb Hamiltonian is missing certain terms. Especially for molecules containing heavier atoms, these terms, although much smaller than kinetic and Coulomb energies, are nonnegligible. These spectroscopic observations led to the introduction of a new degree of freedom for electrons and nuclei, namely spin. This empirical concept was given a theoretical basis by Paul Dirac when he introduced a relativistically correct (Lorentz covariant) form of the one-particle Schrödinger equation. The Dirac equation predicts that spin and spatial motion of a particle interact via spin–orbit coupling. In analogy spin-other-orbit coupling was introduced. The fact that particle spin has some of the characteristics of a magnetic dipole led to spin–spin coupling. Further terms without a classical counterpart are the Fermi-contact term (interaction of electronic density on a finite size nucleus with the nucleus), and nuclear quadrupole coupling (interaction of a nuclear quadrupole with the gradient of an electric field due to the electrons). Finally a parity violating term predicted by the Standard Model must be mentioned. Although it is an extremely small interaction, it has attracted a fair amount of attention in the scientific literature because it gives different energies for the enantiomers in chiral molecules.
The remaining part of this article will ignore spin terms and consider the solution of the eigenvalue (time-independent Schrödinger) equation of the Coulomb Hamiltonian.
The Schrödinger equation of the Coulomb Hamiltonian.
The Coulomb Hamiltonian has a continuous spectrum due to the center of mass (COM) motion of the molecule in homogeneous space. In classical mechanics it is easy to separate off the COM motion of a system of point masses. Classically the motion of the COM is uncoupled from the other motions. The COM moves uniformly (i.e., with constant velocity) through space as if it were a point particle with mass equal to the sum "M"tot of the masses of all the particles.
In quantum mechanics a free particle has as state function a plane wave function, which is a non-square-integrable function of well-defined momentum. The kinetic energy
of this particle can take any positive value. The position of the COM is uniformly probable everywhere, in agreement with the Heisenberg uncertainty principle.
By introducing the coordinate vector X of the center of mass as three of the degrees of freedom of the system and eliminating the coordinate vector of one (arbitrary) particle, so that the number of degrees of freedom stays the same, one obtains by a linear transformation a new set of coordinates ti. These coordinates are linear combinations of the old coordinates of "all" particles (nuclei "and" electrons). By applying the chain rule one can show that
formula_9
The first term of formula_10 is the kinetic energy of the COM motion, which can be treated separately since formula_11 does not depend on X. As just stated, its eigenstates are plane waves. The potential "V"(t) consists of the Coulomb terms expressed in the new coordinates. The first term of formula_11 has the usual appearance of a kinetic energy operator. The second term is known as the mass polarization term. The translationally invariant Hamiltonian formula_11 can be shown to be self-adjoint and to be bounded from below. That is, its lowest eigenvalue is real and finite. Although formula_11 is necessarily invariant under permutations of identical particles (since formula_10 and the COM kinetic energy are invariant), its invariance is not manifest.
Not many actual molecular applications of formula_11 exist; see, however, the seminal work on the hydrogen molecule for an early application. In the great majority of computations of molecular wavefunctions the electronic
problem is solved with the clamped nucleus Hamiltonian arising in the first step of the Born–Oppenheimer approximation.
See Ref. for a thorough discussion of the mathematical properties of the Coulomb Hamiltonian. Also it is discussed in this paper whether one can arrive "a priori" at the concept of a molecule (as a stable system of electrons and nuclei with a well-defined geometry) from the properties of the Coulomb Hamiltonian alone.
Clamped nucleus Hamiltonian.
The clamped nucleus Hamiltonian, which is also often called the electronic Hamiltonian, describes the energy of the electrons in the electrostatic field of the nuclei, where the nuclei are assumed to be stationary with respect to an inertial frame.
The form of the electronic Hamiltonian is
formula_12
The coordinates of electrons and nuclei are expressed with respect to a frame that moves with the nuclei, so that the nuclei are at rest with respect to this frame. The frame stays parallel to a space-fixed frame. It is an inertial frame because the nuclei are assumed not to be accelerated by external forces or torques. The origin of the frame is arbitrary, it is usually positioned on a central nucleus or in the nuclear center of mass. Sometimes it is stated that the nuclei are "at rest in a space-fixed frame". This statement implies that the nuclei are viewed as classical particles, because a quantum mechanical particle cannot be at rest. (It would mean that it had simultaneously zero momentum and well-defined position, which contradicts Heisenberg's uncertainty principle).
Since the nuclear positions are constants, the electronic kinetic energy operator is invariant under translation over any nuclear vector. The Coulomb potential, depending on difference vectors, is invariant as well. In the description of atomic orbitals and the computation of integrals over atomic orbitals this invariance is used by equipping all atoms in the molecule with their own localized frames parallel to the space-fixed frame.
As explained in the article on the Born–Oppenheimer approximation, a sufficient number of solutions of the Schrödinger equation of formula_13 leads to a potential energy surface (PES) formula_14. It is assumed that the functional dependence of "V" on its coordinates is such that
formula_15
for
formula_16
where t and s are arbitrary vectors and Δφ is an infinitesimal angle,
Δφ ≫ Δφ2. This invariance condition on the PES is automatically fulfilled when the PES is expressed in terms of differences of, and angles between, the Ri, which is usually the case.
Harmonic nuclear motion Hamiltonian.
In the remaining part of this article we assume that the molecule is semi-rigid. In the second step of the BO approximation the nuclear kinetic energy "T"n is reintroduced and the Schrödinger equation with Hamiltonian
formula_17
is considered. One would like to recognize in its solution: the motion of the nuclear center of mass (3 degrees of freedom), the overall rotation of the molecule (3 degrees of freedom), and the nuclear vibrations. In general, this is not possible with the given nuclear kinetic energy, because it does not separate explicitly the 6 external degrees of freedom (overall translation and rotation) from the 3"N" − 6 internal degrees of freedom. In fact, the kinetic energy operator here is defined with respect to a space-fixed (SF) frame. If we were to move the origin of the SF frame to the nuclear center of mass, then, by application of the chain rule, nuclear mass polarization terms would appear. It is customary to ignore these terms altogether and we will follow this custom.
In order to achieve a separation we must distinguish internal and external coordinates, to which end Eckart introduced conditions to be satisfied by the coordinates. We will show how these conditions arise in a natural way from a harmonic analysis in mass-weighted Cartesian coordinates.
In order to simplify the expression for the kinetic energy we introduce mass-weighted displacement coordinates
formula_18.
Since
formula_19
the kinetic energy operator becomes,
formula_20
If we make a Taylor expansion of "V" around the equilibrium geometry,
formula_21
and truncate after three terms (the so-called harmonic approximation), we can describe "V" with only the third term. The term "V"0 can be absorbed in the energy (gives a new zero of energy). The second term is vanishing because of the equilibrium condition. The remaining term contains the Hessian matrix F of "V", which is symmetric and may be diagonalized with an orthogonal 3"N" × 3"N" matrix with constant elements:
formula_22
It can be shown from the invariance of "V" under rotation and translation that six of the eigenvectors of F (last six rows of Q) have eigenvalue zero (are zero-frequency modes). They span the "external space". The first 3"N" − 6 rows of Q are—for molecules in their ground state—eigenvectors with non-zero eigenvalue; they are the internal coordinates and form an orthonormal basis for a (3"N" - 6)-dimensional subspace of
the nuclear configuration space R3"N", the "internal space". The zero-frequency eigenvectors are orthogonal to the eigenvectors of non-zero frequency. It can be shown that these orthogonalities are in fact the Eckart conditions. The kinetic energy expressed in the internal coordinates is the internal (vibrational) kinetic energy.
With the introduction of normal coordinates
formula_23
the vibrational (internal) part of the Hamiltonian for the nuclear motion becomes in the "harmonic approximation"
formula_24
The corresponding Schrödinger equation is easily solved, it factorizes into 3"N" − 6 equations for one-dimensional harmonic oscillators. The main effort in this approximate solution of the nuclear motion Schrödinger equation is the computation of the Hessian F of "V" and its diagonalization.
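A minimal numerical sketch of this procedure is given below for a toy system: three atoms on a line connected by two identical harmonic springs. The Hessian is mass-weighted and diagonalized with NumPy; in this one-dimensional toy only the single translation appears as a zero-frequency mode, whereas for a real molecule in three dimensions six (or five, for linear molecules) zero-frequency modes would appear. The force constant and masses are arbitrary example values.
<syntaxhighlight lang="python">
import numpy as np

# Toy harmonic analysis: three atoms on a line joined by two identical springs.
# The Cartesian Hessian of V is mass-weighted and diagonalized; the zero
# eigenvalue is the (one-dimensional) translation, the positive eigenvalues
# give the harmonic frequencies.  Force constant and masses are example values.

k = 1.0
masses = np.array([1.0, 16.0, 1.0])                # an X-Y-X chain

# Hessian of V = (k/2) [(x2 - x1)^2 + (x3 - x2)^2] with respect to (x1, x2, x3)
F = k * np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

inv_sqrt_m = 1.0 / np.sqrt(masses)
F_mw = F * np.outer(inv_sqrt_m, inv_sqrt_m)        # mass-weighted Hessian

eigvals, eigvecs = np.linalg.eigh(F_mw)
omega = np.sqrt(np.clip(eigvals, 0.0, None))       # harmonic frequencies
print("eigenvalues :", np.round(eigvals, 6))       # one zero (translation)
print("frequencies :", np.round(omega, 6))
</syntaxhighlight>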
This approximation to the nuclear motion problem, described in 3"N" mass-weighted Cartesian coordinates, became standard in quantum chemistry, since the days (1980s-1990s) that algorithms for accurate computations of the Hessian F became available. Apart from the harmonic approximation, it has as a further deficiency that the external (rotational and translational) motions of the molecule are not accounted for. They are accounted for in a rovibrational Hamiltonian that sometimes is called "Watson's Hamiltonian".
Watson's nuclear motion Hamiltonian.
In order to obtain a Hamiltonian for external (translation and rotation) motions coupled to the internal (vibrational) motions, it is common to return at this point to classical mechanics and to formulate the classical kinetic energy corresponding to these motions of the nuclei. Classically it is easy to separate the translational—center of mass—motion from the other motions. However, the separation of the rotational from the vibrational motion is more difficult and is not completely possible. This ro-vibrational separation was first achieved by Eckart in 1935 by imposing what is now known as Eckart conditions. Since the problem is described in a frame (an "Eckart" frame) that rotates with the molecule, and hence is a non-inertial frame, energies associated with the fictitious forces: centrifugal and Coriolis force appear in the kinetic energy.
In general, the classical kinetic energy "T" defines the metric tensor g = ("g"ij) associated with the curvilinear coordinates s = ("s"i) through
formula_25
The quantization step is the transformation of this classical kinetic energy into a quantum mechanical operator. It is common to follow Podolsky by writing down the Laplace–Beltrami operator in the same (generalized, curvilinear) coordinates s as used for the classical form. The equation for this operator requires the inverse of the metric tensor g and its determinant. Multiplication of the Laplace–Beltrami operator by formula_26 gives the required quantum mechanical kinetic energy operator. When we apply this recipe to Cartesian coordinates, which have unit metric, the same kinetic energy is obtained as by application of the quantization rules.
The nuclear motion Hamiltonian was obtained by Wilson and Howard in 1936, who followed this procedure, and further refined by Darling and Dennison in 1940. It remained the standard until 1968, when Watson was able to simplify it drastically by commuting through the derivatives the determinant of the metric tensor. We will give the ro-vibrational Hamiltonian obtained by Watson, which often is referred to as the Watson Hamiltonian. Before we do this we must mention
that a derivation of this Hamiltonian is also possible by starting from the Laplace operator in Cartesian form, application of coordinate transformations, and use of the chain rule.
The Watson Hamiltonian, describing all motions of the "N" nuclei, is
formula_27
The first term is the center of mass term
formula_28
The second term is the rotational term akin to the kinetic energy of the rigid rotor. Here
formula_29 is the α component of the body-fixed "rigid rotor angular momentum operator",
see this article for its expression in terms of Euler angles. The operator formula_30 is a component of an operator known
as the "vibrational angular momentum operator" (although it does "not" satisfy angular momentum commutation relations),
formula_31
with the "Coriolis coupling constant":
formula_32
Here "εαβγ" is the Levi-Civita symbol. The terms quadratic in the formula_29 are centrifugal terms, those bilinear in formula_29 and formula_33 are Coriolis terms. The quantities "Q" s, iγ are the components of the normal coordinates introduced above. Alternatively, normal coordinates may be obtained by application of Wilson's GF method. The 3 × 3 symmetric matrix formula_34 is called the "effective reciprocal inertia tensor". If all "q" s were zero (rigid molecule) the Eckart frame would coincide with a principal axes frame (see rigid rotor) and formula_34 would be diagonal, with the equilibrium reciprocal moments of inertia on the diagonal. If all "q" s would be zero, only the kinetic energies of translation and rigid rotation would survive.
The potential-like term "U" is the "Watson term":
formula_35
proportional to the trace of the effective reciprocal inertia tensor.
The fourth term in the Watson Hamiltonian is the kinetic energy associated with the vibrations of the atoms (nuclei) expressed in normal coordinates "q"s, which as stated above, are given in terms of nuclear displacements ρiα by
formula_36
Finally "V" is the unexpanded potential energy by definition depending on internal coordinates only. In the harmonic approximation it takes the form
formula_37
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "-i\\hbar\\boldsymbol{\\nabla}"
},
{
"math_id": 1,
"text": "\\boldsymbol{\\nabla}"
},
{
"math_id": 2,
"text": " r_{ij} \\equiv |\\mathbf{r}_i -\\mathbf{r}_j|\n = \\sqrt{(\\mathbf{r}_i -\\mathbf{r}_j)\\cdot(\\mathbf{r}_i -\\mathbf{r}_j)}\n = \\sqrt{(x_i-x_j)^2 + (y_i-y_j)^2 + (z_i-z_j)^2 } .\n"
},
{
"math_id": 3,
"text": " \\hat{T}_n = - \\sum_i \\frac{\\hbar^2}{2 M_i} \\nabla^2_{\\mathbf{R}_i} "
},
{
"math_id": 4,
"text": "\\hat{T}_e = - \\sum_i \\frac{\\hbar^2}{2 m_e} \\nabla^2_{\\mathbf{r}_i} "
},
{
"math_id": 5,
"text": "\\hat{U}_{en} = - \\sum_i \\sum_j \\frac{Z_i e^2}{4 \\pi \\varepsilon_0 \\left | \\mathbf{R}_i - \\mathbf{r}_j \\right | }"
},
{
"math_id": 6,
"text": "\\hat{U}_{ee} = {1 \\over 2} \\sum_i \\sum_{j \\ne i} \\frac{e^2}{4 \\pi \\varepsilon_0 \\left | \\mathbf{r}_i - \\mathbf{r}_j \\right | } =\n\\sum_i \\sum_{j > i} \\frac{e^2}{4 \\pi \\varepsilon_0 \\left | \\mathbf{r}_i - \\mathbf{r}_j \\right | }\n"
},
{
"math_id": 7,
"text": "\\hat{U}_{nn} = {1 \\over 2} \\sum_i \\sum_{j \\ne i} \\frac{Z_i Z_j e^2}{4 \\pi \\varepsilon_0 \\left | \\mathbf{R}_i - \\mathbf{R}_j \\right | } =\n\\sum_i \\sum_{j > i} \\frac{Z_i Z_j e^2}{4 \\pi \\varepsilon_0 \\left | \\mathbf{R}_i - \\mathbf{R}_j \\right | }. "
},
{
"math_id": 8,
"text": " \\nabla^2_{\\mathbf{r}_i} \\equiv \\boldsymbol{\\nabla}_{\\mathbf{r}_i}\\cdot \\boldsymbol{\\nabla}_{\\mathbf{r}_i}\n= \\frac{\\partial^2}{\\partial x_i^2} + \\frac{\\partial^2}{\\partial y_i^2} + \\frac{\\partial^2}{\\partial z_i^2} "
},
{
"math_id": 9,
"text": "\nH = -\\frac{\\hbar^2}{2M_\\textrm{tot}} \\nabla^2_{\\mathbf{X}} + H'\n\\quad\\text{with }\\quad H'=\n-\\frac{\\hbar^2}{2} \\sum_{i=1}^{N_\\textrm{tot} -1 } \\frac{1}{m_i} \\nabla^2_{i}\n+\\frac{\\hbar^2}{2 M_\\textrm{tot}}\\sum_{i,j=1}^{N_\\textrm{tot} -1 } \\nabla_{i} \\cdot \\nabla_{j} +V(\\mathbf{t}).\n"
},
{
"math_id": 10,
"text": "H"
},
{
"math_id": 11,
"text": "H'"
},
{
"math_id": 12,
"text": " \\hat{H}_\\mathrm{el} = \\hat{T}_e + \\hat{U}_{en}+ \\hat{U}_{ee}+ \\hat{U}_{nn}."
},
{
"math_id": 13,
"text": " H_\\text{el}"
},
{
"math_id": 14,
"text": "V(\\mathbf{R}_1, \\mathbf{R}_2, \\ldots, \\mathbf{R}_N)"
},
{
"math_id": 15,
"text": " V(\\mathbf{R}_1, \\mathbf{R}_2, \\ldots, \\mathbf{R}_N)=V(\\mathbf{R}'_1, \\mathbf{R}'_2, \\ldots, \\mathbf{R}'_N)"
},
{
"math_id": 16,
"text": " \\mathbf{R}'_i =\\mathbf{R}_i + \\mathbf{t} \\;\\;\\text{(translation) and}\\;\\;\n\\mathbf{R}'_i =\\mathbf{R}_i + \\frac{\\Delta\\phi}{|\\mathbf{s}|} \\; ( \\mathbf{s}\\times \\mathbf{R}_i)\n\\;\\;\\text{(infinitesimal rotation)},\n"
},
{
"math_id": 17,
"text": " \\hat{H}_\\mathrm{nuc} = -\\frac{\\hbar^2}{2}\\sum_{i=1}^N\n\\sum_{\\alpha=1}^3 \\frac{1}{M_i} \\frac{\\partial^2}{\\partial R_{i\\alpha}^2} +V(\\mathbf{R}_1,\\ldots,\\mathbf{R}_N) "
},
{
"math_id": 18,
"text": "\\boldsymbol{\\rho}_i \\equiv \\sqrt{M_i} (\\mathbf{R}_i-\\mathbf{R}_i^0)"
},
{
"math_id": 19,
"text": "\n\\frac{\\partial}{\\partial \\rho_{i \\alpha}} = \\frac{\\partial}{\\sqrt{M_i} (\\partial R_{i \\alpha} - \\partial R^0_{i \\alpha})} = \\frac{1}{\\sqrt{M_i}} \\frac{\\partial}{\\partial R_{i \\alpha}} ,\n"
},
{
"math_id": 20,
"text": "T = -\\frac{\\hbar^2}{2} \\sum_{i=1}^N \\sum_{\\alpha=1}^3 \\frac{\\partial^2}{\\partial \\rho_{i\\alpha}^2}."
},
{
"math_id": 21,
"text": "\nV = V_0 + \\sum_{i=1}^N \\sum_{\\alpha=1}^3 \\Big(\\frac{\\partial V}{\\partial \\rho_{i\\alpha}}\\Big)_0\\; \\rho_{i\\alpha} + \\frac{1}{2} \\sum_{i,j=1}^N \\sum_{\\alpha,\\beta=1}^3 \\Big(\n\\frac{\\partial^2 V}{\\partial \\rho_{i\\alpha}\\partial\\rho_{j\\beta}}\\Big)_0 \\;\\rho_{i\\alpha}\\rho_{j\\beta} + \\cdots,\n"
},
{
"math_id": 22,
"text": "\n\\mathbf{Q} \\mathbf{F} \\mathbf{Q}^\\mathrm{T} = \\boldsymbol{\\Phi} \\quad \\text{with}\\quad\n\\boldsymbol{\\Phi} = \\operatorname{diag}(f_1, \\dots, f_{3N-6}, 0,\\ldots,0).\n"
},
{
"math_id": 23,
"text": "q_t \\equiv \\sum_{i=1}^N\\sum_{\\alpha=1}^3 \\; Q_{t, i\\alpha} \\rho_{i\\alpha},"
},
{
"math_id": 24,
"text": "\\hat{H}_\\text{nuc} \\approx \\frac{1}{2} \\sum_{t=1}^{3N-6} \\left[-\\hbar^2 \\frac{\\partial^2}{\\partial q_{t}^2} + f_t q_t^2 \\right] ."
},
{
"math_id": 25,
"text": " 2T = \\sum_{ij} g_{ij} \\dot{s}_i \\dot{s}_j. "
},
{
"math_id": 26,
"text": "-\\hbar^2"
},
{
"math_id": 27,
"text": "\n\\hat{H} =\n-\\frac{\\hbar^2}{2M_\\mathrm{tot}} \\sum_{\\alpha=1}^3 \\frac{\\partial^2}{\\partial X_\\alpha^2}\n+\\frac{1}{2} \\sum_{\\alpha,\\beta=1}^3 \\mu_{\\alpha\\beta} (\\mathcal{P}_\\alpha - \\Pi_\\alpha)(\\mathcal{P}_\\beta - \\Pi_\\beta) +U -\\frac{\\hbar^2}{2} \\sum_{s=1}^{3N-6} \\frac{\\partial^2}{\\partial q_s^2} + V .\n"
},
{
"math_id": 28,
"text": "\n\\mathbf{X} \\equiv \\frac{1}{M_\\mathrm{tot}} \\sum_{i=1}^N M_i \\mathbf{R}_i \\quad\\mathrm{with}\\quad\nM_\\mathrm{tot} \\equiv \\sum_{i=1}^N M_i.\n"
},
{
"math_id": 29,
"text": "\\mathcal{P}_\\alpha"
},
{
"math_id": 30,
"text": "\\Pi_\\alpha\\,"
},
{
"math_id": 31,
"text": "\\Pi_\\alpha = -i\\hbar \\sum_{s,t=1}^{3N-6} \\zeta^{\\alpha}_{st} \\; q_s \\frac{\\partial}{\\partial q_t}"
},
{
"math_id": 32,
"text": "\n\\zeta^{\\alpha}_{st} = \\sum_{i=1}^N \\sum_{\\beta,\\gamma=1}^3 \\epsilon_{\\alpha\\beta\\gamma}\nQ_{s, i\\beta}\\,Q_{t,i\\gamma} \\;\\; \\mathrm{and}\\quad\\alpha=1,2,3.\n"
},
{
"math_id": 33,
"text": "\\Pi_\\beta\\, "
},
{
"math_id": 34,
"text": "\\boldsymbol{\\mu}"
},
{
"math_id": 35,
"text": "U = -\\frac{1}{8} \\sum_{\\alpha=1}^3 \\mu_{\\alpha\\alpha}"
},
{
"math_id": 36,
"text": "q_s = \\sum_{i=1}^N \\sum_{\\alpha=1}^3 Q_{s, i\\alpha} \\rho_{i\\alpha}\\quad\\text{for}\\quad s=1,\\ldots, 3N-6."
},
{
"math_id": 37,
"text": "V \\approx \\frac{1}{2} \\sum_{s=1}^{3N-6} f_s q_s^2."
}
]
| https://en.wikipedia.org/wiki?curid=7148738 |
7149012 | Factorization system | In mathematics, it can be shown that every function can be written as the composite of a surjective function followed by an injective function. Factorization systems are a generalization of this situation in category theory.
Definition.
A factorization system ("E", "M") for a category C consists of two classes of morphisms "E" and "M" of C such that:
"Remark:" formula_9 is a morphism from formula_10 to formula_11 in the arrow category.
Orthogonality.
Two morphisms formula_12 and formula_13 are said to be "orthogonal", denoted formula_14, if for every pair of morphisms formula_3 and formula_4 such that formula_15 there is a unique morphism formula_8 such that the diagram
commutes. This notion can be extended to define the orthogonals of sets of morphisms by
formula_16 and formula_17
Since in a factorization system formula_18 contains all the isomorphisms, the condition (3) of the definition is equivalent to
(3') formula_19 and formula_20
"Proof:" In the previous diagram (3), take formula_21 (identity on the appropriate object) and formula_22.
Equivalent definition.
The pair formula_23 of classes of morphisms of C is a factorization system if and only if it satisfies the following conditions:
1. Every morphism "f" of C can be factored as formula_0 with formula_1 and formula_24
2. formula_25 and formula_26
Weak factorization systems.
Suppose "e" and "m" are two morphisms in a category C. Then "e" has the "left lifting property" with respect to "m" (respectively "m" has the "right lifting property" with respect to "e") when for every pair of morphisms "u" and "v" such that "ve" = "mu" there is a morphism "w" such that the following diagram commutes. The difference with orthogonality is that "w" is not necessarily unique.
A weak factorization system ("E", "M") for a category C consists of two classes of morphisms "E" and "M" of C such that:
This notion leads to a succinct definition of model categories: a model category is a pair consisting of a category C and classes of (so-called) weak equivalences "W", fibrations "F" and cofibrations "C" so that formula_27 and formula_28 are both weak factorization systems, and formula_29 satisfies the two-out-of-three property: if formula_30 and formula_31 are composable morphisms and two of formula_32 are weak equivalences, then so is the third.
A model category is a complete and cocomplete category equipped with a model structure. A map is called a trivial fibration if it belongs to formula_33 and it is called a trivial cofibration if it belongs to formula_34 An object formula_35 is called fibrant if the morphism formula_36 to the terminal object is a fibration, and it is called cofibrant if the morphism formula_37 from the initial object is a cofibration.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f=m\\circ e"
},
{
"math_id": 1,
"text": "e\\in E"
},
{
"math_id": 2,
"text": "m\\in M"
},
{
"math_id": 3,
"text": "u"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "vme=m'e'u"
},
{
"math_id": 6,
"text": "e, e'\\in E"
},
{
"math_id": 7,
"text": "m, m'\\in M"
},
{
"math_id": 8,
"text": "w"
},
{
"math_id": 9,
"text": "(u,v)"
},
{
"math_id": 10,
"text": "me"
},
{
"math_id": 11,
"text": "m'e'"
},
{
"math_id": 12,
"text": "e"
},
{
"math_id": 13,
"text": "m"
},
{
"math_id": 14,
"text": "e\\downarrow m"
},
{
"math_id": 15,
"text": "ve=mu"
},
{
"math_id": 16,
"text": "H^\\uparrow=\\{e\\quad|\\quad\\forall h\\in H, e\\downarrow h\\}"
},
{
"math_id": 17,
"text": "H^\\downarrow=\\{m\\quad|\\quad\\forall h\\in H, h\\downarrow m\\}."
},
{
"math_id": 18,
"text": "E\\cap M"
},
{
"math_id": 19,
"text": "E\\subseteq M^\\uparrow"
},
{
"math_id": 20,
"text": "M\\subseteq E^\\downarrow."
},
{
"math_id": 21,
"text": " m:= id ,\\ e' := id "
},
{
"math_id": 22,
"text": " m' := m "
},
{
"math_id": 23,
"text": "(E,M)"
},
{
"math_id": 24,
"text": "m\\in M."
},
{
"math_id": 25,
"text": "E=M^\\uparrow"
},
{
"math_id": 26,
"text": "M=E^\\downarrow."
},
{
"math_id": 27,
"text": "(C \\cap W, F)"
},
{
"math_id": 28,
"text": "(C, F \\cap W)"
},
{
"math_id": 29,
"text": "W"
},
{
"math_id": 30,
"text": "f"
},
{
"math_id": 31,
"text": "g"
},
{
"math_id": 32,
"text": "f,g,g\\circ f"
},
{
"math_id": 33,
"text": "F\\cap W,"
},
{
"math_id": 34,
"text": "C\\cap W."
},
{
"math_id": 35,
"text": "X"
},
{
"math_id": 36,
"text": "X\\rightarrow 1"
},
{
"math_id": 37,
"text": "0\\rightarrow X"
}
]
| https://en.wikipedia.org/wiki?curid=7149012 |
7149361 | Regular dodecahedron | Convex polyhedron with 12 regular pentagonal faces
A regular dodecahedron or pentagonal dodecahedron is a dodecahedron composed of regular pentagonal faces, three meeting at each vertex. It is an example of a Platonic solid, described by Plato in his dialogues as the shape the god used for arranging the constellations in the heavens, and it was used as part of the model of the Solar System proposed by Johannes Kepler. However, the regular dodecahedron, like the other Platonic solids, had already been described by other philosophers since antiquity.
The regular dodecahedron belongs to the family of truncated trapezohedra because it results from truncating the two axial vertices of a pentagonal trapezohedron. It is also a Goldberg polyhedron because it is the initial polyhedron from which new polyhedra can be constructed by the process of chamfering. It is related to the other Platonic solids; in particular, the regular icosahedron is its dual polyhedron. Other new polyhedra can be constructed by using the regular dodecahedron.
The regular dodecahedron's metric properties and construction are associated with the golden ratio. The regular dodecahedron appears in many areas of popular culture: the Roman dodecahedron, children's stories, toys, and painting, and it can also be found in nature and in supramolecules, as well as the shape of the universe. The skeleton of a regular dodecahedron can be represented as the dodecahedral graph, a Platonic graph. Its Hamiltonian property, a path that visits all of its vertices exactly once, is the basis of a toy called the icosian game.
As a Platonic solid.
The regular dodecahedron is a polyhedron with 12 pentagonal faces, 30 edges, and 20 vertices. It is one of the Platonic solids, a set of polyhedrons in which the faces are regular polygons that are congruent and the same number of faces meet at a vertex. This set of polyhedrons is named after Plato. In "Theaetetus", a dialogue of Plato, Plato hypothesized that the classical elements were made of the five uniform regular solids. Plato described the regular dodecahedron obscurely, remarking, "...the god used [it] for arranging the constellations on the whole heaven". Timaeus, as a personage of Plato's dialogue, associates the other four Platonic solids—regular tetrahedron, cube, regular octahedron, and regular icosahedron—with the four classical elements, adding that there is a fifth solid pattern which, though commonly associated with the regular dodecahedron, is never directly mentioned as such; "this God used in the delineation of the universe." Aristotle also postulated that the heavens were made of a fifth element, which he called aithêr ("aether" in Latin, "ether" in American English).
Following its attribution with nature by Plato, Johannes Kepler in his "Harmonices Mundi" sketched each of the Platonic solids, one of them is a regular dodecahedron. In his "Mysterium Cosmographicum", Kepler also proposed the Solar System by using the Platonic solids setting into another one and separating them with six spheres resembling the six planets. The ordered solids started from the innermost to the outermost: regular octahedron, regular icosahedron, regular dodecahedron, regular tetrahedron, and cube.
Many antiquity philosophers described the regular dodecahedron, including the rest of the Platonic solids. Theaetetus gave a mathematical description of all five and may have been responsible for the first known proof that no other convex regular polyhedra exist. Euclid completely mathematically described the Platonic solids in the "Elements", the last book (Book XIII) of which is devoted to their properties. Propositions 13–17 in Book XIII describe the construction of the tetrahedron, octahedron, cube, icosahedron, and dodecahedron in that order. For each solid, Euclid finds the ratio of the diameter of the circumscribed sphere to the edge length. In Proposition 18 he argues that there are no further convex regular polyhedra. Iamblichus states that Hippasus, a Pythagorean, perished in the sea, because he boasted that he first divulged "the sphere with the twelve pentagons".
Relation to the regular icosahedron.
The dual polyhedron of a dodecahedron is the regular icosahedron. One property of the dual polyhedron generally is that the original polyhedron and its dual share the same three-dimensional symmetry group. In the case of the regular dodecahedron, it has the same symmetry as the regular icosahedron, the icosahedral symmetry formula_0.
When a regular dodecahedron is inscribed in a sphere, it occupies more of the sphere's volume (66.49%) than an icosahedron inscribed in the same sphere (60.55%). The comparison of the two volumes originated in a problem posed by the ancient Greeks: determining which of two shapes has a larger volume, an icosahedron inscribed in a sphere or a dodecahedron inscribed in the same sphere. The problem was solved by Hero of Alexandria, Pappus of Alexandria, and Fibonacci, among others. Apollonius of Perga discovered the curious result that the ratio of volumes of these two shapes is the same as the ratio of their surface areas. Both volumes have formulas involving the golden ratio, but taken to different powers.
The golden rectangle is also related to both the regular icosahedron and the regular dodecahedron. The regular icosahedron can be constructed from three mutually perpendicular golden rectangles by connecting each of the golden rectangles' vertices with a segment. The 12 vertices of the regular icosahedron can be considered as the centers of the 12 faces of a regular dodecahedron.
Relation to the regular tetrahedron.
As two opposing tetrahedra can be inscribed in a cube, and five cubes can be inscribed in a dodecahedron, ten tetrahedra in five cubes can be inscribed in a dodecahedron: two opposing sets of five, with each set covering all 20 vertices and each vertex in two tetrahedra (one from each set, but not the opposing pair). As quoted by ,
<templatestyles src="Template:Blockquote/styles.css" />"Just as a tetrahedron can be inscribed in a cube, so a cube can be inscribed in a dodecahedron. By reciprocation, this leads to an octahedron circumscribed about an icosahedron. In fact, each of the twelve vertices of the icosahedron divides an edge of the octahedron according to the "golden section". Given the icosahedron, the circumscribed octahedron can be chosen in five ways, giving a compound of five octahedra, which comes under our definition of stellated icosahedron. (The reciprocal compound, of five cubes whose vertices belong to a dodecahedron, is a stellated triacontahedron.) Another stellated icosahedron can at once be deduced, by stellating each octahedron into a stella octangula, thus forming a compound of ten tetrahedra. Further, we can choose one tetrahedron from each stella octangula, so as to derive a compound of five tetrahedra, which still has all the rotation symmetry of the icosahedron (i.e. the icosahedral group), although it has lost the reflections. By reflecting this figure in any plane of symmetry of the icosahedron, we obtain the complementary set of five tetrahedra. These two sets of five tetrahedra are enantiomorphous, i.e. not directly congruent, but related like a pair of shoes. [Such] a figure which possesses no plane of symmetry (so that it is enantiomorphous to its mirror-image) is said to be "chiral"."
Configuration matrix.
The configuration matrix is a matrix in which the rows and columns correspond to the elements of a polyhedron: its vertices, edges, and faces. The diagonal entries give the number of each element that appears in the polyhedron, whereas the non-diagonal entries give the number of the column's elements that occur in or at the row's element. The regular dodecahedron can be represented by the following matrix:
formula_1
Relation to the golden ratio.
The golden ratio is the ratio between two quantities such that the ratio of the larger to the smaller equals the ratio of their sum to the larger of the two. It is one of two roots of a quadratic polynomial, expressed as formula_2. The golden ratio appears throughout the regular dodecahedron's metric properties, and can also be used to construct the regular dodecahedron.
The surface area formula_3 and the volume formula_4 of a regular dodecahedron of edge length formula_5 are:
formula_6
The following Cartesian coordinates define the 20 vertices of a regular dodecahedron centered at the origin and suitably scaled and oriented:
formula_7
If the edge length of a regular dodecahedron is formula_8, the radius of a circumscribed sphere formula_9 (one that touches the regular dodecahedron at all vertices), the radius of an inscribed sphere formula_10 (tangent to each of the regular dodecahedron's faces), and the midradius formula_11 (one that touches the middle of each edge) are:
formula_12
Note that, given a regular dodecahedron of edge length one, formula_9 is the radius of a circumscribing sphere about a cube of edge length formula_13, and formula_10 is the apothem of a regular pentagon of edge length formula_13.
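These formulas can be checked numerically. A minimal Python sketch for edge length 1 (the rounding is only for display):
import math

phi = (1 + math.sqrt(5)) / 2          # the golden ratio
a = 1.0                               # edge length

A = 15 * phi / math.sqrt(3 - phi) * a ** 2        # surface area
V = 5 * phi ** 3 / (6 - 2 * phi) * a ** 3         # volume
r_u = phi * math.sqrt(3) / 2 * a                  # circumradius
r_i = phi ** 2 / (2 * math.sqrt(3 - phi)) * a     # inradius
r_m = phi ** 2 / 2 * a                            # midradius

print(round(A, 3), round(V, 3), round(r_u, 3), round(r_i, 3), round(r_m, 3))
# approximately 20.646 7.663 1.401 1.114 1.309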
The dihedral angle of a regular dodecahedron between every two adjacent pentagonal faces is formula_14, approximately 116.565°.
Other related geometric objects.
The regular dodecahedron can be interpreted as a truncated trapezohedron, one of a family of polyhedra constructed by truncating the two axial vertices of a trapezohedron. Here, the regular dodecahedron is obtained by truncating the pentagonal trapezohedron.
The regular dodecahedron can also be interpreted as a Goldberg polyhedron, one of a family of polyhedra with hexagonal and pentagonal faces. Along with two other Platonic solids, the tetrahedron and the cube, the regular dodecahedron is a starting point of the Goldberg polyhedron construction: the next polyhedron results from truncating all of its edges, a process called chamfering. Repeating this process produces further Goldberg polyhedra, which belong to the first class of Goldberg polyhedra.
The stellations of the regular dodecahedron make up three of the four Kepler–Poinsot polyhedra. The first stellation of a regular dodecahedron is constructed by attaching pentagonal pyramids to its faces, forming a small stellated dodecahedron. The second stellation is formed by attaching wedges to the small stellated dodecahedron, forming a great dodecahedron. The third stellation is formed by attaching sharp triangular pyramids to the great dodecahedron, forming a great stellated dodecahedron.
Appearances.
In visual arts.
Regular dodecahedra have been used as dice and probably also as divinatory devices. During the Hellenistic era, small hollow bronze Roman dodecahedra were made and have been found in various Roman ruins in Europe. Their purpose is not certain.
In 20th-century art, dodecahedra appear in the work of M. C. Escher, such as his lithographs "Reptiles" (1943) and "Gravitation" (1952). In Salvador Dalí's painting "The Sacrament of the Last Supper" (1955), the room is a hollow regular dodecahedron. Gerard Caris based his entire artistic oeuvre on the regular dodecahedron and the pentagon, which he presented as a new art movement coined Pentagonism.
In toys and popular culture.
In modern role-playing games, the regular dodecahedron is often used as a twelve-sided die, one of the more common polyhedral dice. The Megaminx twisty puzzle, alongside its larger- and smaller-order analogues, is shaped like a regular dodecahedron.
In the children's novel "The Phantom Tollbooth", the regular dodecahedron appears as a character in the land of Mathematics. Each of its faces bears a different facial expression, which swivels to the front as required to match his mood.
In nature and supramolecules.
The fossil coccolithophore "Braarudosphaera bigelowii" (see figure), a unicellular coastal phytoplanktonic alga, has a calcium carbonate shell with a regular dodecahedral structure about 10 micrometers across.
Some quasicrystals and cages have dodecahedral shape (see figure). Some regular crystals such as garnet and diamond are also said to exhibit "dodecahedral" habit, but this statement actually refers to the rhombic dodecahedron shape.
Shape of the universe.
Various models have been proposed for the global geometry of the universe. These proposals include the Poincaré dodecahedral space, a positively curved space consisting of a regular dodecahedron whose opposite faces correspond (with a small twist). This was proposed by Jean-Pierre Luminet and colleagues in 2003, and an optimal orientation on the sky for the model was estimated in 2008.
In Bertrand Russell's 1954 short story "The Mathematician's Nightmare: The Vision of Professor Squarepunt", the number 5 said: "I am the number of fingers on a hand. I make pentagons and pentagrams. And but for me dodecahedra could not exist; and, as everyone knows, the universe is a dodecahedron. So, but for me, there could be no universe."
Dodecahedral graph.
According to Steinitz's theorem, a graph can be represented as the skeleton of a convex polyhedron (roughly speaking, a framework of the polyhedron) if and only if it has two properties: it is planar, meaning it can be drawn in the plane with no edges crossing, and it is 3-connected, meaning that it remains connected whenever any two of its vertices are removed. The skeleton of the regular dodecahedron, represented as a graph, is called the dodecahedral graph, a Platonic graph.
This graph can also be constructed as the generalized Petersen graph formula_15, where the vertices of a decagon are connected to those of two pentagons, one pentagon connected to odd vertices of the decagon and the other pentagon connected to the even vertices. Geometrically, this can be visualized as the 10-vertex equatorial belt of the dodecahedron connected to the two 5-vertex polar regions, one on each side.
The high degree of symmetry of the polyhedron is replicated in the properties of this graph, which is distance-transitive, distance-regular, and symmetric. The automorphism group has order 120. The vertices can be colored with 3 colors, as can the edges, and the diameter is 5.
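A minimal Python sketch that builds the dodecahedral graph as the generalized Petersen graph construction described in the previous paragraph and verifies some of the quoted properties (20 vertices, 30 edges, 3-regularity, diameter 5); the vertex labels are arbitrary:
from collections import deque

# Outer decagon u0..u9, inner vertices v0..v9 joined two steps apart, and spokes.
n, k = 10, 2
edges = set()
for i in range(n):
    edges.add(frozenset({("u", i), ("u", (i + 1) % n)}))   # outer cycle
    edges.add(frozenset({("v", i), ("v", (i + k) % n)}))   # two inner pentagons
    edges.add(frozenset({("u", i), ("v", i)}))             # spokes

adj = {}
for e in edges:
    a, b = tuple(e)
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def eccentricity(src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return max(dist.values())

assert len(adj) == 20 and len(edges) == 30
assert all(len(nbrs) == 3 for nbrs in adj.values())    # 3-regular
assert max(eccentricity(v) for v in adj) == 5          # diameter 5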
The dodecahedral graph is Hamiltonian, meaning it contains a cycle that visits all of its vertices exactly once. This property is named after William Rowan Hamilton, who invented a mathematical game known as the icosian game, whose object was to find a Hamiltonian cycle along the edges of a dodecahedron.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathrm{I}_\\mathrm{h} "
},
{
"math_id": 1,
"text": " \\begin{bmatrix}\n 20 & 3 & 3 \\\\\n 2 & 30 & 2 \\\\\n 5 & 5 & 12\n\\end{bmatrix} "
},
{
"math_id": 2,
"text": " \\phi = \\frac{1 + \\sqrt{5}}{2} \\approx 1.618 "
},
{
"math_id": 3,
"text": " A "
},
{
"math_id": 4,
"text": " V "
},
{
"math_id": 5,
"text": " a "
},
{
"math_id": 6,
"text": " A = \\frac{15\\phi}{\\sqrt{3 - \\phi}}a^2, \\qquad V = \\frac{5\\phi^3}{6-2\\phi}a^3. "
},
{
"math_id": 7,
"text": " \\begin{align}\n (\\pm 1, \\pm 1, \\pm 1), &\\qquad (0, \\pm \\phi, \\pm 1/\\phi), \\\\\n (\\pm 1/\\phi, 0, \\pm \\phi), &\\qquad (\\pm \\phi, \\pm 1/\\phi, 0).\n\\end{align} "
},
{
"math_id": 8,
"text": "a"
},
{
"math_id": 9,
"text": " r_u "
},
{
"math_id": 10,
"text": " r_i "
},
{
"math_id": 11,
"text": " r_m "
},
{
"math_id": 12,
"text": " \\begin{align}\n r_u &= \\frac{\\phi \\sqrt{3}}{2} a \\approx 1.401a, \\\\\n r_i &= \\frac{\\phi^2}{2 \\sqrt{3-\\phi}}a \\approx 1.114a, \\\\\n r_m &= \\frac{\\phi^2}{2}a \\approx 1.309a.\n\\end{align}"
},
{
"math_id": 13,
"text": " \\phi "
},
{
"math_id": 14,
"text": " 2 \\arctan (\\phi) "
},
{
"math_id": 15,
"text": " G(10,2) "
}
]
| https://en.wikipedia.org/wiki?curid=7149361 |
71495032 | Faltings' annihilator theorem | In abstract algebra (specifically commutative ring theory), Faltings' annihilator theorem states: given a finitely generated module "M" over a Noetherian commutative ring "A" and ideals "I", "J", the following are equivalent:
provided either "A" has a dualizing complex or is a quotient of a regular ring.
The theorem was first proved by Faltings in .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{depth} M_{\\mathfrak{p}} + \\operatorname{ht}(I + \\mathfrak{p})/\\mathfrak{p} \\ge n"
},
{
"math_id": 1,
"text": "\\mathfrak{p} \\in \\operatorname{Spec}(A) - V(J)"
},
{
"math_id": 2,
"text": "\\mathfrak b"
},
{
"math_id": 3,
"text": "\\mathfrak{b} \\supset J"
},
{
"math_id": 4,
"text": "\\operatorname{H}^i_I(M), 0 \\le i \\le n - 1"
}
]
| https://en.wikipedia.org/wiki?curid=71495032 |
71496340 | Wang algebra | Algebraic structure in network theory
In algebra and network theory, a Wang algebra is a commutative algebra formula_0, over a field or (more generally) a commutative unital ring, in which formula_0 has two additional properties:<br>(Rule i) For all elements "x" of formula_0, "x" + "x" = 0 (universal additive nilpotency of degree 1).<br>(Rule ii) For all elements "x" of formula_0, "x"⋅"x" = 0 (universal multiplicative nilpotency of degree 1).
History and applications.
Rules (i) and (ii) were originally published by K. T. Wang (Wang Ki-Tung, 王 季同) in 1934 as part of a method for analyzing electrical networks. From 1935 to 1940, several Chinese electrical engineering researchers published papers on the method. The original Wang algebra is the Grassmann algebra over the finite field mod 2. At the 57th annual meeting of the American Mathematical Society, held on December 27–29, 1950, Raoul Bott and Richard Duffin introduced the concept of a Wang algebra in their abstract (number 144"t") "The Wang algebra of networks". They gave an interpretation of the Wang algebra as a particular type of Grassmann algebra mod 2. In 1969 Wai-Kai Chen used the Wang algebra formulation to give a unification of several different techniques for generating the trees of a graph. The Wang algebra formulation has been used to systematically generate King-Altman directed graph patterns. Such patterns are useful in deriving rate equations in the theory of enzyme kinetics.
According to Guo Jinhai, professor in the Institute for the History of Natural Sciences of the Chinese Academy of Sciences, Wang Ki Tung's pioneering method of analyzing electrical networks significantly promoted electrical engineering not only in China but in the rest of the world; the Wang algebra formulation is useful in electrical networks for solving problems involving topological methods, graph theory, and Hamiltonian cycles.
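A minimal Python sketch of the spanning-tree procedure given in the numbered steps below, applied to a hypothetical example graph (a triangle with nodes 1, 2, 3 and edges labeled a, b, c); the helper name wang_product is illustrative:
edges = {"a": (1, 2), "b": (2, 3), "c": (1, 3)}

def wang_product(node_sums):
    # Expand a product of edge-label sums under the Wang algebra rules:
    # x*x = 0 removes terms with a repeated edge, and x + x = 0 keeps only
    # terms that occur an odd number of times.
    terms = {frozenset()}
    for labels in node_sums:
        counts = {}
        for t in terms:
            for e in labels:
                if e in t:                        # x*x = 0
                    continue
                s = t | {e}
                counts[s] = counts.get(s, 0) + 1
        terms = {s for s, c in counts.items() if c % 2 == 1}   # x + x = 0
    return terms

# Steps 1-2: sums of edge labels at every node except one (node 3 is omitted).
node_sums = [[e for e, (u, v) in edges.items() if node in (u, v)] for node in (1, 2)]
print(sorted(sorted(t) for t in wang_product(node_sums)))
# [['a', 'b'], ['a', 'c'], ['b', 'c']] -- the three spanning trees of the triangle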
The Wang algebra gives a procedure for listing the spanning trees of a connected graph in which each edge is labeled by a distinct algebra generator:
#For each node write the sum of all the edge-labels that meet that node.
#Leave out one node and take the product of the sums of labels for all the remaining nodes.
#Expand the product in 2. using the Wang algebra.
#The terms in the sum of the expansion obtained in 3. are in 1-1 correspondence with the spanning trees in the graph. | [
{
"math_id": 0,
"text": "A"
}
]
| https://en.wikipedia.org/wiki?curid=71496340 |
7149681 | Generalized dihedral group | Family of groups in mathematics
In mathematics, the generalized dihedral groups are a family of groups with algebraic structures similar to that of the dihedral groups. They include the finite dihedral groups, the infinite dihedral group, and the orthogonal group "O"(2). Dihedral groups play an important role in group theory, geometry, and chemistry.
Definition.
For any abelian group "H", the generalized dihedral group of "H", written Dih("H"), is the semidirect product of "H" and Z2, with Z2 acting on "H" by inverting elements. I.e., formula_0 with φ(0) the identity and φ(1) inversion.
Thus we get:
("h"1, 0) * ("h"2, "t"2) = ("h"1 + "h"2, "t"2)
("h"1, 1) * ("h"2, "t"2) = ("h"1 − "h"2, 1 + "t"2)
for all "h"1, "h"2 in "H" and "t"2 in Z2.
Note that ("h", 0) * (0,1) = ("h",1), i.e. first the inversion and then the operation in "H". Also (0, 1) * ("h", "t") = (−"h", 1 + "t"); indeed (0,1) inverts "h", and toggles "t" between "normal" (0) and "inverted" (1) (this combined operation is its own inverse).
The subgroup of Dih("H") of elements ("h", 0) is a normal subgroup of index 2, isomorphic to "H", while the elements ("h", 1) are all their own inverse.
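A minimal Python sketch of this operation for H = Z_n (here n = 5, giving the dihedral group of order 10); the parameter choice is illustrative:
n = 5

def op(a, b):
    # (h1, 0) * (h2, t2) = (h1 + h2, t2);  (h1, 1) * (h2, t2) = (h1 - h2, 1 + t2)
    h1, t1 = a
    h2, t2 = b
    h = (h1 + h2) % n if t1 == 0 else (h1 - h2) % n
    return (h, (t1 + t2) % 2)

# every element (h, 1) is its own inverse
assert all(op((h, 1), (h, 1)) == (0, 0) for h in range(n))
# the elements (h, 0) form a subgroup isomorphic to Z_n
assert all(op((a, 0), (b, 0)) == ((a + b) % n, 0) for a in range(n) for b in range(n))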
The conjugacy classes are:
Thus for every subgroup "M" of "H", the corresponding set of elements ("m",0) is also a normal subgroup. We have:
Dih("H") "/" "M" = Dih ( "H / M" )
Properties.
Dih("H") is Abelian, with the semidirect product a direct product, if and only if all elements of "H" are their own inverse, i.e., an elementary abelian 2-group:
etc.
Topology.
Dih(R"n" ) and its dihedral subgroups are disconnected topological groups. Dih(R"n" ) consists of two connected components: the identity component isomorphic to R"n", and the component with the reflections. Similarly O(2) consists of two connected components: the identity component isomorphic to the circle group, and the component with the reflections.
For the group Dih∞ we can distinguish two cases:
Both topological groups are totally disconnected, but in the first case the (singleton) components are open, while in the second case they are not. Also, the first topological group is a closed subgroup of Dih(R) but the second is not a closed subgroup of O(2).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Dih}(H) = H \\rtimes_\\phi Z_2"
},
{
"math_id": 1,
"text": "\\infty\\infty"
},
{
"math_id": 2,
"text": "\\infty"
}
]
| https://en.wikipedia.org/wiki?curid=7149681 |
7149734 | Divorce demography | Statistics on divorces by country/region
Estimates of annual divorces by country.
The following are the countries with the most annual divorces according to the United Nations in 2009.
Metrics / statistics.
Crude divorce rate.
This is divorces per 1,000 population per year. For example, if a city has 10,000 people living in it, and 30 couples divorce in one year, then the crude divorce rate for that year is 3 divorces per 1,000 residents.
formula_0
The crude divorce rate can give a general overview of marriage in an area, but it does not take people who cannot marry into account. For example, it would include in its sample young children, who are clearly not of marriageable age. In a place with large numbers of children or single adults, the crude divorce rate can seem low. In a place with few children and single adults, the crude divorce rate can seem high.
Refined divorce rate.
This measures the number of divorces per 1,000 women married to men, so that all unmarried persons are left out of the calculation. For example, if that same city of 10,000 people has 3,000 married women, and 30 couples divorce in one year, then the refined divorce rate is 10 divorces per 1,000 married women.
formula_1
Divorce-to-marriage ratio.
This compares the number of divorces in a given year to the number of marriages in that same year (the ratio of the crude divorce rate to the crude marriage rate). For example, if there are 500 divorces and 1,000 marriages in a given year in a given area, the ratio would be one divorce for every two marriages, e.g. a ratio of 0.50 (50%).
formula_2
However, this measurement compares two unlike populations – those who can marry and those who can divorce. Say there exists a community with 100,000 married couples, and very few people capable of marriage, for reasons such as age. If 1,000 people obtain divorces and 1,000 people get married in the same year, the ratio is one divorce for every marriage, which may lead people to think that the community's relationships are extremely unstable, despite the number of married people not changing. This is also true in reverse: a community with very many people of marriageable age may have 10,000 marriages and 1,000 divorces, leading people to believe that it has very stable relationships.
Furthermore, these two rates are not directly comparable since the marriage rate only examines the current year, while the divorce rate examines the outcomes of marriages for many previous years. This does not equate to the proportion of marriages in a given single-year cohort that will ultimately end in divorce. In any given year, underlying rates may change, and this can affect the ratio. For example, during an economic downturn, some couples might postpone a divorce because they can't afford to live separately. These individual choices could seem to temporarily improve the divorce-to-marriage ratio.
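A minimal Python sketch computing the three measures for the illustrative figures used above (the number of marriages is a hypothetical value chosen only to show the ratio):
population = 10_000
married_women = 3_000
divorces = 30
marriages = 60

crude_rate = divorces / population * 1000        # divorces per 1,000 residents
refined_rate = divorces / married_women * 1000   # divorces per 1,000 married women
divorce_to_marriage = divorces / marriages       # ratio of the two annual counts

print(crude_rate, refined_rate, divorce_to_marriage)   # 3.0 10.0 0.5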
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Crude Divorce Rate} = \\frac {\\text{Number of divorces}}{\\text{Population}} \\times 1000\n"
},
{
"math_id": 1,
"text": "\\text{Refined Divorce Rate} = \\frac {\\text{Number of divorces}}{\\text{Number of married women}} \\times 1000\n"
},
{
"math_id": 2,
"text": "\\text{Divorce-to-Marriage Ratio} = \\frac {\\text{Number of divorces}}{\\text{Number of marriages}}\n"
}
]
| https://en.wikipedia.org/wiki?curid=7149734 |
7149788 | Slice sampling | Algorithm
Slice sampling is a type of Markov chain Monte Carlo algorithm for pseudo-random number sampling, i.e. for drawing random samples from a statistical distribution. The method is based on the observation that to sample a random variable one can sample uniformly from the region under the graph of its density function.
Motivation.
Suppose you want to sample some random variable "X" with distribution "f"("x"). Suppose that the following is the graph of "f"("x"). The height of "f"("x") corresponds to the likelihood at that point.
If you were to uniformly sample "X", each value would have the same likelihood of being sampled, and your distribution would be of the form "f"("x") = "y" for some "y" value instead of some non-uniform function "f"("x"). Instead of the original black line, your new distribution would look more like the blue line.
In order to sample "X" in a manner which will retain the distribution "f"("x"), some sampling technique must be used which takes into account the varied likelihoods for each range of "f"("x").
Method.
Slice sampling, in its simplest form, samples uniformly from underneath the curve "f"("x") without the need to reject any points, as follows:
The motivation here is that one way to sample a point uniformly from within an arbitrary curve is first to draw thin uniform-height horizontal slices across the whole curve. Then, we can sample a point within the curve by randomly selecting a slice that falls at or below the curve at the x-position from the previous iteration, then randomly picking an x-position somewhere along the slice. By using the x-position from the previous iteration of the algorithm, in the long run we select slices with probabilities proportional to the lengths of their segments within the curve.
The most difficult part of this algorithm is finding the bounds of the horizontal slice, which involves inverting the function describing the distribution being sampled from. This is especially problematic for multi-modal distributions, where the slice may consist of multiple discontinuous parts. It is often possible to use a form of rejection sampling to overcome this, where we sample from a larger slice that is known to include the desired slice in question, and then discard points outside of the desired slice.
This algorithm can be used to sample from the area under "any" curve, regardless of whether the function integrates to 1. In fact, scaling a function by a constant has no effect on the sampled x-positions. This means that the algorithm can be used to sample from a distribution whose probability density function is only known up to a constant (i.e. whose normalizing constant is unknown), which is common in computational statistics.
Implementation.
Slice sampling gets its name from the first step: defining a "slice" by sampling from an auxiliary variable formula_0. This variable is sampled from formula_1, where formula_2 is either the probability density function (PDF) of "X" or is at least proportional to its PDF. This defines a slice of "X" where formula_3. In other words, we are now looking at a region of "X" where the probability density is at least formula_0. Then the next value of "X" is sampled uniformly from this slice. A new value of formula_0 is sampled, then "X", and so on. This can be visualized as alternatively sampling the y-position and then the x-position of points under PDF, thus the "X"s are from the desired distribution. The formula_0 values have no particular consequences or interpretations outside of their usefulness for the procedure.
If both the PDF and its inverse are available, and the distribution is unimodal, then finding the slice and sampling from it are simple. If not, a stepping-out procedure can be used to find a region whose endpoints fall outside the slice. Then, a sample can be drawn from the slice using rejection sampling. Various procedures for this are described in detail by Radford M. Neal.
Note that, in contrast to many available methods for generating random numbers from non-uniform distributions, random variates generated directly by this approach will exhibit serial statistical dependence. This is because to draw the next sample, we define the slice based on the value of "f"("x") for the current sample.
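A minimal Python sketch of a univariate slice sampler using the stepping-out and shrinkage procedures described above; the width parameter, iteration count, and target density are illustrative assumptions:
import math, random

def slice_sample(f, x0, w=1.0, n=1000):
    # f is an (unnormalized) density; w is an initial guess of the slice width.
    samples, x = [], x0
    for _ in range(n):
        y = random.uniform(0, f(x))          # vertical level defining the slice
        L = x - w * random.random()          # step out until both ends leave the slice
        R = L + w
        while f(L) > y:
            L -= w
        while f(R) > y:
            R += w
        while True:                          # shrink the interval on rejection
            x1 = random.uniform(L, R)
            if f(x1) > y:
                x = x1
                break
            if x1 < x:
                L = x1
            else:
                R = x1
        samples.append(x)
    return samples

# e.g. draws from an unnormalized standard normal density
draws = slice_sample(lambda t: math.exp(-t * t / 2), x0=0.0)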
Compared to other methods.
Slice sampling is a Markov chain method and as such serves the same purpose as Gibbs sampling and Metropolis. Unlike Metropolis, there is no need to manually tune the candidate function or candidate standard deviation.
Recall that Metropolis is sensitive to step size. If the step size is too small, the random walk causes slow decorrelation. If the step size is too large, there is great inefficiency due to a high rejection rate.
In contrast to Metropolis, slice sampling automatically adjusts the step size to match the local shape of the density function. Implementation is arguably easier and more efficient than Gibbs sampling or simple Metropolis updates.
Note that, in contrast to many available methods for generating random numbers from non-uniform distributions, random variates generated directly by this approach will exhibit serial statistical dependence. In other words, not all points have the same independent likelihood of selection. This is because to draw the next sample, we define the slice based on the value of f(x) for the current sample. However, the generated samples are Markovian, and are therefore expected to converge to the correct distribution in the long run.
Slice sampling requires that the distribution to be sampled be evaluable. One way to relax this requirement is to substitute an evaluable distribution which is proportional to the true unevaluable distribution.
Univariate case.
To sample a random variable "X" with density "f"("x") we introduce an auxiliary variable "Y" and iterate as follows:
Our auxiliary variable "Y" represents a horizontal "slice" of the distribution. The rest of each iteration is dedicated to sampling an "x" value from the slice which is representative of the density of the region being considered.
In practice, sampling from a horizontal slice of a multimodal distribution is difficult. There is a tension between obtaining a large sampling region and thereby making possible large moves in the distribution space, and obtaining a simpler sampling region to increase efficiency. One option for simplifying this process is regional expansion and contraction.
Slice-within-Gibbs sampling.
In a Gibbs sampler, one needs to draw efficiently from all the full-conditional distributions. When sampling from a full-conditional density is not easy, a single iteration of slice sampling or the Metropolis-Hastings algorithm can be used within-Gibbs to sample from the variable in question. If the full-conditional density is log-concave, a more efficient alternative is the application of adaptive rejection sampling (ARS) methods. When the ARS techniques cannot be applied (since the full-conditional is non-log-concave), the adaptive rejection Metropolis sampling algorithms are often employed.
Multivariate methods.
Treating each variable independently.
Single variable slice sampling can be used in the multivariate case by sampling each variable in turn repeatedly, as in Gibbs sampling. To do so requires that we can compute, for each component formula_5 a function that is proportional to formula_6.
To prevent random walk behavior, overrelaxation methods can be used to update each variable in turn. Overrelaxation chooses a new value on the opposite side of the mode from the current value, as opposed to choosing a new independent value from the distribution as done in Gibbs.
Hyperrectangle slice sampling.
This method adapts the univariate algorithm to the multivariate case by substituting a hyperrectangle for the one-dimensional "w" region used in the original. The hyperrectangle "H" is initialized to a random position over the slice. "H" is then shrunk as points from it are rejected.
Reflective slice sampling.
Reflective slice sampling is a technique to suppress random walk behavior in which the successive candidate samples of distribution "f"("x") are kept within the bounds of the slice by "reflecting" the direction of sampling inward toward the slice once the boundary has been hit.
In this graphical representation of reflective sampling, the shape indicates the bounds of a sampling slice. The dots indicate start and stopping points of a sampling walk. When the samples hit the bounds of the slice, the direction of sampling is "reflected" back into the slice.
Example.
Consider a single variable example. Suppose our true distribution is a normal distribution with mean 0 and standard deviation 3, formula_7. So:
formula_8. The peak of the distribution is obviously at formula_9, at which point formula_10.
If we're interested in the peak of the distribution, we can keep repeating this process since the new point corresponds to a higher "f"("x") than the original point.
Another example.
To sample from the normal distribution formula_11 we first choose an initial "x"—say 0. After each sample of "x" we choose "y" uniformly at random from formula_12, whose upper endpoint is the value of the pdf of formula_11 at "x". After each "y" sample we choose "x" uniformly at random from formula_13 where formula_14. This is the slice where formula_15.
An implementation in the Macsyma language is:
slice(x) := block([y, alpha],
y:random(exp(-x^2 / 2.0) / sqrt(2.0 * dfloat(%pi))),
alpha:sqrt(-2.0 * ln(y * sqrt(2.0 * dfloat(%pi)))),
x:signum(random()) * random(alpha)); | [
{
"math_id": 0,
"text": "Y"
},
{
"math_id": 1,
"text": "[0, f(x)]"
},
{
"math_id": 2,
"text": "f(x)"
},
{
"math_id": 3,
"text": "f(x) \\ge Y"
},
{
"math_id": 4,
"text": "f^{-1}[y, +\\infty)"
},
{
"math_id": 5,
"text": "x_i"
},
{
"math_id": 6,
"text": "p(x_i|x_0...x_n)"
},
{
"math_id": 7,
"text": "g(x)\\sim N(0,3^2)"
},
{
"math_id": 8,
"text": "f(x) = \\frac{1}{\\sqrt{2\\pi \\cdot3^2}} \\ e^{ -\\frac{(x-0)^2}{2 \\cdot 3^2} }"
},
{
"math_id": 9,
"text": "x = 0"
},
{
"math_id": 10,
"text": "f(x)\\approx0.1330"
},
{
"math_id": 11,
"text": "N(0,1)"
},
{
"math_id": 12,
"text": "(0, e^{-x^2/2}/\\sqrt{2\\pi}]"
},
{
"math_id": 13,
"text": "[-\\alpha, \\alpha]"
},
{
"math_id": 14,
"text": "\\alpha = \\sqrt{-2\\ln(y\\sqrt{2\\pi})}"
},
{
"math_id": 15,
"text": "f(x) > y"
}
]
| https://en.wikipedia.org/wiki?curid=7149788 |
71499116 | Laakso space | Type of mathematical fractal space
In mathematical analysis and metric geometry, Laakso spaces are a class of metric spaces which are fractal, in the sense that they have non-integer Hausdorff dimension, but that admit a notion of differential calculus. They are constructed as quotient spaces of [0, 1] × "K" where "K" is a Cantor set.
Background.
Cheeger defined a notion of differentiability for real-valued functions on metric measure spaces which are doubling and satisfy a Poincaré inequality, generalizing the usual notion on Euclidean space and Riemannian manifolds. Spaces that satisfy these conditions include Carnot groups and other sub-Riemannian manifolds, but not classic fractals such as the Koch snowflake or the Sierpiński gasket. The question therefore arose whether spaces of fractional Hausdorff dimension can satisfy a Poincaré inequality. Bourdon and Pajot were the first to construct such spaces. Tomi J. Laakso gave a different construction which gave spaces with Hausdorff dimension any real number greater than 1. These examples are now known as Laakso spaces.
Construction.
We describe a space formula_0 with Hausdorff dimension formula_1. (For integer dimensions, Euclidean spaces satisfy the desired condition, and for any Hausdorff dimension "S" + "r" in the interval ("S", "S" + 1), where "S" is an integer, we can take the space formula_2.) Let "t" ∈ (0, 1/2) be such that
formula_3
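A minimal Python sketch recovering the Cantor parameter t from a target dimension Q using this relation (Q = 1.5 is an illustrative choice):
import math

Q = 1.5
t = 2 ** (-1 / (Q - 1))                       # invert Q = 1 + ln 2 / ln(1/t)
assert 0 < t < 0.5
assert abs(1 + math.log(2) / math.log(1 / t) - Q) < 1e-12
print(t)                                      # 0.25: cut out the middle half each time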
Then define "K" to be the Cantor set obtained by cutting out the middle 1 - 2"t" portion of an interval and iterating that construction. In other words, "K" can be defined as the subset of [0, 1] containing 0 and 1 and satisfying
formula_4
The space formula_0 will be a quotient of "I" × "K", where "I" is the unit interval and "I" × "K" is given the metric induced from ℝ2.
To save on notation, we now assume that "t" = 1/3, so that "K" is the usual middle thirds Cantor set. The general construction is similar but more complicated. Recall that the middle thirds Cantor set consists of all points in [0, 1] whose ternary expansion consists of only 0's and 2's. Given a string a of 0's and 2's, let "K""a" be the subset of points of "K" consisting of points whose ternary expansion starts with a. For example,
formula_5
Now let "b" = "u"/3"k" be a fraction in lowest terms. For every string "a" of 0's and 2's of length "k" - 1, and for every point "x" ∈ "K""a"0, we identify ("b", "x") with the point ("b", "x" + 2/3"k") ∈ {"b"} × "K""a"2.
We give the resulting quotient space the quotient metric:
formula_6
where each "q""i" is identified with "p""i"+1 and the infimum is taken over all finite sequences of this form.
In the general case, the numbers "b" (called "wormhole levels") and their orders "k" are defined in a more complicated way so as to obtain a space with the right Hausdorff dimension, but the basic idea is the same.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_Q"
},
{
"math_id": 1,
"text": "Q \\in (1,2)"
},
{
"math_id": 2,
"text": "\\mathbb{R}^{S-1} \\times F_{r+1}"
},
{
"math_id": 3,
"text": "Q=1+\\frac{\\ln 2}{\\ln(1/t)}."
},
{
"math_id": 4,
"text": "K=tK \\cup (1-t+tK)."
},
{
"math_id": 5,
"text": "K_{2022}=\\frac{2}{3}+\\frac{2}{27}+\\frac{2}{81}+\\frac{1}{81}K."
},
{
"math_id": 6,
"text": "d_{F_Q}(p,q) = \\inf(d_{I \\times K}(p,q_1)+d_{I \\times K}(p_2,q_2)+\\cdots+d_{I \\times K}(p_{n-1},q_{n-1})+d_{I \\times K}(p_n,q)),"
}
]
| https://en.wikipedia.org/wiki?curid=71499116 |
71505063 | Dual snub 24-cell | In geometry, the dual snub 24-cell is a 144 vertex convex 4-polytope composed of 96 irregular cells. Each cell has faces of two kinds: 3 kites and 6 isosceles triangles. The polytope has a total of 432 faces (144 kites and 288 isosceles triangles) and 480 edges.
Geometry.
The dual snub 24-cell, first described by Koca et al. in 2011, is the dual polytope of the snub 24-cell, a semiregular polytope first described by Thorold Gosset in 1900.
Construction.
The vertices of a dual snub 24-cell are obtained using quaternion simple roots (T') in the generation of the 600 vertices of the 120-cell. The following describe formula_0 and formula_1 24-cells as quaternion orbit weights of D4 under the Weyl group W(D4):
O(1000) : V1
O(0010) : V2
O(0001) : V3
With quaternions formula_2 where formula_3 is the conjugate of formula_4 and formula_5 and formula_6, then the Coxeter group formula_7 is the symmetry group of the 600-cell and the 120-cell of order 14400.
Given formula_8 such that formula_9 and formula_10 as an exchange of formula_11 within formula_4 where formula_12 is the golden ratio, we can construct:
and finally the dual snub 24-cell can then be defined as the orbits of formula_17.
Dual.
The dual polytope of this polytope is the Snub 24-cell.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "T'"
},
{
"math_id": 2,
"text": "(p,q)"
},
{
"math_id": 3,
"text": "\\bar p"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "[p,q]:r\\rightarrow r'=prq"
},
{
"math_id": 6,
"text": "[p,q]^*:r\\rightarrow r''=p\\bar rq"
},
{
"math_id": 7,
"text": "W(H_4)=\\lbrace[p,\\bar p] \\oplus [p,\\bar p]^*\\rbrace "
},
{
"math_id": 8,
"text": "p \\in T"
},
{
"math_id": 9,
"text": "\\bar p=\\pm p^4, \\bar p^2=\\pm p^3, \\bar p^3=\\pm p^2, \\bar p^4=\\pm p"
},
{
"math_id": 10,
"text": "p^\\dagger"
},
{
"math_id": 11,
"text": "-1/\\phi \\leftrightarrow \\phi"
},
{
"math_id": 12,
"text": "\\phi=\\frac{1+\\sqrt{5}}{2}"
},
{
"math_id": 13,
"text": "S=\\sum_{i=1}^4\\oplus p^i T"
},
{
"math_id": 14,
"text": "I=T+S=\\sum_{i=0}^4\\oplus p^i T"
},
{
"math_id": 15,
"text": "J=\\sum_{i,j=0}^4\\oplus p^i\\bar p^{\\dagger j}T'"
},
{
"math_id": 16,
"text": "S'=\\sum_{i=1}^4\\oplus p^i\\bar p^{\\dagger i}T'"
},
{
"math_id": 17,
"text": "T \\oplus T' \\oplus S'"
}
]
| https://en.wikipedia.org/wiki?curid=71505063 |
71506875 | Goodman's conjecture | Goodman's conjecture on the coefficients of multivalent functions was proposed in complex analysis in 1948 by Adolph Winkler Goodman, an American mathematician.
Formulation.
Let formula_0 be a formula_1-valent function. The conjecture claims the following coefficients hold:
formula_2
Partial results.
It's known that when formula_3, the conjecture is true for functions of the form formula_4 where formula_5 is a polynomial and formula_6 is univalent. | [
{
"math_id": 0,
"text": "f(z)= \\sum_{n=1}^{\\infty}{b_n z^n}"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "|b_n| \\le \\sum_{k=1}^{p} \\frac{2k(n+p)!}{(p-k)!(p+k)!(n-p-1)!(n^2-k^2)}|b_k|"
},
{
"math_id": 3,
"text": "p=2,3"
},
{
"math_id": 4,
"text": "P \\circ \\phi"
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "\\phi"
}
]
| https://en.wikipedia.org/wiki?curid=71506875 |
71507666 | Path space (algebraic topology) | In algebraic topology, a branch of mathematics, the based path space formula_0 of a pointed space formula_1 is the space that consists of all maps formula_2 from the interval formula_3 to "X" such that formula_4, called based paths. In other words, it is the mapping space from formula_5 to formula_1.
A space formula_6 of all maps from formula_7 to "X", with no distinguished point for the start of the paths, is called the free path space of "X." The maps from formula_7 to "X" are called free paths. The path space formula_0 is then the pullback of formula_8 along formula_9.
The natural map formula_10 is a fibration called the path space fibration.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "PX"
},
{
"math_id": 1,
"text": "(X, *)"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "I = [0, 1]"
},
{
"math_id": 4,
"text": "f(0) = *"
},
{
"math_id": 5,
"text": "(I, 0)"
},
{
"math_id": 6,
"text": "X^I"
},
{
"math_id": 7,
"text": "I"
},
{
"math_id": 8,
"text": "X^I \\to X, \\, \\chi \\mapsto \\chi(0)"
},
{
"math_id": 9,
"text": "* \\hookrightarrow X"
},
{
"math_id": 10,
"text": "PX \\to X, \\, \\chi \\to \\chi(1)"
}
]
| https://en.wikipedia.org/wiki?curid=71507666 |
71508609 | Category of compactly generated weak Hausdorff spaces | In mathematics, the category of compactly generated weak Hausdorff spaces, CGWH, is a category used in algebraic topology as an alternative to the category of topological spaces, Top, as the latter lacks some properties that are common in practice and often convenient to use in proofs. There is also such a category for the CGWH analog of pointed topological spaces, defined by requiring maps to preserve base points.
The articles compactly generated space and weak Hausdorff space define the respective topological properties. For the historical motivation behind these conditions on spaces, see Compactly generated space#Motivation. This article focuses on the properties of the category.
Properties.
CGWH has the following properties:
that is natural in "X", "Y", and "Z". In short, the category is Cartesian closed in an enriched sense.
that is natural in formula_3, formula_4, and formula_6.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{Map}(X, Y)"
},
{
"math_id": 1,
"text": "Y^X"
},
{
"math_id": 2,
"text": "\\operatorname{Map}(X \\times Y, Z) \\simeq \\operatorname{Map}(X, \\operatorname{Map}(Y, Z))"
},
{
"math_id": 3,
"text": "(X, *)"
},
{
"math_id": 4,
"text": "(Y, \\circ)"
},
{
"math_id": 5,
"text": "\\operatorname{Map}( (X, *), (Y, \\circ))"
},
{
"math_id": 6,
"text": "(Z, \\star)"
},
{
"math_id": 7,
"text": "\\operatorname{Map}((X, *) \\wedge (Y, \\circ), (Z, \\star)) \\simeq \\operatorname{Map}((X, *), \\operatorname{Map}((Y, \\circ), (Z, \\star)))"
}
]
| https://en.wikipedia.org/wiki?curid=71508609 |
7151375 | Space-oblique Mercator projection | Map projection
Space-oblique Mercator projection is a map projection devised in the 1970s for preparing maps from Earth-survey satellite data. It is a generalization of the oblique Mercator projection that incorporates the time evolution of a given satellite ground track to optimize its representation on the map. The oblique Mercator projection, on the other hand, optimizes for a given geodesic.
History.
The space-oblique Mercator projection (SOM) was developed by John P. Snyder, Alden Partridge Colvocoresses and John L. Junkins in 1976. Snyder had an interest in maps dating back to his childhood; he regularly attended cartography conferences whilst on vacation. In 1972, the United States Geological Survey (USGS) needed to develop a system for reducing the amount of distortion caused when satellite pictures of the ellipsoidal Earth were printed on a flat page. Colvocoresses, the head of the USGS's national mapping program, asked attendees of a geodetic sciences conference for help solving the projection problem in 1976. Snyder worked on the problem with his newly purchased pocket calculator and devised the mathematical formulas needed to solve it. After having his calculations reviewed by Waldo Tobler, Snyder submitted them to the USGS at no charge. Impressed with his work, USGS officials offered Snyder a job, and he promptly accepted. His formulas were then used to produce maps from Landsat 4, which launched in the summer of 1978.
Projection description.
The space-oblique Mercator projection provides continual, nearly conformal mapping of the swath sensed by a satellite. Scale is true along the ground track, varying 0.01 percent within the normal sensing range of the satellite. Conformality is correct within a few parts per million for the sensing range. Distortion is essentially constant along lines of constant distance parallel to the ground track. The space-oblique Mercator is the only projection which takes the rotation of Earth into account.
Equations.
The forward equations for the Space-oblique Mercator projection for the sphere are as follows:
formula_0
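A minimal Python sketch evaluating these forward equations for given transformed coordinates (λ′, φ′); the conversion from geodetic (φ, λ) to (λ′, φ′) through the implicit relations above is omitted, and all parameter values in the example call are illustrative assumptions:
import numpy as np
from scipy.integrate import quad

def som_xy(lam_p, phi_p, p2_over_p1, inc, R=1.0):
    # S and H as defined above; S depends on the integration variable.
    S = lambda lp: p2_over_p1 * np.sin(inc) * np.cos(lp)
    H = 1.0 - p2_over_p1 * np.cos(inc)
    Sl = S(lam_p)
    lntan = np.log(np.tan(np.pi / 4 + phi_p / 2))
    ix, _ = quad(lambda lp: (H - S(lp) ** 2) / np.sqrt(1 + S(lp) ** 2), 0.0, lam_p)
    iy, _ = quad(lambda lp: S(lp) / np.sqrt(1 + S(lp) ** 2), 0.0, lam_p)
    x = R * (ix - Sl / np.sqrt(1 + Sl ** 2) * lntan)
    y = R * ((H + 1.0) * iy + lntan / np.sqrt(1 + Sl ** 2))
    return x, y

# illustrative near-polar satellite parameters: period ratio ~0.072, inclination ~99 deg
print(som_xy(0.3, 0.2, 0.072, np.radians(99.0)))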
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\n\\frac{x}{R} &= \\int_{0}^{\\lambda'} \\frac{H-S^2}{\\sqrt{1+S^2}}d\\lambda' - \\frac{S}{\\sqrt{1+S^2}}\\ln\\tan\\left(\\frac{\\pi}{4}+\\frac{\\varphi'}{2}\\right) \\\\\n\\frac{y}{R} &= \\left(H+1\\right) \\int_{0}^{\\lambda'} \\frac{S}{\\sqrt{1+S^2}}d\\lambda' + \\frac{1}{\\sqrt{1+S^2}}\\ln\\tan\\left(\\frac{\\pi}{4}+\\frac{\\varphi'}{2}\\right) \\\\\nS &= \\tfrac{P_{2}}{P_{1}} \\sin i \\cos \\lambda' \\\\\nH &= 1 - \\tfrac{P_{2}}{P_{1}} \\cos i \\\\\n\\tan\\lambda' &= \\cos i \\tan \\lambda_{t} + \\frac{\\sin i \\tan \\varphi }{ \\cos \\lambda_{t}} \\\\\n\\sin\\varphi' &= \\cos i \\sin \\varphi - \\sin i \\cos \\varphi \\sin \\lambda_{t} \\\\\n\\lambda_{t} &= \\lambda + \\tfrac{P_{2}}{P_{1}} \\lambda'. \\\\\n\\varphi &= \\text{geodetic (or geographic) latitude.} \\\\\n\\lambda &= \\text{geodetic (or geographic) longitude.} \\\\\nP_{2} &= \\text{time required for revolution of satellite.} \\\\\nP_{1} &= \\text{length of Earth rotation.} \\\\\ni &= \\text{angle of inclination.} \\\\\nR &= \\text{radius of Earth.} \\\\\nx,y &= \\text{rectangular map coordinates.}\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=7151375 |
715186 | Piling-up lemma | Principle used in linear cryptanalysis
In cryptanalysis, the piling-up lemma is a principle used in linear cryptanalysis to construct linear approximations to the action of block ciphers. It was introduced by Mitsuru Matsui (1993) as an analytical tool for linear cryptanalysis. The lemma states that the bias (deviation of the expected value from 1/2) of a linear Boolean function (XOR-clause) of independent binary random variables is related to the product of the input biases:
formula_0
or
formula_1
where formula_2 is the bias (towards zero) and formula_3 the "imbalance":
formula_4
formula_5.
Conversely, if the lemma does not hold, then the input variables are not independent.
Interpretation.
The lemma implies that XOR-ing independent binary variables always reduces the bias (or at least does not increase it); moreover, the output is unbiased if and only if there is at least one unbiased input variable.
Note that for two variables the quantity formula_6 is a correlation measure of formula_7 and formula_8, equal to formula_9; formula_10 can be interpreted as the correlation of formula_7 with formula_11.
Expected value formulation.
The piling-up lemma can be expressed more naturally when the random variables take values in formula_12. If we introduce variables formula_13 (mapping 0 to 1 and 1 to -1) then, by inspection, the XOR-operation transforms to a product:
formula_14
and since the expected values are the imbalances, formula_15, the lemma now states:
formula_16
which is a known property of the expected value for independent variables.
For dependent variables the above formulation gains a (positive or negative) covariance term, thus the lemma does not hold. In fact, since two Bernoulli variables are independent if and only if they are uncorrelated (i.e. have zero covariance; see uncorrelatedness), we have the converse of the piling up lemma: if it does not hold, the variables are not independent (uncorrelated).
Boolean derivation.
The piling-up lemma allows the cryptanalyst to determine the probability that the equality:
formula_17
holds, where the "X"'s are binary variables (that is, bits: either 0 or 1).
Let "P"(A) denote "the probability that A is true". If it equals one, A is certain to happen, and if it equals zero, A cannot happen. First of all, we consider the piling-up lemma for two binary variables, where formula_18 and formula_19.
Now, we consider:
formula_20
Due to the properties of the xor operation, this is equivalent to
formula_21
"X"1 = "X"2 = 0 and "X"1 = "X"2 = 1 are mutually exclusive events, so we can say
formula_22
Now, we must make the central assumption of the piling-up lemma: the binary variables we are dealing with are independent; that is, the state of one has no effect on the state of any of the others. Thus we can expand the probability function as follows:
Now we express the probabilities "p"1 and "p"2 as 1/2 + ε1 and 1/2 + ε2, where the ε's are the probability biases — the amount the probability deviates from 1/2.
Thus the probability bias ε1,2 for the XOR sum above is 2ε1ε2.
This formula can be extended to more "X"'s as follows:
formula_23
Note that if any of the ε's is zero (that is, one of the binary variables is unbiased), the entire probability function will be unbiased — equal to 1/2.
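A minimal Python sketch checking the formula by Monte Carlo for three independent bits with illustrative biases:
import random

eps = [0.2, 0.1, 0.15]                  # biases: P(X_i = 0) = 1/2 + eps_i
trials = 200_000
bit = lambda e: 0 if random.random() < 0.5 + e else 1

zeros = sum(
    1 for _ in range(trials)
    if bit(eps[0]) ^ bit(eps[1]) ^ bit(eps[2]) == 0
)
empirical = zeros / trials - 0.5
predicted = 2 ** (len(eps) - 1) * eps[0] * eps[1] * eps[2]
print(empirical, predicted)             # both close to 0.012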
A related slightly different definition of the bias is
formula_24
in fact minus two times the previous value. The advantage is that now with
formula_25
we have
formula_26
adding random variables amounts to multiplying their (2nd definition) biases.
Practice.
In practice, the "X"s are approximations to the S-boxes (substitution components) of block ciphers. Typically, "X" values are inputs to the S-box and "Y" values are the corresponding outputs. By simply looking at the S-boxes, the cryptanalyst can tell what the probability biases are. The trick is to find combinations of input and output values that have probabilities of zero or one. The closer the approximation is to zero or one, the more helpful the approximation is in linear cryptanalysis.
However, in practice, the binary variables are not independent, as is assumed in the derivation of the piling-up lemma. This consideration has to be kept in mind when applying the lemma; it is not an automatic cryptanalysis formula.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\epsilon(X_1\\oplus X_2\\oplus\\cdots\\oplus X_n)=2^{n-1}\\prod_{i=1}^n \\epsilon(X_i)"
},
{
"math_id": 1,
"text": "I(X_1\\oplus X_2\\oplus\\cdots\\oplus X_n ) =\\prod_{i=1}^n I(X_i)"
},
{
"math_id": 2,
"text": "\\epsilon \\in [-\\tfrac{1}{2}, \\tfrac{1}{2}]"
},
{
"math_id": 3,
"text": "I \\in [-1, 1]"
},
{
"math_id": 4,
"text": "\\epsilon(X) = P(X=0) - \\frac{1}{2}"
},
{
"math_id": 5,
"text": "I(X) = P(X=0) - P(X=1) = 2 \\epsilon(X)"
},
{
"math_id": 6,
"text": "I(X \\oplus Y)"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "Y"
},
{
"math_id": 9,
"text": "P(X=Y)-P(X\\ne Y)"
},
{
"math_id": 10,
"text": "I(X)"
},
{
"math_id": 11,
"text": "0"
},
{
"math_id": 12,
"text": "\\{-1,1\\}"
},
{
"math_id": 13,
"text": "\\chi_i = 1 - 2X_i = (-1)^{X_i}"
},
{
"math_id": 14,
"text": "\\chi_1\\chi_2\\cdots\\chi_n = 1 - 2(X_1 \\oplus X_2\\oplus\\cdots\\oplus X_n) = (-1)^{X_1 \\oplus X_2\\oplus\\cdots\\oplus X_n}"
},
{
"math_id": 15,
"text": "E(\\chi_i)=I(X_i)"
},
{
"math_id": 16,
"text": "E\\left(\\prod_{i=1}^n \\chi_i \\right)=\\prod_{i=1}^nE(\\chi_i)"
},
{
"math_id": 17,
"text": "X_1\\oplus X_2\\oplus\\cdots\\oplus X_n=0"
},
{
"math_id": 18,
"text": "P(X_1 = 0)=p_1"
},
{
"math_id": 19,
"text": "P(X_2 = 0)=p_2"
},
{
"math_id": 20,
"text": "P(X_1 \\oplus X_2 = 0)"
},
{
"math_id": 21,
"text": "P(X_1=X_2)"
},
{
"math_id": 22,
"text": "P(X_1=X_2)=P(X_1=X_2=0) + P(X_1=X_2=1)=P(X_1=0, X_2=0) + P(X_1=1, X_2=1)"
},
{
"math_id": 23,
"text": "P(X_1\\oplus X_2\\oplus\\cdots\\oplus X_n=0)=1/2+2^{n-1}\\prod_{i=1}^n \\epsilon_i"
},
{
"math_id": 24,
"text": " \\epsilon_i = P(X_i=1) - P(X_i=0),"
},
{
"math_id": 25,
"text": "\\varepsilon_{total}= P(X_1\\oplus X_2\\oplus\\cdots\\oplus X_n=1)- P(X_1\\oplus X_2\\oplus\\cdots\\oplus X_n=0)"
},
{
"math_id": 26,
"text": "\\varepsilon_{total}=(-1)^{n+1}\\prod_{i=1}^n \\varepsilon_i,"
}
]
| https://en.wikipedia.org/wiki?curid=715186 |
71519050 | Le Potier's vanishing theorem | Generalizes the Kodaira vanishing theorem for ample vector bundle
In algebraic geometry, Le Potier's vanishing theorem is an extension of the Kodaira vanishing theorem, on vector bundles. The theorem states the following
<templatestyles src="Template:Blockquote/styles.css" />: Let X be an "n"-dimensional compact complex manifold and E a holomorphic vector bundle of rank r over X; here formula_0 is the Dolbeault cohomology group, where formula_1 denotes the sheaf of holomorphic "p"-forms on "X". If E is ample, then
formula_2 for formula_3.
From the Dolbeault theorem, this is equivalent to
formula_4 for formula_3.
By Serre duality, these statements are equivalent to the assertions:
formula_5 for formula_6.
In the case r = 1, with E an ample (or positive) line bundle on X, this theorem is equivalent to the Nakano vanishing theorem. Another proof was also found.
Le Potier's vanishing theorem generalizes to k-ample vector bundles; the statement is as follows:
<templatestyles src="Template:Blockquote/styles.css" /> Le Potier–Sommese vanishing theorem: Let X be an "n"-dimensional algebraic manifold and E a k-ample holomorphic vector bundle of rank r over X; then
formula_2 for formula_7.
A counterexample is known to the following conjecture:
<templatestyles src="Template:Blockquote/styles.css" />Conjecture: Let X be an "n"-dimensional compact complex manifold and E a holomorphic vector bundle of rank r over X. If E is ample, then
formula_8 for formula_9. The conjecture is false for formula_10 | [
{
"math_id": 0,
"text": "H^{p,q}(X,E)"
},
{
"math_id": 1,
"text": "\\Omega ^{p}_{X}"
},
{
"math_id": 2,
"text": " H^{p,q}(X, E) = 0"
},
{
"math_id": 3,
"text": "p + q \\geq n + r"
},
{
"math_id": 4,
"text": "H^{q}(X, \\Omega ^{p}_{X} \\otimes E ) = 0"
},
{
"math_id": 5,
"text": "H^{i}(X, \\Omega ^{j}_{X} \\otimes E^* ) = 0"
},
{
"math_id": 6,
"text": "j + i \\leq n - r"
},
{
"math_id": 7,
"text": "p + q \\geq n + r + k"
},
{
"math_id": 8,
"text": "H^{p,q}(X, \\Lambda^a E ) = 0"
},
{
"math_id": 9,
"text": "p + q \\geq n + r - a + 1"
},
{
"math_id": 10,
"text": "n=2r \\geq 6 ."
}
]
| https://en.wikipedia.org/wiki?curid=71519050 |
71519149 | Metric lattice | In the mathematical study of order, a metric lattice L is a lattice that admits a positive valuation: a function "v" : "L" → ℝ satisfying, for any "a", "b" ∈ "L", formula_0 and formula_1
Relation to other notions.
A Boolean algebra is a metric lattice; any finitely-additive measure on its Stone dual gives a valuation.
Every metric lattice is a modular lattice, c.f. lower picture. It is also a metric space, with distance function given by formula_2 With that metric, the join and meet are uniformly continuous contractions, and so extend to the metric completion (metric space). That lattice is usually not the Dedekind-MacNeille completion, but it is conditionally complete.
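A minimal Python sketch illustrating the definition on the Boolean algebra of subsets of a three-element set, taking the valuation to be v(A) = |A| (an illustrative choice); the induced distance is then the size of the symmetric difference:
from itertools import combinations

universe = frozenset({0, 1, 2})
elements = [frozenset(c) for r in range(4) for c in combinations(universe, r)]
v = len

for a in elements:
    for b in elements:
        assert v(a) + v(b) == v(a & b) + v(a | b)     # the valuation identity
        assert v(a | b) - v(a & b) == len(a ^ b)      # d(a, b) = |symmetric difference|
        if a > b:                                     # strict order gives strict valuation
            assert v(a) > v(b)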
Applications.
In the study of fuzzy logic and interval arithmetic, the space of uniform distributions is a metric lattice. Metric lattices are also key to von Neumann's construction of the continuous projective geometry. A function satisfies the one-dimensional wave equation if and only if it is a valuation for the lattice of spacetime coordinates with the natural partial order. A similar result should apply to any partial differential equation solvable by the method of characteristics, but key features of the theory are lacking.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "v(a)+v(b)=v(a\\wedge b)+v(a\\vee b)"
},
{
"math_id": 1,
"text": "{a>b}\\Rightarrow v(a)>v(b)\\text{.}"
},
{
"math_id": 2,
"text": "d(x,y)=v(x\\vee y)-v(x\\wedge y)\\text{.}"
}
]
| https://en.wikipedia.org/wiki?curid=71519149 |
7152070 | Dagger category | Category equipped with involution
In category theory, a branch of mathematics, a dagger category (also called involutive category or category with involution) is a category equipped with a certain structure called "dagger" or "involution". The name dagger category was coined by Peter Selinger.
Formal definition.
A dagger category is a category formula_0 equipped with an involutive contravariant endofunctor formula_1 which is the identity on objects.
In detail, this means that:
Note that in the previous definition, the term "adjoint" is used in a way analogous to (and inspired by) the linear-algebraic sense, not in the category-theoretic sense.
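A minimal Python sketch of the dagger axioms in the linear-algebraic setting alluded to here: on finite-dimensional complex matrices, the dagger of a morphism is its conjugate transpose (the usual adjoint); the dimensions and random entries are illustrative:
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))   # f : A -> B
g = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))   # g : B -> C
dag = lambda m: m.conj().T

assert np.allclose(dag(dag(f)), f)                # (f†)† = f
assert np.allclose(dag(np.eye(3)), np.eye(3))     # id† = id
assert np.allclose(dag(g @ f), dag(f) @ dag(g))   # (g ∘ f)† = f† ∘ g†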
Some sources define a category with involution to be a dagger category with the additional property that its set of morphisms is partially ordered and that the order of morphisms is compatible with the composition of morphisms, that is formula_10 implies formula_11 for morphisms formula_12, formula_13, formula_14 whenever their sources and targets are compatible.
Remarkable morphisms.
In a dagger category formula_0, a morphism formula_4 is called
The latter is only possible for an endomorphism formula_22. The terms "unitary" and "self-adjoint" in the previous definition are taken from the category of Hilbert spaces, where the morphisms satisfying those properties are then unitary and self-adjoint in the usual sense. | [
{
"math_id": 0,
"text": "\\mathcal{C}"
},
{
"math_id": 1,
"text": "\\dagger"
},
{
"math_id": 2,
"text": "f: A \\to B"
},
{
"math_id": 3,
"text": "f^\\dagger: B \\to A"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "(f^\\dagger)^\\dagger = f"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "\\mathrm{id}_A^\\dagger = \\mathrm{id}_A"
},
{
"math_id": 8,
"text": "g: B \\to C"
},
{
"math_id": 9,
"text": "(g \\circ f)^\\dagger = f^\\dagger \\circ g^\\dagger: C \\to A"
},
{
"math_id": 10,
"text": "a < b"
},
{
"math_id": 11,
"text": "a\\circ c<b\\circ c"
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": "b"
},
{
"math_id": 14,
"text": "c"
},
{
"math_id": 15,
"text": "R:X \\rightarrow Y"
},
{
"math_id": 16,
"text": "R^\\dagger:Y \\rightarrow X"
},
{
"math_id": 17,
"text": " R"
},
{
"math_id": 18,
"text": "f:A \\rightarrow B"
},
{
"math_id": 19,
"text": "f^\\dagger:B \\rightarrow A"
},
{
"math_id": 20,
"text": "f^\\dagger = f^{-1},"
},
{
"math_id": 21,
"text": "f^\\dagger = f."
},
{
"math_id": 22,
"text": "f\\colon A \\to A"
}
]
| https://en.wikipedia.org/wiki?curid=7152070 |
7152740 | T (disambiguation) | T, or t, is the twentieth letter of the English alphabet.
T may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title T. | [
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "\\mathbb{T}^n"
},
{
"math_id": 2,
"text": "\\mathbb{R}^n/\\mathbb{Z}^n"
},
{
"math_id": 3,
"text": "\\mathbb{T}"
}
]
| https://en.wikipedia.org/wiki?curid=7152740 |
71530764 | Blackwell's contraction mapping theorem | Mathematical theorem regarding operators
In mathematics, Blackwell's contraction mapping theorem provides a set of sufficient conditions for an operator to be a contraction mapping. It is widely used in areas that rely on dynamic programming as it facilitates the proof of existence of fixed points. The result is due to David Blackwell who published it in 1965 in the Annals of Mathematical Statistics.
Statement of the Theorem.
Let formula_0 be an operator defined over an ordered normed vector space formula_1. formula_2 is a contraction mapping with modulus formula_3 if it satisfies
Proof of the Theorem.
For all formula_6 and formula_7, formula_8. Properties 1. and 2. imply that formula_9, hence, formula_10.
The symmetric inequality follows from a similar argument, and we conclude that formula_0 is a contraction mapping.
Applications.
The cake eating problem.
An agent has access to only one cake for its entire, infinite life. It has to decide the optimal way to consume it. It evaluates a consumption plan, formula_11, using a separable utility function, formula_12, with discount factor formula_13. Its problem can be summarized as
formula_14. (1)
Applying Bellman's principle of optimality we find (1)'s corresponding Bellman equation
formula_15. (2)
It can be proven that the solution to this functional equation, if it exists, is equivalent to the solution of (1). To prove its existence we can resort to Blackwell's sufficient conditions.
Define the operator formula_16. A solution to (2) is equivalent to finding a fixed-point for our operator. If we prove that this operator is a contraction mapping then we can use Banach's fixed-point theorem, and conclude that there is indeed a solution to (1).
First note that formula_0 is defined over the space of bounded functions since for all feasible consumption plans, formula_17. Endowing it with the sup-norm we conclude that the domain and co-domain are ordered normed vector spaces. We are just left with verifying that the conditions for Blackwell's theorem are respected:
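A minimal Python sketch of the resulting value-function iteration, whose convergence follows from the contraction property; the parameter values, grid, and tolerance are illustrative assumptions:
import numpy as np

beta, sigma = 0.95, 0.5
grid = np.linspace(0.0, 1.0, 201)                 # cake sizes x
u = lambda c: c ** (1 - sigma) / (1 - sigma)

V = np.zeros_like(grid)                           # initial guess
for _ in range(1000):
    # for each cake size grid[i], try every feasible consumption c = grid[j], j <= i;
    # the remaining cake is grid[i - j] because the grid is uniform
    V_new = np.array([
        np.max(u(grid[: i + 1]) + beta * V[i::-1]) for i in range(len(grid))
    ])
    if np.max(np.abs(V_new - V)) < 1e-8:          # sup-norm stopping rule
        break
    V = V_new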
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "T: X \\rightarrow X"
},
{
"math_id": 3,
"text": "\\beta"
},
{
"math_id": 4,
"text": " (monotonicity) \\quad u \\leq v \\implies Tu \\leq Tv "
},
{
"math_id": 5,
"text": " (discounting) \\quad T(u+c) \\leq Tu+\\beta c. "
},
{
"math_id": 6,
"text": "u"
},
{
"math_id": 7,
"text": "v \\in X"
},
{
"math_id": 8,
"text": "u \\leq v + ||v-u||"
},
{
"math_id": 9,
"text": "T(u) \\leq T(v + ||v-u||) \\leq T(v) + \\beta ||v-u||"
},
{
"math_id": 10,
"text": "T(u)-T(v) \\leq \\beta ||v-u||"
},
{
"math_id": 11,
"text": "c_t"
},
{
"math_id": 12,
"text": "\\sum_{t=0}^{\\infty} \\beta^t \\frac{c_t^{1-\\sigma}}{1-\\sigma}"
},
{
"math_id": 13,
"text": "\\beta \\in (0,1)"
},
{
"math_id": 14,
"text": " \\max_{c_t}\\sum_{t=0}^{\\infty} \\beta^t \\frac{c_t^{1-\\sigma}}{1-\\sigma} \\text{ subject to } x_t = x_{t-1} - c_t \\text{, } x_{-1}=0 \\text{, }c_t\\geq0 \\text{ and } x_t \\geq 0 \\, \\forall \\, t \\, \\in \\, \\mathbb{Z}_+ "
},
{
"math_id": 15,
"text": " V(c) = \\max_{c'} \\frac{c^{1-\\sigma}}{1-\\sigma} + \\beta V(c') "
},
{
"math_id": 16,
"text": " T(V(c)) = \\max_{c'} \\frac{c^{1-\\sigma}}{1-\\sigma} + \\beta V(c') "
},
{
"math_id": 17,
"text": "\\sum_{t=0}^{\\infty} \\beta^t \\frac{c_t^{1-\\sigma}}{1-\\sigma} \\leq \\sum_{t=0}^{\\infty} \\beta^t \\frac{1^{1-\\sigma}}{1-\\sigma} = \\frac{1}{(1-\\beta)(1-\\sigma)} < \\infty"
},
{
"math_id": 18,
"text": " \\text{if }V(c) \\geq U(c) \\, \\forall \\, c \\, \\in \\, [0,1]"
},
{
"math_id": 19,
"text": " T(V(c)) = \\max_{c'} \\frac{c^{1-\\sigma}}{1-\\sigma} + \\beta V(c') \\geq \\max_{c'} \\frac{c^{1-\\sigma}}{1-\\sigma} + \\beta U(c') = T(U(c)) "
},
{
"math_id": 20,
"text": " T(V(c)+a) = \\max_{c'} \\frac{c^{1-\\sigma}}{1-\\sigma} + \\beta (V(c') + a) \\leq \\max_{c'} \\frac{c^{1-\\sigma}}{1-\\sigma} + \\beta V(c') + \\beta a = T(V(c)) + \\beta a "
},
{
"math_id": 21,
"text": "a"
}
]
| https://en.wikipedia.org/wiki?curid=71530764 |
715308 | Whirlpool Galaxy | Galaxy in the constellation Canes Venatici
The Whirlpool Galaxy, also known as Messier 51a (M51a) or NGC 5194, is an interacting grand-design spiral galaxy with a Seyfert 2 active galactic nucleus. It lies in the constellation Canes Venatici, and was the first galaxy to be classified as a spiral galaxy. It is about 23 to 31 million light-years from Earth.
The galaxy and its companion, NGC 5195, are easily observed by amateur astronomers, and the two galaxies may be seen with binoculars. The Whirlpool Galaxy has been extensively observed by professional astronomers, who study it and its pair with NGC 5195 to understand galaxy structure (particularly structure associated with the spiral arms) and galaxy interactions. Its pair with NGC 5195 is among the most famous and relatively close interacting systems, and thus is a favorite subject of galaxy interaction models.
Discovery.
What later became known as the Whirlpool Galaxy was discovered on October 13, 1773, by Charles Messier while hunting for objects that could confuse comet hunters, and was designated in Messier's catalogue as M51. William Parsons, 3rd Earl of Rosse, employing a reflecting telescope at Birr Castle, Ireland, found that the Whirlpool possessed a spiral structure, the first "nebula" to be known to have one. These "spiral nebulae" were not recognized as galaxies until Edwin Hubble was able to observe Cepheid variables in some of these spiral nebulae, which provided evidence that they were so far away that they must be entirely separate galaxies.
The advent of radio astronomy and subsequent radio images of M51 unequivocally demonstrated that the Whirlpool and its companion galaxy are indeed interacting. Sometimes the designation M51 is used to refer to the pair of galaxies, in which case the individual galaxies may be referred to as M51a (NGC 5194) and M51b (NGC 5195).
Visual appearance.
Deep in the constellation Canes Venatici, M51 is often located by finding the easternmost star of the Big Dipper, Alkaid, and going 3.5° southwest. Its declination is, rounded, +47°, making it circumpolar (never setting) for observers above the 43rd parallel north; it reaches a high altitude throughout this hemisphere, making it an accessible object from the early hours in November through to the end of May, after which observation is more coincidental in modest latitudes with the risen sun (due to the Sun approaching and then receding from its right ascension, specifically figuring in Gemini, just to the north).
M51 is visible through binoculars under dark sky conditions, and it can be resolved in detail with modern amateur telescopes. When seen through a 100 mm telescope the basic outlines of M51 (limited to 5×6') and its companion are visible. Under dark skies, and with a moderate eyepiece through a 150 mm telescope, M51's intrinsic spiral structure can be detected. With larger (>300 mm) instruments under dark sky conditions, the various spiral bands are apparent with HII regions visible, and M51 can be seen to be attached to M51B.
As is usual for galaxies, the true extent of its structure can only be gathered from inspecting photographs; long exposures reveal a large nebula extending beyond the visible circular appearance. In 1984, thanks to the high-speed image-photon-counting (IPCS) detector system developed jointly by the CNRS Laboratoire d'Astronomie Spatiale (L.A.S.-CNRS) and the Observatoire de Haute Provence (O.H.P.), together with the particularly good seeing at the Cassegrain focus of the 3.60 m Canada-France-Hawaii Telescope (C.F.H.T.) on the Mauna Kea summit in Hawaii, Hua et al. detected the double component of the very nucleus of the Whirlpool Galaxy.
In January 2005 the Hubble Heritage Project constructed an 11,477 × 7,965-pixel composite image (shown in the infobox above) of M51 using Hubble's ACS instrument. The image highlights the galaxy's spiral arms, and shows detail of some of the structures inside the arms.
Properties.
The Whirlpool Galaxy lies at a distance of about 23 to 31 million light-years from Earth. Based on the 1991 measurement by the Third Reference Catalogue of Bright Galaxies using the D25 isophote in the B-band, the Whirlpool Galaxy has a diameter of . Overall the galaxy is about 88% the size of the Milky Way. Its mass is estimated to be 160 billion solar masses, or around 10.3% of the mass of the Milky Way Galaxy.
A black hole, once thought to be surrounded by a ring of dust, but now believed to be partially occluded by dust instead, exists at the heart of the spiral. A pair of ionization cones extend from the active galactic nucleus.
Spiral structure.
The Whirlpool Galaxy has two very prominent spiral arms that wind clockwise. One arm deviates significantly from a constant angle.
The pronounced spiral structure of the Whirlpool Galaxy is believed to be the result of the close interaction between it and its companion galaxy NGC 5195, which may have passed through the main disk of M51 about 500 to 600 million years ago. In this proposed scenario, NGC 5195 came from behind M51 through the disk towards the observer and made another disk crossing as recently as 50 to 100 million years ago, ending up where we observe it now, slightly behind M51.
Tidal features.
As a result of the Whirlpool Galaxy's interaction with NGC 5195, a variety of tidal features have been created. The largest of these features is the so-called Northwest plume, which extends out to from the galaxy's center. This plume is uniform in color and, being made of diffuse gas, likely originated from the Whirlpool Galaxy itself. Adjacent to it are two other plumes that have a slightly bluer color, referred to as the Western plumes due to their location.
In 2015, a study discovered two new tidal features caused by the interaction between the Whirlpool Galaxy and NGC 5195, the "Northeast plume" and the "South plume". The study remarks that a simulation taking into account only one passage of NGC 5195 through the Whirlpool Galaxy fails to produce an analogue of the Northeast plume. In contrast, the multiple-passage simulations by Salo and Laurikainen reproduce the northeast plume.
Star formation.
The central region of M51 appears to be undergoing a period of enhanced star formation.
The present efficiency of star formation, defined as the ratio of mass of new stars to the mass of star-forming gas, is only ~1%, quite comparable to the global value for the Milky Way and other galaxies. It is estimated that the current high rate of star formation can last no more than another 100 million years or so.
Similarly, the spiral arms, and the regions along them, are experiencing high levels of star formation.
Transient events.
Three supernovae have been observed in the Whirlpool Galaxy:
In 1994, SN 1994I was observed in the Whirlpool Galaxy. It was classified as type Ic, indicating that its progenitor star was very massive and had already shed much of its mass, and its brightness peaked at apparent magnitude 12.91.
In June 2005 the type II supernova SN 2005cs was observed in the Whirlpool Galaxy, peaking at apparent magnitude 14.
On 31 May 2011 a type II supernova was detected in the Whirlpool Galaxy, peaking at magnitude 12.1. This supernova, designated SN 2011dh, showed a spectrum much bluer than average, with P Cygni profiles in its hydrogen Balmer lines, which indicate rapidly expanding material. The progenitor was probably a yellow supergiant and not a red or blue supergiant, which are thought to be the most common supernova progenitors.
On 22 January 2019, a supernova impostor, designated AT 2019abn, was discovered in Messier 51. The transient was later identified as a luminous red nova. The progenitor star was detected in archival Spitzer Space Telescope infrared images. No object could be seen at the position of the transient in archival Hubble images, indicating that the progenitor star was heavily obstructed by interstellar dust. 2019abn peaked at magnitude 17, reaching an intrinsic brightness of formula_0.
Planet candidate.
In September 2020, the detection by the Chandra X-ray Observatory of a candidate exoplanet, named M51-ULS-1b, orbiting the high-mass X-ray binary M51-ULS-1 in this galaxy was announced. If confirmed, it would be the first known instance of an extragalactic planet, a planet "outside" the Milky Way Galaxy. The planet candidate was detected by eclipses of the X-ray source (XRS), which consists of a stellar remnant (either a neutron star or a black hole) and a massive star, likely a B-type supergiant. The planet would be slightly smaller than Saturn and orbit at a distance of some tens of astronomical units.
Companion.
NGC 5195 (also known as Messier 51b or M51b) is a dwarf galaxy that is interacting with the Whirlpool Galaxy (also known as M51a or NGC 5194). Both galaxies are located approximately 25 million light-years away in the constellation Canes Venatici. Together, the two galaxies are one of the most widely studied interacting galaxy pairs.
Galaxy group information.
The Whirlpool Galaxy is the brightest galaxy in the M51 Group, a small group of galaxies that also includes M63 (the Sunflower Galaxy), NGC 5023, and NGC 5229. This small group may actually be a subclump at the southeast end of a large, elongated group that includes the M101 Group and the NGC 5866 Group, although most group identification methods and catalogs identify the three groups as separate entities.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
External links.
<indicator name="01-sky-coordinates"><templatestyles src="Template:Sky/styles.css" />Coordinates: &show_grid=1&show_constellation_lines=1&show_constellation_boundaries=1&show_const_names=1&show_galaxies=1&img_source=IMG_all 13h 29m 52.7s, +47° 11′ 43″</indicator> | [
{
"math_id": 0,
"text": "M_{r}=-14.9"
}
]
| https://en.wikipedia.org/wiki?curid=715308 |
71535495 | Kuratowski's intersection theorem | In mathematics, Kuratowski's intersection theorem is a result in general topology that gives a sufficient condition for a nested sequence of sets to have a non-empty intersection. Kuratowski's result is a generalisation of Cantor's intersection theorem. Whereas Cantor's result requires that the sets involved be compact, Kuratowski's result allows them to be non-compact, but insists that their non-compactness "tends to zero" in an appropriate sense. The theorem is named for the Polish mathematician Kazimierz Kuratowski, who proved it in 1930.
Statement of the theorem.
Let ("X", "d") be a complete metric space. Given a subset "A" ⊆ "X", its Kuratowski measure of non-compactness "α"("A") ≥ 0 is defined by
formula_0
Note that, if "A" is itself compact, then "α"("A") = 0, since every cover of "A" by open balls of arbitrarily small diameter will have a finite subcover. The converse is also true: if "α"("A") = 0, then "A" must be precompact, and indeed compact if "A" is closed. Also, if "A" is a subset of "B", then "α"("A") ≤ "α"("B"). In some sense, the quantity "α"("A") is a numerical description of "how non-compact" the set "A" is.
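For example, let "X" be an infinite-dimensional Banach space and let "A""n" be the closed ball of radius 1/"n" centred at a fixed point "x"0. None of these sets is compact, but a classical result states that the Kuratowski measure of non-compactness of a closed ball of radius "r" in an infinite-dimensional Banach space equals 2"r", so "α"("A""n") = 2/"n" → 0. Kuratowski's theorem therefore applies and shows that the intersection, which here is just the single point {"x"0}, is non-empty and compact, a conclusion Cantor's intersection theorem cannot give since the sets "A""n" are not compact.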
Now consider a sequence of sets "A""n" ⊆ "X", one for each natural number "n". Kuratowski's intersection theorem asserts that if these sets are non-empty, closed, decreasingly nested (i.e. "A""n"+1 ⊆ "A""n" for each "n"), and "α"("A""n") → 0 as "n" → ∞, then their infinite intersection
formula_1
is a non-empty compact set.
The result also holds if one works with the ball measure of non-compactness or the separation measure of non-compactness, since these three measures of non-compactness are mutually Lipschitz equivalent; if any one of them tends to zero as "n" → ∞, then so must the other two. | [
{
"math_id": 0,
"text": "\\alpha(A) = \\inf \\left\\{ r \\geq 0 \\left| \\begin{array}{c} A \\text{ can be covered by finitely many subsets} \\\\ \\text{of } X \\text{, each with diameter at most } r \\end{array} \\right. \\right\\}."
},
{
"math_id": 1,
"text": "\\bigcap_{n \\in \\mathbb{N}} A_{n}"
}
]
| https://en.wikipedia.org/wiki?curid=71535495 |
71535582 | Gheorghe Călugăreanu | Romanian mathematician
Gheorghe Călugăreanu (16 June 1902 – 15 November 1976) was a Romanian mathematician, professor at Babeș-Bolyai University, and full member of the Romanian Academy.
He was born in Iași, the son of physician, naturalist, and physiologist Dimitrie Călugăreanu. From 1913 to 1921 he studied at the Gheorghe Lazăr High School in Bucharest, after which he attended the University of Cluj, graduating in 1924. In 1926 he went to Paris to pursue his studies at the Sorbonne, supported by a scholarship from the Romanian government. He obtained his Ph.D. in mathematics in 1929, with the thesis "Sur les fonctions polygènes d'une variable complexe", written under the direction of Émile Picard and defended before a jury that also included Édouard Goursat and Gaston Julia. After returning to Romania, he was appointed assistant at the University of Cluj in 1930; he was promoted to lecturer in 1934 and named professor in 1942. From 1953 to 1957 he served as Dean of the Faculty of Mathematics. His Ph.D. students include Petru Mocanu. He was elected a corresponding member of the Romanian Academy in 1955, and he became a full member in 1963.
Călugăreanu studied the theory of functions of a complex variable (meromorphic functions, univalent functions, analytic extension invariants), as well as differential geometry and algebraic topology, especially in knot theory. In his best-known work, he established in 1961 the following foundational result regarding the writhe of a knot: take a ribbon in three-dimensional space, let formula_0 be the linking number of its border components, and let formula_1 be its total twist; then the difference formula_2 depends only on the core curve of the ribbon. In a paper from 1959, he showed how to calculate the writhe of a knot by means of a Gaussian double integral. Călugăreanu's formula has since been pursued by James H. White and F. Brock Fuller, leading to applications in DNA topology, where writhe is used to describe the amount a piece of DNA is deformed as a result of torsional stress (a phenomenon known as DNA supercoiling). The topological interpretation of helicity in terms of the Gauss linking number and its limiting form has been called the "Călugăreanu invariant" by Keith Moffatt and Renzo L. Ricca.
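The Gaussian double integral mentioned above lends itself to direct numerical approximation when the closed curve is given as a list of sample points; the following discretization is an illustrative sketch with arbitrary function names, not a procedure taken from Călugăreanu's papers.

import numpy as np

def writhe(points):
    """Approximate the writhe of a closed polygonal curve via the Gauss double
    integral (1/(4*pi)) * sum over pairs i != j of
    ((t_i x t_j) . (m_i - m_j)) / |m_i - m_j|**3 * ds_i * ds_j,
    where m_i are segment midpoints, t_i unit tangents, ds_i segment lengths."""
    pts = np.asarray(points, dtype=float)
    nxt = np.roll(pts, -1, axis=0)
    seg = nxt - pts                          # segment vectors (curve is closed)
    ds = np.linalg.norm(seg, axis=1)         # segment lengths
    t = seg / ds[:, None]                    # unit tangent of each segment
    mid = 0.5 * (pts + nxt)                  # segment midpoints
    n = len(pts)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = mid[i] - mid[j]
            total += np.dot(np.cross(t[i], t[j]), r) / np.linalg.norm(r)**3 * ds[i] * ds[j]
    return total / (4.0 * np.pi)

# Example: a flat circle has writhe close to zero.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
print(writhe(circle))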
He died of cancer in Cluj-Napoca in 1976; following his wishes, he was cremated and the urn was deposited at Bellu Cemetery in Bucharest.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{Lk}"
},
{
"math_id": 1,
"text": "\\operatorname{Tw}"
},
{
"math_id": 2,
"text": "\\operatorname{Wr}=\\operatorname{Lk}-\\operatorname{Tw}"
}
]
| https://en.wikipedia.org/wiki?curid=71535582 |
7153598 | Richards equation | The Richards equation represents the movement of water in unsaturated soils, and is attributed to Lorenzo A. Richards who published the equation in 1931. It is a quasilinear partial differential equation; its analytical solution is often limited to specific initial and boundary conditions. Proof of the existence and uniqueness of solution was given only in 1983 by Alt and Luckhaus. The equation is based on Darcy-Buckingham law representing flow in porous media under variably saturated conditions, which is stated as
formula_0
where
formula_1 is the volumetric flux;
formula_2 is the volumetric water content;
formula_3 is the liquid pressure head, which is negative for unsaturated porous media;
formula_4 is the unsaturated hydraulic conductivity;
formula_5 is the geodetic head gradient, which is assumed as formula_6 for three-dimensional problems.
Consider the law of mass conservation for an incompressible porous medium with constant liquid density, expressed as
formula_7,
where
formula_8 is the sink term [Tformula_9], typically root water uptake.
Then, substituting the flux according to the Darcy–Buckingham law, the following mixed-form Richards equation is obtained:
formula_10.
For modeling of one-dimensional infiltration this divergence form reduces to
formula_11.
Although attributed to L. A. Richards, the equation was originally introduced 9 years earlier by Lewis Fry Richardson in 1922.
Formulations.
The Richards equation appears in many articles in the environmental literature because it describes the flow in the vadose zone between the atmosphere and the aquifer. It also appears in pure mathematical journals because it has non-trivial solutions. The above-given mixed formulation involves two unknown variables: formula_2 and formula_3. This can be easily resolved by considering the constitutive relation formula_12, which is known as the water retention curve. Applying the chain rule, the Richards equation may be reformulated as either the formula_3-form (head-based) or the formula_2-form (saturation-based) Richards equation.
Head-based.
By applying the chain rule on temporal derivative leads to
formula_13,
where formula_14 is known as the retention water capacity formula_15. The equation is then stated as
formula_16.
The head-based Richards equation is prone to the following computational issue: the discretized temporal derivative using the implicit Rothe method yields the following approximation:
formula_17
This approximation produces an error formula_18 that affects the mass conservation of the numerical solution, and so special strategies for the treatment of the temporal derivative are necessary.
Saturation-based.
By applying the chain rule on the spatial derivative leads to
formula_19
where formula_20, which could be further formulated as formula_21, is known as the soil water diffusivity formula_22. The equation is then stated as
formula_23
The saturation-based Richards equation is prone to the following computational issues. Since formula_24 and formula_25, where formula_26 is the saturated (maximal) water content and formula_27 is the residual (minimal) water content, a successful numerical solution is restricted to ranges of water content sufficiently below full saturation (the saturation should even stay below the air-entry value) as well as sufficiently above the residual water content.
Parametrization.
The Richards equation in any of its forms involves soil hydraulic properties, which are a set of five parameters representing the soil type. The soil hydraulic properties typically consist of the van Genuchten water retention curve parameters (formula_28), where formula_29 is the inverse of the air-entry value [L−1], formula_30 is the pore size distribution parameter [-], and formula_31 is usually assumed as formula_32. Further, the saturated hydraulic conductivity formula_33 (which for a non-isotropic environment is a second-order tensor) should also be provided. Identification of these parameters is often non-trivial and has been the subject of numerous publications over several decades.
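As an illustration of this parametrization, the commonly used van Genuchten–Mualem closed-form expressions for the retention curve formula_12 and the unsaturated conductivity can be coded directly; the parameter values below are placeholder numbers of roughly loam-like magnitude, chosen only for this sketch.

import numpy as np

# van Genuchten-Mualem constitutive relations (a common modelling choice, assumed here)
theta_r, theta_s = 0.05, 0.43       # residual and saturated water content [-]
alpha, n_vg = 3.6, 1.56             # inverse air-entry value [1/m], pore-size parameter [-]
K_s = 2.9e-6                        # saturated hydraulic conductivity [m/s]
m_vg = 1.0 - 1.0 / n_vg

def theta_of_h(h):
    """Water retention curve theta(h); h is the pressure head [m], negative when unsaturated."""
    h = np.asarray(h, dtype=float)
    Se = np.where(h < 0.0, (1.0 + (alpha * np.abs(h))**n_vg)**(-m_vg), 1.0)
    return theta_r + (theta_s - theta_r) * Se

def K_of_h(h):
    """Unsaturated hydraulic conductivity K(h) from the Mualem model."""
    Se = (theta_of_h(h) - theta_r) / (theta_s - theta_r)
    return K_s * np.sqrt(Se) * (1.0 - (1.0 - Se**(1.0 / m_vg))**m_vg)**2

print(theta_of_h([-10.0, -1.0, -0.1, 0.0]))
print(K_of_h([-10.0, -1.0, -0.1, 0.0]))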
Limitations.
The numerical solution of the Richards equation is one of the most challenging problems in earth science. Richards' equation has been criticized for being computationally expensive and unpredictable because there is no guarantee that a solver will converge for a particular set of soil constitutive relations. Advanced computational and software solutions are required to overcome this obstacle. The method has also been criticized for over-emphasizing the role of capillarity, and for being in some ways 'overly simplistic'. In one-dimensional simulations of rainfall infiltration into dry soils, fine spatial discretization of less than one cm is required near the land surface, which is due to the small size of the representative elementary volume for multiphase flow in porous media. In three-dimensional applications the numerical solution of the Richards equation is subject to aspect ratio constraints where the ratio of horizontal to vertical resolution in the solution domain should be less than about 7.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{q}=-\\mathbf{K}(\\theta) (\\nabla h + \\nabla z),"
},
{
"math_id": 1,
"text": "\\vec{q}"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "h"
},
{
"math_id": 4,
"text": "\\mathbf{K}(h)"
},
{
"math_id": 5,
"text": "\\nabla z"
},
{
"math_id": 6,
"text": "\\nabla z = \\left(\\begin{smallmatrix} 0 \\\\ 0 \\\\ 1 \\end{smallmatrix} \\right)"
},
{
"math_id": 7,
"text": "\\frac{\\partial \\theta}{\\partial t} + \\nabla \\cdot \\vec{q} + S = 0"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "^{-1}"
},
{
"math_id": 10,
"text": " \\frac{\\partial \\theta}{\\partial t} = \\nabla \\cdot \\mathbf{K}(h) (\\nabla h + \\nabla z) - S "
},
{
"math_id": 11,
"text": "\\frac{\\partial \\theta}{\\partial t}= \\frac{\\partial}{\\partial z} \n\\left( \\mathbf{K}(\\theta) \\left (\\frac{\\partial h}{\\partial z} + 1 \\right) \\right) - S "
},
{
"math_id": 12,
"text": "\\theta(h)"
},
{
"math_id": 13,
"text": "\\frac{\\partial \\theta(h)}{\\partial t} = \\frac{\\textrm{d} \\theta}{\\textrm{d} h} \\frac{\\partial h}{\\partial t} "
},
{
"math_id": 14,
"text": "\\frac{\\textrm{d} \\theta}{\\textrm{d} h}"
},
{
"math_id": 15,
"text": " C(h) "
},
{
"math_id": 16,
"text": " C(h)\\frac{\\partial h}{\\partial t}= \\nabla \\cdot \\left( \\mathbf{K}(h) \\nabla h + \\nabla z\\right) - S "
},
{
"math_id": 17,
"text": "\\frac{\\Delta \\theta}{\\Delta t} \\approx C(h) \\frac{\\Delta h}{\\Delta t}, \\quad \\mbox{and so} \\quad \\frac{\\Delta \\theta}{\\Delta t} - C(h) \\frac{\\Delta h}{\\Delta t} = \\varepsilon ."
},
{
"math_id": 18,
"text": "\\varepsilon"
},
{
"math_id": 19,
"text": " \\mathbf{K}(h) \\nabla h = \\mathbf{K}(h) \\frac{\\textrm{d}h}{\\textrm{d} \\theta} \\nabla \\theta, "
},
{
"math_id": 20,
"text": "\\mathbf{K}(h) \\frac{\\textrm{d}h}{\\textrm{d} \\theta}"
},
{
"math_id": 21,
"text": "\\frac{\\mathbf{K}(\\theta)}{C(\\theta)}"
},
{
"math_id": 22,
"text": "\\mathbf{D}(\\theta)"
},
{
"math_id": 23,
"text": " \\frac{\\partial \\theta }{\\partial t}= \\nabla \\cdot \\mathbf{D}(\\theta) \\nabla \\theta - S. "
},
{
"math_id": 24,
"text": " \\lim_{\\theta \\to \\theta_s} ||\\mathbf{D}(\\theta)|| = \\infty "
},
{
"math_id": 25,
"text": "\\lim_{\\theta \\to \\theta_r}||\\mathbf{D}(\\theta)|| = \\infty"
},
{
"math_id": 26,
"text": " \\theta_s "
},
{
"math_id": 27,
"text": "\\theta_r"
},
{
"math_id": 28,
"text": " \\alpha, \\, n, \\,m, \\, \\theta_s, \\theta_r "
},
{
"math_id": 29,
"text": "\\alpha"
},
{
"math_id": 30,
"text": "n"
},
{
"math_id": 31,
"text": "m"
},
{
"math_id": 32,
"text": "m= 1-\\frac{1}{n}"
},
{
"math_id": 33,
"text": "\\mathbf{K}_s"
}
]
| https://en.wikipedia.org/wiki?curid=7153598 |
71550825 | Bretagnolle–Huber inequality | Inequality in information theory
In information theory, the Bretagnolle–Huber inequality bounds the total variation distance between two probability distributions formula_0 and formula_1 by a concave and bounded function of the Kullback–Leibler divergence formula_2. The bound can be viewed as an alternative to the well-known Pinsker's inequality: when formula_2 is large (larger than 2, for instance), Pinsker's inequality is vacuous, while Bretagnolle–Huber remains bounded and hence non-vacuous. It is used in statistics and machine learning to prove information-theoretic lower bounds relying on hypothesis testing.
Formal statement.
Preliminary definitions.
Let formula_0 and formula_1 be two probability distributions on a measurable space formula_3.
Recall that the total variation between formula_0 and formula_1 is defined by
formula_4
The Kullback-Leibler divergence is defined as follows:
formula_5
In the above, the notation formula_6 stands for absolute continuity of formula_0 with respect to formula_1, and formula_7 stands for the Radon–Nikodym derivative of formula_0 with respect to formula_1.
General statement.
The Bretagnolle–Huber inequality says:
formula_8
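For discrete distributions both sides of the inequality are straightforward to compute, which makes a quick numerical comparison with Pinsker's inequality possible; the distributions below are arbitrary and serve only as an illustration.

import numpy as np

def tv(p, q):
    """Total variation distance between two discrete distributions."""
    return 0.5 * np.sum(np.abs(p - q))

def kl(p, q):
    """Kullback-Leibler divergence D(p||q) in nats, assuming q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p = np.array([0.98, 0.01, 0.01])
q = np.array([0.01, 0.01, 0.98])

d = kl(p, q)
print("TV distance            :", tv(p, q))
print("Bretagnolle-Huber bound:", np.sqrt(1.0 - np.exp(-d)))
print("Pinsker bound          :", np.sqrt(d / 2.0))   # exceeds 1, hence vacuous, once KL > 2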
Alternative version.
The following version is directly implied by the bound above but some authors prefer stating it this way.
Let formula_9 be any event. Then
formula_10
where formula_11 is the complement of formula_12.
Indeed, by definition of the total variation, for any formula_13,
formula_14
Rearranging, we obtain the claimed lower bound on formula_15.
Proof.
We prove the main statement following the ideas in Tsybakov's book (Lemma 2.6, page 89), which differ from those of the original proof (see C. Canonne's note for a modernized presentation of that argument).
The proof is in two steps:
1. Prove using Cauchy–Schwarz that the total variation is related to the Bhattacharyya coefficient (right-hand side of the inequality):
formula_16
2. Prove by a clever application of Jensen’s inequality that
formula_17
First notice that
formula_18
To see this, denote formula_19 and without loss of generality, assume that formula_20 such that formula_21. Then we can rewrite
formula_22
Then, adding and subtracting formula_23, we obtain both identities.
Then
formula_24
because formula_25
We write formula_26 and apply Jensen's inequality:
formula_27
Combining the results of steps 1 and 2 leads to the claimed bound on the total variation.
Examples of applications.
Sample complexity of biased coin tosses.
Source:
The question is
"How many coin tosses do I need to distinguish a fair coin from a biased one?"
Assume you have 2 coins, a fair coin (Bernoulli distributed with mean formula_28) and an formula_29-biased coin (formula_30). Then, in order to identify the biased coin with probability at least formula_31 (for some formula_32), the number formula_34 of coin tosses must satisfy
formula_33
In order to obtain this lower bound we impose that the total variation distance between two sequences of formula_34 samples is at least formula_35. This is because the total variation upper bounds the probability of under- or over-estimating the coins' means. Denote by formula_36 and formula_37 the respective joint distributions of the formula_34 coin tosses for each coin. We then have
formula_38
The result is obtained by rearranging the terms.
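For concreteness, the resulting sample-size bound can be evaluated numerically; the parameter values below are arbitrary illustrative choices.

import numpy as np

def min_tosses(eps, delta):
    """Lower bound n >= log(1/(2*delta)) / (2*eps**2) on the number of tosses
    needed to tell a fair coin from an eps-biased one with error probability delta."""
    return np.log(1.0 / (2.0 * delta)) / (2.0 * eps**2)

for eps in (0.1, 0.01):
    print(eps, min_tosses(eps, delta=0.05))
# eps = 0.1 gives about 115 tosses; eps = 0.01 gives about 11,500 tosses.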
Information-theoretic lower bound for "k"-armed bandit games.
In multi-armed bandit, a lower bound on the minimax regret of any bandit algorithm can be proved using Bretagnolle–Huber and its consequence on hypothesis testing (see Chapter 15 of "Bandit Algorithms").
History.
The result was first proved in 1979 by Jean Bretagnolle and Catherine Huber, and published in the proceedings of the Strasbourg Probability Seminar. Alexandre Tsybakov's book features an early re-publication of the inequality and its attribution to Bretagnolle and Huber, which is presented as an early and less general version of Assouad's lemma (see notes 2.8). A constant improvement on Bretagnolle–Huber was proved in 2014 as a consequence of an extension of Fano's Inequality.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "D_\\mathrm{KL}(P \\parallel Q)"
},
{
"math_id": 3,
"text": "(\\mathcal{X}, \\mathcal{F})"
},
{
"math_id": 4,
"text": " d_\\mathrm{TV}(P,Q) = \\sup_{A \\in \\mathcal{F}} \\{|P(A)-Q(A)| \\}."
},
{
"math_id": 5,
"text": "D_\\mathrm{KL}(P \\parallel Q) = \n\\begin{cases}\n\\int_{\\mathcal{X}} \\log\\bigl(\\frac{dP}{dQ}\\bigr)\\, dP & \\text{if } P \\ll Q, \\\\[1mm]\n+\\infty & \\text{otherwise}.\n\\end{cases}\n"
},
{
"math_id": 6,
"text": "P\\ll Q"
},
{
"math_id": 7,
"text": "\\frac{dP}{dQ}"
},
{
"math_id": 8,
"text": " d_\\mathrm{TV}(P,Q) \\leq \\sqrt{1-\\exp(-D_\\mathrm{KL}(P \\parallel Q))} \\leq 1 - \\frac{1}{2}\\exp(-D_\\mathrm{KL}(P \\parallel Q)) "
},
{
"math_id": 9,
"text": "A\\in \\mathcal{F}"
},
{
"math_id": 10,
"text": "P(A) + Q(\\bar{A}) \\geq \\frac{1}{2}\\exp(-D_\\mathrm{KL}(P \\parallel Q))"
},
{
"math_id": 11,
"text": "\\bar{A} = \\Omega \\smallsetminus A"
},
{
"math_id": 12,
"text": "A"
},
{
"math_id": 13,
"text": "A \\in \\mathcal{F}"
},
{
"math_id": 14,
"text": " \\begin{align}\nQ(A) - P(A) \\leq d_\\mathrm{TV}(P,Q) & \\leq 1- \\frac{1}{2}\\exp(-D_\\mathrm{KL}(P \\parallel Q)) \\\\\n& = Q(A) + Q(\\bar{A}) - \\frac{1}{2}\\exp(-D_\\mathrm{KL}(P \\parallel Q))\n\\end{align}\n"
},
{
"math_id": 15,
"text": "P(A)+Q(\\bar{A})"
},
{
"math_id": 16,
"text": " 1-d_\\mathrm{TV}(P,Q)^2 \\geq \\left(\\int \\sqrt{PQ}\\right)^2"
},
{
"math_id": 17,
"text": "\\left(\\int \\sqrt{PQ}\\right)^2 \\geq \\exp(-D_\\mathrm{KL}(P \\parallel Q))"
},
{
"math_id": 18,
"text": "d_\\mathrm{TV}(P,Q) = 1-\\int \\min(P,Q) = \\int \\max(P,Q) -1"
},
{
"math_id": 19,
"text": "A^* = \\arg\\max_{A\\in \\Omega} |P(A)-Q(A)|"
},
{
"math_id": 20,
"text": " P(A^*)>Q(A^*)"
},
{
"math_id": 21,
"text": "d_\\mathrm{TV}(P,Q)=P(A^*)-Q(A^*)"
},
{
"math_id": 22,
"text": "d_\\mathrm{TV}(P,Q) = \\int_{A^*} \\max(P,Q) - \\int_{A^*} \\min(P,Q) "
},
{
"math_id": 23,
"text": "\\int_{\\bar{A^*}} \\max(P,Q) \\text{ or } \\int_{\\bar{A^*}}\\min(P,Q)"
},
{
"math_id": 24,
"text": " \\begin{align}\n1-d_\\mathrm{TV}(P,Q)^2 & = (1-d_\\mathrm{TV}(P,Q))(1+d_\\mathrm{TV}(P,Q)) \\\\\n& = \\int \\min(P,Q) \\int \\max(P,Q) \\\\\n& \\geq \\left(\\int \\sqrt{\\min(P,Q) \\max(P,Q)}\\right)^2 \\\\\n& = \\left(\\int \\sqrt{PQ}\\right)^2 \n\\end{align}\n"
},
{
"math_id": 25,
"text": "PQ = \\min(P,Q) \\max(P,Q). "
},
{
"math_id": 26,
"text": "(\\cdot)^2=\\exp(2\\log(\\cdot))"
},
{
"math_id": 27,
"text": " \\begin{align}\n\\left(\\int \\sqrt{PQ}\\right)^2 &= \\exp\\left(2\\log\\left(\\int \\sqrt{PQ}\\right)\\right) \\\\\n& = \\exp\\left(2\\log\\left(\\int P\\sqrt{\\frac{Q}{P}}\\right)\\right) \\\\\n& =\\exp\\left(2\\log\\left(\\operatorname{E}_P \\left[\\left(\\sqrt{\\frac{P}{Q}}\\right)^{-1} \\, \\right] \\right) \\right) \\\\\n& \\geq \\exp\\left(\\operatorname{E}_P\\left[-\\log\\left(\\frac{P}{Q} \\right)\\right] \\right) = \\exp(-D_{KL}(P,Q))\n\\end{align}\n"
},
{
"math_id": 28,
"text": "p_1=1/2"
},
{
"math_id": 29,
"text": "\\varepsilon"
},
{
"math_id": 30,
"text": "p_2=1/2+\\varepsilon"
},
{
"math_id": 31,
"text": "1-\\delta"
},
{
"math_id": 32,
"text": "\\delta>0"
},
{
"math_id": 33,
"text": " n\\geq \\frac{1}{2\\varepsilon^2}\\log\\left(\\frac{1}{2\\delta}\\right)."
},
{
"math_id": 34,
"text": "n"
},
{
"math_id": 35,
"text": "1-2\\delta"
},
{
"math_id": 36,
"text": "P_1^n"
},
{
"math_id": 37,
"text": " P_2^n"
},
{
"math_id": 38,
"text": " \\begin{align}\n(1-2\\delta)^2 & \\leq d_\\mathrm{TV}\\left(P_1^n, P_2^n \\right)^2 \\\\[4pt]\n& \\leq 1-e^{-D_\\mathrm{KL}(P_1^n \\parallel P_2^n)} \\\\[4pt]\n&= 1-e^{-nD_\\mathrm{KL}(P_1 \\parallel P_2)}\\\\[4pt]\n& = 1-e^{-n\\frac{\\log(1/(1-4\\varepsilon^2))}{2}}\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=71550825 |
7155145 | Dagger compact category | Special dagger category that is compact
In category theory, a branch of mathematics, dagger compact categories (or dagger compact closed categories) first appeared in 1989 in the work of Sergio Doplicher and John E. Roberts on the reconstruction of compact topological groups from their category of finite-dimensional continuous unitary representations (that is, Tannakian categories). They also appeared in the work of John Baez and James Dolan as an instance of semistrict "k"-tuply monoidal "n"-categories, which describe general topological quantum field theories, for "n" = 1 and "k" = 3. They are a fundamental structure in Samson Abramsky and Bob Coecke's categorical quantum mechanics.
Overview.
Dagger compact categories can be used to express and verify some fundamental quantum information protocols, namely teleportation, logic gate teleportation and entanglement swapping; standard notions such as unitarity, the inner product, the trace, Choi–Jamiołkowski duality, complete positivity, Bell states and many other notions are likewise captured by the language of dagger compact categories. All this follows from the completeness theorem, below. Categorical quantum mechanics takes dagger compact categories as a background structure relative to which other quantum mechanical notions like quantum observables and complementarity thereof can be abstractly defined. This forms the basis for a high-level approach to quantum information processing.
Formal definition.
A dagger compact category is a dagger symmetric monoidal category formula_0 which is also compact closed, together with a relation to tie together the dagger structure to the compact structure. Specifically, the dagger is used to connect the unit to the counit, so that, for all formula_1 in formula_2, the following diagram commutes:
To summarize all of these points:
It is a monoidal category: it has a tensor product formula_3 together with a tensor unit "I".
It is symmetric: there is a natural swap isomorphism formula_4 for every pair of objects.
It is compact closed: every object formula_5 has a dual formula_6, with a unit formula_7 and a counit formula_8 satisfying the usual yanking (snake) identities.
It is a dagger category: it carries an involutive, identity-on-objects contravariant functor formula_9.
A dagger compact category is then a category that is each of the above, and, in addition, has a condition to relate the dagger structure to the compact structure. This is done by relating the unit to the counit via the dagger:
formula_10
shown in the commuting diagram above. In the category FdHilb of finite-dimensional Hilbert spaces, this last condition can be understood as defining the dagger (the Hermitian conjugate) as the transpose of the complex conjugate.
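In FdHilb these structure maps become explicit matrices once a basis is fixed and the dual formula_6 is identified with formula_1 itself; the sketch below (an illustration only, for an arbitrary finite dimension) builds the unit formula_7, the counit formula_8 and the symmetry, and checks the snake identity together with the condition formula_10 numerically.

import numpy as np

d = 3
I_d = np.eye(d)

# Unit eta: C -> A* (x) A, the (unnormalised) "Bell state" sum over i of |i>|i>.
eta = np.zeros((d * d, 1))
for i in range(d):
    eta[i * d + i, 0] = 1.0

# Counit epsilon: A (x) A* -> C, sending |i>|j> to delta_ij.
eps = eta.T.copy()

# Symmetry sigma: A (x) A* -> A* (x) A, the swap of tensor factors.
sigma = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        sigma[j * d + i, i * d + j] = 1.0

# Snake (yanking) identity of a compact closed category: (eps (x) 1) o (1 (x) eta) = 1.
snake = np.kron(eps, I_d) @ np.kron(I_d, eta)
print(np.allclose(snake, I_d))                    # True

# Dagger compactness: the swap composed with the adjoint of the counit equals the unit.
print(np.allclose(sigma @ eps.conj().T, eta))     # True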
Examples.
The following categories are dagger compact.
Infinite-dimensional Hilbert spaces are not dagger compact, and are described by dagger symmetric monoidal categories.
Structural theorems.
Selinger showed that dagger compact categories admit a Joyal–Street style diagrammatic language and proved that dagger compact categories are complete with respect to finite-dimensional Hilbert spaces, "i.e.", an equational statement in the language of dagger compact categories holds if and only if it can be derived in the concrete category of finite-dimensional Hilbert spaces and linear maps. There is no analogous completeness for Rel or nCob.
This completeness result implies that various theorems from Hilbert spaces extend to this category. For example, the no-cloning theorem implies that there is no universal cloning morphism. Completeness also implies far more mundane features as well: dagger compact categories can be given a basis in the same way that a Hilbert space can have a basis. Operators can be decomposed in the basis; operators can have eigenvectors, "etc.". This is reviewed in the next section.
Basis.
The completeness theorem implies that basic notions from Hilbert spaces carry over to any dagger compact category. The typical language employed, however, changes. The notion of a basis is given in terms of a coalgebra. Given an object "A" from a dagger compact category, a basis is a comonoid object formula_11. The two operations are a "copying" or comultiplication δ: "A" → "A" ⊗ "A" morphism that is cocommutative and coassociative, and a "deleting" operation or counit morphism ε: "A" → "I" . Together, these obey five axioms:
Comultiplicativity:
formula_12
Coassociativity:
formula_13
Cocommutativity:
formula_14
Isometry:
formula_15
Frobenius law:
formula_16
To see that these relations define a basis of a vector space in the traditional sense, write the comultiplication and counit using bra–ket notation, understanding that these are now linear operators acting on vectors formula_17 in a Hilbert space "H":
formula_18
and
formula_19
The only vectors formula_17 that can satisfy the above five axioms must be orthogonal to one another; the counit then uniquely specifies the basis. The suggestive names "copying" and "deleting" for the comultiplication and counit operators come from the idea that the no-cloning theorem and no-deleting theorem state that the "only" vectors that it is possible to copy or delete are orthogonal basis vectors.
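In FdHilb the copying and deleting maps of the standard basis are concrete matrices, and the five axioms of the comonoid object formula_11 can be verified mechanically; the following sketch is only an illustration for an arbitrary finite dimension.

import numpy as np

d = 3
I_d = np.eye(d)

# Copying map delta: |j> -> |j>|j>, and deleting map epsilon: |j> -> 1.
delta = np.zeros((d * d, d))
for j in range(d):
    delta[j * d + j, j] = 1.0
eps = np.ones((1, d))

# Swap of the two tensor factors of A (x) A.
swap = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        swap[j * d + i, i * d + j] = 1.0

dagger = lambda M: M.conj().T

print(np.allclose(np.kron(I_d, eps) @ delta, I_d))          # comultiplicativity (first form)
print(np.allclose(np.kron(eps, I_d) @ delta, I_d))          # comultiplicativity (second form)
print(np.allclose(np.kron(I_d, delta) @ delta,
                  np.kron(delta, I_d) @ delta))             # coassociativity
print(np.allclose(swap @ delta, delta))                     # cocommutativity
print(np.allclose(dagger(delta) @ delta, I_d))              # isometry
print(np.allclose(np.kron(dagger(delta), I_d) @ np.kron(I_d, delta),
                  delta @ dagger(delta)))                   # Frobenius law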
General results.
Given the above definition of a basis, a number of results for Hilbert spaces can be stated for compact dagger categories. We list some of these below.
A state formula_21 is an eigenstate of the observable formula_20 if
formula_22
Eigenstates are orthogonal to one another.
A state formula_21 is complementary to the observable formula_20 if
formula_23
(In quantum mechanics, a state vector formula_21 is said to be complementary to an observable if any measurement result is equiprobable, viz. a spin eigenstate of "S"x is equiprobable when measured in the basis "S"z, or momentum eigenstates are equiprobable when measured in the position basis.)
Two observables formula_24 and formula_25 are complementary if
formula_26
The map
formula_27
is unitary if and only if formula_21 is complementary to the observable formula_20.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{C}"
},
{
"math_id": 1,
"text": " A "
},
{
"math_id": 2,
"text": " \\mathbf{C}"
},
{
"math_id": 3,
"text": "\\mathbf{C} \\otimes \\mathbf{C} \\to \\mathbf{C}"
},
{
"math_id": 4,
"text": "\\sigma_{A, B}: A \\otimes B \\simeq B \\otimes A"
},
{
"math_id": 5,
"text": "A \\in \\mathbf{C}"
},
{
"math_id": 6,
"text": "A^*"
},
{
"math_id": 7,
"text": "\\eta_A:I\\to A^*\\otimes A"
},
{
"math_id": 8,
"text": "\\varepsilon_A:A\\otimes A^*\\to I"
},
{
"math_id": 9,
"text": "\\dagger\\colon \\mathbf{C}^{op}\\rightarrow\\mathbf{C}"
},
{
"math_id": 10,
"text": "\\sigma_{A, A^*} \\circ\\varepsilon^\\dagger_A = \\eta_A"
},
{
"math_id": 11,
"text": "(A,\\delta,\\varepsilon)"
},
{
"math_id": 12,
"text": "(1_A \\otimes \\varepsilon) \\circ \\delta =1_A = (\\varepsilon \\otimes 1_A) \\circ \\delta"
},
{
"math_id": 13,
"text": "(1_A \\otimes \\delta) \\circ \\delta = (\\delta \\otimes 1_A) \\circ \\delta"
},
{
"math_id": 14,
"text": "\\sigma_{A,A} \\circ \\delta = \\delta"
},
{
"math_id": 15,
"text": "\\delta^\\dagger \\circ \\delta = 1_A"
},
{
"math_id": 16,
"text": "(\\delta^\\dagger \\otimes 1_A) \\circ (1_A \\otimes \\delta) = \\delta \\circ \\delta^\\dagger"
},
{
"math_id": 17,
"text": "|j\\rangle"
},
{
"math_id": 18,
"text": "\\begin{align}\n\\delta : H &\\to H\\otimes H \\\\\n|j\\rangle & \\mapsto |j\\rangle\\otimes |j\\rangle = |j j \\rangle \\\\\n\\end{align}"
},
{
"math_id": 19,
"text": "\\begin{align}\n\\varepsilon : H &\\to \\mathbb{C} \\\\\n|j\\rangle & \\mapsto 1\\\\\n\\end{align}"
},
{
"math_id": 20,
"text": "(A, \\delta, \\varepsilon)"
},
{
"math_id": 21,
"text": "\\psi"
},
{
"math_id": 22,
"text": "\\delta \\circ \\psi = \\psi \\otimes \\psi"
},
{
"math_id": 23,
"text": "\\delta^\\dagger \\circ (\\overline\\psi \\otimes \\psi) = \\varepsilon^\\dagger"
},
{
"math_id": 24,
"text": "(A, \\delta_X, \\varepsilon_X)"
},
{
"math_id": 25,
"text": "(A, \\delta_Z, \\varepsilon_Z)"
},
{
"math_id": 26,
"text": "\\delta^\\dagger_Z \\circ \\delta_X = \\varepsilon_Z \\circ \\varepsilon_X^\\dagger"
},
{
"math_id": 27,
"text": "\\delta^\\dagger \\circ (\\psi\\otimes 1_A)"
}
]
| https://en.wikipedia.org/wiki?curid=7155145 |
71555641 | Thom–Sebastiani Theorem | In complex analysis, a branch of mathematics, the Thom–Sebastiani Theorem states: given the germ formula_0 defined as formula_1 where formula_2 are germs of holomorphic functions with isolated singularities, the vanishing cycle complex of formula_3 is isomorphic to the tensor product of those of formula_4. Moreover, the isomorphism respects the monodromy operators in the sense: formula_5.
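For instance, a classical illustration of the statement is obtained by taking "f"1 and "f"2 to be powers of a single variable, of degrees "p" and "q" respectively, so that "f" defines a Brieskorn–Pham singularity; the theorem then recovers the facts that the Milnor number of "f" is the product ("p" - 1)("q" - 1) of the Milnor numbers of the summands, and that the eigenvalues of the monodromy of "f" are the pairwise products of the nontrivial "p"-th and "q"-th roots of unity.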
The theorem was introduced by Thom and Sebastiani in 1971.
Observing that the analog fails in positive characteristic, Deligne suggested that, in positive characteristic, a tensor product should be replaced by a (certain) local convolution product.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f : (\\mathbb{C}^{n_1 + n_2}, 0) \\to (\\mathbb{C}, 0)"
},
{
"math_id": 1,
"text": "f(z_1, z_2) = f_1(z_1) + f_2(z_2)"
},
{
"math_id": 2,
"text": "f_i"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "f_1, f_2"
},
{
"math_id": 5,
"text": "T_{f_1} \\otimes T_{f_2} = T_f"
}
]
| https://en.wikipedia.org/wiki?curid=71555641 |
71559453 | Emiko Hiyama | Japanese physicist
Emiko Hiyama () is a Japanese computational nuclear physicist whose research concerns computational methods for few-body systems of nucleons. She is the director of the Strangeness Nuclear Physics Laboratory at the Riken Nishina Center for Accelerator-Based Science, and a professor of physics at Tohoku University.
Education and career.
Hiyama is originally from Fukuoka Prefecture, and studied physics at Kyushu University.
She was a researcher for KEK, the Japanese High Energy Accelerator Research Organization, from 2000 to 2004. In 2004, she became an associate professor at Nara Women's University. She moved to Riken in 2008, and became laboratory director there in 2018. From 2017 to 2020 she was also affiliated with the Department of Physics at Kyushu University, and in 2021 she took her present position as a professor at Tohoku University.
Recognition.
Hiyama was the 2013 winner of the Saruhashi Prize, for "developing computational methods for precise solutions of quantum formula_0-body problems".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=71559453 |