id | title | text | formulas | url |
---|---|---|---|---|
77310683 | Rain attenuation frequency scaling | In communications satellite systems, rain attenuation frequency scaling is a technique implemented to model rain fade phenomena affecting a telecommunications link, both statistically and instantaneously. Accurate predictions of rain attenuation are crucial for the proper design of a satellite communication (SatCom) system, as the detrimental impact of hydrometeors present within the troposphere, mainly rain, on radio frequency signals can lead to system failures (commonly known as network outage periods). Such predictions are also essential for the implementation of adaptive fade mitigation techniques, such as uplink power control and variable rate encoding schemes, to increase the link availability.
A scaling approach is particularly suitable in scenarios where the uplink and downlink, which typically share the same channel capacity and therefore operate at different frequencies to avoid co-channel interference, are affected by the same rainfall event along the link. In such a context, it may be advantageous to derive the attenuation due to rain at the higher frequency, called the "target frequency", by properly scaling concurrent attenuation measurements affecting the same link at the lower frequency, called the "reference frequency."
Furthermore, as rain attenuation measurements inherently embed key information about the rain event, such as the spatial distribution of the rain and the raindrop size distribution (DSD), frequency scaling models provide enhanced prediction accuracy compared to statistical prediction models, which are typically fed with local point rainfall data only. As evidence, frequency scaling models applied to experimental SatCom systems operating in geostationary orbit yield statistical errors of 12-15%, in contrast to the 30-40% associated with statistical prediction models.
General definition.
Conceptually, the frequency scaling (FS) of rain attenuation, formula_0, can be expressed as:
formula_1
where the estimation of the rain attenuation at the target frequency formula_2, namely formula_3, is directly related to the corresponding attenuation measured at the reference frequency formula_4 (Hz), namely formula_5 (dB), by means of the "frequency scaling ratio", formula_6, whose definition varies from model to model.
Several FS models have been proposed in the past, and they can be classified as either statistical (S-FS) models or instantaneous (I-FS) models. S-FS models are typically empirically based and relate the attenuation values at formula_4 and formula_2 as a function of the same frequency of exceedance, commonly referred to as the exceedance probability level formula_7%:
formula_8
In this context, formula_6 is typically a constant dependent only on the two operating frequencies. However, defining a fixed formula_6 limits the scaling prediction accuracy, as the value of formula_6 can vary significantly from one rain event to another, and even within the same event.
I-FS models aim at overcoming this limitation by introducing a time-variant formula_9:
formula_10
In addition to enhanced accuracy, I-FS models are fundamental for assessing the dynamics of rain attenuation along an Earth-space link. This is crucial for investigating, for example, fade slope (i.e., the rate of change of rain attenuation with time) and fade duration (i.e., the time for which a given rain attenuation threshold is exceeded) statistics.
Statistical frequency scaling models.
Several long-term S-FS models have been proposed in the past to extrapolate attenuation induced by rain from one frequency to another. One of the most straightforward S-FS approaches is based on the following power law:
formula_11,
where formula_4 and formula_2 represent the lower and upper operating frequencies, respectively. Various values for the power "n" have been proposed, including formula_12, formula_13, and formula_14.
The model recommended by the International Radio Consultative Committee (CCIR), now ITU-R, is defined by a fixed formula_6 based on a formula of the type:
formula_15,
where formula_16 is a function defined as:
formula_17.
Boithias's model is fed by the base attenuation formula_18 (dB) and the operating frequencies:
formula_19,
where
formula_20
and
formula_21.
Similarly, the ITU-R proposes a statistical scaling model valid in the frequency range from 7 to 55 GHz. This model defines a scaling ratio similar to that proposed by Boithias, except for formula_22, which is expressed as:
formula_23.
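For illustration, the scaling ratios of these S-FS models can be evaluated with a short script. The sketch below (Python) is a minimal illustration only: it assumes frequencies expressed in GHz and uses an arbitrary example link, and is not taken from any of the cited recommendations.

```python
def r_power(f_l, f_u, n=1.72):
    """Power-law scaling ratio R_FS = (f_U / f_L)**n."""
    return (f_u / f_l) ** n

def g_ccir(f_ghz):
    """CCIR auxiliary function g(f) = f^1.72 / (1 + 3e-7 * f^3.44)."""
    return f_ghz ** 1.72 / (1.0 + 3e-7 * f_ghz ** 3.44)

def r_ccir(f_l, f_u):
    """CCIR scaling ratio g(f_U) / g(f_L)."""
    return g_ccir(f_u) / g_ccir(f_l)

def phi(f_ghz):
    """Boithias auxiliary function phi(f) = f^2 / (1 + 1e-4 * f^2)."""
    return f_ghz ** 2 / (1.0 + 1e-4 * f_ghz ** 2)

def r_boithias(f_l, f_u, a_l):
    """Boithias scaling ratio, fed by the attenuation A_L (dB) at f_L."""
    h = 1.12e-3 * (phi(f_u) / phi(f_l)) * (phi(f_u) * a_l) ** 0.55
    return (phi(f_u) / phi(f_l)) ** (1.0 - h)

def r_itu(f_l, f_u, a_l):
    """ITU-R statistical ratio: as Boithias, but with (phi_U/phi_L)**0.5 in H."""
    h = 1.12e-3 * (phi(f_u) / phi(f_l)) ** 0.5 * (phi(f_u) * a_l) ** 0.55
    return (phi(f_u) / phi(f_l)) ** (1.0 - h)

# Example: scale 5 dB of rain attenuation from 20 GHz up to 30 GHz.
f_l, f_u, a_l = 20.0, 30.0, 5.0
for name, r in [("power law", r_power(f_l, f_u)),
                ("CCIR", r_ccir(f_l, f_u)),
                ("Boithias", r_boithias(f_l, f_u, a_l)),
                ("ITU-R", r_itu(f_l, f_u, a_l))]:
    print(f"{name:10s}: R_FS = {r:.3f}, scaled attenuation = {r * a_l:.2f} dB")
```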
The advantage associated with statistical frequency scaling (S-FS) models is their relatively minimal input requirements, typically involving only the operating frequency and, in some cases, the rain attenuation evaluated at the reference frequency. However, it has been demonstrated that these models tend to be accurate only for specific frequency pairs, and no model has shown consistent accuracy across a broader frequency range.
Instantaneous frequency scaling models.
I-FS models can be applied at each individual time instant, thereby accommodating the variability of the frequency scaling ratio between different rain events and even within a single rain event. This variability can be accurately accounted for by utilizing the specific rain attenuation, namely formula_24 (dB/km), which is calculated based on the actual rainfall rate measured at the ground station. Assuming that the link geometry and the rainfall affecting it are the same at both frequencies, it is possible to define a scaling ratio using the specific rain attenuation values at the lower and upper frequencies, rather than the attenuation values themselves. This is expressed, for a generic time instant formula_25, as:
formula_26.
A relatively straightforward approach for the estimation of formula_27 is proposed in the Recommendation ITU-R P.838-3, where the specific rain attenuation is modeled from the local rain rate formula_28 (mm/h) using the following power-law relationship:
formula_29
where formula_30 (GHz) is the frequency, formula_31 (rad) is the signal polarization and formula_32 (rad) is the link elevation angle. Values for formula_33 and formula_34 are tabulated in the referenced Recommendation for frequencies in the range from 1 to 1000 GHz.
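As an illustration of the instantaneous approach, the sketch below (Python) applies the power-law relationship and forms the scaling ratio from a measured rain rate. The coefficient values k and α used here are arbitrary placeholders for demonstration only; in practice they must be taken from the tables of the referenced Recommendation for the actual frequencies, polarization and elevation angle.

```python
def specific_attenuation(rain_rate_mm_h, k, alpha):
    """Power-law specific attenuation gamma_R = k * R**alpha (dB/km)."""
    return k * rain_rate_mm_h ** alpha

def instantaneous_scaling_ratio(rain_rate_mm_h, k_l, alpha_l, k_u, alpha_u):
    """R_FS(t0) = gamma_R(t0, f_U) / gamma_R(t0, f_L)."""
    gamma_l = specific_attenuation(rain_rate_mm_h, k_l, alpha_l)
    gamma_u = specific_attenuation(rain_rate_mm_h, k_u, alpha_u)
    return gamma_u / gamma_l

# Placeholder coefficients (NOT the tabulated ITU-R values) for two frequencies.
k_l, alpha_l = 0.075, 1.10   # hypothetical values at the reference frequency
k_u, alpha_u = 0.190, 1.04   # hypothetical values at the target frequency

measured_attenuation_db = 4.0        # attenuation measured at the reference frequency
for rain_rate in (5.0, 20.0, 50.0):  # rain rate in mm/h
    r_fs = instantaneous_scaling_ratio(rain_rate, k_l, alpha_l, k_u, alpha_u)
    print(f"R = {rain_rate:5.1f} mm/h -> R_FS = {r_fs:.2f}, "
          f"scaled attenuation = {r_fs * measured_attenuation_db:.2f} dB")
```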
formula_28 measurements are typically collected using rain gauges, which provide time series of the rainfall rate. Additionally, if a disdrometer is available at the site, it can measure not only the precipitation intensity but also the microphysical properties of hydrometeors, such as the size and falling velocity of the drops. This allows for the computation of the raindrop size distribution (DSD):
formula_35
where formula_36 (mm) represents the width of each drop-size class,
formula_37 is the disdrometer sampling area, formula_38 (s) is the instrument integration time and formula_39 is the number of velocity classes. Consequently, the specific rain attenuation at a generic time instant formula_25 depends on formula_40 as:
formula_41
where formula_42 is the number of diameter classes measured by the disdrometer, while the forward scattering coefficient formula_43 is calculated using the T-matrix method, assuming the axial ratio defined by Beard and Chuang. Although this approach provides high frequency scaling accuracy, DSD data are seldom available to network planners. | [
{
"math_id": 0,
"text": "{A}_{R}"
},
{
"math_id": 1,
"text": "\\tilde{A}_{R,f_{U}} = R_{FS} \\ A_{R,f_{L}} \\quad\\textrm{(dB)}"
},
{
"math_id": 2,
"text": "f_{U}"
},
{
"math_id": 3,
"text": "\\tilde{A}_{R,f_{U}}"
},
{
"math_id": 4,
"text": "f_{L}"
},
{
"math_id": 5,
"text": "{A}_{R,f_{L}}"
},
{
"math_id": 6,
"text": "R_{FS}"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "\\tilde{A}_{R,f_{U}}(p) = R_{FS} \\ A_{R,f_{L}}(p) \\quad\\textrm{(dB)}"
},
{
"math_id": 9,
"text": "R_{FS}(t)"
},
{
"math_id": 10,
"text": "\\tilde{A}_{R,f_{U}}(t) = R_{FS}(t) \\ A_{R,f_{L}}(t) \\quad\\textrm{(dB)}"
},
{
"math_id": 11,
"text": "R_{FS}^{POW} = \\left( \\frac{f_{U}}{f_{L}} \\right)^n \\quad n>0"
},
{
"math_id": 12,
"text": "n=1.8"
},
{
"math_id": 13,
"text": "n=2"
},
{
"math_id": 14,
"text": "n=1.72"
},
{
"math_id": 15,
"text": "R_{FS}^{CCIR} = \\frac{g(f_{U})}{g(f_{L})}"
},
{
"math_id": 16,
"text": "g(f)"
},
{
"math_id": 17,
"text": "g(f)=\\frac{f^{1.72}}{1+3\\cdot 10^{-7} f^{3.44}}"
},
{
"math_id": 18,
"text": "A_{L}"
},
{
"math_id": 19,
"text": "R_{FS}^{B} = \\left( \\frac{\\phi_{U}}{\\phi_{L}} \\right) ^{1-H(\\phi_{L},\\phi_{U},A_{L})}"
},
{
"math_id": 20,
"text": "\\phi(f) = \\frac{f^2}{1+10^{-4}f^2}"
},
{
"math_id": 21,
"text": "H(\\phi_{L},\\phi_{U},A_{L})=1.12 \\cdot 10^{-3}\\left(\\frac{\\phi_{U}}{\\phi_{L}}\\right)(\\phi_{U}A_{L})^{0.55}"
},
{
"math_id": 22,
"text": "H(\\phi_{L},\\phi_{U},A_{L})"
},
{
"math_id": 23,
"text": "H_{ITU}(\\phi_{L},\\phi_{U},A_{L})=1.12 \\cdot 10^{-3}\\left(\\frac{\\phi_{U}}{\\phi_{L}} \\right)^{0.5}(\\phi_{U}A_{L})^{0.55}"
},
{
"math_id": 24,
"text": "\\gamma_{R}"
},
{
"math_id": 25,
"text": "t_{0}"
},
{
"math_id": 26,
"text": "R_{FS}(t_{0}) = \\frac{\\gamma_{R}(t_{0},f_{U})}{\\gamma_{R}(t_{0},f_{L})}"
},
{
"math_id": 27,
"text": "\\gamma_{R}(t_{0},f)"
},
{
"math_id": 28,
"text": "R"
},
{
"math_id": 29,
"text": "\\gamma_{R}^{ITU}(t_{0},f) = k(f,pol,\\theta) R(t_{0})^{\\alpha(f,pol,\\theta)} \\quad \\text{(dB/km)}"
},
{
"math_id": 30,
"text": "f"
},
{
"math_id": 31,
"text": "pol"
},
{
"math_id": 32,
"text": "\\theta"
},
{
"math_id": 33,
"text": "k"
},
{
"math_id": 34,
"text": "\\alpha"
},
{
"math_id": 35,
"text": "N(d_{i}) = \\sum_{j=1}^{N_{V}} \\frac{n(d_{i},v_{j})}{S\\ v(d_{i},v_{j})\\ T \\ \\Delta d_{i}} \\quad \\left(\\frac{\\text{mm}^{-1}}{\\text{m}^{3}}\\right)"
},
{
"math_id": 36,
"text": "\\Delta d_{i}"
},
{
"math_id": 37,
"text": "S \\ (\\text{mm}^{2})"
},
{
"math_id": 38,
"text": "T"
},
{
"math_id": 39,
"text": "N_{V}"
},
{
"math_id": 40,
"text": "N"
},
{
"math_id": 41,
"text": "\\gamma_{R}^{DSD}(t_{0},f)=4.343 \\cdot 10^{3} \\frac{\\lambda^{2}}{\\pi} \\sum_{i=1}^{N_{D}} Re[S_{0}(d_{i},f)]\\ N(d_{i})\\ \\Delta d_{i} \\quad \\text{(dB/km)}"
},
{
"math_id": 42,
"text": "N_{D}"
},
{
"math_id": 43,
"text": "S_{0}"
}
]
| https://en.wikipedia.org/wiki?curid=77310683 |
77310910 | Normalized solutions (nonlinear Schrödinger equation) | In mathematics, a normalized solution to an ordinary or partial differential equation is a solution with prescribed norm, that is, a solution which satisfies a condition like formula_0 In this article, the normalized solution is introduced by using the nonlinear Schrödinger equation. The nonlinear Schrödinger equation (NLSE) is a fundamental equation in quantum mechanics and various other fields of physics, describing the evolution of complex wave functions. In quantum physics, normalization means that the total probability of finding a quantum particle anywhere in the universe is unity.
Definition and variational framework.
In order to illustrate this concept, consider the following nonlinear Schrödinger equation with prescribed norm:
formula_1
where formula_2 is a Laplacian operator, formula_3 is a Lagrange multiplier and formula_4 is a nonlinearity. If we want to find a normalized solution to the equation, we need to consider the following functional: Let formula_5 be defined by
formula_6
with the constraint
formula_7
where formula_8 is the Hilbert space and formula_9 is the primitive of formula_10.
A common method of finding normalized solutions is through variational methods, i.e., finding the maxima and minima of the corresponding functional with the prescribed norm. In this way, one can find a weak solution of the equation; if it also satisfies the constraint, it is a normalized solution.
A simple example on Euclidean space.
On a Euclidean space formula_11, we define a function formula_12
formula_13 with the constraint formula_14.
By direct calculation, it is not difficult to conclude that the constrained maximum is formula_15, with solutions formula_16 and formula_17, while the constrained minimum is formula_18, with solutions formula_19 and formula_20.
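This constrained optimization can also be checked numerically. The following minimal Python sketch (not part of the original example) parametrizes the constraint circle and evaluates the function along it:

```python
import numpy as np

# Parametrize the constraint x**2 + y**2 = 1 by the angle t.
t = np.linspace(0.0, 2.0 * np.pi, 100001)
x, y = np.cos(t), np.sin(t)
f = (x + y) ** 2

i_max, i_min = np.argmax(f), np.argmin(f)
print("max f =", f[i_max], "at (x, y) =", (x[i_max], y[i_max]))
print("min f =", f[i_min], "at (x, y) =", (x[i_min], y[i_min]))
# The maximum 2 is attained at (±sqrt(2)/2, ±sqrt(2)/2),
# and the minimum 0 at (±sqrt(2)/2, ∓sqrt(2)/2).
```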
History.
The exploration of normalized solutions for the nonlinear Schrödinger equation can be traced back to the study of standing wave solutions with prescribed formula_21-norm. Jürgen Moser first introduced the concept of normalized solutions in the study of regularity properties of solutions to elliptic partial differential equations (elliptic PDEs). Specifically, he used normalized sequences of functions to prove regularity results for solutions of elliptic equations, which was a significant contribution to the field. Inequalities developed by Emilio Gagliardo and Louis Nirenberg played a crucial role in the study of PDE solutions in formula_22 spaces. These inequalities provided important tools and background for defining and understanding normalized solutions.
For the variational problem, early foundational work in this area includes the concentration-compactness principle introduced by Pierre-Louis Lions in 1984, which provided essential techniques for solving these problems.
For variational problems with prescribed mass, several methods commonly used to deal with unconstrained variational problems are no longer available. At the same time, a new critical exponent appeared, the formula_23-critical exponent. From the Gagliardo-Nirenberg inequality, one finds that formula_23-subcritical, critical, and supercritical nonlinearities lead to different geometries for the functional. In the case where the functional is bounded from below, i.e., the formula_23-subcritical case, the earliest result on this problem was obtained by Charles-Alexander Stuart using bifurcation methods to demonstrate the existence of solutions. Later, Thierry Cazenave and Pierre-Louis Lions obtained existence results using minimization methods. Then, Masataka Shibata considered Schrödinger equations with a general nonlinear term.
In the case where the functional is not bounded from below, i.e., the formula_23-supercritical case, some new difficulties arise. Firstly, since formula_24 is unknown, it is impossible to construct the corresponding Nehari manifold. Secondly, it is not easy to obtain the boundedness of the Palais-Smale sequence. Furthermore, verifying the compactness of the Palais-Smale sequence is challenging because the embedding formula_25 is not compact. In 1997, Louis Jeanjean introduced the following transform:
formula_26
Thus, one has the following functional:
formula_27
Then,
formula_28
which corresponds exactly to the Pokhozhaev identity of the equation. Jeanjean used this additional condition to ensure the boundedness of the Palais-Smale sequence, thereby overcoming the difficulties mentioned earlier. As the first method to address the issue of normalized solutions for functionals unbounded from below, Jeanjean's approach has become a common method for handling such problems and has been imitated and developed by subsequent researchers.
In the following decades, researchers expanded on these foundational results. Thomas Bartsch and Sébastien de Valeriola investigated the existence of multiple normalized solutions to nonlinear Schrödinger equations, focusing on solutions that satisfy a prescribed formula_29 norm constraint. Recent advancements include the study of normalized ground states for NLS equations with combined nonlinearities by Nicola Soave in 2020, who examined both subcritical and critical cases. This research highlighted the intricate balance between different types of nonlinearities and their impact on the existence and multiplicity of solutions.
On a bounded domain, the situation is very different. Define formula_30, where formula_31. Referring to Pokhozhaev's identity,
formula_32
The boundary term makes it impossible to apply Jeanjean's method. This has led many scholars to explore the problem of normalized solutions on bounded domains in recent years. In addition, there have been a number of interesting recent results about normalized solutions of Schrödinger systems, the Choquard equation, and the Dirac equation.
Some extended concepts.
Mass critical, mass subcritical, mass supercritical.
Consider a homogeneous nonlinear term, that is, define formula_30 where formula_31. Recall the Gagliardo-Nirenberg inequality: define
formula_33
then there exists a constant formula_34 such that for any formula_35, the following inequality holds:
formula_36
Thus, one arrives at the concept of the mass critical exponent,
formula_37
From this, one obtains the notions of mass-subcritical and mass-supercritical nonlinearities. The exponent also determines whether the functional is bounded from below.
Palais-Smale sequence.
Let formula_38 be a Banach space and formula_39 be a functional. A sequence formula_40 is called a Palais-Smale sequence for formula_41 at the level formula_42 if it satisfies the following conditions:
1. Energy Bound: formula_43.
2. Gradient Condition: formula_44 as formula_45 for some formula_46.
Here, formula_47 denotes the Fréchet derivative of formula_41, and formula_48 denotes the duality pairing between formula_38 and its dual space. The Palais-Smale sequence is named after Richard Palais and Stephen Smale.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
| [
{
"math_id": 0,
"text": "\\int_{\\mathbb{R}^N} |u(x)|^2 \\, dx = 1."
},
{
"math_id": 1,
"text": " -\\Delta u + \\lambda u = f(u), \\quad \\int_{\\mathbb{R}^N} |u|^2 \\, dx = 1, "
},
{
"math_id": 2,
"text": " \\Delta"
},
{
"math_id": 3,
"text": "N\\ge1, \\lambda\\in \\mathbb{R} "
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "I: H^{1}_{0}(\\mathbb{R}^{N})\\rightarrow \\mathbb{R}"
},
{
"math_id": 6,
"text": "I(u)=\\frac{1}{2}\\int_{\\mathbb{R}^{N}}|\\nabla u|^{2}dx -\\int_{\\mathbb{R}^{N}}F(u)dx"
},
{
"math_id": 7,
"text": "\\mathcal{M}=\\{u\\in H^{1}_{0}(\\mathbb{R}^{N}): \\int_{\\mathbb{R}^{N}} u^{2}=1\\}, \\ \\ \\ \\ "
},
{
"math_id": 8,
"text": " H^{1}_{0}(\\mathbb{R}^{N}) "
},
{
"math_id": 9,
"text": " F(s) "
},
{
"math_id": 10,
"text": " f(s) "
},
{
"math_id": 11,
"text": "\\mathbb{R}^3"
},
{
"math_id": 12,
"text": "f:\\mathbb{R}^2 \\rightarrow\\mathbb{R}: "
},
{
"math_id": 13,
"text": "f(x,y) = (x + y)^2"
},
{
"math_id": 14,
"text": " x^2 +y^2 =1"
},
{
"math_id": 15,
"text": "f=2"
},
{
"math_id": 16,
"text": " (x,y)= (\\frac{\\sqrt{2}}{2},\\frac{\\sqrt{2}}{2})"
},
{
"math_id": 17,
"text": " (x,y)= (\\frac{-\\sqrt{2}}{2},\\frac{-\\sqrt{2}}{2})"
},
{
"math_id": 18,
"text": "f=0"
},
{
"math_id": 19,
"text": " (x,y)= (\\frac{-\\sqrt{2}}{2},\\frac{\\sqrt{2}}{2})"
},
{
"math_id": 20,
"text": " (x,y)= (\\frac{\\sqrt{2}}{2},\\frac{-\\sqrt{2}}{2})"
},
{
"math_id": 21,
"text": "L^2"
},
{
"math_id": 22,
"text": " L^p "
},
{
"math_id": 23,
"text": " L^2"
},
{
"math_id": 24,
"text": "\\lambda"
},
{
"math_id": 25,
"text": "H^1(\\mathbb{R}^N) \\hookrightarrow L^2(\\mathbb{R}^N) "
},
{
"math_id": 26,
"text": " (s \\star u)(x) := e^{\\frac{Ns}{2}} u(e^s x)."
},
{
"math_id": 27,
"text": " \\tilde{I}(u, s) := I(s \\star u) = e^{2s} \\int_{\\mathbb{R}^N} |\\nabla u(x)|^2 dx - \\frac{1}{e^{sN}} \\int_{\\mathbb{R}^N} F(e^{\\frac{Ns}{2}} u(x)) dx."
},
{
"math_id": 28,
"text": "P(u) := \\partial_s \\tilde{I}(u, s)|_{s=0} = \\int_{\\mathbb{R}^N} |\\nabla u|^2 - N \\int_{\\mathbb{R}^N} \\left( \\frac{1}{2} f(u)u - F(u) \\right) "
},
{
"math_id": 29,
"text": " L^2 "
},
{
"math_id": 30,
"text": "f(s)=|s|^{p-2}s "
},
{
"math_id": 31,
"text": " p \\in (2, 2^*) "
},
{
"math_id": 32,
"text": " \\frac{2 - N}{2} \\int_{\\Omega} |\\nabla u|^2 \\, dx - \\frac{\\lambda N}{2} \\int_{\\Omega} u^2 \\, dx + \\frac{N}{p} \\int_{\\Omega} |u|^{p} \\, dx - \\frac{1}{2} \\int_{\\partial \\Omega} |\\frac{\\partial u}{\\partial \\nu}|^2 x \\cdot \\nu \\, d\\sigma = 0.\n"
},
{
"math_id": 33,
"text": " \\gamma_p := \\frac{N(p - 2)}{2p}, "
},
{
"math_id": 34,
"text": " C_{N,p} "
},
{
"math_id": 35,
"text": " u \\in H^1(\\mathbb{R}^N) "
},
{
"math_id": 36,
"text": " |u|_p \\leq C_{N,p} |\\nabla u|^{\\gamma_p}_2 |u|^{1-\\gamma_p}_2."
},
{
"math_id": 37,
"text": " p := 2+ \\frac{4}{N}. "
},
{
"math_id": 38,
"text": " X "
},
{
"math_id": 39,
"text": " I: X \\to \\mathbb{R} "
},
{
"math_id": 40,
"text": " (u_n)_n \\subset X "
},
{
"math_id": 41,
"text": " I "
},
{
"math_id": 42,
"text": " c \\in \\mathbb{R} "
},
{
"math_id": 43,
"text": "\\sup_n I(u_n) < \\infty "
},
{
"math_id": 44,
"text": " \\langle I'(u_n), u_n - u \\rangle \\to 0 "
},
{
"math_id": 45,
"text": " n \\to \\infty "
},
{
"math_id": 46,
"text": " u \\in X "
},
{
"math_id": 47,
"text": " I' "
},
{
"math_id": 48,
"text": " \\langle \\cdot, \\cdot \\rangle"
}
]
| https://en.wikipedia.org/wiki?curid=77310910 |
77311267 | Rail vehicle resistance | Total force necessary to maintain a rail vehicle in motion
The rail vehicle resistance (or train resistance or simply resistance) is the total force necessary to maintain a rail vehicle in motion. This force depends on a number of variables and is of crucial importance for the energy efficiency of the vehicle, as the power the locomotive must deliver at a given speed is proportional to it. For the speed of the vehicle to remain the same, the locomotive must deliver a tractive force that balances the resistance; otherwise the speed of the vehicle will change until this condition is met.
Davis equation.
A number of experimental measurements of the train resistance have shown that this force can be expressed as a quadratic equation with respect to speed as shown below:
formula_0
where formula_1 is the resistance, formula_2 is the speed of the rail vehicle and formula_3, formula_4, and formula_5 are experimentally determined coefficients. The most well-known of these relations was proposed by Davis W. J. Jr. and is named after him. The Davis equation contains mechanical and aerodynamic contributions to resistance. The first formulation assumes that there is no wind; however, formulations that do not make this assumption exist:
formula_6,
where formula_7 is the speed of the air with respect to the vehicle while formula_8 and formula_9 are experimental coefficients that separately account for mechanical and aerodynamic (viscous) phenomena respectively.
The coefficients for these equations are determined experimentally, either by measuring the tractive effort of the locomotive at different constant speeds or with coasting experiments (the rail vehicle is set in motion at a certain speed and then the traction is disengaged, causing the vehicle to slow down and stop due to resistance).
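As a simple illustration of this fitting procedure, the sketch below (Python) estimates the Davis coefficients by a least-squares fit of a quadratic in speed. The measurement data are invented for demonstration purposes only and do not refer to any real vehicle.

```python
import numpy as np

# Hypothetical constant-speed measurements: speeds (m/s) and resistance (N).
speeds = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
resistance = np.array([12000., 13500., 15800., 19000., 23000., 27800.])

# Fit R = A + B*V + C*V**2; np.polyfit returns coefficients [C, B, A].
C, B, A = np.polyfit(speeds, resistance, deg=2)
print(f"A = {A:.1f} N, B = {B:.2f} Ns/m, C = {C:.3f} Ns^2/m^2")

def davis_resistance(v):
    """Davis equation: total resistance (N) at speed v (m/s)."""
    return A + B * v + C * v**2

print("Predicted resistance at 22 m/s:", round(davis_resistance(22.0)), "N")
```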
Most methods for determining these coefficients do not consider the effect of lateral forces on the vehicle. Lateral forces can be caused by the centripetal acceleration of the vehicle following the curving of the tracks, by lateral tilt of the rails, or by aerodynamic forces if crosswind is present. These forces affect the resistance by pushing the vehicle laterally against the rail, causing sliding friction between the wheels and the rails. In case of crosswind, the resistance is also affected by the change in the aerodynamic contribution as a consequence of changes in the flow.
Physical interpretation of the Davis equation.
Speed-independent term.
The first term in the Davis equation (formula_3) accounts for the contributions to the resistance that are independent of speed. Track gradient and acceleration are two of the contributing phenomena to this term. These are not dissipative processes, and thus the additional work required from the locomotive to overcome the increased resistance is converted to mechanical energy (potential energy for the gradient and kinetic energy for the acceleration). The consequence of this is that these phenomena may, in different conditions, result in positive or negative contributions to the resistance. For example, a train decelerating on horizontal tracks will experience lower resistance than if it were travelling at constant speed. Other contributions to this term are dissipative, for example bearing friction and rolling friction due to the local deformation of the rail at the point of contact with the wheels; these latter quantities can never reduce the train resistance.
The term formula_3 is constant with respect to vehicle speed, but various empirical relations have been proposed to predict its value. It is the general consensus that the term is directly related to the mass of the vehicle, with some authors also observing an effect of the number of axles as well as the axle loads.
Speed-linear term.
The coefficient in the second term of the Davis equation (formula_4) relates to the terms linearly dependent on speed and is sometimes omitted because it is negligible compared to the other terms. This term accounts for mass-related, speed-dependent, mechanical contributions to the resistance and for the momentum of the intake air for cooling and HVAC.
Similarly to formula_3, empirical formulas have been proposed to evaluate the term formula_4, and again a mass dependence is present in all major methods for determining the rail vehicle resistance coefficients, with some also observing a dependence on the number of trailers and locomotives or on the train length.
Speed-quadratic term.
The coefficient in the third term of the Davis equation (formula_5) accounts for the aerodynamic drag acting on the vehicle, it is explained by the fact that as the train moves through the air, it sets some of the air surrounding it in motion (this is called slipstream). To maintain constant speed, the continuous transfer of momentum to the air needs to be compensated by an additional tractive force by the locomotive, this is accounted for by this term. As train speed increases, the aerodynamic drag becomes the dominant contribution to the resistance, for high-speed trains above 250 "km/h" and for freight trains above 115 "km/h" it accounts for 75-80% of the resistance.
This term is highly dependent on the geometry of the vehicle, and therefore it will be much lower for the streamlined high-speed passenger train than for freight trains, which behave like bluff bodies and produce much larger and more turbulent slipstreams at the same vehicle speed, leading to increased momentum transfer to the surrounding air.
Few general considerations can be made about the aerodynamic contribution to rail vehicle resistance because the aerodynamic drag heavily depends on both flow conditions and the geometry of the vehicle. However, the drag is higher in crosswind conditions than in still air, and for small angles the relation between drag coefficient and yaw angle is approximately linear.
Empirical relations for the Davis equation coefficients.
Over the years, empirical relations have been proposed for estimating the values of the coefficients of the Davis equation; these, however, rely on further coefficients that must be determined experimentally. Below are the relations proposed by Armstrong and Swift:
formula_10
formula_11
formula_12
where formula_13 and formula_14 are respectively the total mass of the trailer cars and the total mass of the locomotives expressed in tons, formula_15, formula_16, formula_17 and formula_18 are respectively the number of trailer cars, the number of locomotives, the number of bogies and the number of pantographs, formula_19 is the total power expressed in kW, formula_20 and formula_21 are respectively the head/tail drag coefficients and the bogie drag coefficients, formula_22 is the frontal cross-sectional area in square meters, formula_23 is the perimeter, formula_24 is the length and formula_25 is the intervehicle gap (all lengths expressed in meters). The coefficients formula_26, formula_27 and formula_28 are expressed in "N", "Ns/m" and "Ns2/m2." | [
{
"math_id": 0,
"text": "R=A+BV+CV^2"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": "C"
},
{
"math_id": 6,
"text": "R=A+B_1V+B_2v+Cv^2"
},
{
"math_id": 7,
"text": "v"
},
{
"math_id": 8,
"text": "B_1"
},
{
"math_id": 9,
"text": "B_2"
},
{
"math_id": 10,
"text": "A=6.4M_t+8.0M_l"
},
{
"math_id": 11,
"text": "B=0.18(M_t+M_l)+1N_t+0.005N_lP"
},
{
"math_id": 12,
"text": "C=0.6125C_D(head,tail)A_f+0.00197pL+0.0021pG_i(N_t+N_l -1)+0.2061C_D(bogies)N_b+0.2566N_p"
},
{
"math_id": 13,
"text": "M_t"
},
{
"math_id": 14,
"text": "M_l"
},
{
"math_id": 15,
"text": "N_t"
},
{
"math_id": 16,
"text": "N_l"
},
{
"math_id": 17,
"text": "N_b"
},
{
"math_id": 18,
"text": "N_p"
},
{
"math_id": 19,
"text": "P\n"
},
{
"math_id": 20,
"text": "C_D(head,tail)"
},
{
"math_id": 21,
"text": "C_D(bogies)"
},
{
"math_id": 22,
"text": "A_f"
},
{
"math_id": 23,
"text": "p"
},
{
"math_id": 24,
"text": "L"
},
{
"math_id": 25,
"text": "G_i"
},
{
"math_id": 26,
"text": "A"
},
{
"math_id": 27,
"text": "B"
},
{
"math_id": 28,
"text": "C"
}
]
| https://en.wikipedia.org/wiki?curid=77311267 |
77311534 | Momentum mapping format | Technique in the Material Point Method (MPM)
Momentum mapping format is a key technique in the Material Point Method (MPM) for transferring physical quantities such as momentum, mass, and stress between a material point and a background grid.
The Material Point Method (MPM) is a numerical technique using a mixed Eulerian-Lagrangian description. It discretises the computational domain with material points and employs a background grid to solve the momentum equations. It was proposed by Sulsky et al. in 1994.
MPM has since been expanded to various fields such as computational solid dynamics. Currently, MPM features several momentum mapping schemes, with the four main ones being PIC (Particle-in-cell), FLIP (Fluid-Implicit Particle), the hybrid format, and APIC (Affine Particle-in-Cell). Understanding these schemes in depth is crucial for the further development of MPM.
Background.
MPM represents materials as collections of material points (or particles). Unlike other particle methods such as SPH(Smoothed-particle hydrodynamics) and DEM (Discrete element method), MPM also uses a background grid to solve the momentum equations arising from particle interactions. MPM can be categorized as a mixed particle/grid method or a mixed Lagrangian-Eulerian method. By combining the strengths of both frameworks, MPM aims to be the most effective numerical solver for large deformation problems. It has been further developed and applied to various challenging problems such as high-speed impact (Huang et al., 2011), landslides (Fern et al., 2019), saturated porous media (He et al., 2024), and fluid-structure interaction (Li et al., 2022).
The Material Point Method (MPM) community has developed several momentum mapping schemes, among which PIC, FLIP, the hybrid scheme, and APIC are the most common. The FLIP scheme is widely used for dynamic problems due to its energy conservation properties, although it can introduce numerical noise and instability (Bardenhagen, 2002), potentially leading to computational failure. Conversely, the PIC scheme is known for numerical stability and is advantageous for static problems, but it suffers from significant numerical dissipation (Brackbill et al., 1988), which is unacceptable for strongly dynamic responses. Nairn et al. combined FLIP and PIC linearly (Nairn, 2015) to create a hybrid scheme, adjusting the proportion of each component based on empirical rather than theoretical analysis. Hammerquist and Nairn (2017) introduced an improved scheme called XPIC-m (for eXtended Particle-In-Cell of order m), which addresses the excessive filtering and numerical diffusion of PIC while suppressing the noise caused by the nonlinear space in FLIP used in MPM. XPIC-1 (eXtended Particle-In-Cell of order 1) is equivalent to the standard PIC method. Jiang et al. (2017, 2015) introduced the Affine Particle In Cell (APIC) method, where particle velocities are represented locally affine, preserving linear and angular momentum during the transfer process. This significantly reduces numerical dissipation and avoids the velocity noise and instability seen in FLIP. Fu et al. (2017) introduced generalized local functions into the APIC method, proposing the Polynomial Particle In Cell (PolyPIC) method. PolyPIC views G2P (Grid-to-Particle) transfer as a projection of the particle's local grid velocity, preserving linear and angular momentum, thereby improving energy and vorticity retention compared to the original APIC. Additionally, PolyPIC retains the filtering properties of APIC and PIC, providing robustness against noise.
Affine particle in cell method.
In the PIC scheme, particle velocities during the Grid-to-Particle (G2P) substep are directly overwritten by interpolating the updated nodal velocities back to the particles:
formula_0
In the FLIP scheme, the material point velocities are updated by interpolating the velocity increments of the grid nodes over the current time step:
formula_1
formula_2
The hybrid scheme's momentum mapping can be mathematically represented as:
formula_3
where formula_4 and formula_5 are the particle velocities obtained with the FLIP and PIC mappings, respectively, and formula_6 is the blending coefficient: formula_7 recovers the pure FLIP scheme, while formula_8 recovers the pure PIC scheme.
Based on the idea of "providing the local velocity field around the material point to the background grid by transferring the material point's velocity gradient," Jiang et al. (2015) proposed the APIC method. In this method, the particle velocity is locally affine, mathematically expressed as:
formula_9
where formula_10 is the particle velocity and formula_11 is the affine velocity matrix carried by the particle (with entries formula_12, formula_13, and so on); setting formula_14 reduces APIC to the standard PIC mapping.
Computational implementation.
The PIC (Particle-In-Cell), FLIP (Fluid-Implicit Particle), hybrid, and APIC (Affine Particle-In-Cell) schemes differ mainly in how they map momentum between material points and the grid and in the corresponding time integration. The evolution of momentum on the grid is identical under all four schemes. Despite the differences among these four momentum mapping formats, their common points are still dominant. In the P2G (Particle-to-Grid) process, the momentum mapping in the PIC, FLIP, and hybrid schemes is the same. The material point positions are updated in the same manner across all four schemes. During the G2P stage, PIC transfers the updated momentum on grid nodes directly back to the material points, FLIP uses incremental mapping, and the hybrid scheme linearly combines FLIP and PIC using a coefficient. APIC maintains an additional affine matrix on top of the PIC mapping.
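The following one-dimensional Python sketch illustrates the G2P velocity updates of the four schemes for a single particle. The shape-function weights, nodal velocities and time step are placeholder values chosen only to make the snippet self-contained; it is a simplified illustration, not code from any particular MPM implementation.

```python
import numpy as np

# Placeholder data for one particle and its neighbouring grid nodes (1-D).
S = np.array([0.25, 0.5, 0.25])        # shape-function weights S_Ip
v_grid_old = np.array([0.9, 1.0, 1.1]) # nodal velocities at t^n
a_grid = np.array([0.2, 0.1, 0.0])     # nodal accelerations at t^n
dt = 1.0e-3
v_grid_new = v_grid_old + a_grid * dt  # updated nodal velocities at t^{n+1}

v_p_old = 1.0                          # particle velocity at t^n

# PIC: overwrite the particle velocity with the interpolated grid velocity.
v_pic = np.dot(S, v_grid_new)

# FLIP: add the interpolated grid-velocity increment to the old particle velocity.
v_flip = v_p_old + np.dot(S, v_grid_new - v_grid_old)

# Hybrid: linear blend of FLIP and PIC with coefficient alpha_FLIP.
alpha_flip = 0.99
v_hybrid = (1.0 - alpha_flip) * v_pic + alpha_flip * v_flip

# APIC: same velocity transfer as PIC, plus an affine coefficient C_p kept on
# the particle, reconstructed here from nodal velocities and node positions.
x_nodes = np.array([-0.5, 0.0, 0.5])   # node positions relative to the particle
D_p = np.dot(S, x_nodes**2)            # 1-D inertia-like term of APIC
C_p = np.dot(S, v_grid_new * x_nodes) / D_p

print(v_pic, v_flip, v_hybrid, C_p)
```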
Numerical tests.
Numerical tests on ring collision highlight the performance of different momentum mapping schemes in dynamic problems. The mean stress distribution at representative times and the evolution of the total energy are the key quantities of interest. Because the PIC mapping scheme cancels out velocities in opposite directions, significant energy loss occurs, preventing effective conversion of kinetic energy into strain energy. GIMP_FLIP (Generalized Interpolation Material Point - Fluid Implicit Particle) shows notable numerical noise and instability, with severe oscillations in mean stress, leading to numerical fracture. GIMP_FLIP0.99 exhibits improved stability but still carries the risk of numerical fracture. Tests indicate that increasing the PIC component enhances numerical stability, with the stress distribution becoming more uniform and regular and the probability of numerical fracture decreasing; however, the energy loss also becomes more pronounced. GIMP_APIC (Generalized Interpolation Material Point - Affine Particle-In-Cell) demonstrates the best performance, providing a stable and smooth stress distribution while maintaining excellent energy conservation characteristics.
Related research and developments.
Recently, Qu et al. proposed PowerPIC (Qu et al., 2022), a more stable and accurate mapping scheme based on optimization, which also maintains volume and uniform particle distribution characteristics. | [
{
"math_id": 0,
"text": " \\mathbf{V}_{p}^{n+1} =\\sum_{I}^{} S_{Ip}^{n} \\mathbf{V}_{I}^{n+1} "
},
{
"math_id": 1,
"text": " \\mathbf{a}_{I}^{n} =\\frac{f_{I}^{int,n}+f_{I}^{ext,n} }{m_{I}^{n}} "
},
{
"math_id": 2,
"text": " \\mathbf{V}_{p}^{n+1} =\\mathbf{V}_{p}^{n}+ \\sum_{I}^{} S_{Ip}^{n} \\mathbf{a}_{I}^{n} \\Delta t"
},
{
"math_id": 3,
"text": " \\mathbf{V}_{p}^{n+1} =(1-\\alpha _{FLIP}^{} ) \\mathbf{V}_{p}^{PIC,n+1} + \\alpha_{FLIP}^{} ) \\mathbf{V}_{p}^{FLIP,n+1} "
},
{
"math_id": 4,
"text": "\\mathbf{V}_{p}^{FLIP,n+1} "
},
{
"math_id": 5,
"text": "\\mathbf{V}_{p}^{PIC,n+1}"
},
{
"math_id": 6,
"text": " \\alpha _{FLIP}^{} "
},
{
"math_id": 7,
"text": " \\alpha _{FLIP}^{} =1"
},
{
"math_id": 8,
"text": " \\alpha _{FLIP}^{} =0"
},
{
"math_id": 9,
"text": " \\mathbf{V}_{p}^{affine} = \\mathbf{V}_{p}^{} + \\mathbf{C}_{p}(\\mathbf{x}_{}^{} - \\mathbf{x}_{p}^{}) "
},
{
"math_id": 10,
"text": " \\mathbf{V}_{p}"
},
{
"math_id": 11,
"text": " \\mathbf{C}_{p}^{} =\\begin{bmatrix} \n C_{00}^{}, C_{01}^{} \\\\ \n C_{10}^{}, C_{11}^{} \\\\\\end{bmatrix} \n"
},
{
"math_id": 12,
"text": "C_{00}^{}"
},
{
"math_id": 13,
"text": "C_{01}^{}"
},
{
"math_id": 14,
"text": " \\mathbf{C}_{p}^{} =0"
}
]
| https://en.wikipedia.org/wiki?curid=77311534 |
773153 | Illuminance | Luminous flux incident on a surface per area
In photometry, illuminance is the total luminous flux incident on a surface, per unit area. It is a measure of how much the incident light illuminates the surface, wavelength-weighted by the luminosity function to correlate with human brightness perception. Similarly, luminous emittance is the luminous flux per unit area emitted from a surface. Luminous emittance is also known as luminous exitance.
In SI units illuminance is measured in lux (lx), or equivalently in lumens per square metre (lm·m−2). Luminous exitance is measured in lm·m−2 only, not lux. In the CGS system, the unit of illuminance is the phot, which is equal to 10,000 lux. The foot-candle is a non-metric unit of illuminance that is used in photography.
Illuminance was formerly often called brightness, but this leads to confusion with other uses of the word, such as to mean luminance. "Brightness" should never be used for quantitative description, but only for nonquantitative references to physiological sensations and perceptions of light.
The human eye is capable of seeing somewhat more than a 2 trillion-fold range. The presence of white objects is somewhat discernible under starlight, at 5×10⁻⁵ lux (50 μlx), while at the bright end, it is possible to read large text at 10⁸ lux (100 Mlx), or about 1,000 times that of direct sunlight, although this can be very uncomfortable and cause long-lasting afterimages.
Astronomy.
In astronomy, the illuminance stars cast on the Earth's atmosphere is used as a measure of their brightness. The usual units are apparent magnitudes in the visible band. V-magnitudes can be converted to lux using the formula
formula_0
where "E"v is the illuminance in lux, and "m"v is the apparent magnitude. The reverse conversion is
formula_1
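These conversions are straightforward to implement; the following minimal Python sketch applies them to the full Moon (apparent magnitude roughly -12.7) as an example.

```python
import math

def magnitude_to_lux(m_v):
    """Illuminance (lux) produced by a source of apparent V magnitude m_v."""
    return 10.0 ** ((-14.18 - m_v) / 2.5)

def lux_to_magnitude(e_v):
    """Apparent V magnitude corresponding to an illuminance e_v (lux)."""
    return -14.18 - 2.5 * math.log10(e_v)

e = magnitude_to_lux(-12.7)
print(f"{e:.3f} lx")                      # about 0.26 lx
print(f"{lux_to_magnitude(e):.1f} mag")   # recovers -12.7
```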
Relation to luminance.
The luminance of a reflecting surface is related to the illuminance it receives:
formula_2
where the integral covers all the directions of emission ΩΣ, "L"v is the luminance of the surface, θΣ is the angle between the considered direction and the surface normal, "M"v is the luminous exitance, "E"v is the illuminance received by the surface, and "R" is its reflectance.
In the case of a perfectly diffuse reflector (also called a Lambertian reflector), the luminance is isotropic, per Lambert's cosine law. Then the relationship is simply
formula_3
References.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_\\mathrm{v} = 10^{(-14.18-m_\\mathrm{v})/2.5},"
},
{
"math_id": 1,
"text": "m_\\mathrm{v} = -14.18 - 2.5 \\log(E_\\mathrm{v})."
},
{
"math_id": 2,
"text": "\\int_{\\Omega_\\Sigma} L_\\mathrm{v} \\mathrm{d}\\Omega_\\Sigma \\cos \\theta_\\Sigma = M_\\mathrm{v} = E_\\mathrm{v} R"
},
{
"math_id": 3,
"text": "L_\\mathrm{v} = \\frac{E_\\mathrm{v} R}{\\pi}"
}
]
| https://en.wikipedia.org/wiki?curid=773153 |
77317100 | Incompatibility of quantum measurements | Crucial concept of quantum information
Incompatibility of quantum measurements is a crucial concept of quantum information, addressing whether two or more quantum measurements can be performed on a quantum system simultaneously. It highlights the unique and non-classical behavior of quantum systems. This concept is fundamental to the nature of quantum mechanics and has practical applications in various quantum information processing tasks like quantum key distribution and quantum metrology.
History.
Early ages.
The concept of incompatibility of quantum measurements originated from Heisenberg's uncertainty principle, which states that certain pairs of physical quantities, like position and momentum, cannot be simultaneously measured with arbitrary precision. This principle laid the groundwork for understanding the limitations of measurements in quantum mechanics.
Mid-20th century.
In the mid-20th century, researchers began to formalize the idea of compatibility of quantum measurements, and to explore conditions under which a set of measurements can be performed together on a single quantum system without disturbing each other. This was crucial for understanding how quantum systems behave under simultaneous observations.
Late-20th century.
The study of incompatibility of quantum measurements gained significant attention with the rise of quantum information theory. Researchers realized that measurement incompatibility is not just a limitation but also a resource for various quantum information processing tasks. For example, it plays a crucial role in quantum cryptography, where the security of quantum key distribution protocols relies on the incompatibility of certain quantum measurements.
21st century.
Modern research focuses on quantifying measurement incompatibility using various measures. Quite a number of approaches involve robustness-based measures, which assess how much noise can be added to a set of quantum measurements before they become compatible.
Definition.
In quantum mechanics, two measurements, formula_0 and formula_1, are called compatible if and only if there exists a third measurement formula_2 of which formula_3 can be obtained as margins, or equivalently, from which formula_3 can be simulated via classical post-processings. More precisely, let formula_4 and formula_5 be two positive operator-valued measures (POVMs), where formula_6 are two measurable spaces, and
formula_7 is the set of bounded linear operators on a Hilbert space formula_8.
Then formula_9 are called compatible (jointly measurable) if and only if there is a POVM
formula_10 such that
formula_11
formula_12
for all formula_13 and formula_14. Otherwise, formula_9 are called incompatible. The definition of compatibility of a finite number of POVMs is similar.
An example.
For a two-outcome POVM formula_15 on a qubit it is possible to write
formula_16
where formula_17 with formula_18 are Pauli matrices.
Let formula_19 and formula_20 be two such POVMs with formula_21, formula_22, and formula_23, where formula_24. Then for formula_25
one can verify that the following POVM
formula_26
formula_27
formula_28
formula_29
is a compatibilizer (joint measurement) of formula_19 and formula_20, despite the fact that formula_19 and formula_20 do not commute for formula_30.
As for formula_31, it was shown that formula_32 are incompatible.
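The joint measurement in this example can be verified numerically. The Python sketch below constructs the four effects at μ = 1/√2 and checks that they are positive semidefinite and reproduce the two original POVMs as margins; it is an illustration of the example above, not a general compatibility test.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

mu = 1.0 / np.sqrt(2.0)
M1 = {+1: 0.5 * (I2 + mu * sx), -1: 0.5 * (I2 - mu * sx)}   # M_{±|1}
M2 = {+1: 0.5 * (I2 + mu * sz), -1: 0.5 * (I2 - mu * sz)}   # M_{±|2}

# Candidate joint POVM G_{a,b} = (I + mu*(a*sx + b*sz)) / 4.
G = {(a, b): 0.25 * (I2 + mu * (a * sx + b * sz))
     for a in (+1, -1) for b in (+1, -1)}

# Positivity of every effect (all eigenvalues non-negative up to tolerance).
assert all(np.linalg.eigvalsh(g).min() > -1e-12 for g in G.values())

# Margins reproduce the two original POVMs.
for a in (+1, -1):
    assert np.allclose(G[(a, +1)] + G[(a, -1)], M1[a])
for b in (+1, -1):
    assert np.allclose(G[(+1, b)] + G[(-1, b)], M2[b])
print("G is a valid joint measurement of M1 and M2 at mu = 1/sqrt(2)")
```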
Criteria for compatibility.
Analytical criteria.
Let formula_3 be two two-outcome POVMs on a qubit. It was shown that formula_3 are compatible if and only if
formula_33
where formula_34 for formula_35.
Moreover, let formula_36 be three two-outcome POVMs on a qubit with formula_37. Then it was proved that formula_36 are compatible if and only if
formula_38
where formula_39, and formula_40 for formula_41.
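The pairwise criterion can be turned into a short test function. The sketch below (Python) implements the inequality for two unbiased qubit POVMs along the x and z axes and reproduces the μ ≤ 1/√2 threshold of the earlier example.

```python
import numpy as np

def C(alpha, m):
    """Auxiliary quantity C_i for a two-outcome qubit POVM (alpha_i, m_i)."""
    m2 = np.dot(m, m)
    return 0.5 * (np.sqrt((1 + alpha) ** 2 - m2) + np.sqrt((1 - alpha) ** 2 - m2))

def pair_compatible(alpha1, m1, alpha2, m2):
    """Analytical compatibility test for two two-outcome qubit POVMs."""
    c1, c2 = C(alpha1, m1), C(alpha2, m2)
    lhs = (1 - c1**2 - c2**2) * (1 - alpha1**2 / c1**2 - alpha2**2 / c2**2)
    rhs = (np.dot(m1, m2) - alpha1 * alpha2) ** 2
    return lhs <= rhs

for mu in (0.5, 1 / np.sqrt(2), 0.8):
    ok = pair_compatible(0.0, np.array([mu, 0, 0]), 0.0, np.array([0, 0, mu]))
    print(f"mu = {mu:.3f}: {'compatible' if ok else 'incompatible'}")
```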
Numerical criteria.
For a finite number of POVMs formula_42 on a finite-dimensional quantum system, with a finite number of measurement outcomes (labeled by formula_43), one can cast the following semidefinite program (SDP) to decide whether they are compatible or not:
formula_44
formula_45
formula_46
formula_47
where formula_48 is a deterministic classical post-processing (i.e., formula_49 or formula_50 valued conditional probability density). If for all formula_48 the maximizer formula_51 leads to a negative formula_52, then formula_42 are incompatible. Otherwise, they are compatible.
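The program above can be implemented with a convex-optimization package. The sketch below is a minimal illustration assuming that cvxpy with an SDP-capable solver is available; for POVMs with finitely many outcomes, it suffices to let λ range over all tuples of outcomes with the deterministic "read-off" post-processing, which is what the sketch does for the two qubit measurements of the earlier example.

```python
import numpy as np
import cvxpy as cp

mu_val = 0.8   # try 0.5 to see the compatible case
I2 = np.eye(2)
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
M = {(1, +1): 0.5 * (I2 + mu_val * sx), (1, -1): 0.5 * (I2 - mu_val * sx),
     (2, +1): 0.5 * (I2 + mu_val * sz), (2, -1): 0.5 * (I2 - mu_val * sz)}

# Parent-POVM variables G_{a,b}, one for each tuple of outcomes (a, b).
G = {(a, b): cp.Variable((2, 2), symmetric=True)
     for a in (+1, -1) for b in (+1, -1)}
t = cp.Variable()

constraints = [sum(G.values()) == I2]
constraints += [g >> t * I2 for g in G.values()]
# Read-off post-processing: the margins must reproduce the given POVMs.
for a in (+1, -1):
    constraints.append(G[(a, +1)] + G[(a, -1)] == M[(1, a)])
for b in (+1, -1):
    constraints.append(G[(+1, b)] + G[(-1, b)] == M[(2, b)])

prob = cp.Problem(cp.Maximize(t), constraints)
prob.solve()
print("compatible" if t.value >= -1e-7 else "incompatible", t.value)
```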
Quantifiers of incompatibility.
Incompatibility noise robustness.
Let formula_42 be a finite number of POVMs. Incompatibility noise robustness formula_53 of formula_42 is defined by
formula_54
where formula_55 denotes the number of outcomes. This quantity is an incompatibility monotone: (i) it vanishes on formula_42 that are compatible, (ii) it is symmetric under the exchange of POVMs, and (iii) it does not increase under pre-processing by a quantum channel.
Incompatibility weight.
Incompatibility weight formula_56 of POVMs formula_42 is defined by
formula_57
where formula_58 is any set of POVMs.
This quantity is a monotone under compatibility nondecreasing operations that consist of pre-processing by a quantum instrument and classical post-processing.
Incompatibility robustness.
Incompatibility robustness formula_59 of POVMs formula_42 is defined by
formula_60
where formula_58 is any set of POVMs. Similar to incompatibility weight, this quantity is a monotone under compatibility nondecreasing operations.
All these quantifiers are based on the convex distance of incompatible POVMs to the set of compatible ones under the addition of different types of noise. Specifically, all these quantifiers can be evaluated numerically, as they fall under the framework of SDPs. For instance, it was shown that incompatibility robustness can be cast as the following SDP:
formula_61
formula_62
formula_63
formula_64
where formula_48 is a deterministic classical post-processing, and formula_65 is the dimension of the Hilbert space on which formula_42 are performed. If the minimizer is formula_66, then formula_67.
Incompatibility and quantum information processing.
Measurement incompatibility is intrinsically tied to the nonclassical features of quantum correlations. In fact, in many scenarios, it becomes evident that incompatible measurements are necessary to exhibit nonclassical correlations. Since such correlations are essential for tasks like quantum key distribution or quantum metrology, these connections underscore the resource aspect of measurement incompatibility.
Bell nonlocality.
Bell nonlocality is a phenomenon where the correlations between quantum measurements on entangled quantum states cannot be explained by any local hidden variable theory. This was first demonstrated by Bell through Bell's Theorem, which showed that certain predictions of quantum mechanics are incompatible with the principle of locality (an idea that objects are only directly influenced by their immediate surroundings).
If the POVMs on Alice's side are compatible, then they will never lead to Bell nonlocality. However, the converse is not true. It was shown that there exist three binary measurements on a qubit, pairwise compatible but globally incompatible, yet not leading to any Bell nonlocality.
Quantum steering.
Quantum steering is a phenomenon where one party (Alice) can influence the state of a distant party's (Bob's) quantum system through local measurements on her own quantum system. This was first introduced by Schrödinger and is a form of quantum nonlocality. It demonstrates that the state of Bob's quantum system can be "steered" into different states depending on the local measurements performed by Alice.
For quantum steering to occur, the local measurements performed by Alice must be incompatible. This is because steering relies on the ability to create different outcomes in Bob's system based on Alice's measurements, which is only possible if those measurements are not compatible. Further, it was shown that if the following POVMs formula_68 are incompatible, then a state assemblage formula_69 is steerable, where formula_70.
Quantum contextuality.
Quantum contextuality refers to the idea that the outcome of a quantum measurement cannot be explained by any pre-existing value that is independent of the measurement context. In other words, the result of a quantum measurement depends on which other measurements are being performed simultaneously. This concept challenges classical notions of reality, where it is assumed that properties of a system exist independently of measurement.
It was proved that any set of compatible POVMs leads to preparation noncontextual correlations for all input quantum states. Conversely, the existence of a preparation noncontextual model for all input states implies compatibility of the involved POVMs.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_{1}"
},
{
"math_id": 1,
"text": "M_{2}"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "M_{1},M_{2}"
},
{
"math_id": 4,
"text": "M_{1}:\\mathcal{A}_1\\to\\mathcal{B}(\\mathbb{H})"
},
{
"math_id": 5,
"text": "M_{2}:\\mathcal{A}_2\\to\\mathcal{B}(\\mathbb{H})"
},
{
"math_id": 6,
"text": "(X_1,\\mathcal{A}_1),(X_2,\\mathcal{A}_2)"
},
{
"math_id": 7,
"text": "\\mathcal{B}(\\mathbb{H})"
},
{
"math_id": 8,
"text": "\\mathbb{H}"
},
{
"math_id": 9,
"text": "M_1,M_2"
},
{
"math_id": 10,
"text": "M:\\mathcal{A}_1\\otimes\\mathcal{A}_2\\to\\mathcal{B}(\\mathbb{H})"
},
{
"math_id": 11,
"text": "M(A_1\\times X_2)=M_1(A_1),"
},
{
"math_id": 12,
"text": "M(X_1\\times A_2)=M_2(A_2),"
},
{
"math_id": 13,
"text": "A_1\\in\\mathcal{A}_1"
},
{
"math_id": 14,
"text": "A_2\\in\\mathcal{A}_2"
},
{
"math_id": 15,
"text": "M_\\pm"
},
{
"math_id": 16,
"text": "M_\\pm=\\cfrac{1}{2}\\left[(1\\pm\\alpha)\\mathbf{I}\\pm \\vec{m}\\cdot\\vec{\\sigma}\\right],"
},
{
"math_id": 17,
"text": "\\vec{m}\\cdot\\vec{\\sigma}=m_x\\sigma_x+m_y\\sigma_y+m_z\\sigma_z"
},
{
"math_id": 18,
"text": "\\sigma_{x},\\sigma_y,\\sigma_z"
},
{
"math_id": 19,
"text": "M_{\\pm|1}"
},
{
"math_id": 20,
"text": "M_{\\pm|2}"
},
{
"math_id": 21,
"text": "\\alpha_{1}=\\alpha_{2}=0"
},
{
"math_id": 22,
"text": "\\vec{m}_{1}=(\\mu,0,0)"
},
{
"math_id": 23,
"text": "\\vec{m}_{2}=(0,0,\\mu)"
},
{
"math_id": 24,
"text": "\\mu\\in[0,1]"
},
{
"math_id": 25,
"text": "\\mu\\in[0,1/\\sqrt{2}]"
},
{
"math_id": 26,
"text": "M_{+,+}:=\\cfrac{1}{4}\\left[\\mathbf{I}+\\mu(\\sigma_{x}+\\sigma_{z})\\right],"
},
{
"math_id": 27,
"text": "M_{-,+}:=\\cfrac{1}{4}\\left[\\mathbf{I}+\\mu(-\\sigma_{x}+\\sigma_{z})\\right],"
},
{
"math_id": 28,
"text": "M_{+,-}:=\\cfrac{1}{4}\\left[\\mathbf{I}+\\mu(\\sigma_{x}-\\sigma_{z})\\right],"
},
{
"math_id": 29,
"text": "M_{-,-}:=\\cfrac{1}{4}\\left[\\mathbf{I}+\\mu(-\\sigma_{x}-\\sigma_{z})\\right],"
},
{
"math_id": 30,
"text": "\\mu\\in(0,1/\\sqrt{2}]"
},
{
"math_id": 31,
"text": "\\mu\\in(1/\\sqrt{2},1]"
},
{
"math_id": 32,
"text": "M_{\\pm|1},M_{\\pm|2}"
},
{
"math_id": 33,
"text": "(1-C_{1}^{2}-C_{2}^{2})\\left(1-\\cfrac{\\alpha_{1}^{2}}{C_{1}^{2}}-\\cfrac{\\alpha_{2}^{2}}{C_{2}^{2}}\\right)\n\\leqslant \\left(\\vec{m}_{1}\\cdot\\vec{m}_{2}-\\alpha_{1}\\alpha_{2}\\right)^{2},"
},
{
"math_id": 34,
"text": "C_{i}=\\cfrac{1}{2}\\left[\\sqrt{(1+\\alpha_{i})^{2}-\\left|\\vec{m}_{i}\\right|^{2}}+\\sqrt{(1-\\alpha_{i})^{2}-\\left|\\vec{m}_{i}\\right|^{2}}\\right],"
},
{
"math_id": 35,
"text": "i=1,2"
},
{
"math_id": 36,
"text": "M_{1},M_{2},M_{3}"
},
{
"math_id": 37,
"text": "\\alpha_{1}=\\alpha_{2}=\\alpha_{3}=0"
},
{
"math_id": 38,
"text": "\\min_{\\vec{n}\\in\\mathbb{R}^{3}}\\sum_{j=0}^{3}\\left|\\vec{n}_{j}-\\vec{n}\\right|\\leqslant 4,"
},
{
"math_id": 39,
"text": "\\vec{n}_{0}=\\vec{m}_{1}+\\vec{m}_{2}+\\vec{m}_{3}"
},
{
"math_id": 40,
"text": "\\vec{n}_{j}=2\\vec{m}_{j}-\\vec{n}_{0}"
},
{
"math_id": 41,
"text": "j=1,2,3"
},
{
"math_id": 42,
"text": "\\{M_{x|i}\\}"
},
{
"math_id": 43,
"text": "x"
},
{
"math_id": 44,
"text": "\\mathrm{given}\\ \\{M_{x|i}\\},\\{p(x|i,\\lambda)\\}"
},
{
"math_id": 45,
"text": "\\max_{\\{G_{\\lambda}\\}}\\mu"
},
{
"math_id": 46,
"text": "\\mathrm{subject\\ to}\\sum_{\\lambda}p(x|i,\\lambda)G_{\\lambda}=M_{x|i},\\quad\\forall x,i,"
},
{
"math_id": 47,
"text": "G_{\\lambda}\\geqslant\\mu\\mathbf{I},\\quad\\forall \\lambda\\quad\\mathrm{and}\\quad\\sum_{\\lambda}G_{\\lambda}=\\mathbf{I}."
},
{
"math_id": 48,
"text": "\\{p(x|i,\\lambda)\\}"
},
{
"math_id": 49,
"text": "0"
},
{
"math_id": 50,
"text": "1"
},
{
"math_id": 51,
"text": "\\{G_{\\lambda}\\}"
},
{
"math_id": 52,
"text": "\\mu"
},
{
"math_id": 53,
"text": "R_{\\mathrm{n}}"
},
{
"math_id": 54,
"text": "R_{\\mathrm{n}}(\\{M_{x|i}\\})=\\inf\\{p\\in[0,1]|\\{(1-p)M_{x|i}+p\\mathrm{I}/|x|\\} \\ \\ \\text{are compatible}\\},"
},
{
"math_id": 55,
"text": "|x|"
},
{
"math_id": 56,
"text": "W"
},
{
"math_id": 57,
"text": "W(\\{M_{x|i}\\})=\\inf\\left\\{q\\geqslant 0: \\left\\{\\cfrac{M_{x|i}-qN_{x|i}}{1-q}\\right\\}\\ \\ \\text{are compatible}\\right\\},"
},
{
"math_id": 58,
"text": "\\{N_{x|i}\\}"
},
{
"math_id": 59,
"text": "R"
},
{
"math_id": 60,
"text": "R(\\{M_{x|i}\\})=\\inf\\left\\{r\\geqslant 0: \\left\\{\\cfrac{M_{x|i}+rN_{x|i}}{1+r}\\right\\}\\ \\ \\text{are compatible}\\right\\},"
},
{
"math_id": 61,
"text": "\\mathrm{given}\\ \\{M_{x|i}\\}"
},
{
"math_id": 62,
"text": "\\min_{\\left(\\{G_{\\lambda}\\},\\{p(x|i,\\lambda)\\}\\right)}\\cfrac{1}{d}\\sum_{\\lambda}\\mathrm{tr}(G_{\\lambda})"
},
{
"math_id": 63,
"text": "\\mathrm{subject\\ to}\\sum_{\\lambda}p(x|i,\\lambda)G_{\\lambda}\\geqslant M_{x|i},\\quad\\forall x,i,"
},
{
"math_id": 64,
"text": "G_{\\lambda}\\geqslant 0,\\quad\\forall \\lambda\\quad\\mathrm{and}\\quad\\sum_{\\lambda}G_{\\lambda}=\\cfrac{\\mathbf{I}}{d}\\left[\\sum_{\\lambda}\\mathrm{tr}(G_{\\lambda})\\right],"
},
{
"math_id": 65,
"text": "d"
},
{
"math_id": 66,
"text": "\\left(\\{G_{\\lambda}'\\},\\{p(x|i,\\lambda)'\\}\\right)"
},
{
"math_id": 67,
"text": "R(\\{M_{x|i}\\})=\\cfrac{1}{d}\\sum_{\\lambda}\\mathrm{tr}(G_{\\lambda}')-1"
},
{
"math_id": 68,
"text": "\\left\\{\\sigma_{i}^{-1/2}\\sigma_{x|i}\\sigma_{i}^{-1/2}\\right\\}"
},
{
"math_id": 69,
"text": "\\{\\sigma_{x|i}\\}"
},
{
"math_id": 70,
"text": "\\sigma_{i}=\\sum_{x}\\sigma_{x|i}"
}
]
| https://en.wikipedia.org/wiki?curid=77317100 |
77318529 | Anchor losses | Anchor losses are a type of damping commonly highlighted in micro-resonators. They refer to the phenomenon where energy is dissipated as mechanical waves from the resonator attenuate into the substrate.
Introduction.
In physical systems, damping is the loss of energy of an oscillating system by dissipation. In the field of micro-electro-mechanical systems, damping is usually quantified by a dimensionless parameter, the Q factor (quality factor). A higher Q factor indicates lower damping and reduced energy dissipation, which is desirable for micro-resonators as it leads to lower energy consumption, better accuracy and efficiency, and reduced noise.
Several factors contribute to the damping of micro-electro-mechanical resonators, including fluid damping and solid damping. Anchor losses are a type of solid damping observed in resonators operating in various environments. When a resonator is fixed to a substrate, either directly or via other structures such as tethers, mechanical waves propagate into the substrate through these connections. A wave traveling through a perfectly elastic solid would have constant energy, and an isolated perfectly elastic solid, once set into vibration, would continue to vibrate indefinitely. Actual materials do not show such behavior, and dissipation occurs due to some imperfection of elasticity within the body. In typical micro-resonators, the substrate dimensions are significantly larger than those of the resonator itself. Consequently, it can be approximated that all waves entering the substrate will attenuate without reflecting back to the resonator. In other words, the energy carried by the waves will dissipate, leading to damping. This phenomenon is referred to as anchor losses.
Estimation of anchor losses.
Analytical estimation.
Standard theories of structural mechanics permit the expression of concentrated forces and couples exerted by the structure on the support. These generally include a constant component (due, for instance, to pre-stresses or initial deformation) and a sinusoidally varying contribution. Some researchers have investigated simple geometries following this idea; one example is the anchor loss of a cantilever beam connected to a 3-D semi-infinite region:
formula_0
where L is the length of the beam, H is the in-plane (curvature plane) thickness, W is the out-of-plane thickness, and C is a constant depending on the Poisson's coefficient, with C = 3.45 for ν = 0.25, C = 3.23 for ν = 0.3, and C = 3.175 for ν = 0.33.
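As a quick numerical illustration, the sketch below (Python) evaluates this estimate for a hypothetical cantilever; the dimensions are arbitrary assumptions chosen only to show the strong dependence on the length-to-thickness ratio.

```python
def anchor_q_cantilever(L, W, H, C=3.23):
    """Anchor-loss-limited Q of a cantilever on a semi-infinite substrate:
    Q ≈ C * (L/W) * (L/H)**4, with C ≈ 3.23 for a Poisson ratio of 0.3."""
    return C * (L / W) * (L / H) ** 4

# Hypothetical cantilever: 100 um long, 10 um out-of-plane, 2 um in-plane thickness.
L, W, H = 100e-6, 10e-6, 2e-6
print(f"Q_anchor ≈ {anchor_q_cantilever(L, W, H):.3e}")
# Doubling the in-plane thickness H lowers the anchor-limited Q by 2**4 = 16.
print(f"Q_anchor ≈ {anchor_q_cantilever(L, W, 2 * H):.3e}")
```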
Numerical estimation.
Due to the complexity of geometries and the anisotropy or inhomogeneity of materials, it is usually difficult to use analytical methods to estimate the anchor losses of such devices. Numerical methods are more widely applied for this issue. An artificial boundary or an artificial absorbing layer is applied to the numerical model to prevent wave reflection. One such method is the perfectly matched layer, initially developed for electromagnetic wave transmission and later adapted for solid mechanics. Perfectly matched layers act as special elements where wave attenuation occurs through a complex coordinate transformation, ensuring all waves entering the layer are absorbed, thus simulating anchor losses.
To determine the Q factor from a Finite Element Method model with perfectly matched layers, two common approaches are used:
formula_1
where formula_2 and formula_3 are the real and imaginary parts of the complex eigenfrequency.
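As a rough illustration of these estimates, the following Python sketch (not from the literature; the beam dimensions and the complex eigenfrequency are hypothetical values) evaluates both the analytical cantilever formula and the complex-eigenfrequency expression.
def q_cantilever(L, W, H, C=3.23):
    # analytical estimate Q ~ C * (L/W) * (L/H)^4 for a cantilever on a 3-D semi-infinite region (C = 3.23 for nu = 0.3)
    return C * (L / W) * (L / H) ** 4
def q_from_eigenfrequency(omega):
    # Q = Re(omega) / (2 * Im(omega)) for a complex eigenfrequency from a model with perfectly matched layers
    return omega.real / (2.0 * omega.imag)
print(q_cantilever(L=100e-6, W=10e-6, H=5e-6))        # a hypothetical 100 x 10 x 5 micrometre beam
print(q_from_eigenfrequency(complex(6.283e6, 15.0)))  # a hypothetical damped eigenfrequency in rad/s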
Methods to mitigate anchor losses.
Anchor losses are highly dependent on the geometry of the resonator: how the resonator is anchored and the size of the tethers have a strong effect on them. Some common methods to reduce anchor losses are summarized as follows.
Anchor at nodal points.
A common method is to fix the resonator at its nodal points, where the motion amplitude is at a minimum. By the definition of anchor losses, the magnitude of the waves entering the substrate is then minimized and less energy is dissipated. However, this method may not apply to resonators whose nodal points are not near the resonator edges, which makes the tether design difficult.
Quarter wavelength tethers.
Quarter-wavelength tethers are an effective approach to minimizing the energy lost through the tethers. By analogy with transmission-line theory, a quarter-wavelength tether is assumed to give the best acoustic isolation, since complete in-phase reflection occurs when the tether length equals a quarter of the acoustic wavelength, λ/4. Hardly any energy then dissipates into the substrate through the tethers. However, the quarter-wavelength design results in extremely long tether structures, usually tens to hundreds of micrometers, which runs counter to miniaturization and decreases the mechanical stability of the devices.
Material-mismatched support.
The resonator structure and the anchoring stem are made of different materials. The acoustic impedance mismatch between the two suppresses energy transmission from the resonator to the stem, thus reducing anchor losses and allowing a high Q factor.
Acoustic reflection cavity.
The basic mechanism is to reflect back a portion of the elastic waves at the anchor boundary due to the discontinuity in the acoustic impedance caused by the acoustic cavity (the etching trenches).
Phonon crystal tether and metamaterial.
A phonon crystal (phononic crystal) tether is a promising way to restrain acoustic wave propagation along the supporting tethers, since such crystals can give rise to complete band gaps in which the transmission of elastic waves is prohibited. The vibration energy is thus retained in the resonator body, reducing the anchor losses into the substrate. Besides phonon crystal tethers, other kinds of metamaterial can be applied to the anchor and surrounding regions to block wave transmission. A key drawback of this approach is the challenge it poses to the fabrication process.
Optimized anchor geometry.
Anchor losses are highly sensitive to the geometry of the anchors. Features such as fillets, curvature, sidewall inclination, and other detailed geometric aspects can affect anchor losses. By carefully optimizing these geometric configurations, anchor losses can be significantly reduced. | [
{
"math_id": 0,
"text": "Q_{anchor}\\approx C \\left ( \\frac{L}{W} \\right )\\left ( \\frac{L}{H} \\right )^4"
},
{
"math_id": 1,
"text": "Q=\\frac{Re(\\omega)}{2Im(\\omega)}"
},
{
"math_id": 2,
"text": "Re(\\omega)"
},
{
"math_id": 3,
"text": "Im(\\omega)"
}
]
| https://en.wikipedia.org/wiki?curid=77318529 |
77325638 | Layered costmaps | Method for creating and updating maps for robot navigation
Layered costmaps is a method to create and update maps for robot navigation and path planning, proposed by David V. Lu in 2014. During robot navigation, layered costmaps abstract the real environment around the robot into maps that can be interpreted by robot navigation methods. The method consists of more than one layer of costmaps, each of which describes obstacles with different properties. Each costmap layer consists of grid cells and is represented as a matrix whose element values are related to the risk of the corresponding cells (the higher the value, the greater the risk, and the deeper the colour of the cell when the map is visualized). The colour of a cell thus indicates the risk of moving through it, and there is a high probability of collision when the robot moves through a deeply coloured cell. When updating a layered costmap, only the area of the map covered by the sensors (such as Lidar) is updated, rather than the entire map.
Standard Layers.
Layered costmaps is a flexible method, and special categories of layered costmaps can be designed and added to the method depending on the navigation environment of the mobile robot. In Robot Operating System (ROS), some categories of layered costmaps are provided for mobile robot navigation, and they are also the standard layers that are added in most cases.
Static Map Layer.
The static map layer is at the bottom among all layered costmaps, and it includes the situation of all known static obstacles in the environment before navigation starts. The static map layer can be generated with a SLAM algorithm or can be created from an architectural diagram.
If, in the static map matrix formula_0, the real location formula_1 corresponds to the element index formula_2, then an example of the rules defining the static map layer is as follows:
formula_3
Obstacles Map Layer.
The obstacles map layer describes the situation of obstacles detected by sensors during robot navigation. Since the observation distance of the sensor is limited, the obstacles map layer can only be updated in the observation area of the sensor.
Assume that there is only one sensor in the scene, that it is mounted on the robot, that its observation area is a square with side length formula_4, and that the location of the robot is formula_5. If, in the obstacles map matrix formula_6, the real location formula_1 corresponds to the element index formula_2, then an example of the rules defining the obstacles map layer is as follows:
formula_7
where formula_1 meets that formula_8 and formula_9.
Inflation Layer.
The inflation layer inserts a buffer zone around each obstacle in order to keep the robot from getting too close to obstacles; these include the obstacles described in both the static map layer and the obstacles map layer. In the inflation layer, cells that are further away from obstacles have lower costs.
The inflation rule used in the inflation layer is flexible and can be freely designed according to the features of the environment and the robot. For example, a simple inflation rule can be designed with two thresholds and a function. Assume that the maximum radius of the robot is formula_10. If in the inflation matrix formula_11, the real location formula_1 corresponds to the element index formula_2, then an example of the inflation rules is as follows:
formula_12
where the function formula_13 represents the distance between point formula_1 and the nearest obstacle.
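As an illustration, the example inflation rule above can be written as a short Python sketch; the grid size, the obstacle cells and the radius formula_10 used below are arbitrary choices, and distances are measured in cells.
import math
def inflation_layer(width, height, obstacles, R):
    # fill a width x height cost grid using the example rule: 254 inside R, 0 beyond 3R, linear in between
    grid = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            dis = min(math.hypot(x - ox, y - oy) for (ox, oy) in obstacles)   # distance to nearest obstacle cell
            if dis < R:
                grid[y][x] = 254
            elif dis > 3 * R:
                grid[y][x] = 0
            else:
                grid[y][x] = 127 * (3 - dis / R)
    return grid
costs = inflation_layer(width=10, height=10, obstacles=[(4, 4), (7, 2)], R=1.5)
print(round(costs[4][6], 1))   # a cell between R and 3R from the nearest obstacle gets an intermediate cost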
Social Navigation Layers.
Although social navigation layers are not as essential as the standard layers, adding them when there are pedestrians in the environment allows more information to be included in the map. The standard layers do not consider the speed of obstacles, whereas the social navigation layers take the speed of the obstacles (i.e., pedestrians) into account and reflect it in the costmap layers, combined with the social habits of pedestrians.
Proxemic Layer.
The proxemic layer adds costs to the surroundings of pedestrians, reflecting social habits regarding personal space for both moving and stationary pedestrians.
Passing Layer.
Just as traffic rules in many countries require drivers to drive on the right side of the road, many cultures have the custom of walking on the right side of pathways. In order to respect this social habit as much as possible, the passing layer has a higher cost on the right side of pedestrians, guiding the robot to pass by the left side of pedestrians when possible.
After layered costmaps.
After creation, the layered costmaps are continuously updated based on the information provided by the sensors, and the navigation algorithm uses the information provided by the layered costmaps to drive the robot to its destination without collision.
Applications.
Layered costmaps is the default map creation method when ROS, a popular robot control software framework, is used for mobile robot navigation. Robot developers can control robots (such as the TurtleBot) through ROS to navigate indoor or outdoor scenes, with or without people. During navigation, the layered costmaps are constantly updated to provide the robot with information about its surrounding environment.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " M^{sta} "
},
{
"math_id": 1,
"text": " (x^w,y^w) "
},
{
"math_id": 2,
"text": " (x,y) "
},
{
"math_id": 3,
"text": " M^{sta}_{x,y} = \\begin{cases} 254 & (x^w,y^w) \\text{ is occupied} \\\\ 0 & \\text{otherwise} \\end{cases} \\;"
},
{
"math_id": 4,
"text": " 2L "
},
{
"math_id": 5,
"text": " (x^r,y^r) "
},
{
"math_id": 6,
"text": " M^{obs} "
},
{
"math_id": 7,
"text": " M^{obs}_{x,y} = \\begin{cases} 254 & (x^w,y^w) \\text{ is occupied} \\\\ 0 & \\text{otherwise} \\end{cases} \\; "
},
{
"math_id": 8,
"text": " x^r-L < x^w < x^r+L "
},
{
"math_id": 9,
"text": " y^r-L < y^w < y^r+L "
},
{
"math_id": 10,
"text": " R "
},
{
"math_id": 11,
"text": " M^{inf} "
},
{
"math_id": 12,
"text": " M^{inf}_{x,y} = \\begin{cases} 254 & dis(x^w,y^w) < R \\\\ 0 & dis(x^w,y^w) > 3 \\cdot R \\\\ 127\\cdot(3-dis(x^w,y^w)/R) & \\text{otherwise} \\end{cases} \\; "
},
{
"math_id": 13,
"text": " dis(x^w,y^w) "
}
]
| https://en.wikipedia.org/wiki?curid=77325638 |
77340920 | Udayadivākara | Indian astronomer and mathematician
Udayadivākara (c. 1073 CE) was an Indian astronomer and mathematician who has authored an influential and elaborate commentary, called "Sundari", on "Laghu-bhāskarīya" of Bhāskara I. No personal details about Udayadivākara are known. Since the commentary "Sundari" takes the year 1073 CE as its epoch, probably the commentary was completed about that year. "Sundari" has not yet been published and is available only in manuscript form. Some of these manuscripts are preserved in the manuscript depositories in Thiruvananthapuram. According to K. V. Sarma, historian of the astronomy and mathematics of the Kerala school, Udayadivākara probably hailed from Kerala, India.
Historical significance of "Sundari".
Apart from the fact that "Sundari" is an elaborate commentary, it has some historical significance. It quotes extensively from a now-lost work by the little-known mathematician Jayadeva. The quotations relate to the "cakravala" method for solving indeterminate integral equations of the form formula_0. This shows that the method predates Bhāskara II, contrary to generally held beliefs. Another important reference to Jayadeva's work is the solution of the indeterminate equation of the form formula_1, formula_2 being positive or negative.
A problem and its solution.
Udayadivākara used his method for solving the equation formula_1 to obtain some particular solutions of a certain algebraic problem. The problem and Udayadivākara's solution to the problem are presented below only to illustrate the techniques used by Indian astronomers for solving algebraic equations.
Problem.
Find positive integers formula_3 and formula_4 satisfying the following conditions:
formula_5
Solution.
To solve the problem, Udayadivākara makes a series of apparently arbitrary assumptions all aimed at reducing the problem to one of solving an indeterminate equation of the form formula_6.
Udayadivākara begins by assuming that formula_7 which can be written in the form formula_8. He next assumes that formula_9 which, together with the earlier equation, yields
formula_10
Now, Udayadivākara puts
formula_11
which is then transformed to the equation
formula_12
This equation is of the form formula_13 with formula_14, formula_15 and formula_16. Using the method for solving the equation formula_6, Udayadivākara finds the following solutions formula_17, formula_18 and formula_19 from which the values of formula_3 and formula_4 are obtained by back substitution.
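The back substitution can be verified with a short computation; the following Python sketch (not part of the original text) checks that each of the three pairs satisfies the transformed equation and that the resulting values of formula_3 and formula_4 meet the three original conditions (the smallest pair gives formula_4 = 0, so only the larger pairs yield positive integers).
from math import isqrt
def is_square(n):
    return n >= 0 and isqrt(n) ** 2 == n
for u, lam in [(2, 20), (18, 140), (8802, 68180)]:
    assert lam ** 2 == 60 * u ** 2 + 160            # the equation (30z+20)^2 = 60u^2 + 160
    z = (lam - 20) // 30                            # back-substitute lambda = 30z + 20
    x = 12 * z ** 2 + 16 * z + 4
    y = 3 * z ** 2 + 4 * z
    print(x, y, is_square(x + y), is_square(x - y), is_square(x * y + 1))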
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " Nx^2+1=y^2"
},
{
"math_id": 1,
"text": "Nx^2 + C = y^2"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "\n\\begin{align}\nx+y & = \\text{a perfect square},\\\\\nx-y & = \\text{a perfect square},\\\\\nxy+1 & = \\text{a perfect square}.\n\\end{align}\n"
},
{
"math_id": 6,
"text": "Nx^2+C=y^2"
},
{
"math_id": 7,
"text": " xy+1=(2y+1)^2 "
},
{
"math_id": 8,
"text": "x-y=3y+4 "
},
{
"math_id": 9,
"text": " 3y+4 = (3z+2)^2 "
},
{
"math_id": 10,
"text": "\n\\begin{align}\nx & =12z^2+16z+4,\\\\\ny &= 3z^2+4z,\\\\\nx+y & = 15z^2+20z+4.\n\\end{align}\n"
},
{
"math_id": 11,
"text": " 15z^2+20z+4 = u^2"
},
{
"math_id": 12,
"text": " (30z+20)^2= 60u^2+160."
},
{
"math_id": 13,
"text": "Nx^2+C=\\lambda^2"
},
{
"math_id": 14,
"text": "N=60"
},
{
"math_id": 15,
"text": "C=160"
},
{
"math_id": 16,
"text": "\\lambda = 30z+20"
},
{
"math_id": 17,
"text": "( u=2, \\lambda=20)"
},
{
"math_id": 18,
"text": "( u=18, \\lambda=140)"
},
{
"math_id": 19,
"text": "( u=8802, \\lambda=68180)"
}
]
| https://en.wikipedia.org/wiki?curid=77340920 |
773481 | Space–time tradeoff | Algorithm trading more space for lower time
A space–time trade-off, also known as time–memory trade-off or the algorithmic space-time continuum in computer science, is a case where an algorithm or program trades increased space usage with decreased time. Here, "space" refers to the data storage consumed in performing a given task (RAM, HDD, etc.), and "time" refers to the time consumed in performing a given task (computation time or response time).
The utility of a given space–time tradeoff is affected by related fixed and variable costs (of, e.g., CPU speed, storage space), and is subject to diminishing returns.
History.
Biological usage of time–memory tradeoffs can be seen in the earlier stages of animal behavior. Using stored knowledge or encoding stimuli reactions as "instincts" in the DNA avoids the need for "calculation" in time-critical situations. More specific to computers, look-up tables have been implemented since the very earliest operating systems.
In 1980 Martin Hellman first proposed using a time–memory tradeoff for cryptanalysis.
Types of tradeoff.
Lookup tables vs. recalculation.
A common situation is an algorithm involving a lookup table: an implementation can include the entire table, which reduces computing time, but increases the amount of memory needed, or it can compute table entries as needed, increasing computing time, but reducing memory requirements.
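As a small illustration of this trade-off (the function and the input value are arbitrary choices, not taken from the article), the following Python sketch contrasts recomputing values with caching them in a lookup table.
from functools import lru_cache
def fib_recompute(n):
    # no table: exponential running time, constant extra space
    return n if n < 2 else fib_recompute(n - 1) + fib_recompute(n - 2)
@lru_cache(maxsize=None)
def fib_cached(n):
    # with a table: linear running time, linear extra space
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)
print(fib_cached(200))   # fast; the uncached version would take an impractically long time for n = 200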
Database indexes vs. table scans.
Database Management Systems offer the capability to create Database index data structures. Indexes improve the speed of lookup operations at the cost of additional space. Without indexes, time-consuming Full table scan operations are sometimes required to locate desired data.
Compressed vs. uncompressed data.
A space–time trade-off can be applied to the problem of data storage. If data is stored uncompressed, it takes more space but access takes less time than if the data were stored compressed (since compressing the data reduces the amount of space it takes, but it takes time to run the decompression algorithm). Depending on the particular instance of the problem, either way is practical. There are also rare instances where it is possible to directly work with compressed data, such as in the case of compressed bitmap indices, where it is faster to work with compression than without compression.
Re-rendering vs. stored images.
Storing only the SVG source of a vector image and rendering it as a bitmap image every time the page is requested would be trading time for space; more time used, but less space. Rendering the image when the page is changed and storing the rendered images would be trading space for time; more space used, but less time. This technique is more generally known as caching.
Smaller code vs. loop unrolling.
Larger code size can be traded for higher program speed when applying loop unrolling. This technique makes the code longer for each iteration of a loop, but saves the computation time required for jumping back to the beginning of the loop at the end of each iteration.
Other examples.
Algorithms that also make use of space–time tradeoffs include the baby-step giant-step algorithm for computing discrete logarithms, rainbow tables in cryptography, dynamic programming, and the meet-in-the-middle attack, which uses a space–time tradeoff to find the cryptographic key in only formula_0 encryptions (and formula_1 space) versus the expected formula_2 encryptions (but formula_3 space) of the naive approach.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^{n+1}"
},
{
"math_id": 1,
"text": "O(2^n)"
},
{
"math_id": 2,
"text": "2^{2n}"
},
{
"math_id": 3,
"text": "O(1)"
}
]
| https://en.wikipedia.org/wiki?curid=773481 |
7735022 | Cage (graph theory) | Regular graph with fewest possible nodes for its girth
In the mathematical field of graph theory, a cage is a regular graph that has as few vertices as possible for its girth.
Formally, an ("r","g")-graph is defined to be a graph in which each vertex has exactly "r" neighbors, and in which the shortest cycle has length exactly "g".
An ("r","g")-cage is an ("r","g")-graph with the smallest possible number of vertices, among all ("r","g")-graphs. A (3,"g")-cage is often called a "g"-cage.
It is known that an ("r","g")-graph exists for any combination of "r" ≥ 2 and "g" ≥ 3. It follows that all ("r","g")-cages exist.
If a Moore graph exists with degree r and girth g, it must be a cage. Moreover, the bounds on the sizes of Moore graphs generalize to cages: any cage with odd girth g must have at least
formula_0
vertices, and any cage with even girth g must have at least
formula_1
vertices. Any ("r","g")-graph with exactly this many vertices is by definition a Moore graph and therefore automatically a cage.
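For illustration, the two Moore bounds above can be evaluated by a short Python helper (the function name and the example parameters are illustrative).
def moore_bound(r, g):
    # lower bound on the number of vertices of an (r,g)-cage
    if g % 2 == 1:
        return 1 + r * sum((r - 1) ** i for i in range((g - 1) // 2))   # odd girth
    return 2 * sum((r - 1) ** i for i in range(g // 2))                  # even girth
print(moore_bound(3, 5))    # 10, attained by the Petersen graph
print(moore_bound(3, 6))    # 14, attained by the Heawood graph
print(moore_bound(3, 10))   # 62, while the (3,10)-cages actually have 70 vertices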
There may exist multiple cages for a given combination of "r" and "g". For instance there are three non-isomorphic (3,10)-cages, each with 70 vertices: the Balaban 10-cage, the Harries graph and the Harries–Wong graph. But there is only one (3,11)-cage: the Balaban 11-cage (with 112 vertices).
Known cages.
A 1-regular graph has no cycle, and a connected 2-regular graph has girth equal to its number of vertices, so cages are only of interest for "r" ≥ 3. The ("r",3)-cage is a complete graph "K""r"+1 on "r" + 1 vertices, and the ("r",4)-cage is a complete bipartite graph "K""r","r" on 2"r" vertices.
Notable cages include:
The numbers of vertices in the known ("r","g") cages, for values of "r" > 2 and "g" > 2, other than projective planes and generalized polygons, are:
Asymptotics.
For large values of "g", the Moore bound implies that the number "n" of vertices must grow at least singly exponentially as a function of "g". Equivalently, "g" can be at most proportional to the logarithm of "n". More precisely,
formula_2
It is believed that this bound is tight or close to tight. The best known lower bounds on "g" are also logarithmic, but with a smaller constant factor (implying that "n" grows singly exponentially but at a higher rate than the Moore bound). Specifically, the Ramanujan graphs constructed by Lubotzky, Phillips and Sarnak satisfy the bound
formula_3
This bound was later improved slightly.
It is unlikely that these graphs are themselves cages, but their existence gives an upper bound to the number of vertices needed in a cage. | [
{
"math_id": 0,
"text": "1 + r\\sum_{i=0}^{(g-3)/2}(r-1)^i"
},
{
"math_id": 1,
"text": "2\\sum_{i=0}^{(g-2)/2}(r-1)^i"
},
{
"math_id": 2,
"text": "g \\le 2\\log_{r-1} n + O(1)."
},
{
"math_id": 3,
"text": "g \\ge \\frac{4}{3}\\log_{r-1} n + O(1)."
}
]
| https://en.wikipedia.org/wiki?curid=7735022 |
7735247 | Fama–MacBeth regression | Method for estimating parameters
The Fama–MacBeth regression is a method used to estimate parameters for asset pricing models such as the capital asset pricing model (CAPM). The method estimates the betas and risk premia for any risk factors that are expected to determine asset prices.
Methodology.
The method works with multiple assets across time (panel data). The parameters are estimated in two steps: first, each asset's return series is regressed against the proposed risk factors over time to determine that asset's beta for each factor; second, for each time period the cross-section of asset returns is regressed against the estimated betas, and the resulting coefficients are averaged across time periods to give the risk premium for each factor.
Eugene F. Fama and James D. MacBeth (1973) demonstrated that the residuals of risk-return regressions and the observed "fair game" properties of the coefficients are consistent with an "efficient capital market" (quotes in the original).
Note that Fama MacBeth regressions provide standard errors corrected only for cross-sectional correlation. The standard errors from this method do not correct for time-series autocorrelation. This is usually not a problem for stock trading since stocks have weak time-series autocorrelation in daily and weekly holding periods, but autocorrelation is stronger over long horizons.
This means Fama MacBeth regressions may be inappropriate to use in many corporate finance settings where project holding periods tend to be long. For alternative methods of correcting standard errors for time series and cross-sectional correlation in the error term look into double clustering by firm and year.
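For illustration, the two-pass procedure can be sketched in a few lines of Python with NumPy; the single-factor setup, sample sizes and data-generating process below are arbitrary assumptions, and the sketch omits the standard-error issues discussed above.
import numpy as np
rng = np.random.default_rng(0)
T, n = 120, 25                                            # time periods and assets (arbitrary)
factor = rng.normal(0.0, 1.0, T)                          # one observed risk factor
returns = 0.01 + np.outer(factor, rng.uniform(0.5, 1.5, n)) + rng.normal(0.0, 0.5, (T, n))
# Step 1: time-series regression of each asset's returns on the factor to estimate its beta
X1 = np.column_stack([np.ones(T), factor])
betas = np.linalg.lstsq(X1, returns, rcond=None)[0][1]
# Step 2: cross-sectional regression of returns on the betas at every date, then average over time
X2 = np.column_stack([np.ones(n), betas])
gammas = np.array([np.linalg.lstsq(X2, returns[t], rcond=None)[0] for t in range(T)])
print(gammas.mean(axis=0))                                # average intercept and factor risk premium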
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{array}{lcr} R_{1,t} = \\alpha_1 + \\beta_{1,F_1}F_{1,t} + \\beta_{1,F_2}F_{2,t} + \\cdots + \\beta_{1,F_m}F_{m,t} + \\epsilon_{1,t} \\\\\nR_{2,t} = \\alpha_2 + \\beta_{2,F_1}F_{1,t} + \\beta_{2,F_2}F_{2,t} + \\cdots + \\beta_{2,F_m}F_{m,t} + \\epsilon_{2,t} \\\\\n\\vdots \\\\\nR_{n,t} = \\alpha_{n} + \\beta_{n,F_1}F_{1,t} + \\beta_{n,F_2}F_{2,t} + \\cdots + \\beta_{n,F_m}F_{m,t} + \\epsilon_{n,t}\\end{array}"
},
{
"math_id": 1,
"text": "\\begin{array}{lcr} R_{i,1} = \\gamma_{1,0} + \\gamma_{1,1}\\hat{\\beta}_{i,F_1} + \\gamma_{1,2}\\hat{\\beta}_{i,F_2} + \\cdots + \\gamma_{1,m}\\hat{\\beta}_{i,F_m} + \\epsilon_{i,1} \\\\\nR_{i,2} = \\gamma_{2,0} + \\gamma_{2,1}\\hat{\\beta}_{i,F_1} + \\gamma_{2,2}\\hat{\\beta}_{i,F_2} + \\cdots + \\gamma_{2,m}\\hat{\\beta}_{i,F_m} + \\epsilon_{i,2} \\\\\n\\vdots \\\\\nR_{i,T} = \\gamma_{T,0} + \\gamma_{T,1}\\hat{\\beta}_{i,F_1} + \\gamma_{T,2}\\hat{\\beta}_{i,F_2} + \\cdots + \\gamma_{T,m}\\hat{\\beta}_{i,F_m} + \\epsilon_{i,T}\\end{array}"
}
]
| https://en.wikipedia.org/wiki?curid=7735247 |
7735427 | E-function | In mathematics, E-functions are a type of power series that satisfy particular arithmetic conditions on the coefficients. They are of interest in transcendental number theory, and are more special than G-functions.
Definition.
A function "f"("x") is called of type "E", or an "E"-function, if the power series
formula_0
satisfies the following three conditions:
All the coefficients "cn" belong to the same algebraic number field, "K", which has finite degree over the rational numbers;
For all formula_1, formula_2, where the left hand side represents the maximum of the absolute values of all the algebraic conjugates of "cn";
For all formula_1 there exists a sequence of natural numbers "q"1, "q"2, "q"3... such that "qnck" is an algebraic integer in "K" for "k" = 0, 1, 2..., "n", and formula_3.
The second condition implies that "f" is an entire function of "x".
Uses.
"E"-functions were first studied by Siegel in 1929. He found a method to show that the values taken by certain "E"-functions were algebraically independent. This was a result which established the algebraic independence of classes of numbers rather than just linear independence. Since then these functions have proved somewhat useful in number theory and in particular they have application in transcendence proofs and differential equations.
The Siegel–Shidlovsky theorem.
Perhaps the main result connected to "E"-functions is the Siegel–Shidlovsky theorem (also known as the Siegel and Shidlovsky theorem), named after Carl Ludwig Siegel and Andrei Borisovich Shidlovsky.
Suppose that we are given "n" "E"-functions, "E"1("x")...,"E""n"("x"), that satisfy a system of homogeneous linear differential equations
formula_4
where the "fij" are rational functions of "x", and the coefficients of each "E" and "f" are elements of an algebraic number field "K". Then the theorem states that if "E"1("x")...,"E""n"("x") are algebraically independent over "K"("x"), then for any non-zero algebraic number α that is not a pole of any of the "fij" the numbers "E"1(α)...,"E""n"(α) are algebraically independent. | [
{
"math_id": 0,
"text": "f(x)=\\sum_{n=0}^\\infty c_n \\frac{x^n}{n!}"
},
{
"math_id": 1,
"text": " \\varepsilon>0"
},
{
"math_id": 2,
"text": "\\overline{\\left|c_n\\right|}=O\\left(n^{n\\varepsilon}\\right),"
},
{
"math_id": 3,
"text": "q_n=O\\left(n^{n\\varepsilon}\\right). "
},
{
"math_id": 4,
"text": "y^\\prime_i=\\sum_{j=1}^n f_{ij}(x)y_j\\quad(1\\leq i\\leq n)"
}
]
| https://en.wikipedia.org/wiki?curid=7735427 |
77355 | Shellsort | Sorting algorithm which uses multiple comparison intervals
Shellsort, also known as Shell sort or Shell's method, is an in-place comparison sort. It can be seen as either a generalization of sorting by exchange (bubble sort) or sorting by insertion (insertion sort). The method starts by sorting pairs of elements far apart from each other, then progressively reducing the gap between elements to be compared. By starting with far-apart elements, it can move some out-of-place elements into position faster than a simple nearest-neighbor exchange. Donald Shell published the first version of this sort in 1959. The running time of Shellsort is heavily dependent on the gap sequence it uses. For many practical variants, determining their time complexity remains an open problem.
Description.
Shellsort is an optimization of insertion sort that allows the exchange of items that are far apart. The idea is to arrange the list of elements so that, starting anywhere, taking every "h"th element produces a sorted list. Such a list is said to be "h"-sorted. It can also be thought of as "h" interleaved lists, each individually sorted. Beginning with large values of "h" allows elements to move long distances in the original list, reducing large amounts of disorder quickly, and leaving less work for smaller "h"-sort steps to do. If the list is then "k-sorted" for some smaller integer "k", then the list remains "h"-sorted. A final sort with "h" = 1 ensures the list is fully sorted at the end, but a judiciously chosen decreasing sequence of "h" values leaves very little work for this final pass to do.
In simplistic terms, this means if we have an array of 1024 numbers, our first gap ("h") could be 512. We then run through the list comparing each element in the first half to the element in the second half. Our second gap ("k") is 256, which breaks the array into four sections (starting at 0, 256, 512, 768), and we make sure the first items in each section are sorted relative to each other, then the second item in each section, and so on. In practice the gap sequence could be anything, but the last gap is always 1 to finish the sort (effectively finishing with an ordinary insertion sort).
An example run of Shellsort with gaps 5, 3 and 1 is shown below.
The first pass, 5-sorting, performs insertion sort on five separate subarrays ("a"1, "a"6, "a"11), ("a"2, "a"7, "a"12), ("a"3, "a"8), ("a"4, "a"9), ("a"5, "a"10). For instance, it changes the subarray ("a"1, "a"6, "a"11) from (62, 17, 25) to (17, 25, 62). The next pass, 3-sorting, performs insertion sort on the three subarrays ("a"1, "a"4, "a"7, "a"10), ("a"2, "a"5, "a"8, "a"11), ("a"3, "a"6, "a"9, "a"12). The last pass, 1-sorting, is an ordinary insertion sort of the entire array ("a"1..., "a"12).
As the example illustrates, the subarrays that Shellsort operates on are initially short; later they are longer but almost ordered. In both cases insertion sort works efficiently.
Unlike insertion sort, Shellsort is not a stable sort since gapped insertions transport equal elements past one another and thus lose their original order. It is an adaptive sorting algorithm in that it executes faster when the input is partially sorted.
Pseudocode.
Using Marcin Ciura's gap sequence, with an inner insertion sort.
gaps = [701, 301, 132, 57, 23, 10, 4, 1] # Ciura gap sequence
foreach (gap in gaps)
# Do a gapped insertion sort for this gap size.
# The first gap elements a[0..gap-1] are already in gapped order;
# keep adding one more element until the entire array is gap sorted
for (i = gap; i < n; i += 1)
# save a[i] in temp and make a hole at position i
temp = a[i]
# shift earlier gap-sorted elements up until the correct location for a[i] is found
for (j = i; (j >= gap) && (a[j - gap] > temp); j -= gap)
a[j] = a[j - gap]
# put temp (the original a[i]) in its correct location
a[j] = temp
Gap sequences.
The question of deciding which gap sequence to use is difficult. Every gap sequence that contains 1 yields a correct sort (as this makes the final pass an ordinary insertion sort); however, the properties of thus obtained versions of Shellsort may be very different. Too few gaps slows down the passes, and too many gaps produces an overhead.
The table below compares most proposed gap sequences published so far. Some of them have decreasing elements that depend on the size of the sorted array ("N"). Others are increasing infinite sequences, whose elements less than "N" should be used in reverse order.
When the binary representation of "N" contains many consecutive zeroes, Shellsort using Shell's original gap sequence makes Θ("N"2) comparisons in the worst case. For instance, this case occurs for "N" equal to a power of two when elements greater and smaller than the median occupy odd and even positions respectively, since they are compared only in the last pass.
Although it has higher complexity than the "O"("N" log "N") that is optimal for comparison sorts, Pratt's version lends itself to sorting networks and has the same asymptotic gate complexity as Batcher's bitonic sorter.
Gonnet and Baeza-Yates observed that Shellsort makes the fewest comparisons on average when the ratios of successive gaps are roughly equal to 2.2. This is why their sequence with ratio 2.2 and Tokuda's sequence with ratio 2.25 prove efficient. However, it is not known why this is so. Sedgewick recommends using gaps which have low greatest common divisors or are pairwise coprime. Gaps which are odd numbers seem to work well in practice: 25% reductions have been observed by avoiding even-numbered gaps. Gaps which avoid multiples of 3 and 5 seem to produce small benefits of < 10%.
With respect to the average number of comparisons, Ciura's sequence has the best known performance; gaps greater than 701 were not determined but the sequence can be further extended according to the recursive formula formula_0.
Tokuda's sequence, defined by the simple formula formula_1, where formula_2, formula_3, can be recommended for practical applications.
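For illustration, a short Python sketch (the function name is arbitrary) that generates Tokuda's gaps, in decreasing order, for an array of size "n":
import math
def tokuda_gaps(n):
    # h'_1 = 1, h'_k = 2.25 * h'_{k-1} + 1, and h_k = ceil(h'_k); keep only gaps smaller than n
    gaps, h = [], 1.0
    while math.ceil(h) < n:
        gaps.append(math.ceil(h))
        h = 2.25 * h + 1
    return gaps[::-1]
print(tokuda_gaps(1000))   # [525, 233, 103, 46, 20, 9, 4, 1]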
If the maximum input size is small, as may occur if Shellsort is used on small subarrays by another recursive sorting algorithm such as quicksort or merge sort, then it is possible to tabulate an optimal sequence for each input size.
Computational complexity.
The following property holds: after "h"2-sorting of any "h"1-sorted array, the array remains "h"1-sorted. Every "h"1-sorted and "h"2-sorted array is also ("a"1"h"1+"a"2"h"2)-sorted, for any nonnegative integers "a"1 and "a"2. The worst-case complexity of Shellsort is therefore connected with the Frobenius problem: for given integers "h"1..., "hn" with gcd = 1, the Frobenius number "g"("h"1..., "hn") is the greatest integer that cannot be represented as "a"1"h"1+ ... +"anhn" with nonnegative integer "a"1..., "an". Using known formulae for Frobenius numbers, we can determine the worst-case complexity of Shellsort for several classes of gap sequences. Proven results are shown in the above table.
Mark Allen Weiss proved that Shellsort runs in "O"("N" log "N") time when the input array is in reverse order.
With respect to the average number of operations, none of the proven results concerns a practical gap sequence. For gaps that are powers of two, Espelid computed this average as formula_4. Knuth determined the average complexity of sorting an "N"-element array with two gaps ("h", 1) to be formula_5. It follows that a two-pass Shellsort with "h" = Θ("N"1/3) makes on average "O"("N"5/3) comparisons/inversions/running time. Yao found the average complexity of a three-pass Shellsort. His result was refined by Janson and Knuth: the average number of comparisons/inversions/running time made during a Shellsort with three gaps ("ch", "cg", 1), where "h" and "g" are coprime, is formula_6 in the first pass, formula_7 in the second pass and formula_8 in the third pass. "ψ"("h", "g") in the last formula is a complicated function asymptotically equal to formula_9. In particular, when "h" = Θ("N"7/15) and "g" = Θ("N"1/5), the average time of sorting is "O"("N"23/15).
Based on experiments, it is conjectured that Shellsort with Hibbard's gap sequence runs in "O"("N"5/4) average time, and that Gonnet and Baeza-Yates's sequence requires on average 0.41"N" ln "N" (ln ln "N" + 1/6) element moves. Approximations of the average number of operations formerly put forward for other sequences fail when sorted arrays contain millions of elements.
The graph below shows the average number of element comparisons used by various gap sequences, divided by the theoretical lower bound, i.e. log2"N"!. Ciura's sequence 1, 4, 10, 23, 57, 132, 301, 701 (labelled Ci01) has been extended according to the formula formula_10.
Applying the theory of Kolmogorov complexity, Jiang, Li, and Vitányi
proved the following lower bound for the order of the average number of operations/running time in a "p"-pass Shellsort: Ω("pN"1+1/"p") when "p" ≤ log2"N" and Ω("pN") when "p" > log2"N".
Therefore, Shellsort has prospects of running in an average time that asymptotically grows like "N" log"N" only when using gap sequences whose number of gaps grows in proportion to the logarithm of the array size. It is, however, unknown whether Shellsort can reach this asymptotic order of average-case complexity, which is optimal for comparison sorts. The lower bound was improved by Vitányi for every number of passes formula_11 to
formula_12
where formula_13. This result implies for example the Jiang-Li-Vitányi lower bound for all formula_11-pass increment sequences and improves that lower bound for particular increment sequences. In fact all bounds (lower and upper) currently known for the average case are precisely matched by this lower bound. For example, this gives the new result that the Janson-Knuth upper bound is matched by the resulting lower bound for the used increment sequence, showing that three pass Shellsort for this increment sequence uses formula_14 comparisons/inversions/running time.
The formula allows us to search for increment sequences that yield lower bounds which are unknown; for example an increment sequence for four passes which has a lower bound greater than
formula_15 for the increment sequence
formula_16 formula_17 formula_18 formula_19. The lower bound becomes
formula_20
The worst-case complexity of any version of Shellsort is of higher order: Plaxton, Poonen, and Suel showed that it grows at least as rapidly as formula_21.
Robert Cypher proved a stronger lower bound: formula_22 when formula_23 for all formula_24.
Applications.
Shellsort performs more operations and has higher cache miss ratio than quicksort. However, since it can be implemented using little code and does not use the call stack, some implementations of the qsort function in the C standard library targeted at embedded systems use it instead of quicksort. Shellsort is, for example, used in the uClibc library. For similar reasons, in the past, Shellsort was used in the Linux kernel.
Shellsort can also serve as a sub-algorithm of introspective sort, to sort short subarrays and to prevent a slowdown when the recursion depth exceeds a given limit. This principle is employed, for instance, in the bzip2 compressor.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h_k = \\lfloor 2.25 h_{k-1} \\rfloor"
},
{
"math_id": 1,
"text": "h_k = \\lceil h'_k \\rceil"
},
{
"math_id": 2,
"text": "h'_k = 2.25 h'_{k-1} + 1"
},
{
"math_id": 3,
"text": "h'_1 = 1"
},
{
"math_id": 4,
"text": "0.5349N\\sqrt{N}-0.4387N-0.097\\sqrt{N}+O(1)"
},
{
"math_id": 5,
"text": "\\frac{2N^2}{h} + \\sqrt{\\pi N^3 h}"
},
{
"math_id": 6,
"text": "\\frac{N^2}{4ch} + O(N)"
},
{
"math_id": 7,
"text": "\\frac{1}{8g}\\sqrt{\\frac{\\pi}{ch}}(h - 1)N^{3/2} + O(hN)"
},
{
"math_id": 8,
"text": "\\psi(h, g)N + \\frac{1}{8}\\sqrt{\\frac{\\pi}{c}}(c - 1)N^{3/2} + O\\left((c - 1)gh^{1/2}N\\right) + O\\left(c^2g^3h^2\\right)"
},
{
"math_id": 9,
"text": "\\sqrt{\\frac{\\pi h}{128}}g + O\\left(g^{-1/2}h^{1/2}\\right) + O\\left(gh^{-1/2}\\right)"
},
{
"math_id": 10,
"text": "h_k = \\lfloor2.25 h_{k-1}\\rfloor"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "\n\\Omega ( N\\sum_{k=1}^p h_{k-1}/h_k )\n"
},
{
"math_id": 13,
"text": "h_0=N"
},
{
"math_id": 14,
"text": "\\Theta(N^{23/15})"
},
{
"math_id": 15,
"text": "\\Omega(pn^{1+1/p}) = \\Omega(n^{5/4})"
},
{
"math_id": 16,
"text": "h_1 = n^{11/16},"
},
{
"math_id": 17,
"text": "h_2 = n^{7/16},"
},
{
"math_id": 18,
"text": "h_3 = n^{3/16},"
},
{
"math_id": 19,
"text": "h_4 = 1"
},
{
"math_id": 20,
"text": "T = \\Omega(n\\cdot (n^{1-11/16}+n^{11/16-7/16}+n^{7/16-3/16}+n^{3/16})) = \\Omega(n^{1+5/16}) = \\Omega(n^{21/16})."
},
{
"math_id": 21,
"text": "\\Omega\\left(N \\left( {\\log N \\over \\log \\log N} \\right)^2\\right)"
},
{
"math_id": 22,
"text": "\\Omega\\left(N {{(\\log N)^2} \\over {\\log\\log N}}\\right)"
},
{
"math_id": 23,
"text": "h_{s+1} > h_s"
},
{
"math_id": 24,
"text": "s"
}
]
| https://en.wikipedia.org/wiki?curid=77355 |
77356586 | Problems and Theorems in Analysis | Problems and Theorems in Analysis () is a two-volume problem book in analysis by George Pólya and Gábor Szegő. Published in 1925, the two volumes are titled (I) "Series. Integral Calculus. Theory of Functions."; and (II) "Theory of Functions. Zeros. Polynomials. Determinants. Number Theory. Geometry".
The volumes are highly regarded for the quality of their problems and their method of organisation, not by topic but by method of solution, with a focus on cultivating the student's problem-solving skills. Each volume contains the problems at the beginning and (brief) solutions at the end. As two authors have put it, "there is a general consensus among mathematicians that the two-volume Pólya-Szegő is the best written and most useful problem book in the history of mathematics."
Background.
It was Pólya who first had the idea for a comprehensive problem book in analysis, but he realised he would not be able to complete it alone. He decided to write it with Szegő, who had been a friend of Pólya's since 1913, when the pair met in Budapest (at this time, Szegő was only 17, while Pólya was a postdoctoral researcher of 25). Szegő's early career was intertwined with Pólya's: his first two papers concerned problems posed by Pólya. However, Pólya believed their areas of expertise were sufficiently different that the collaboration would prove fruitful. Pólya and Szegő signed the contract with Springer-Verlag for the book in 1923 and it was published by 1925.
Pólya later wrote of the period in which they wrote the book:
<templatestyles src="Template:Blockquote/styles.css" />It was a wonderful time; we worked with enthusiasm and concentration. We had similar backgrounds. We were both influenced, like all other young Hungarian mathematicians of that time, by Leopold Fejér. We were both readers of the same well directed Hungarian Mathematical Journal for high school students that stressed problem solving. We were interested in the same kind of questions, in the same topics; but one of us knew more about one topic and the other more about some other topic. It was a fine collaboration. The book "Aufgaben und Lehrsatze aus der Analysis", the result of our cooperation, is my best work and also the best work of Gábor Szegő.
Writing "Problems and Theorems" was an intense experience for both young mathematicians. Pólya was a professor in Zurich and Szegő was a "Privatdozent" in Berlin, so both had independent workloads. Pólya's wife worried he might have a nervous breakdown. Both were also under threat by the rise of antisemitism in Central Europe (Pólya and Szegő were Hungarian Jews). Financial difficulties, on top of pessimism about appointment to a German university, convinced Pólya to move to England in 1925. Szegő took longer to flee, not leaving Germany until 1934 when Pólya and Harald Bohr convinced him to accept a post at Washington University. By this time the Nazis had already begun purging Jewish professors from German universities. Szegő and Pólya (who collaborated on little after the problem book) were reunited in America in the 1950s, in the mathematics department of Stanford University.
Contents.
Although the book's title refers only to analysis, a broad range of problems are contained within. It starts in combinatorics, and quickly branches out from mathematical analysis to number theory, geometry, linear algebra, and even some physics. The specific topics treated bear witness to the special interests of Pólya (Descartes' rule of signs, Pólya's enumeration theorem), Szegö (polynomials, trigonometric polynomials, and his own work in orthogonal polynomials) and sometimes both (the zeros of polynomials and analytic functions, complex analysis in general). Many of the book's problems are not new, and their solutions include back-references to their original sources. The section on geometry (IX) contains many problems contributed by Loewner (in differential geometry) and Hirsch (in algebraic geometry).
The book was unique at the time because of its arrangement, less by topic and more by method of solution, so arranged in order to build up the student's problem-solving abilities. The preface of the book contains some remarks on general problem solving and mathematical heuristics which anticipate Pólya's later works on that subject ("Mathematics and Plausible Reasoning", "How to Solve It"). The pair held practice sessions, in which the problems were put to university students and worked through as a class (with some of the representative problems solved by the teacher, and the harder problems set as homework). They went through portions of the book at a rate of about one chapter a semester.
A typical example of the progression of questions in "Problems and Theorems in Analysis" is given by the first three questions in (the American edition of) volume I:
1. In how many different ways can you change one dollar? That is, in how many different ways can you pay 100 cents using five different kinds of coins, cents, nickels, dimes, quarters and half-dollars (worth 1, 5, 10, 25, and 50 cents, respectively)?
2. Let n stand for a non-negative integer and let formula_0 denote the number of solutions of the Diophantine equation
formula_1
in non-negative integers. Then the series
formula_2
represents a rational function of formula_3. Find it.
3. In how many ways can you put the necessary stamps in one row on an airmail letter sent inside the U.S., using 2, 4, 6, 8 cent stamps? The postage is 10 cents. (Different arrangements of the same values are regarded as different ways.)
The first question sets up an elementary combinatorics question; but the second suggests both a solution (using generating functions) and a generalisation. The third gives another combinatorics question which can be solved by means of generating functions. Indeed, questions 1-26 follow generating function through further examples. Whole areas of mathematics are developed in this way.
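For illustration (this sketch is not part of the book), the generating-function idea can be carried out in a few lines of Python: multiplying the factors 1/(1 − "x""c") together coefficient by coefficient answers question 1.
def ways_to_change(n, coins=(1, 5, 10, 25, 50)):
    # coeff[k] becomes the coefficient of x^k in the product of the series 1/(1 - x^c) over the coins
    coeff = [1] + [0] * n
    for c in coins:
        for k in range(c, n + 1):
            coeff[k] += coeff[k - c]
    return coeff[n]
print(ways_to_change(100))   # 292 ways to change one dollar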
Substantial additions were made in the English translation (published in 1972 and 1976), including new sections and back-references to Pólya's other works on problem solving.
Reception.
Richard Askey and Paul Nevai wrote of the book that, "there is a general consensus among mathematicians that the two-volume Pólya-Szegő is the best written and most useful problem book in the history of mathematics." The book has had its admirers. Various eminent mathematicians (Bernays, Courant, Fejér, E. Landau, F. Riesz, Toeplitz) had read over the galley proofs while the work was in press and its early reviewers (F. Riesz again, Knopp, Tamarkin) were not much less impressive, all effusive in their praise. The careful pedagogy meant that graduate students were able to learn analysis from "Problems and Theorems" alone. Paul Erdős once approached a young mathematician with a problem taken from volume II and announced "I will give $10 to China if you can solve this problem in ten minutes".
A Russian translation was published in 1937–38. An English translation was published in 1972–76.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_n"
},
{
"math_id": 1,
"text": " x + 5y + 10z + 25u + 50v = n "
},
{
"math_id": 2,
"text": " A_0 + A_1 \\zeta + A_2 \\zeta^2 + \\cdots + A_n\\zeta^n + \\cdots "
},
{
"math_id": 3,
"text": "\\zeta"
}
]
| https://en.wikipedia.org/wiki?curid=77356586 |
7735709 | Lost time | Traffic engineering term
Lost time is the term within traffic engineering for the time during which no vehicles are able to pass through an intersection despite the traffic signal displaying a green (go) signal. The total lost time is the sum of two separate elements: start-up lost time and clearance lost time. Start-up lost time happens when a traffic signal changes from red (stop) to green (go). Some amount of time elapses between the signal changing from red to green and the first queued vehicle moving through the intersection. There is then an additional amount of time for the next vehicle to begin moving and pass through the intersection, and so on. The total time taken for all waiting drivers to react and accelerate is the start-up lost time. Clearance lost time is the time lost to stopping a line of vehicles at the end of a green phase. Lost time is always measured in seconds.
Start-up lost time can be calculated as the sum of the differences between the headways for the first cars in line and the average headway through the intersection at a theoretical maximum flow, the "saturation flow rate". When no observations have been made, the start-up lost time is assumed to be 2.0 seconds as a default value.
Because clearance lost time is often not directly observable (observation requires that some vehicles which were waiting at the start of a green phase still be waiting when the green phase ends), it is typically determined through the equation
formula_0, where formula_1 is the clearance lost time, formula_2 is the yellow time, formula_3 is the "all-red interval" time during which the traffic signal is red for all directions, and formula_4 is the portion of the yellow and all-red times during which vehicles are illegally in the intersection and has a default value of 2.0 seconds in HCM. ALDOT's Traffic Signal Design Guide and Timing Manual, on the other hand, determines clearance lost time to be half of the yellow interval plus the entire all-red interval.
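A small numerical illustration of the two conventions (the yellow and all-red durations below are hypothetical, not recommended values):
def clearance_lost_time_hcm(y, ar, e=2.0):
    # HCM-style: l2 = y + ar - e, with e defaulting to 2.0 s
    return y + ar - e
def clearance_lost_time_aldot(y, ar):
    # ALDOT-style: half of the yellow interval plus the entire all-red interval
    return y / 2 + ar
y, ar = 4.0, 1.0                          # hypothetical yellow and all-red intervals in seconds
print(clearance_lost_time_hcm(y, ar))     # 3.0 s
print(clearance_lost_time_aldot(y, ar))   # 3.0 s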
Clearance lost time can also be determined from field measurements; in one such study, the measured values were compared with the default values specified in the HCM and in ALDOT's manual.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "l_2 = y + ar - e"
},
{
"math_id": 1,
"text": "\\mathit{l}_2"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "ar"
},
{
"math_id": 4,
"text": "e"
}
]
| https://en.wikipedia.org/wiki?curid=7735709 |
77361091 | Electronic mail game | In game theory, the electronic mail game is an example of an "almost common knowledge" incomplete information game. It illustrates the apparently paradoxical situation where arbitrarily close approximations to common knowledge lead to very different strategical implications from that of perfect common knowledge. Intuitively, it shows that arbitrarily long but finite chains of "I know that you know that I know that you know..." are fundamentally different from infinite ones.
It was first introduced by Ariel Rubinstein in 1989.
The game.
Setup.
The electronic mail game is a coordination game of incomplete information. Players 1 (she) and 2 (he) can choose between actions formula_0 and formula_1. There are two states of the world formula_2 and formula_3, which happen with respective probabilities formula_4 and formula_5, with formula_6. The payoffs, in which formula_10, are as follows: players would like to coordinate to play formula_0 in state of the world formula_2 and to play formula_1 in formula_3, in which case each receives formula_7. If they coordinate in the wrong state, they only get formula_8 payoff; but if they choose different actions, the player who chose formula_1 gets a negative payoff of formula_9.
Player 1 knows the true state of nature, whereas Player 2 does not. Without communicating, the highest expected payoff they can achieve is formula_11, by always choosing formula_0. If the state of the world were common knowledge, both players would be able to achieve payoff formula_7.
Email communication.
Now assume that the players communicate via emails. Once Player 1 discovers the state of nature, her computer "automatically" sends an email to Player 2 informing him of the true state; Player 2's computer then "automatically" replies with a confirmation that he received the information; Player 1's computer then "automatically" replies with a confirmation that she received the information that he received the information, and so on. This mimics the idea of a "I know that you know that I know that you know..." chain.
However, there is an arbitrarily small probability formula_12 that some technical failure will happen and one of those emails will not arrive at its destination, after which communication will cease. If that happens, the last player to send the message does not know if 1) the other player did not get the last message, or 2) the other player got the last message, but could not send the confirmation email due to the technical failure.
Types and strategies.
Let formula_13 be the number of messages that were sent by Player formula_14's computer — since that information is only observed by Player formula_14, we can think of formula_13 as their Harsanyi type. In terms of choice, players only observe formula_15 and then must choose an action formula_16. A strategy in the electronic mail game is thus defined as a function from formula_17 to formula_16.
The distribution of types formula_18 is given by the following probabilities formula_19: formula_20 is the probability that the state is formula_2 and no messages are sent; formula_21 is the probability that Player 1's formula_22-th message is lost; and formula_23 is the probability that Player 2's formula_22-th confirmation is lost.
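This distribution can be checked with a short Monte Carlo sketch (not part of Rubinstein's paper; the values chosen for formula_5 and formula_12 are arbitrary):
import random
def draw_types(p, eps):
    # simulate one play of the email protocol and return the realized (T1, T2)
    if random.random() > p:
        return 0, 0                  # state a: no messages are sent
    t1 = t2 = 0
    while True:
        t1 += 1                      # Player 1's computer sends a message
        if random.random() < eps:
            return t1, t2            # the message is lost and communication stops
        t2 += 1                      # Player 2's computer sends a confirmation
        if random.random() < eps:
            return t1, t2            # the confirmation is lost
random.seed(1)
samples = [draw_types(p=0.4, eps=0.1) for _ in range(100000)]
print(samples.count((0, 0)) / len(samples))   # close to 1 - p = 0.6
print(samples.count((1, 0)) / len(samples))   # close to p * eps = 0.04
print(samples.count((1, 1)) / len(samples))   # close to p * eps * (1 - eps) = 0.036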
Equilibrium.
The equilibrium concept to be used is that of a Bayesian Nash Equilibrium (BNE). Rubinstein showed that, no matter how small the chance of failure formula_24 and no matter how many confirmation emails were sent, both players always choose to play formula_0, even if they know that the state of nature is formula_3.
Proposition: There is only one BNE where Player 1 plays formula_0 when the state of nature is formula_2. In this equilibrium, both players play formula_0, independently of their types.
The result is counterintuitive, since both know that the true state is formula_3, and they can have arbitrarily precise knowledge of "knowing that the other player knows that they know that the other player knows..." that the state is formula_3. Still, since this chain of information eventually stops, their equilibrium best response still is to always play formula_0.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "b"
},
{
"math_id": 4,
"text": "1-p"
},
{
"math_id": 5,
"text": "p"
},
{
"math_id": 6,
"text": "p < 1/2"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "0"
},
{
"math_id": 9,
"text": "-L"
},
{
"math_id": 10,
"text": "L > M > 0"
},
{
"math_id": 11,
"text": "(1-p)M"
},
{
"math_id": 12,
"text": "\\varepsilon > 0"
},
{
"math_id": 13,
"text": "T_i"
},
{
"math_id": 14,
"text": "i"
},
{
"math_id": 15,
"text": "T_i \\in \\{1, 2, \\ldots\\}"
},
{
"math_id": 16,
"text": "\\{A, B\\}"
},
{
"math_id": 17,
"text": "\\mathbb N \\ni T_i"
},
{
"math_id": 18,
"text": "(T_1, T_2)"
},
{
"math_id": 19,
"text": "\\mathbb P(T_1, T_2)"
},
{
"math_id": 20,
"text": "\\mathbb P(0, 0) = 1-p"
},
{
"math_id": 21,
"text": "\\mathbb P(n+1, n) = p \\varepsilon(1-\\varepsilon)^{2n}"
},
{
"math_id": 22,
"text": "n+1"
},
{
"math_id": 23,
"text": "\\mathbb P(n+1, n+1) = p \\varepsilon(1-\\varepsilon)^{2n+1}"
},
{
"math_id": 24,
"text": "\\varepsilon"
}
]
| https://en.wikipedia.org/wiki?curid=77361091 |
77361219 | Normalized solution (mathematics) |
In mathematics, a normalized solution to an ordinary or partial differential equation is a solution with prescribed norm, that is, a solution which satisfies a condition like formula_0 In this article, the normalized solution is introduced by using the nonlinear Schrödinger equation. The nonlinear Schrödinger equation (NLSE) is a fundamental equation in quantum mechanics and other various fields of physics, describing the evolution of complex wave functions. In Quantum Physics, normalization means that the total probability of finding a quantum particle anywhere in the universe is unity.
Definition and variational framework.
In order to illustrate this concept, consider the following nonlinear Schrödinger equation with prescribed norm:
formula_1
where formula_2 is a Laplacian operator, formula_3 is a Lagrange multiplier and formula_4 is a nonlinearity. If we want to find a normalized solution to the equation, we need to consider the following functional: Let formula_5 be defined by
formula_6
with the constraint
formula_7
where formula_8 is the Hilbert space and formula_9 is the primitive of formula_10.
A common method of finding normalized solutions is through variational methods, i.e., finding the maxima and minima of the corresponding functional with the prescribed norm. In this way one finds a weak solution of the equation; moreover, if it satisfies the constraint, it is a normalized solution.
A simple example on Euclidean space.
On a Euclidean space formula_11, we define a function formula_12
formula_13 with the constraint formula_14.
By direct calculation, it is not difficult to conclude that the constrained maximum is formula_15, with solutions formula_16 and formula_17, while the constrained minimum is formula_18, with solutions formula_19 and formula_20.
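A quick numerical check of this example (purely illustrative; the grid resolution is arbitrary) parametrizes the constraint formula_14 by an angle and scans the circle:
import math
def f(x, y):
    return (x + y) ** 2
points = [(f(math.cos(t), math.sin(t)), round(math.cos(t), 3), round(math.sin(t), 3))
          for t in (2 * math.pi * k / 100000 for k in range(100000))]
print(max(points))   # approximately (2.0, 0.707, 0.707)
print(min(points))   # approximately 0.0, attained where y = -x on the circle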
History.
The exploration of normalized solutions for the nonlinear Schrödinger equation can be traced back to the study of standing wave solutions with prescribed formula_21-norm. Jürgen Moser first introduced the concept of normalized solutions in the study of regularity properties of solutions to elliptic partial differential equations (elliptic PDEs). Specifically, he used normalized sequences of functions to prove regularity results for solutions of elliptic equations, which was a significant contribution to the field. Inequalities developed by Emilio Gagliardo and Louis Nirenberg played a crucial role in the study of PDE solutions in formula_22 spaces. These inequalities provided important tools and background for defining and understanding normalized solutions.
For the variational problem, early foundational work in this area includes the concentration-compactness principle introduced by Pierre-Louis Lions in 1984, which provided essential techniques for solving these problems.
For variational problems with prescribed mass, several methods commonly used to deal with unconstrained variational problems are no longer available. At the same time, a new critical exponent appeared, the formula_23-critical exponent. From the Gagliardo-Nirenberg inequality, one can see that a nonlinearity which is formula_23-subcritical, critical, or supercritical leads to a different geometry for the functional. In the case where the functional is bounded below, i.e., the formula_23-subcritical case, the earliest result on this problem was obtained by Charles-Alexander Stuart, who used bifurcation methods to demonstrate the existence of solutions. Later, Thierry Cazenave and Pierre-Louis Lions obtained existence results using minimization methods. Then, Masataka Shibata considered Schrödinger equations with a general nonlinear term.
In the case where the functional is not bounded below, i.e., the formula_23-supercritical case, some new difficulties arise. First, since formula_24 is unknown, it is impossible to construct the corresponding Nehari manifold. Second, it is not easy to obtain the boundedness of the Palais-Smale sequence. Furthermore, verifying the compactness of the Palais-Smale sequence is challenging because the embedding formula_25 is not compact. In 1997, Louis Jeanjean used the following transform:
formula_26
Thus, one has the following functional:
formula_27
Then,
formula_28
which corresponds exactly to the Pokhozhaev identity of the equation. Jeanjean used this additional condition to ensure the boundedness of the Palais-Smale sequence, thereby overcoming the difficulties mentioned earlier. As the first method to address the issue of normalized solutions for an unbounded functional, Jeanjean's approach has become a common way of handling such problems and has been imitated and developed by subsequent researchers.
In the following decades, researchers expanded on these foundational results. Thomas Bartsch and Sébastien de Valeriola investigate the existence of multiple normalized solutions to nonlinear Schrödinger equations. The authors focus on finding solutions that satisfy a prescribed formula_29 norm constraint. Recent advancements include the study of normalized ground states for NLS equations with combined nonlinearities by Nicola Soave in 2020, who examined both subcritical and critical cases. This research highlighted the intricate balance between different types of nonlinearities and their impact on the existence and multiplicity of solutions.
On a bounded domain, the situation is very different. Define formula_30, where formula_31. The Pokhozhaev identity then reads
formula_32
The boundary term makes it impossible to apply Jeanjean's method. This has led many scholars to explore the problem of normalized solutions on bounded domains in recent years. In addition, a number of interesting results have been obtained in recent years on normalized solutions of Schrödinger systems, the Choquard equation, and the Dirac equation.
Some extended concepts.
Mass critical, mass subcritical, mass supercritical.
Consider a homogeneous nonlinear term, that is, define formula_30, where formula_31. Recall the Gagliardo-Nirenberg inequality: define
formula_33
then there exists a constant formula_34 such that for any formula_35, the following inequality holds:
formula_36
Thus, there's a concept of mass critical exponent,
formula_37
Exponents below this value are called mass subcritical and exponents above it mass supercritical; this distinction also determines whether the functional is bounded below on the constraint.
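A short computation, under the homogeneous assumption formula_30 above and on the unit mass constraint, shows how the exponent 2 + 4/N separates the two regimes. Combining the Gagliardo-Nirenberg inequality with the constraint gives
\int_{\mathbb{R}^N}|u|^p\,dx\;\le\;C_{N,p}^{\,p}\,\|\nabla u\|_2^{\,\gamma_p p},\qquad \gamma_p\,p=\frac{N(p-2)}{2},
so that, with F(u) = |u|^p/p,
I(u)\;\ge\;\frac{1}{2}\|\nabla u\|_2^{2}-\frac{C_{N,p}^{\,p}}{p}\,\|\nabla u\|_2^{\,\gamma_p p}.
If γ_p p < 2, i.e. p < 2 + 4/N, the right-hand side is bounded below (mass subcritical); if γ_p p > 2, i.e. p > 2 + 4/N, the functional is unbounded below on the constraint (mass supercritical); the borderline γ_p p = 2 is exactly p = 2 + 4/N.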
Palais-Smale sequence.
Let formula_38 be a Banach space and formula_39 be a functional. A sequence formula_40 is called a Palais-Smale sequence for formula_41 at the level formula_42 if it satisfies the following conditions:
1. Energy Bound: formula_43.
2. Gradient Condition: formula_44 as formula_45 for some formula_46.
Here, formula_47 denotes the Fréchet derivative of formula_41, and formula_48 denotes the inner product in formula_38. The Palais-Smale sequence is named after Richard Palais and Stephen Smale.
References.
| [
{
"math_id": 0,
"text": "\\int_{\\mathbb{R}^N} |u(x)|^2 \\, dx = 1."
},
{
"math_id": 1,
"text": " -\\Delta u + \\lambda u = f(u), \\quad \\int_{\\mathbb{R}^N} |u|^2 \\, dx = 1, "
},
{
"math_id": 2,
"text": " \\Delta"
},
{
"math_id": 3,
"text": "N\\ge1, \\lambda\\in \\mathbb{R} "
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "I: H^{1}_{0}(\\mathbb{R}^{N})\\rightarrow \\mathbb{R}"
},
{
"math_id": 6,
"text": "I(u)=\\frac{1}{2}\\int_{\\mathbb{R}^{N}}|\\nabla u|^{2}dx -\\int_{\\mathbb{R}^{N}}F(u)dx"
},
{
"math_id": 7,
"text": "\\mathcal{M}=\\{u\\in H^{1}_{0}(\\mathbb{R}^{N}): \\int_{\\mathbb{R}^{N}} u^{2}=1\\}, \\ \\ \\ \\ "
},
{
"math_id": 8,
"text": " H^{1}_{0}(\\mathbb{R}^{N}) "
},
{
"math_id": 9,
"text": " F(s) "
},
{
"math_id": 10,
"text": " f(s) "
},
{
"math_id": 11,
"text": "\\mathbb{R}^3"
},
{
"math_id": 12,
"text": "f:\\mathbb{R}^2 \\rightarrow\\mathbb{R}: "
},
{
"math_id": 13,
"text": "f(x,y) = (x + y)^2"
},
{
"math_id": 14,
"text": " x^2 +y^2 =1"
},
{
"math_id": 15,
"text": "f=2"
},
{
"math_id": 16,
"text": " (x,y)= (\\frac{\\sqrt{2}}{2},\\frac{\\sqrt{2}}{2})"
},
{
"math_id": 17,
"text": " (x,y)= (\\frac{-\\sqrt{2}}{2},\\frac{-\\sqrt{2}}{2})"
},
{
"math_id": 18,
"text": "f=0"
},
{
"math_id": 19,
"text": " (x,y)= (\\frac{-\\sqrt{2}}{2},\\frac{\\sqrt{2}}{2})"
},
{
"math_id": 20,
"text": " (x,y)= (\\frac{\\sqrt{2}}{2},\\frac{-\\sqrt{2}}{2})"
},
{
"math_id": 21,
"text": "L^2"
},
{
"math_id": 22,
"text": " L^p "
},
{
"math_id": 23,
"text": " L^2"
},
{
"math_id": 24,
"text": "\\lambda"
},
{
"math_id": 25,
"text": "H^1(\\mathbb{R}^N) \\hookrightarrow L^2(\\mathbb{R}^N) "
},
{
"math_id": 26,
"text": " (s \\star u)(x) := e^{\\frac{Ns}{2}} u(e^s x)."
},
{
"math_id": 27,
"text": " \\tilde{I}(u, s) := I(s \\star u) = e^{2s} \\int_{\\mathbb{R}^N} |\\nabla u(x)|^2 dx - \\frac{1}{e^{sN}} \\int_{\\mathbb{R}^N} F(e^{\\frac{Ns}{2}} u(x)) dx."
},
{
"math_id": 28,
"text": "P(u) := \\partial_s \\tilde{I}(u, s)|_{s=0} = \\int_{\\mathbb{R}^N} |\\nabla u|^2 - N \\int_{\\mathbb{R}^N} \\left( \\frac{1}{2} f(u)u - F(u) \\right) "
},
{
"math_id": 29,
"text": " L^2 "
},
{
"math_id": 30,
"text": "f(s)=|s|^{p-2}s "
},
{
"math_id": 31,
"text": " p \\in (2, 2^*) "
},
{
"math_id": 32,
"text": " \\frac{2 - N}{2} \\int_{\\Omega} |\\nabla u|^2 \\, dx - \\frac{\\lambda N}{2} \\int_{\\Omega} u^2 \\, dx + \\frac{N}{p} \\int_{\\Omega} |u|^{p} \\, dx - \\frac{1}{2} \\int_{\\partial \\Omega} |\\frac{\\partial u}{\\partial \\nu}|^2 x \\cdot \\nu \\, d\\sigma = 0.\n"
},
{
"math_id": 33,
"text": " \\gamma_p := \\frac{N(p - 2)}{2p}, "
},
{
"math_id": 34,
"text": " C_{N,p} "
},
{
"math_id": 35,
"text": " u \\in H^1(\\mathbb{R}^N) "
},
{
"math_id": 36,
"text": " |u|_p \\leq C_{N,p} |\\nabla u|^{\\gamma_p}_2 |u|^{1-\\gamma_p}_2."
},
{
"math_id": 37,
"text": " p := 2+ \\frac{4}{N}. "
},
{
"math_id": 38,
"text": " X "
},
{
"math_id": 39,
"text": " I: X \\to \\mathbb{R} "
},
{
"math_id": 40,
"text": " (u_n)_n \\subset X "
},
{
"math_id": 41,
"text": " I "
},
{
"math_id": 42,
"text": " c \\in \\mathbb{R} "
},
{
"math_id": 43,
"text": "\\sup_n I(u_n) < \\infty "
},
{
"math_id": 44,
"text": " \\langle I'(u_n), u_n - u \\rangle \\to 0 "
},
{
"math_id": 45,
"text": " n \\to \\infty "
},
{
"math_id": 46,
"text": " u \\in X "
},
{
"math_id": 47,
"text": " I' "
},
{
"math_id": 48,
"text": " \\langle \\cdot, \\cdot \\rangle"
}
]
| https://en.wikipedia.org/wiki?curid=77361219 |
77364288 | Enigma-M4 | Rotor key machine used in World War II
The Enigma-M4 (also called "Schlüssel M", more precisely "Schlüssel M Form M4") is a rotor key machine that was used for encrypted communication by the German Kriegsmarine during World War II from October 1941.
In contrast to the previously used Enigma-M3, the Enigma I, and the Enigma-G used by the German secret services, the Enigma-M4 is characterized by "four" rollers (not counting the entry roller and the reverse roller). This makes its encryption cryptographically much stronger than that of the other Enigma variants with only "three" rollers, and for a long time it could not be broken by the Allies.
Previous history.
All parts of the German Wehrmacht used the rotor cipher machine Enigma to encrypt their secret messages. However, different models were used. While the army and air force used the Enigma I almost exclusively, the navy used different model variants of the Enigma-M, which it usually referred to as the "Schlüssel M". The main difference was the use of more rollers than in the Enigma I, where three rollers could be selected from a range of five. This resulted in 5·4·3 = 60 possible roller positions for the Enigma I.
The Enigma-M1, on the other hand, had an assortment of six different rollers (marked with the Roman numerals I to VI), three of which were inserted into the machine. This increased the combinatorial complexity to 6·5·4 = 120 possible roller positions. In the Enigma-M2, the range of rollers had been increased by a further one, so that 7·6·5 = 210 roller positions were now possible. And with the Enigma-M3, which was used by the navy at the beginning of the war, there were eight rollers, three of which were used. This meant that the M3 had 8·7·6 = 336 possible roller positions. While the M1 to M3 models only ever used "three" rollers in the machine, the M4 had "four" rollers next to each other in the machine. This significantly increased the cryptographic security of the M4 compared to its predecessor models.
The operation was described in the naval service regulation "M.Dv.Nr. 32/1" with the title "Der Schlüssel M - Verfahren M Allgemein". The key procedure used in the Enigma-M4 was precisely defined in this document.
A particularly important part of the key procedure, in addition to correct operation, was the agreement of a common cryptological key. As with all symmetric cryptosystems, the sender and receiver of an encrypted secret message not only had to have the same machine, but also had to set it identically to each other. For this purpose, secret key tables were distributed in advance to all authorized participants. To ensure security within such a large organization as the Kriegsmarine, there were many different key networks, as in the army and air force, for example Aegir for surface warships and auxiliary cruisers overseas, Hydra for warships near the coast, Medusa for U-boats in the Mediterranean, Neptun for battleships and heavy cruisers, and Triton for the Atlantic submarines. There were other key nets as well. The Kriegsmarine initially used the Enigma-M3 exclusively for all of them.
However, the need was soon recognized to form a separate, specially secured key network for the units operating on the high seas. In October 1941, a letter from the Oberkommando der Kriegsmarine (OKM) to the Commander of Battleships introduced the Neptun key board as a new key and ordered the use of the Enigma-M4 for this purpose. This took place four months before the M4 was put into service for the U-boats. Unlike Triton, the Neptun key network was not broken at the time.
Structure.
The construction of the Enigma-M4 has some special features compared to the Enigma I. The most important difference is the use of four rollers (rotors) instead of only three as in the other models. The four rollers were selected from a range of eight plus two rollers. A distinction had to be made between rollers I to V, which were also used in the Enigma I, rollers VI to VIII, which were familiar from the Enigma-M3, and the two rollers specially designed for the Enigma-M4, which were thinner than the others and were therefore referred to as "thin" rollers.
The technical reason for the production of thin rollers was the Navy's desire to be able to continue using the same housing as Enigma I and Enigma-M3. For this purpose, the space previously occupied by a (thick) reversing roller now had to be used by a thin reversing roller and one of the newly added thin fourth rollers. Instead of Roman numerals, the thin rollers were marked with Greek letters, namely "β" (Beta) and "γ" (Gamma). Although they could each be manually rotated to one of 26 positions, unlike rollers I to VIII, they did not rotate any further during the encryption process.
The wiring of the entry roller and the eight rotating rollers (I to VIII) of the Enigma-M4 was identical to that of the Enigma I and M3. New were, in particular, the two non-rotating thin rollers "Beta" and "Gamma" and the two equally thin reversing rollers (UKW) "Bruno" and "Caesar". The wiring of the two thin rollers and the reversing rollers was designed in such a way that the combination of the reversing roller (UKW) with the "matching" thin roller (i.e. Bruno with Beta or Caesar with Gamma) results in exactly the same involutory character permutation as the (thick) reversing rollers B and C of the Enigma I and the Enigma-M3 alone. This served the purpose of backward compatibility with the earlier systems. The only requirement for this was that the message key of the U-boats was chosen so that it began with "A". Then the left-hand roller was in exactly the position in which it worked together with the matching reversing roller in the same way as the corresponding reversing roller of the other Enigma models.
As with the Model-D, the keys and lamps were arranged in a layout similar to QWERTZU. Only the letters P and L were shifted to the edges of the bottom row, in contrast to QWERTZU.
Q W E R T Z U I O
A S D F G H J K
P Y X C V B N M L
In contrast to other models such as the Model-D, Model-K or the Model-T, there was no support for digits on the keyboard. Special characters (as in the Model-G) were also not provided.
For data compression, the Navy used signal groups (four letters) and phrases from code books such as the short signal booklet and the weather short key, which consisted purely of letters.
Operation.
To fully set the key, the navy distinguished between "outer" and "inner" key parts. The inner key included the selection of the rollers, the roller position and the ring position. The inner key settings could only be made by an officer, who opened the housing and selected, set up and arranged the rollers accordingly. He then closed the Enigma again and handed it over to the radio operator.
The radio operator's task was to make the external key settings, i.e. to insert the ten pairs of plugs into the plug board on the front panel of the M4 according to the day key, close the front flap and then turn the four rollers to the correct starting position. While the inner settings were only changed every two days, the outer settings had to be changed every day. The key change also took place on the high seas at 12:00 D.G.Z. ("German legal time"), for example early in the morning for U-boats operating off the American east coast.
The keys ordered were listed on top-secret "key boards" at the time. Here, too, a distinction was made between internal and external settings. The key board intended for the officer with the inner settings looked something like this:
Key M " T r i t o n "
Month: J u n e 1945 Test number: 123
Secret matter of command!
Key panel M-General
(Schl.T. M Allg.)
Inner setting
Change 1200 h D.G.Z.
----------------------------------------------
|Months- | |
| day | Inner setting |
| 29. |B Beta VII IV V |
| | A G N O |
| 27. |B Beta II I VIII |
| | A T Y F |
| 25. |B Beta V VI I |
| | A M Q T |
Only a few days of the month are shown above as an example, with the days sorted in descending order, as was customary at the time. This made it easy to cut off and destroy the "used" codes from previous days. The other key board, which listed the outer key parts, was similarly structured.
Example for June 27, 1945: The inner setting "B Beta II I VIII" means that the officer first had to select the thin roller B as the reversing roller. Then he had to set the non-rotating Greek roller Beta to ring position A, roller II to ring position T, roller I to ring position Y and finally roller VIII on the far right (the fast rotor) to ring position F, and to insert the rollers in this order from left to right. With a little dexterity, the ring position could also be set on rollers that were already installed. The officer locked the roller cover and handed the M4 over to the radio operator, who made the external settings using his own documents.
Key M " T r i t o n "
Month: J u n e 1945 Test number: 123
Secret commando matter!
Key panel M-General
(Schl.T. M Allg.)
External setting
Change 1200 h D.G.Z.
----------------------------------------------------------------------
| Mon- | | Home - |
| ths- | P l u g c o n n e c t i o n s | Posi- |
| day | | tion |
| 30. |18/26 17/4 21/6 3/16 19/14 22/7 8/1 12/25 5/9 10/15 |H F K D |
| 29. |20/13 2/3 10/4 21/24 12/1 6/5 16/18 15/8 7/11 23/26 |O M S R |
| 28. |9/14 4/5 18/24 3/16 20/26 23/21 12/19 13/2 22/6 1/8 |E Y D X |
| 27. |16/2 25/21 6/20 9/17 22/1 15/4 18/26 8/23 3/14 5/19 |T C X K |
| 26. |20/13 26/11 3/4 7/24 14/9 16/10 8/17 12/5 2/6 15/23 |Y S R B |
The radio master had to connect the double-pole sockets on the front panel with corresponding double-pole cables. As a rule, exactly ten cables were plugged in. Six letters remained "unplugged". The plug connections in the navy (in contrast to the other parts of the Wehrmacht) were listed numerically and not alphabetically. In the corresponding secret Naval service regulations "M.Dv.Nr. 32/1" with the title "Der Schlüssel M - Verfahren M Allgemein" a conversion table was given as an aid for the operator.
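The conversion follows the simple scheme 1 = A, 2 = B, ..., 26 = Z. The short Python sketch below (the helper function is made up for this illustration and is not part of any historical procedure) turns a numerically listed plugging into letter pairs:
# Convert navy-style numerical plug connections (1 = A ... 26 = Z) to letter pairs.
def plugs_to_letters(numeric_plugs):
    to_letter = lambda n: chr(ord('A') + n - 1)
    pairs = []
    for pair in numeric_plugs.split():
        a, b = pair.split('/')
        pairs.append(to_letter(int(a)) + to_letter(int(b)))
    return ' '.join(pairs)

# Example: the external setting listed above for the 27th of the month.
print(plugs_to_letters("16/2 25/21 6/20 9/17 22/1 15/4 18/26 8/23 3/14 5/19"))
# Expected output: PB YU FT IQ VA OD RZ HW CN ES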
Now, the key operator had to turn the four rollers to a defined starting position and the Enigma-M4 was ready to encrypt or decrypt radio messages.
Key space.
The size of the key space of the Enigma-M4 can be calculated from the four individual partial keys and the number of different key settings possible in each case. The total key space of the Enigma-M4 results from the following four factors:
a) Three out of eight rollers are selected for the three right-hand places. In addition, one of two Greek rollers is chosen for the position to their left, and one of two reversing rollers for the far left. This results in 2·2·(8·7·6) = 1344 possible roller positions (corresponds to a "key length" of about 10 bits).
b) There are 26 different ring positions for each of the two right-hand rollers. The rings of the two left-hand rollers do not contribute to the enlargement of the key space, as the Greek roller is not indexed. A total of 26² = 676 ring positions (corresponds to about 9 bits) are relevant.
c) There are 26 different roller positions for each of the four rollers. (The reverse roller cannot be adjusted.) A total of 26⁴ = 456,976 roller positions are therefore available (corresponds to just under 19 bits). If the ring position is assumed to be known, 26³ − 26² = 16,900 initial positions are to be eliminated as cryptographically redundant. This leaves 440,076 roller positions as relevant (also corresponds to about 19 bits).
d) A maximum of 13 plug connections can be made between the 26 letters. Starting from the empty plug board (zero connections), there are 26 options for one end of the first cable and then another 25 for the other end. This means that there are 26·25 different options for plugging in the first cable. However, as it does not matter in which order the two cable ends are plugged in, half of the options are omitted. This leaves 26·25/2 = 325 possibilities for the first connection. For the second, one gets 24·23/2 = 276 possibilities. In general, there are (26 − 2n + 2)·(26 − 2n + 1)/2 possibilities for the n-th plug connection (see also: Gaussian summation formula).
The total number of possible plug combinations when using several plugs results from the product of the possibilities for the individual plug connections. However, since the order of execution is not important here either (it is cryptographically equivalent if, for example, A is plugged in first with X and then B with Y or vice versa, first B with Y and then A with X), the corresponding cases must not be counted as separate keys. In the case of two plug connections, these are exactly half of the cases, so the previously determined product must be divided by 2. With three plug connections, there are 6 possible sequences for performing the plug connections, all six of which are cryptographically equivalent, so the product must be divided by 6. In the general case, with n plug connections, the product of the previously determined possibilities is divided by n! (factorial). The number of possibilities for exactly n plug connections is given by
formula_0
For the M4, exactly ten plug connections had to be made. According to the formula above, this results in 150,738,274,937,250 (more than 150 trillion) plugging possibilities (corresponds to about 47 bits).
The total key space of an Enigma-M4 with three rollers selected from a stock of eight, one of two Greek rollers and one of two reverse rollers, and using ten plug connections, can be calculated from the product of the 1344 roller positions, 676 ring positions, 440,076 roller positions and 150,738,274,937,250 plugging possibilities determined in sections a) to d) above. It amounts to:
1344 · 676 · 440,076 · 150,738,274,937,250 = 60,269,444,695,419,208,913,664,000
That is more than 6·10²⁵ possibilities and corresponds to a key length of almost 86 bits.
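The figures above can be checked with a few lines of Python; this is only a sketch of the arithmetic from sections a) to d), not of any historical procedure:
from math import factorial, log2

def plug_possibilities(n, letters=26):
    # Number of ways to place exactly n plug connections among 26 letters:
    # 26! / (2^n * n! * (26 - 2n)!)
    return factorial(letters) // (2**n * factorial(n) * factorial(letters - 2 * n))

roller_orders = 2 * 2 * (8 * 7 * 6)            # a) 1344 roller positions
ring_positions = 26 * 26                       # b) 676 relevant ring positions
start_positions = 26**4 - (26**3 - 26**2)      # c) 440,076 relevant start positions
plugs = plug_possibilities(10)                 # d) 150,738,274,937,250 pluggings

total = roller_orders * ring_positions * start_positions * plugs
print(plugs)            # 150738274937250
print(total)            # roughly 6 * 10**25
print(log2(total))      # just under 86 bits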
The key space is huge. However, as explained in more detail in the main article about the Enigma, the size of the key space is only a necessary, but not a sufficient, condition for the security of a cryptographic method. Even a method as simple as monoalphabetic substitution (using 26 letters like the M4) has a key space of 26! (factorial), which is roughly 4·10²⁶, corresponds to about 88 bits, and is thus even slightly larger than that of the Enigma-M4. Nevertheless, a monoalphabetic substitution is very insecure and can easily be broken (deciphered).
Deciphering.
British cryptanalysts had been working on deciphering the Enigma at Bletchley Park, northwest of London, since the outbreak of the war.
The most important tool was a special electromechanical machine called the Turing bomb, which was invented by the English mathematician Alan Turing and which could be used to determine the valid day keys. This required "probable words", i.e. text passages that appear in the plaintext to be deciphered. The cryptanalysts benefited from the German thoroughness in writing routine messages, such as weather reports, with recurring patterns that could be used for deciphering. With the help of the Turing bomb, it was possible from January 1940 to decipher first the radio messages encrypted by the Luftwaffe and later also those of the army.
The encryption method used by the Kriegsmarine, i.e. the "Schlüssel M", proved to be much more resistant to decryption attempts. Even the Enigma-M3, with its only three (and not yet four) rollers, was more difficult to break than the Enigma I used by the Luftwaffe and the army. In addition to the use of a larger range of rollers (eight instead of just five to choose from), this was also largely due to a particularly sophisticated procedure for agreeing the message key ("Verfahren zur Spruchschlüsselvereinbarung") used by the navy. The British codebreakers only succeeded in breaking into the "Schlüssel M" in May 1941, after the capture of the German U-boat "U 110" by a British destroyer on May 9, 1941, together with an intact M3 machine and all secret documents (codebooks), including the important double-letter exchange tables.
There was a particularly painful interruption ("black-out") for the British when the M3 was replaced by the M4 (with four rollers) in the U-boats on February 1, 1942. This key network, called "Schlüsselnetz Triton" by the Germans and "Shark" (German: "Hai") by the British, could not be broken for ten months, a period the U-boat sailors called the "Second Happy Time", during which the German U-boats again scored major successes. The break-in into "Shark" was not achieved until December 1942, after a British destroyer attacked the German U-boat "U-559" in the Mediterranean Sea on October 30, 1942. "U-559" was boarded and important secret key documents such as the short signal booklet and the weather short key were captured, with the help of which Bletchley Park also managed to overcome the Enigma-M4 and Triton. However, it initially took several days to decipher the messages, which reduced their information value.
From 1943, the Americans came to the rescue: under the leadership of Joseph Desch, the United States Naval Computing Machine Laboratory (NCML), based at the National Cash Register Company (NCR) in Dayton, Ohio, produced from April 1943 more than 120 high-speed variants of the British "bomb" that were specifically targeted against the M4. American agencies, such as the Signal Security Agency (SSA), the Communications Supplementary Activity (CSAW), and the United States Coast Guard Unit 387 (USCG Unit 387), relieved the British of much of the time-consuming day-to-day key finding and were able to break Triton quickly and routinely. The British now left the M4's key finding to their American allies and their fast "Desch Bombes". From September 1943, deciphering M4 radio messages usually took less than 24 hours. Even if a radio message could be completely deciphered, not all parts were always understandable, because position information was "overcoded" using a special procedure and thus specially protected. The Kriegsmarine had introduced the so-called address book procedure for this purpose in November 1941.
On June 4, 1944, the capture of "U-505" again brought up-to-date key material: the short signal booklet, the identification group booklet and, above all, the so-called address book procedure. This was the urgently sought secret procedure for overcoding the U-boat locations. The yield of secret material from "U-505" equaled that of "U-110" and "U-559".
Throughout the war, more than one million naval radio messages were deciphered in Hut Eight (Barrack 8) at Bletchley Park. This covers the period from the fall of 1941 until the unconditional surrender of the Wehrmacht in May 1945; with the exception of the "black-out" between February and December 1942, deciphering could be maintained without interruption for most of the time.
Significance for the history of war.
The decoding of the M4 radio traffic was of enormous importance for Allied progress in anti-submarine warfare. The reports from the boats, with precise position and course information, provided the Allies with a complete strategic picture of the situation. The U-boats also betrayed themselves simply by sending radio telegrams, which could be detected and localized by Allied warships using radio direction finding such as Huff-Duff. Radar as a means of radiolocation at sea and ASDIC, an early form of sonar for sound detection under water, were other important tactical tools for submarine hunting. But none of this provided as complete a picture of the situation as the radio transmissions.
The immediate consequence of the American decryptions was the sinking of eleven of the eighteen German supply U-boats (Versorgungs-U-Boote, the so-called "Milchkühe" or "milk cows") within a few months in 1943.
This led to a weakening of all Atlantic U-boats, which could no longer be supplied at sea, but had to make the long and dangerous journey home through the Bay of Biscay to the U-boat bases on the French west coast.
Especially for the execution of "Operation Overlord", the planned invasion of Normandy, it was crucial for the Allied leadership to have the most comprehensive, up-to-date and, of course, correct picture of the situation. After the capture of "U 505" immediately before the planned "D-Day", which then took place two days later on June 6, they feared that the German key procedures could suddenly be changed as a result of the capture of U 505 becoming known. This might have prevented the breaking of the Enigma keys on the day of the invasion, with potentially fatal consequences for the invasion forces. In fact, however, everything remained unchanged, and so the day key could be broken less than two hours after midnight using the crib "WETTERVORHERSAGEBISKAYA" (weather forecast Biscay), which the British cryptanalysts had easily and correctly guessed, and the invasion succeeded.
Many German submariners, above all the former chief of the B-Dienst (observation service) of the Kriegsmarine, were still very sure long after the war that "their" four-roller key machine was "unbreakable". When British information became known in 1974 that clearly proved the opposite, this caused a real shock among the survivors of the U-boat war: of the approximately 40,000 German submariners, around 30,000 had not returned home from deployment - the highest loss rate of all German branches of the armed forces. The special historical significance of the Enigma-M4 and its decryption is underlined by a statement made by former British Prime Minister Winston Churchill: "The only thing that really frightened me during the war was the U-boat peril."
Security check.
Due to various suspicious events, the German side conducted several investigations into the security of its own machine. A significant example of the German considerations, procedures, conclusions and measures derived from them can be found in an English-language, highly secret interrogation protocol classified at the time as "TOP SECRET "ULTRA"", which was produced immediately after the war, on June 21, 1945, by the Allied (British-American) TICOM (Target Intelligence Committee) at the Marine Intelligence School in Flensburg-Mürwik. It records the statements of the German naval officer Lt.z.S. Hans-Joachim Frowein, who from July to December 1944 had been assigned to the OKM/4 Skl II (Department II of the Seekriegsleitung, the naval war command) to investigate the security of the M4. The leading German cryptanalyst Wilhelm Tranow, who was also interrogated in this context, explained that the reason for this investigation was the extremely high rate of losses suffered by the German U-boats, particularly in 1943 and the first half of 1944. The German naval command could not explain this, especially why U-boats were sunk at very specific positions, and again asked itself the question: "Is the machine secure?"
To clarify this question, Frowein was seconded from Skl III to Skl II for six months from July 1944 with the order to carry out a thorough investigation into the security of the four-roller Enigma. Two other officers and ten men were assigned to him for this purpose. They began their investigations with the assumption that the enemy knew the machine, including all the rollers, and had a suspected plaintext ("crib") of 25 letters in length. The reason for choosing this relatively short crib length was their knowledge that U-boat radio messages were often very short. The result of their investigation was that this was sufficient to recover the roller position and the plugging.
Frowein was able to explain in detail to the British interrogators his thoughts and procedures at the time when it came to unraveling his own machine. Although neither he nor any of his colleagues had any previous experience in the cryptanalysis of key machines, such as the commercial Enigma, they had succeeded in breaking the M4 within six months, at least in theory. The methods developed were strikingly similar to those actually used by the British in "B.P.", but of course Frowein did not know this. As he went on to explain, he had also realized that his break-in method would be severely disrupted if the left or middle roller within the "crib" were to advance. This would have required case distinctions in the cryptanalysis, which would have increased the workload by a factor of 26, which was considered to be practically unacceptably high for a potential attacker.
After presenting the results, the naval command came to the conclusion that although the M3 and even the M4 were theoretically vulnerable to attack, this would no longer be the case if it was ensured that the roller advance (of the middle roller) occurred sufficiently frequently. In December 1944, it was therefore ordered that, with immediate effect, only rollers with two transfer notches, i.e. one of rollers VI, VII or VIII, could be used as the right-hand roller. This measure halved the number of possible roller positions (from 8·7·6 = 336 to 8·7·3 = 168), which meant a weakening of the combinatorial complexity, but at the same time strengthened the machine against the recognized weakness.
The Kriegsmarine also passed on the results of the investigation to the other parts of the Wehrmacht, which continued to use only the Enigma I, cryptographically weaker than the M4, with only three rollers and the resulting 60 roller positions. According to Frowein's statement in the TICOM report, the army leadership ("Heeresführung") was "astonished at the Navy's view based on this investigation".
Timeline.
The following is a list of some important dates in the history of the Enigma-M4.
Legend: o No deciphering # Deciphering succeeds
The three gaps (o) in the Allies' ability to decipher are striking.
The M4 was manufactured from 1941, initially at the main production plant of Konski & Krüger (K&K) in Berlin. From 1942, production was also outsourced to other locations (see also: List of Enigma production codes). A large proportion (around 45%) of the production of the M4 from 1942 was carried out by Olympia Büromaschinenwerke AG in Erfurt.
Authentic radio message.
An example is a communication from Kapitänleutnant Hartwig Looks, commander of the German U-boat "U 264", which was encrypted with an Enigma-M4 on November 19, 1942, shortly before the end of the black-out. Before the encryption, the radio operator transcribed the text into Enigma notation, which he then encrypted letter by letter with the M4 and finally sent as ciphertext in Morse code. As the Enigma can only encode uppercase letters, numbers were written out digit by digit, punctuation marks were replaced by "Y" for comma and "X" for period, proper names were enclosed in "J" and important terms or letters were doubled or tripled to protect against misunderstandings caused by transmission errors. It was also customary for the navy to arrange the text in groups of four, while the army and air force used groups of five. Short radio messages and the almost unavoidable spelling and transmission errors in practice make deciphering based on statistical analyses difficult.
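A small Python sketch of this preparation step follows; it implements only the conventions named above (digits written out, "Y" for comma, "X" for period, grouping in fours), the digit words for 5 and 7 are assumed, and umlauts, the "J" marking of proper names and the doubling of important letters are left out:
# Rewrite a message in Enigma notation and arrange it in groups of four.
DIGITS = {"0": "NUL", "1": "EINS", "2": "ZWO", "3": "DREI", "4": "VIER",
          "5": "FUENF", "6": "SECHS", "7": "SIEBEN", "8": "ACHT", "9": "NEUN"}

def to_enigma_groups(text):
    out = []
    for ch in text.upper():
        if ch.isdigit():
            out.append(DIGITS[ch])      # digits are spelled out
        elif ch == ",":
            out.append("Y")             # comma -> Y
        elif ch == ".":
            out.append("X")             # period -> X
        elif ch.isalpha():
            out.append(ch)
        # spaces and other characters are simply dropped
    s = "".join(out)
    return " ".join(s[i:i + 4] for i in range(0, len(s), 4))

print(to_enigma_groups("Letzter Gegnerstand 0830 Uhr."))
# LETZ TERG EGNE RSTA NDNU LACH TDRE INUL UHRX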
A more detailed plaintext:
By U 264 Hartwig Looks - Radio Telegram 1132/19 - Contents:
During an attack with depth charges (water bombs, "Wabos") we were pushed under water. The last enemy position we detected was at 8:30 o'clock in naval square AJ 9863, heading 220 degrees, speed 8 knots. We are following up. Weather data: barometric pressure 1014 millibar, falling. Wind from north-northeast, strength 4. Visibility 10 nautical miles.
Shortened plain text:
From Looks - FT 1132/19 - Contents:
Pressed under water during attack, Wabos.
Last enemy position 0830 hrs Mar.-Qu. AJ 9863, 220 degrees, 8 nm.
Push to. 14 mb, falling. NNO 4. visibility 10.
Transcribed plain text in groups of four:
vonv onjl ooks jfff ttte inse insd reiz woyy eins
neun inha ltxx beia ngri ffun terw asse rged ruec
ktyw abos xlet zter gegn erst andn ulac htdr einu
luhr marq uant onjo tane unac htse chsd reiy zwoz
wonu lgra dyac htsm ysto ssen achx eins vier mbfa
ellt ynnn nnno oovi erys icht eins null
Ciphertext (with spelling and transcription errors):
NCZW VUSX PNYM INHZ XMQX SFWX WLKJ AHSH NMCO CCAK
UQPM KCSM HKSE INJU SBLK IOSX CKUB HMLL XCSJ USRR
DVKO HULX WCCB GVLI YXEO AHXR HKKF VDRE WEZL XOBA
FGYU JQUK GRTV UKAM EURB VEKS UHHV OYHA BCJW MAKL
FKLM YFVN RIZR VVRT KOFD ANJM OLBG FFLE OPRG TFLV
RHOW OPBE KVWM UQFM PWPA RMFH AGKX IIBG
The ciphertext could be deciphered on February 2, 2006 with the following key settings:
Basic position and key:
Roller position: UKW Bruno-Beta-II-IV-I
Ring position: AAAV
Plug: AT BL DF GJ HM NW OP QY RZ VX
Message key: VJNA
Deciphered text (with spelling and transcription errors, breaks and spaces for readability):
von von j looks j hff ttt eins eins drei zwo yy qnns neun inhalt xx
bei angriff unter wasser gedrueckt y
wabos x letzter gegnerstand nul acht drei nul uhr mar qu anton jota neun acht seyhs drei y
zwo zwo nul grad y acht sm y
stosse nach x
ekns vier mb faellt y nnn nnn ooo vier y sicht eins null
At 232 characters (58 groups of four), the text is unusually long and uses neither the short signal booklet nor the weather short key.
The weather information "Air pressure 1014 mb falling, wind north-north-east with 4 Bft, visibility 10 nm" would have been shortened by the weather short key (WKS) to 8 characters
hrbw apeh
instead of the 44 characters used
eins vier mbfa ellt ynnn nnno oovi erys icht eins null
References.
| [
{
"math_id": 0,
"text": "\\frac{1}{n!} \\prod_{i=1}^n \\frac{(26-2i+2)(26-2i+1)}{2} \\;=\\; \\frac{26!}{2^n\\cdot n!\\cdot(26-2n)!}"
}
]
| https://en.wikipedia.org/wiki?curid=77364288 |
7736733 | Seismic source | Device that generates controlled seismic energy used for seismic surveys
A seismic source is a device that generates controlled seismic energy used to perform both reflection and refraction seismic surveys. A seismic source can be simple, such as dynamite, or it can use more sophisticated technology, such as a specialized air gun. Seismic sources can provide single pulses or continuous sweeps of energy, generating seismic waves, which travel through a medium such as water or layers of rocks. Some of the waves then reflect and refract and are recorded by receivers, such as geophones or hydrophones.
Seismic sources may be used to investigate shallow subsoil structure, for engineering site characterization, or to study deeper structures, either in the search for petroleum and mineral deposits, or to map subsurface faults or for other scientific investigations. The returning signals from the sources are detected by seismic sensors (geophones or hydrophones) in known locations relative to the position of the source. The recorded signals are then subjected to specialist processing and interpretation to yield comprehensible information about the subsurface.
Source model.
A seismic source signal is characterized chiefly by its amplitude, its decay with time, and its frequency content.
A generalized equation exhibiting these properties is:
formula_0
where formula_1 is the maximum frequency component of the generated waveform.
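As an illustration, the following Python sketch samples the waveform of the equation above; the amplitude, decay constant and maximum frequency used here are arbitrary example values, not field parameters:
import numpy as np

def source_signal(t, beta, alpha, f_max):
    # s(t) = beta * exp(-alpha * t**2) * sin(2 * pi * f_max * t)
    return beta * np.exp(-alpha * t**2) * np.sin(2 * np.pi * f_max * t)

t = np.linspace(-0.1, 0.1, 2001)                           # 200 ms window, in seconds
s = source_signal(t, beta=1.0, alpha=2000.0, f_max=60.0)   # assumed values
print(float(np.max(np.abs(s))))                            # peak amplitude, bounded by beta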
Types of sources.
Sledgehammer.
The most basic seismic source is a sledgehammer. Seismic energy is generated either by striking the ground directly or, more commonly, by striking a metal or polyethylene plate placed on the ground. It is typically applied for near-surface seismic refraction surveys. The impact of the sledgehammer can provide sufficient seismic energy for interface depths up to 30 m or more, depending on geological conditions and physical properties.
Explosives.
The explosives most widely used as seismic sources are known as gelatin dynamites. These dynamites fall into three subcategories: straight gelatins, in which nitroglycerin (glyceryl trinitrate, chemical formula C3H5(ONO2)3) is the active component; ammonia gelatins, in which ammonium nitrate is the active component; and semi-gelatins, in which the composition consists mostly of nitroglycerin.
Upon detonation, explosives release large volumes of expanding gas very quickly, forcing great pressure to the surroundings in the form of seismic waves.
Explosives have been used as seismic sources for decades because of the reliability and energy efficiency they provide. Such sources are most commonly used on land and in swampy environments with thick sediments. Typical charge sizes used in the field for reflection surveys are 0.25 kg to 100 kg for single-hole sources, 0.25 kg to 250 kg or more for multiple-hole sources, and may reach 2500 kg or more for refraction surveys.
Though dynamites and other explosives are efficient seismic sources because of their low cost, ease of transport in difficult terrain, and lack of regular maintenance compared to other sources, the use of explosives is becoming restricted in certain areas, causing their use to decline and increasing the popularity of alternative seismic sources.
For instance, hexanitrostilbene was the main explosive fill in the "thumper" mortar round canisters used as part of the Apollo Lunar Active Seismic Experiments. Generally, the explosive charges are placed below ground, in a hole that is drilled with dedicated drilling equipment for this purpose. This type of seismic drilling is often referred to as "Shot Hole Drilling". A common drill rig used for "Shot Hole Drilling" is the ARDCO C-1000 drill mounted on an ARDCO K 4X4 buggy. These drill rigs often use water or air to assist the drilling.
Air gun.
An air gun is used for marine reflection and refraction surveys. It consists of one or more pneumatic chambers that are pressurized with compressed air at pressures from 14 to 21 MPa (2000 to 3000 lbf/in2). Air guns are submerged below the water surface and towed behind a seismic ship. When an air gun is fired, a solenoid is triggered that releases high pressure air from one chamber to the back of a shuttle that is normally held in balance between the two equally pressurized chambers. The instant lowering of air pressure in the first chamber allows the shuttle to move rapidly into the first chamber, releasing a high pressure air reservoir that is behind the shuttle in the second chamber through ports directly into the sea, producing a pulse of acoustic energy. Air gun arrays may consist of up to 48 individual air guns with different size chambers, or certain air gun volumes may be clustered together. The firing of all of the array is controlled by a gun controller and is usually done to within a ± 1 or 2 millisecond tolerance, the aim being to create the optimum initial shock wave followed by the minimum reverberation of the air bubble(s). Since the shuttle is magnetized, the rapid movement into the first chamber on releasing the solenoid valve provides a small current that is in effect a timing signal for the firing gun that is returned to the gun controller. A near-field hydrophone located at a known measured distance from the gun port can also be used to time the first break signal into the hydrophone for accurate gun timing verification.
Air gun maintenance is important as guns can misfire; the worst case scenario being an auto-fire where the gun actually fires repeatedly out of synch because of a defect in the gun itself such as a damaged solenoid valve or a leaking gun O-ring. A single auto-firing gun can result in the total array bubble signature becoming corrupted and if undetected, can result in many seismic lines being re-shot just for one auto-firing gun when the fault is found during initial data processing.
During normal handling for deployment and recovery, air guns must never be fully pressurised to their optimum working pressure on deck and it is normal practice to air down guns to 500 psi to prevent water ingress on deployment and recovery. It is also a poor and dangerous practice to test fire guns on deck in the air at pressure. There must also be an isolation system in place to prevent the accidental firing of guns on deck by observers or navigators by mistake. High pressure air releases on deck can amputate fingers and also result in a high pressure injection injury through the skin, an almost untreatable and deadly injury in a seismic environment. Gunners should wear the required personal protective equipment to protect their eyes and their hearing and minimise exposure of uncovered skin.
Air guns are made from the highest grades of corrosion resistant stainless steel. Large chambers (i.e., greater than 1 L or 70 cu in) tend to give low frequency signals, and the small chambers (less than 1 L) give higher frequency signals.
Plasma sound source.
A plasma sound source (PSS), otherwise called a spark gap sound source, or simply a sparker, is a means of making a very low frequency sonar pulse underwater. For each firing, electric charge is stored in a large high-voltage bank of capacitors, and then released in an arc across electrodes in the water. The underwater spark discharge produces a high-pressure plasma and vapor bubble, which expands and collapses, making a loud sound. Most of the sound produced is between 20 and 200 Hz, useful for both seismic and sonar applications.
There are also plans to use PSS as a non-lethal weapon against submerged divers.
Thumper truck.
In 1953, the weight dropping thumper technique was introduced as an alternative to dynamite sources.
A thumper truck (or weight-drop) truck is a vehicle-mounted ground impact system that can be used to provide a seismic source. A heavy weight is raised by a hoist at the back of the truck and dropped, generally about three meters, to impact (or "thump") the ground. To augment the signal, the weight may be dropped more than once at the same spot, the signal may also be increased by thumping at several nearby places in an array whose dimensions may be chosen to enhance the seismic signal by spatial filtering.
More advanced thumpers use a technology called "Accelerated Weight Drop" (AWD), in which a high-pressure gas (at least 7 MPa or 1000 lbf/in2) is used to accelerate a heavy hammer onto a base plate coupled to the ground. Several thumps are stacked to enhance the signal-to-noise ratio. AWD allows both more energy and more control of the source than a gravitational weight drop, providing better depth penetration and control of the signal frequency content.
Thumping may be less damaging to the environment than firing explosives in shot-holes, though a heavily thumped seismic line with transverse ridges every few meters might create long-lasting disturbance of the soil. An advantage of the thumper (later shared with Vibroseis), especially in politically unstable areas, is that no explosives are required.
Electromagnetic pulse energy source (non-explosive).
EMP sources are based on electrodynamic and electromagnetic principles.
Seismic vibrator.
A seismic vibrator propagates energy signals into the Earth over an extended period of time as opposed to the near instantaneous energy provided by impulsive sources. The data recorded in this way must be "correlated" to convert the extended source signal into an impulse. The source signal using this method was originally generated by a servo-controlled hydraulic vibrator or "shaker unit" mounted on a mobile base unit, but electro-mechanical versions have also been developed.
The "Vibroseis" exploration technique was developed by the Continental Oil Company (Conoco) during the 1950s and was a trademark until the company's patent lapsed.
Boomer sources.
Boomer sound sources are used for shallow water seismic surveys, mostly for engineering survey applications. Boomers are towed in a floating sled behind a survey vessel. Similar to the plasma source, a boomer source stores energy in capacitors, but it discharges through a flat spiral coil instead of generating a spark. A copper plate adjacent to the coil flexes away from the coil as the capacitors are discharged. This flexing is transmitted into the water as the seismic pulse.
Originally the storage capacitors were placed in a steel container (the bang box) on the survey vessel. The high voltages used, typically 3,000 V, required heavy cables and strong safety containers. Recently, low voltage boomers have become available. These use capacitors on the towed sled, allowing efficient energy recovery, lower voltage power supplies and lighter cables. The low voltage systems are generally easier to deploy and have fewer safety concerns.
Noise sources.
Correlation-based processing techniques also enable seismologists to image the interior of the Earth at multiple scales using natural (e.g., the oceanic microseism) or artificial (e.g., urban) background noise as a seismic source. For example, under ideal conditions of uniform seismic illumination, the correlation of the noise signals between two seismographs provides an estimate of the bidirectional seismic impulse response. | [
{
"math_id": 0,
"text": "s(t)=\\beta e^{-\\alpha t^2} \\sin(2 \\pi f_{max} t)"
},
{
"math_id": 1,
"text": "f_{max}"
}
]
| https://en.wikipedia.org/wiki?curid=7736733 |
7736821 | Phase factor | Type of exponential complex factor
For any complex number written in polar form (such as r·e^(iθ)), the phase factor is the complex exponential (e^(iθ)), where the variable θ is the "phase" of a wave or other periodic function. The phase factor is a unit complex number, i.e. a complex number of absolute value 1. It is commonly used in quantum mechanics and optics. It is a special case of "phasors", which may have arbitrary magnitude (i.e. not necessarily lie on the unit circle in the complex plane).
Multiplying the equation of a plane wave A·e^(i(k·r − ωt)) by a phase factor e^(iθ) shifts the phase of the wave by θ:
formula_0
In quantum mechanics, a phase factor is a complex coefficient e^(iθ) that multiplies a ket formula_1 or bra formula_2. It does not, in itself, have any physical meaning, since the introduction of a phase factor does not change the expectation values of a Hermitian operator: the phase factor applied to the ket is cancelled by its complex conjugate coming from the corresponding bra, so the values of formula_3 and formula_4 are the same. However, "differences" in phase factors between two interacting quantum states can sometimes be measurable (such as in the Berry phase) and this can have important consequences.
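A small numerical check, with an arbitrary 2×2 Hermitian matrix standing in for the operator, shows that multiplying a state by a phase factor leaves the expectation value unchanged:
import numpy as np

A = np.array([[1.0, 2.0 - 1.0j],
              [2.0 + 1.0j, -0.5]])      # an arbitrary Hermitian operator
psi = np.array([0.6, 0.8j])             # a normalized state vector
theta = 0.7
psi_shifted = np.exp(1j * theta) * psi  # the same state multiplied by a phase factor

expectation = lambda state: np.vdot(state, A @ state).real  # <state|A|state>
print(expectation(psi), expectation(psi_shifted))           # identical values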
In optics, the phase factor is an important quantity in the treatment of interference.
Notes.
| [
{
"math_id": 0,
"text": "e^{i\\theta} A\\,e^{i({\\mathbf{k}\\cdot\\mathbf{r} - \\omega t})} = A\\,e^{i({\\mathbf{k}\\cdot\\mathbf{r}-\\omega t + \\theta})}."
},
{
"math_id": 1,
"text": "|\\psi\\rangle"
},
{
"math_id": 2,
"text": "\\langle\\phi|"
},
{
"math_id": 3,
"text": "\\langle\\phi| A |\\phi\\rangle"
},
{
"math_id": 4,
"text": "\\langle\\phi| A e^{i\\theta} |\\phi\\rangle"
}
]
| https://en.wikipedia.org/wiki?curid=7736821 |
7737301 | Peristimulus time histogram | In neurophysiology, peristimulus time histogram and poststimulus time histogram, both abbreviated PSTH or PST histogram, are histograms of the times at which neurons fire. It is also sometimes called pre event time histogram or PETH. These histograms are used to visualize the rate and timing of neuronal spike discharges in relation to an external stimulus or event. The peristimulus time histogram is sometimes called perievent time histogram, and "post-stimulus" and "peri-stimulus" are often hyphenated.
The prefix "peri", meaning "around", is typically used in the case of periodic stimuli, in which case the PSTH shows neuron firing times wrapped to one cycle of the stimulus. The prefix "post" is used when the PSTH shows the timing of neuron firings in response to a stimulus event or onset.
To make a PSTH, a spike train recorded from a single neuron is aligned with the onset, or a fixed phase point, of an identical stimulus repeatedly presented to an animal. The aligned sequences are superimposed in time, and then used to construct a histogram.
Construction procedure.
To construct a PSTH from a spike train recorded over n repetitions of the stimulus, the spike sequences are first aligned as described above, the observation period is divided into bins of width formula_0, and the number of spikes ki from all n sequences falling in bin i is counted. The height of bin i is then drawn as formula_1, an estimate of the neuron's firing rate (in spikes per second) at time formula_2.
The optimal bin size formula_0 (assuming an underlying Poisson point process) is a minimizer of the cost (2k̄ − v)/Δ², where Δ is the bin width and k̄ and v are the mean and variance of the bin counts ki.
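A minimal Python sketch of the construction and of the bin-size criterion above; the spike times, trial count and candidate bin widths are invented for the illustration:
import numpy as np

def psth(spike_times, n_trials, t_max, delta):
    # Pool spikes from all trials into bins of width delta and
    # normalize to spikes per second per trial.
    edges = np.arange(0.0, t_max + delta, delta)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts / (n_trials * delta), counts

def bin_size_cost(counts, delta):
    # (2 * mean - variance) / delta**2; its minimizer over delta is the
    # optimal bin width under the Poisson assumption.
    return (2.0 * counts.mean() - counts.var()) / delta**2

rng = np.random.default_rng(0)
n_trials, t_max = 50, 1.0
spikes = rng.uniform(0.0, t_max, size=800)      # pooled spike times (toy data)

for delta in (0.005, 0.01, 0.02, 0.05):
    _, counts = psth(spikes, n_trials, t_max, delta)
    print(delta, bin_size_cost(counts, delta))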
References.
| [
{
"math_id": 0,
"text": "\\Delta"
},
{
"math_id": 1,
"text": "\\frac{k_i}{n \\Delta}"
},
{
"math_id": 2,
"text": "i\\ \\Delta"
}
]
| https://en.wikipedia.org/wiki?curid=7737301 |
7738360 | Conditional event algebra | In probability theory, a conditional event algebra (CEA) is an alternative to a standard, Boolean algebra of possible events (a set of possible events related to one another by the familiar operations "and", "or", and "not") that contains not just ordinary events but also conditional events that have the form "if "A", then "B"". The usual motivation for a CEA is to ground the definition of a probability function for events, "P", that satisfies the equation "P"(if "A" then "B") = "P"("A" and "B") / "P"("A").
Motivation.
In standard probability theory the occurrence of an event corresponds to a set of possible outcomes, each of which is an outcome that corresponds to the occurrence of the event. "P"("A"), the probability of event "A", is the sum of the probabilities of all outcomes that correspond to event "A"; "P"("B") is the sum of the probabilities of all outcomes that correspond to event "B"; and "P"("A" and "B") is the sum of the probabilities of all outcomes that correspond to both "A" and "B". In other words, "and", customarily represented by the logical symbol ∧, is interpreted as set intersection: "P"("A" ∧ "B") = "P"("A" ∩ "B"). In the same vein, "or", ∨, becomes set union, ∪, and "not", ¬, becomes set complementation, ′. Any combination of events using the operations "and", "or", and "not" is also an event, and assigning probabilities to all outcomes generates a probability for every event. In technical terms, this means that the set of events and the three operations together constitute a Boolean algebra of sets, with an associated probability function.
In standard practice, "P"(if "A", then "B") is not interpreted as "P"("A"′ ∪ "B"), following the rule of material implication, but rather as the conditional probability of "B" given "A", "P"("B" | "A") = "P"("A" ∩ "B") / "P"("A"). This raises a question: what about a probability like "P"(if "A", then "B", and if "C", then "D")? For this, there is no standard answer. What would be needed, for consistency, is a treatment of "if-then" as a binary operation, →, such that for "conditional events" "A" → "B" and "C" → "D", "P"("A" → "B") = "P"("B" | "A"), "P"("C" → "D") = "P"("D" | "C"), and "P"(("A" → "B") ∧ ("C" → "D")) are well-defined and reasonable. Philosophers including Robert Stalnaker argued that ideally, a conditional event algebra, or CEA, would support a probability function that meets three conditions:
1. The probability function satisfies the usual axioms.
2. For any two ordinary events "A" and "B", if "P"("A") > 0, then "P"("A" → "B") = "P"("B" | "A") = "P"("A" ∧ "B") / "P"("A").
3. For ordinary event "A" and acceptable probability function "P", if "P"("A") > 0, then "PA" = "P" ( ⋅ | "A"), the function produced by conditioning on "A", is also an acceptable probability function.
However, David Lewis proved in 1976 a fact now known as Lewis's triviality result: these conditions can only be met with near-standard approaches in trivial examples. In particular, those conditions can only be met when there are just two possible outcomes—as with, say, a single coin flip. With three or more possible outcomes, constructing a probability function requires choosing which of the above three conditions to violate. Interpreting "A" → "B" as "A"′ ∪ "B" produces an ordinary Boolean algebra that violates 2. With CEAs, the choice is between 1 and 3.
Types of conditional event algebra.
Tri-event CEAs.
Tri-event CEAs take their inspiration from three-valued logic, where the identification of logical conjunction, disjunction, and negation with simple set operations no longer applies. For ordinary events "A" and "B", the tri-event "A" → "B" occurs when "A" and "B" both occur, fails to occur when "A" occurs but "B" does not, and is undecided when "A" fails to occur. (The term “tri-event” comes from de Finetti (1935): "triévénement".) Ordinary events, which are never undecided, are incorporated into the algebra as tri-events conditional on Ω, the vacuous event represented by the entire sample space of outcomes; thus, "A" becomes Ω → "A".
Since there are many three-valued logics, there are many possible tri-event algebras. Two types, however, have attracted more interest than the others. In one type, "A" ∧ "B" and "A" ∨ "B" are each undecided only when both "A" and "B" are undecided; when just one of them is, the conjunction or disjunction follows the other conjunct or disjunct. When negation is handled in the obvious way, with ¬"A" undecided just in case "A" is, this type of tri-event algebra corresponds to a three-valued logic proposed by Sobociński (1920) and favored by Belnap (1973), and also implied by Adams’s (1975) “quasi-conjunction” for conditionals. Schay (1968) was the first to propose an algebraic treatment, which Calabrese (1987) developed more properly.
The other type of tri-event CEA treats negation the same way as the first, but it treats conjunction and disjunction as min and max functions, respectively, with occurrence as the high value, failure as the low value, and undecidedness in between. This type of tri-event algebra corresponds to a three-valued logic proposed by Łukasiewicz (1920) and also favored by de Finetti (1935). Goodman, Nguyen and Walker (1991) eventually provided the algebraic formulation.
The probability of any tri-event is defined as the probability that it occurs divided by the probability that it either occurs or fails to occur. With this convention, conditions 2 and 3 above are satisfied by the two leading tri-event CEA types. Condition 1, however, fails. In a Sobociński-type algebra, ∧ does not distribute over ∨, so "P"("A" ∧ ("B" ∨ "C")) and "P"(("A" ∧ "B") ∨ ("A" ∧ "C")) need not be equal. In a Łukasiewicz-type algebra, ∧ distributes over ∨ but not over exclusive or, formula_0 ("A" formula_0 "B" = ("A" ∧ ¬"B") ∨ (¬"A" ∧ "B")). Also, tri-event CEAs are not complemented lattices, only pseudocomplemented, because in general, ("A" → "B") ∧ ¬("A" → "B") cannot occur but can be undecided and therefore is not identical to Ω → ∅, the bottom element of the lattice. This means that "P"("C") and "P"("C" formula_0 (("A" → "B") ∧ ¬("A" → "B"))) can differ, when classically they would not.
Product-space CEAs.
If "P"(if "A", then "B") is thought of as the probability of "A"-and-"B" occurring before "A"-and-not-"B" in a series of trials, this can be calculated as an infinite sum of simple probabilities: the probability of "A"-and-"B" on the first trial, plus the probability of not-"A" (and either "B" or not-"B") on the first trial and "A"-and-"B" on the second, plus the probability of not-"A" on the first two trials and "A"-and-"B" on the third, and so on; that is, "P"("A" ∧ "B") + "P"(¬"A")"P"("A" ∧ "B") + "P"(¬"A")²"P"("A" ∧ "B") + …, or, in factored form, "P"("A" ∧ "B")[1 + "P"(¬"A") + "P"(¬"A")² + …]. Since the second factor is the Maclaurin series expansion of 1 / [1 − "P"(¬"A")] = 1 / "P"("A"), the infinite sum equals "P"("A" ∧ "B") / "P"("A") = "P"("B" | "A").
The infinite sum is itself a simple probability, but with the sample space now containing not ordinary outcomes of single trials but infinite sequences of ordinary outcomes. Thus the conditional probability "P"("B" | "A") is turned into the simple probability "P"("A" → "B") by replacing Ω, the sample space of all ordinary outcomes, with Ω*, the sample space of all sequences of ordinary outcomes, and by identifying the conditional event "A" → "B" with the set of sequences where the first ("A" ∧ "B")-outcome comes before the first ("A" ∧ ¬"B")-outcome. In Cartesian-product notation, Ω* = Ω × Ω × Ω × …, and "A" → "B" is the infinite union [("A" ∩ "B") × Ω × Ω × …] ∪ ["A"′ × ("A" ∩ "B") × Ω × Ω × …] ∪ ["A"′ × "A"′ × ("A" ∩ "B") × Ω × Ω × …] ∪ …. The unconditional event "A" is, again, represented by the conditional event Ω → "A". Unlike tri-event CEAs, this type of CEA supports the identification of ∧, ∨, and ¬ with the familiar operations ∩, ∪, and ′ not just for ordinary, unconditional events but for conditional ones as well. Because Ω* is a space defined by an infinitely long Cartesian product, the Boolean algebra of conditional-event subsets of Ω* is called a product-space CEA. This type of CEA was introduced by van Fraassen (1976), in response to Lewis's result, and was later discovered independently by Goodman and Nguyen (1994).
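The series-of-trials picture above can be checked numerically; in the Python sketch below (the event probabilities are chosen arbitrarily), each sample keeps generating independent trials until the conditional event "A" → "B" is decided, and the long-run frequency of a positive decision approaches "P"("B" | "A"):
import random

def sample_conditional(p_a, p_b_given_a, rng):
    # Generate trials until the first A-outcome; the conditional event
    # A -> B is decided true if that trial is also a B-outcome.
    while True:
        if rng.random() < p_a:
            return rng.random() < p_b_given_a

rng = random.Random(1)
p_a, p_b_given_a = 0.3, 0.25
n = 200_000
hits = sum(sample_conditional(p_a, p_b_given_a, rng) for _ in range(n))
print(hits / n)     # close to P(B | A) = 0.25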
The probability functions associated with product-space CEAs satisfy conditions 1 and 2 above. However, given a probability function "P" that satisfies conditions 1 and 2, if "P"("A") > 0, it can be shown that "P"_"A"("C" | "B") = "P"("C" | "A" ∧ "B") and "P"_"A"("B" → "C") = "P"("B" ∧ "C" | "A") + "P"("B"′ | "A")"P"("C" | "B"). If "A", "B" and "C" are pairwise compatible but "P"("A" ∧ "B" ∧ "C") = 0, then "P"("C" | "A" ∧ "B") = "P"("B" ∧ "C" | "A") = 0 but "P"("B"′ | "A")"P"("C" | "B") > 0. Therefore, "P"_"A"("B" → "C") does not reliably equal "P"_"A"("C" | "B"). Since "P"_"A" fails condition 2, "P" fails condition 3.
Nested if–thens.
What about nested conditional constructions? In a tri-event CEA, right-nested constructions are handled more or less automatically, since it is natural to say that "A" → ("B" → "C") takes the value of "B" → "C" (possibly undecided) when "A" is true and is undecided when "A" is false. Left-nesting, however, requires a more deliberate choice: when "A" → "B" is undecided, should ("A" → "B") → "C" be undecided, or should it take the value of "C"? Opinions vary. Calabrese adopts the latter view, identifying ("A" → "B") → ("C" → "D") with ((¬"A" ∨ "B") ∧ "C") → "D".
With a product-space CEA, nested conditionals call for nested sequence-constructions: evaluating "P"(("A" → "B") → ("C" → "D")) requires a sample space of metasequences of sequences of ordinary outcomes. The probabilities of the ordinary sequences are calculated as before. Given a series of trials where the outcomes are sequences of ordinary outcomes, "P"(("A" → "B") → ("C" → "D")) is "P"("C" → "D" | "A" → "B") = "P"(("A" → "B") ∧ ("C" → "D")) / "P"("A" → "B"), the probability that an (("A" → "B") ∧ ("C" → "D"))-sequence will be encountered before an (("A" → "B") ∧ ¬("C" → "D"))-sequence. Higher-order iterations of conditionals require higher-order metasequential constructions.
In either of the two leading types of tri-event CEA, "A" → ("B" → "C") = ("A" ∧ "B") → "C". Product-space CEAs, on the other hand, do not support this identity. The latter fact can be inferred from the failure, already noted, of "P"_"A"("B" → "C") to equal "P"_"A"("C" | "B"), since "P"_"A"("C" | "B") = "P"(("A" ∧ "B") → "C") and "P"_"A"("B" → "C") = "P"("A" → ("B" → "C")). For a direct analysis, however, consider a metasequence whose first member-sequence starts with an ("A" ∧ ¬"B" ∧ "C")-outcome, followed by a (¬"A" ∧ "B" ∧ "C")-outcome, followed by an ("A" ∧ "B" ∧ ¬"C")-outcome. That metasequence will belong to the event "A" → ("B" → "C"), because the first member-sequence is an ("A" ∧ ("B" → "C"))-sequence, but the metasequence will not belong to the event ("A" ∧ "B") → "C", because the first member-sequence is an (("A" ∧ "B") → ¬"C")-sequence.
Applications.
The initial impetus for CEAs is theoretical—namely, the challenge of responding to Lewis's triviality result—but practical applications have been proposed. If, for instance, events "A" and "C" involve signals emitted by military radar stations and events "B" and "D" involve missile launches, an opposing military force with an automated missile defense system may want the system to be able to calculate "P"(("A" → "B") ∧ ("C" → "D")) and/or "P"(("A" → "B") → ("C" → "D")). Other applications range from image interpretation to the detection of denial-of-service attacks on computer networks.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Adams, E. W. 1975. "The Logic of Conditionals." D. Reidel, Dordrecht.
Bamber, D., Goodman, I. R. and Nguyen, H. T. 2004. "Deduction from Conditional Knowledge". "Soft Computing" 8: 247–255.
Belnap, N. D. 1973. "Restricted quantification and conditional assertion", in H. Leblanc (ed.), "Truth, Syntax and Modality" North-Holland, Amsterdam. 48–75.
Calabrese, P. 1987. "An algebraic synthesis of the foundations of logic and probability". "Information Sciences" 42:187-237.
de Finetti, Bruno. 1935. "La logique de la probabilité". "Actes du Congrès International Philosophie Scientifique". Paris.
van Fraassen, Bas C. 1976. "Probabilities of conditionals” in W. L. Harper and C. A. Hooker (eds.), "Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science", Vol. I. D. Reidel, Dordrecht, pp. 261–308.
Goodman, I. R., Mahler, R. P. S. and Nguyen, H. T. 1999. "What is conditional event algebra and why should you care?" "SPIE Proceedings", Vol. 3720.
Goodman, I. R., Nguyen, H. T. and Walker, E. A. 1991. "Conditional Inference and Logic for Intelligent Systems: A Theory of Measure-Free Conditioning". Office of Chief of Naval Research, Arlington, Virginia.
Goodman, I. R. and Nguyen, H. T. 1994. "A theory of conditional information for probabilistic inference in intelligent systems: II, Product space approach; III Mathematical appendix". "Information Sciences" 76:13-42; 75: 253-277.
Goodman, I. R. and Nguyen, H. T. 1995. "Mathematical foundations of conditionals and their probabilistic assignments". "International Journal of Uncertainty, Fuzziness and Knowledge-based Systems" 3(3): 247-339
Kelly, P. A., Derin, H., and Gong, W.-B. 1999. "Some applications of conditional events and random sets for image estimation and system modeling". "SPIE Proceedings" 3720: 14-24.
Łukasiewicz, J. 1920. "O logice trójwartościowej" (in Polish). "Ruch Filozoficzny" 5:170–171. English translation: "On three-valued logic", in L. Borkowski (ed.), "Selected works by Jan Łukasiewicz", North–Holland, Amsterdam, 1970, pp. 87–88. ISBN 0-7204-2252-3
Schay, Geza. 1968. "An algebra of conditional events". "Journal of Mathematical Analysis and Applications" 24: 334-344.
Sobociński, B. 1952. "Axiomatization of a partial system of three-valued calculus of propositions". "Journal of Computing Systems" 1(1):23-55.
Sun, D., Yang, K., Jing, X., Lv, B., and Wang, Y. 2014. "Abnormal network traffic detection based on conditional event algebra". "Applied Mechanics and Materials" 644-650: 1093-1099. | [
{
"math_id": 0,
"text": "\\oplus"
}
]
| https://en.wikipedia.org/wiki?curid=7738360 |
77387454 | Trifolium curve | Type of quartic plane curve
The trifolium curve (also three-leafed clover curve, 3-petaled rose curve, and paquerette de mélibée) is a type of quartic plane curve. The name comes from the Latin for "three-leaved", describing a folium shape with three equally sized leaves.
It is described as
formula_0
By solving for "y", the curve can be described by the following function:
formula_1
Because of the two independent ± signs, up to four values of "y" can be obtained at a given value of "x".
It has a polar equation of formula_2
and a Cartesian equation of
formula_3
The area of the trifolium shape is defined by the following equation:
formula_4
And it has a length of
formula_5
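A short numerical check of the two integrals above can be made directly from the polar form; the sketch below is only an illustration, and the value of "a" is arbitrary.
```python
import numpy as np

a = 2.0                                   # arbitrary scale parameter
n = 200000
theta = (np.arange(n) + 0.5) * np.pi / n  # midpoints covering 0 <= theta <= pi
dtheta = np.pi / n
r = a * np.cos(3 * theta)                 # the overall sign does not affect area or length
dr = -3 * a * np.sin(3 * theta)

area = 0.5 * np.sum(r**2) * dtheta                  # (1/2) ∫ r² dθ
length = np.sum(np.sqrt(r**2 + dr**2)) * dtheta     # ∫ √(r² + r'²) dθ

print(area, np.pi * a**2 / 4)   # both ≈ 3.1416 for a = 2: the area equals πa²/4
print(length / a)               # ≈ 6.68, consistent with the ≈ 6.7·a quoted above
```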
The trifolium was described by J. Lawrence as a form of Kepler's folium when
formula_7
A more recent definition is the case when formula_8
The trifolium was described by Dana-Picard as
formula_6
He defines the trifolium as having three leaves and a triple point at the origin made up of four arcs. The trifolium defined in this way is a sextic curve, meaning that any line through the origin passes through the curve again and through its complex conjugate twice.
The trifolium is a type of rose curve when formula_9
Gaston Albert Gohierre de Longchamps was the first to study the trifolium, and it was given the name Torpille because of its resemblance to a fish.
The trifolium was later studied and given its name by Henry Cundy and Arthur Rollett.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nx^4+2x^2y^2+y^4-x^3+3xy^2=0. \\,\n"
},
{
"math_id": 1,
"text": "\ny=\\pm\\sqrt{\\frac{-2x^2-3x\\pm\\sqrt{16x^3+9x^2}}{2}},\n"
},
{
"math_id": 2,
"text": " r = -a\\cos 3 \\theta"
},
{
"math_id": 3,
"text": " (x^2 + y^2)[y^2 + x (x + a)] = 4 a x y^2"
},
{
"math_id": 4,
"text": "A = \\frac{1}{2}\\int_{0}^{\\pi} cos^2(3\\theta) d \\theta"
},
{
"math_id": 5,
"text": "6a\\int_{0}^{\\tfrac{\\pi}{2}} \\sqrt{1-\\frac{8}{9}sin^2t}*dt\\thickapprox6,7 a"
},
{
"math_id": 6,
"text": "(x^2+y^2)^3-x(x^2-3y^2)=0"
},
{
"math_id": 7,
"text": "b \\in (0, 4, a)"
},
{
"math_id": 8,
"text": " a = b"
},
{
"math_id": 9,
"text": "k = 3"
}
]
| https://en.wikipedia.org/wiki?curid=77387454 |
77390020 | Kotzig's conjecture | <templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Is there a finite graph on at least two vertices in which each pair of distinct vertices is connected by "exactly one" path of length formula_0, where formula_1 is fixed?
Kotzig's conjecture is an unproven assertion in graph theory which states that finite graphs with certain properties do not exist.
A graph is a formula_2-graph if each pair of distinct vertices is connected by "exactly one" path of length formula_0.
Kotzig's conjecture asserts that for formula_1 there are no finite formula_2-graphs with two or more vertices.
The conjecture was first formulated by Anton Kotzig in 1974.
It has been verified for formula_3 by Alexandr Kostochka, but remains open in the general case (as of July 2024).
The conjecture is stated for formula_1 because formula_2-graphs do exist for smaller values of formula_0.
formula_4-graphs are precisely the complete graphs.
The friendship theorem states that formula_5-graphs are precisely the (triangular) windmill graphs (that is, finitely many triangles joined at a common vertex; also known as friendship graphs).
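For illustration (not part of the original statement), the formula_5 property of a windmill graph can be checked numerically: for distinct vertices of a simple graph, the number of paths of length 2 between them equals the corresponding entry of the squared adjacency matrix, and for a windmill graph every such entry is 1. The sketch below builds the graph directly, with an arbitrary number of triangles.
```python
import numpy as np

def windmill_adjacency(k):
    """Adjacency matrix of the windmill (friendship) graph:
    k triangles sharing the single hub vertex 0."""
    n = 2 * k + 1
    A = np.zeros((n, n), dtype=int)
    for t in range(k):
        u, v = 1 + 2 * t, 2 + 2 * t
        A[0, u] = A[u, 0] = A[0, v] = A[v, 0] = 1   # spokes to the hub
        A[u, v] = A[v, u] = 1                        # edge closing the triangle
    return A

A = windmill_adjacency(4)                # 4 triangles, chosen arbitrarily
walks2 = A @ A                           # (A²)[u,v] = number of length-2 paths for u ≠ v
off_diagonal = walks2[~np.eye(len(A), dtype=bool)]
print(bool(np.all(off_diagonal == 1)))   # True: exactly one length-2 path between any two distinct vertices
```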
History.
Kotzig's conjecture was first listed as an open problem by Bondy & Murty in 1976, attributed to Kotzig and dated to 1974.
Kotzig's first own writing on the conjecture appeared in 1979.
He later verified the conjecture for formula_6 and claimed a solution, though unpublished, for formula_7.
The conjecture is now known to hold for formula_3 due to work of Alexandr Kostochka.
Kostochka stated that his techniques extend to formula_8, but a proof of this has not been published.
A survey on formula_2-graphs was written by John A. Bondy, including proofs for many statements previously made by Kotzig without written proof.
In 1990 Xing & Hu claimed a proof of Kotzig's conjecture for formula_9.
This seemed to resolve the conjecture at the time, and still today leads many to believe that the problem is settled.
However, Xing and Hu's proof relied on a misunderstanding of a statement proven by Kotzig. Kotzig showed that a formula_2-graph must contain a formula_10-cycle for "some" formula_11, which Xing and Hu used in the form that cycles of "all" these lengths must exist.
In their paper Xing and Hu show that for formula_9 a formula_2-graph must "not" contain a formula_12-cycle.
Since this is in contradiction to their reading of Kotzig's result, they conclude (incorrectly) that formula_2-graphs with formula_9 cannot exist.
This mistake was first pointed out by Roland Häggkvist in 2000.
Kotzig's conjecture is mentioned in Proofs from THE BOOK in the chapter on the friendship theorem.
It is stated that a general proof for the conjecture seems "out of reach". | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "k\\ge 3"
},
{
"math_id": 2,
"text": "P_k"
},
{
"math_id": 3,
"text": "k\\le 20"
},
{
"math_id": 4,
"text": "P_1"
},
{
"math_id": 5,
"text": "P_2"
},
{
"math_id": 6,
"text": "k\\le 8"
},
{
"math_id": 7,
"text": "k\\le 9"
},
{
"math_id": 8,
"text": "k\\le 33"
},
{
"math_id": 9,
"text": "k\\ge 12"
},
{
"math_id": 10,
"text": "2\\ell"
},
{
"math_id": 11,
"text": "\\ell\\in\\{3,...,k-4\\}"
},
{
"math_id": 12,
"text": "(2k-8)"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\textstyle {n\\choose 2}"
},
{
"math_id": 15,
"text": "(k+1)"
},
{
"math_id": 16,
"text": "v,w"
},
{
"math_id": 17,
"text": "k-\\ell"
},
{
"math_id": 18,
"text": "\\ell\\in\\{3,...,k-5\\}"
},
{
"math_id": 19,
"text": "4"
},
{
"math_id": 20,
"text": "2k"
},
{
"math_id": 21,
"text": "2k-2"
},
{
"math_id": 22,
"text": "2k-4"
},
{
"math_id": 23,
"text": "2k-6"
},
{
"math_id": 24,
"text": "2k-8"
},
{
"math_id": 25,
"text": "\\tfrac43 k-2"
},
{
"math_id": 26,
"text": "k+5"
},
{
"math_id": 27,
"text": "k\\ge 21"
},
{
"math_id": 28,
"text": "k-2"
},
{
"math_id": 29,
"text": "G"
},
{
"math_id": 30,
"text": "c_{k+1}"
},
{
"math_id": 31,
"text": "c_{k+1}\\ge 3"
},
{
"math_id": 32,
"text": "c_{k+1}\\ge 4"
},
{
"math_id": 33,
"text": "c_{k+1}\\le \\tfrac12(k-1)"
}
]
| https://en.wikipedia.org/wiki?curid=77390020 |
77390149 | Balanced group | In group theory, a balanced group is a topological group whose left and right uniform structres coincide.
Definition.
A topological group formula_0 is said to be balanced if it satisfies the following equivalent conditions:
*The left and right uniform structures on formula_0 coincide.
*The identity element formula_1 admits a basis of neighbourhoods "U" such that formula_2 holds for every formula_3.
*The multiplication map formula_4 is uniformly continuous.
Properties.
The completion of a balanced group formula_0 with respect to its uniform structure admits a unique topological group structure extending that of formula_0. This generalizes the case of abelian groups and is a special case of the two-sided completion of an arbitrary topological group, which is with respect to the coarsest uniform structure finer than both the left and the right uniform structures.
For a unimodular group (i.e., a Hausdorff locally compact group whose left and right Haar measures coincide) formula_0, the following two conditions are equivalent.
Examples.
Trivially, every abelian topological group is balanced. Every compact topological group (not necessarily Hausdorff) is balanced, which follows from the Heine–Cantor theorem for uniform spaces. Neither of these two sufficient conditions is necessary, for there are non-abelian compact groups (such as the orthogonal group formula_5) and there are non-compact abelian groups (such as formula_6).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "1_G\\in G"
},
{
"math_id": 2,
"text": "gUg^{-1}=U"
},
{
"math_id": 3,
"text": "g\\in G"
},
{
"math_id": 4,
"text": "\\cdot\\colon G\\times G\\to G"
},
{
"math_id": 5,
"text": "\\operatorname{O}(2)"
},
{
"math_id": 6,
"text": "\\mathbb R"
}
]
| https://en.wikipedia.org/wiki?curid=77390149 |
77391914 | Hering's Paradox | Experiment testing Maxwell's equations
Hering's paradox describes a physical experiment in the field of electromagnetism that seems to contradict Maxwell's equations in general, and Faraday's Law of Induction and the flux rule in particular. In his study on the subject, Carl Hering concluded in 1908 that the usual statement of Faraday's Law (at the turn of the century) was imperfect and that it needed to be modified in order to become universal.
Since then, Hering's paradox has been used repeatedly in physics didactics to demonstrate the application of Faraday's Law of Induction, and it can be considered to be completely understood within the theory of classical electrodynamics. Grabinski criticizes, however, that most presentations in introductory textbooks are problematic: either Faraday's Law is misinterpreted in a way that leads to confusion, or only those frames of reference are chosen in which no explanation is needed. In the following, Hering's paradox is first shown experimentally in a video, and then, in a similar way as suggested by Grabinski, it is shown that when treated with full mathematical consistency the experiment does not contradict Faraday's Law of Induction. Finally, the typical pitfalls of applying Faraday's Law are mentioned.
Experiment.
The experiment is shown in the video on the right side. In the experiment, a slotted iron core is used, where a coil fed with a direct current generates a constant magnetic field in the core and in its slot.
Two different experiments are carried out in parallel:
*In the lower part, an ordinary conductor loop is passed through the slot of the iron core. As there is a magnetic field in this slot, a voltage is generated at the ends of the conductor loop, which is amplified and displayed in the lower oscilloscope image.
*A modified conductor loop is realized in the upper part. The conductor loop is split at one point and the split ends are fitted with a metal wheel. During the experiment, the metal wheels move around the magnetic core and exert a certain contact pressure on each other and on the core, respectively. As the magnetic core is electrically conductive, there is always an electrical connection between the wheels and therefore between the separated ends of the loop. The oscilloscope does not show any voltage despite the otherwise identical conditions as in the first experiment.
In both experiments, the same change in magnetic flux occurs at the same time. However, the oscilloscope only shows a voltage in one experiment, although one would expect the same "induced voltage" to be present in both experiments. This unexpected result is called "Hering's paradox", named after Carl Hering.
Explanation.
Moving wires/oscilloscope, magnet at rest.
The easiest way to understand the outcome of the experiment is to view it from the rest frame of the magnet, i.e. the magnet is at rest while the oscilloscope and the wires are in motion. In this frame of reference, there is no reason for a voltage to arise, because the set-up consists of a magnet at rest and some wires moving in a field-free space around the magnet, making only light sliding contact with it.
To conclude, there is
* no change of the magnetic field anywhere (formula_0) and thus no current-driving force formula_1 on the charges formula_2 in the circuit due to rest induction,
* and those parts of the circuit in which charges are in motion (formula_3) are not exposed to a magnetic field (formula_4) and vice versa, so that there is no magnetic force formula_5 on the charges formula_2 anywhere in the circuit.
Moving magnet, wires/oscilloscope at rest.
While the perspective from the rest frame of the magnet causes no difficulties in understanding, this is not the case when viewed from a frame of reference in which the oscilloscope and the cables are at rest and an electrically conductive permanent magnet moves into a conductor loop at a speed of formula_8. Under these circumstances, there is rest induction due to the movement of the magnet (formula_9 at the front edge of the magnet), and beyond that, the magnet is also a moving conductor. The dual role of the magnet as a moving conductor on the one hand, and as the source of the magnetic field on the other, raises an essential question: Does the magnetic field of the magnet exert a Lorentz force on the charges inside the magnet? The correct answer is "Yes, it does", and this is one of the pitfalls concerning the application of Faraday's Law. For some people it is counterintuitive to assume that a Lorentz force is exerted on a charge although there is no relative motion between the magnet and the charge.
An essential step of solving the paradox is the realization that the inside of the conductive moving magnet is not field-free, but that a non-zero electric field strength formula_10 prevails there. If this field strength is integrated along the line formula_11, the result is the desired "induced voltage". However, the "induced voltage" is not localized in the oscilloscope, but in the magnet.
The equation formula_10 can be derived from the consideration that there is obviously no current-driving force acting on any section of the circuit. Since the absence of forces also applies in particular to the inside of the magnet, the total electromagnetic force for a charge formula_2 located inside the magnet equals formula_12. If we assume that the charge formula_2 moves “slip-free” with the magnet (formula_13), the following also applies: formula_14. The last equation, however, is mathematically equivalent to formula_10.
Finally, along the section formula_11 inside the magnet the electric field strength is given by formula_10, while it vanishes along the remaining sections of the conductor loop outside the magnet.
To check whether the outcome of the experiment is compatible with Maxwell's equations, we first write down the Maxwell-Faraday equation in integral notation:
formula_18
Here formula_19 is the induction surface, and formula_20 is its boundary curve, which is assumed to be composed of the (stationary) sections formula_15, formula_11, formula_16 and formula_17. The dot formula_21 indicates the dot product between two vectors. The direction of integration (clockwise) and the surface orientation (pointing into the screen) are right-handed to each other, as assumed in the Maxwell-Faraday equation.
Considering the electric field strengths given above, the left-hand side of the Maxwell-Faraday equation can be written as:
formula_22
The minus sign is due to the fact that the direction of integration is opposite to the direction of the electric field strength (formula_23).
To calculate the right-hand side of the equation, we state that within the time formula_24 the magnetic field of the induction surface increases from formula_25 to formula_26 (formula_27) within a strip of length formula_28 and width formula_29 (formula_30).
Thus the right side of the equation equals
formula_31
The right and left sides of the equation are obviously identical. This shows that Hering's paradox is in perfect agreement with the Maxwell-Faraday equation.
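The bookkeeping above can also be reproduced symbolically. The sketch below assumes a simple geometry that is not taken from the article's figure (magnet velocity along x, flux density along z, the section inside the magnet of length formula_28 traversed opposite to the electric field) and merely confirms that the line integral reproduces formula_31.
```python
import sympy as sp

v, B, L = sp.symbols('v B L', positive=True)

# Assumed geometry (not from the article's figure): the magnet moves along +x,
# its flux density points along +z, and the section inside the magnet has length L.
v_vec = sp.Matrix([v, 0, 0])
B_vec = sp.Matrix([0, 0, B])
E_inside = -v_vec.cross(B_vec)       # E = -v × B = (0, v*B, 0) inside the moving magnet

ds_dir = sp.Matrix([0, -1, 0])       # integration direction chosen opposite to E
lhs = E_inside.dot(ds_dir) * L       # contribution of the section inside the magnet;
print(lhs)                           # the other sections contribute zero: -B*L*v

rhs = -B * L * v                     # -dPhi/dt, since the flux grows by B*L*v*dt in a time dt
print(sp.simplify(lhs - rhs))        # 0: both sides of the Maxwell-Faraday equation agree
```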
Note that the speed of the boundary curve formula_20 has no physical importance whatsoever. This can be seen most easily in the differential notation formula_32 of the Maxwell-Faraday equation, in which neither the induction surface nor its boundary occurs. From a mathematical point of view, the boundary curve is just an imaginary line that had to be introduced to convert the Maxwell-Faraday equation to its integral notation so as to establish a relationship to electrical voltages.
Because the boundary curve is physically of no importance, the outcome of an experiment does not depend on the speed of this curve, and it is not affected by whether or not the speed of the boundary curve corresponds to the speed of a conductor wire located at the same place. For reasons of simplicity, the speed of the boundary curve is assumed to be zero in this article.
The movement that actually counts is the movement of the (electrically conducting) magnet. It affects the value of the electric field strength inside the magnet and is thus accounted for in the Maxwell-Faraday equation via the numerical value of the vector field formula_7.
Pitfalls.
The difficulties in understanding Hering's paradox and similar problems are usually based on three misunderstandings:
(1) the lack of distinction between the velocity of the boundary curve and the velocity of a conductor present at the location of the boundary curve,
(2) the uncertainty as to whether the term formula_20 in the Maxwell-Faraday equation is just an imaginary boundary line or a conductor (correctly, formula_20 is a boundary curve without any physical properties), and
(3) ignoring the fact that in an ideal conductor moving in a magnetic field with flux density formula_6, there is a non-zero electric field strength formula_10.
If these points are consistently considered, Hering's paradox turns out to be in perfect agreement with Faraday's law of induction (given by the Maxwell-Faraday equation) viewed from any frame of reference whatsoever. Furthermore, the difficulties in understanding the (thought) experiments described in the chapter "Exceptions to the flux rule" in the "Feynman Lectures" are due to the same misunderstandings.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{rot}\\vec E = -\\frac{\\partial \\vec B}{\\partial t} = \\vec 0"
},
{
"math_id": 1,
"text": "\\vec F = q \\cdot \\vec E"
},
{
"math_id": 2,
"text": "q"
},
{
"math_id": 3,
"text": "v_q \\ne 0"
},
{
"math_id": 4,
"text": "B=0"
},
{
"math_id": 5,
"text": "\\vec F_{q} = q \\cdot \\vec v_{q} \\times \\vec B"
},
{
"math_id": 6,
"text": "\\vec B"
},
{
"math_id": 7,
"text": "\\vec E"
},
{
"math_id": 8,
"text": "v"
},
{
"math_id": 9,
"text": "\\frac{\\partial \\vec B}{\\partial t}\\ne 0"
},
{
"math_id": 10,
"text": "\\vec E = -\\vec v \\times \\vec B"
},
{
"math_id": 11,
"text": "\\overline{\\mathrm{BC}}"
},
{
"math_id": 12,
"text": "\\vec F_q = q \\cdot (\\vec E + \\vec v_q \\times \\vec B) = \\vec 0"
},
{
"math_id": 13,
"text": "\\vec v_q = \\vec v"
},
{
"math_id": 14,
"text": "q \\cdot (\\vec E + \\vec v \\times \\vec B) = \\vec 0"
},
{
"math_id": 15,
"text": "\\overline{\\mathrm{AB}}"
},
{
"math_id": 16,
"text": "\\overline{\\mathrm{CD}}"
},
{
"math_id": 17,
"text": "\\overline{\\mathrm{DA}}"
},
{
"math_id": 18,
"text": "\\oint\\limits_{\\partial A}\\vec E\\cdot\\mathrm{d}\\vec s = -\\!\\!\\iint\\limits_{A} \\frac{\\partial \\vec B}{\\partial t} \\cdot\\mathrm{d}\\vec A"
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "\\partial A"
},
{
"math_id": 21,
"text": "\\cdot"
},
{
"math_id": 22,
"text": "\\oint\\limits_{\\partial A}\\vec E\\cdot\\mathrm{d}\\vec s = \\underbrace{\\int\\limits_{\\mathrm{A}}^{\\mathrm{B}}\\vec E\\cdot\\mathrm{d}\\vec s}_{=0} + \\underbrace{\\int\\limits_{\\mathrm{B}}^{\\mathrm{C}}\\vec E\\cdot\\mathrm{d}\\vec s}_{=-v \\cdot B \\cdot L} + \\underbrace{\\int\\limits_{\\mathrm{C}}^{\\mathrm{D}}\\vec E\\cdot\\mathrm{d}\\vec s}_{=0} + \\underbrace{\\int\\limits_{\\mathrm{D}}^{\\mathrm{A}}\\vec E\\cdot\\mathrm{d}\\vec s}_{=0} = -v \\cdot B \\cdot L"
},
{
"math_id": 23,
"text": "\\angle(\\mathrm{d}\\vec s, \\vec E) = 180^\\circ"
},
{
"math_id": 24,
"text": "\\mathrm{d}t"
},
{
"math_id": 25,
"text": "0"
},
{
"math_id": 26,
"text": "B"
},
{
"math_id": 27,
"text": "\\partial B = B"
},
{
"math_id": 28,
"text": "L"
},
{
"math_id": 29,
"text": "v\\cdot \\mathrm{d}t"
},
{
"math_id": 30,
"text": "\\mathrm{d}A = L \\cdot v \\cdot \\mathrm{d}t"
},
{
"math_id": 31,
"text": "-\\iint\\limits_{A} \\frac{\\partial \\vec B}{\\partial t} \\cdot\\mathrm{d}\\vec A = -\\frac{B}{\\mathrm{d}t} \\cdot L \\cdot v \\cdot \\mathrm{d}t = -v \\cdot B \\cdot L"
},
{
"math_id": 32,
"text": "\\mathrm{rot} \\vec E = -\\dot \\vec B"
}
]
| https://en.wikipedia.org/wiki?curid=77391914 |
7739252 | Elasticity tensor | Stress-strain relation in a linear elastic material
The elasticity tensor is a fourth-rank tensor describing the stress-strain relation in
a linear elastic material. Other names are elastic modulus tensor and stiffness tensor. Common symbols include formula_0 and formula_1.
The defining equation can be written as
formula_2
where formula_3 and formula_4 are the components of the Cauchy stress tensor and infinitesimal strain tensor, and formula_5 are the components of the elasticity tensor. Summation over repeated indices is implied. This relationship can be interpreted as a generalization of Hooke's law to a 3D continuum.
A general fourth-rank tensor formula_6 in 3D has 3⁴ = 81 independent components formula_7, but the elasticity tensor has at most 21 independent components. This fact follows from the symmetry of the stress and strain tensors, together with the requirement that the stress derives from an elastic energy potential. For isotropic materials, the elasticity tensor has just two independent components, which can be chosen to be the bulk modulus and shear modulus.
Definition.
The most general linear relation between two second-rank tensors formula_8 is
formula_2
where formula_5 are the components of a fourth-rank tensor formula_0. The elasticity tensor is defined as formula_0 for the case where formula_9 and formula_10 are the stress and strain tensors, respectively.
The compliance tensor formula_11 is defined from the inverse stress-strain relation:
formula_12
The two are related by
formula_13
where formula_14 is the Kronecker delta.
Unless otherwise noted, this article assumes formula_0 is defined from the stress-strain relation of a linear elastic material, in the limit of small strain.
Special cases.
Isotropic.
For an isotropic material, formula_0 simplifies to
formula_15
where formula_16 and formula_17 are scalar functions of the material coordinates
formula_18, and formula_19 is the metric tensor in the reference frame of the material. In an orthonormal Cartesian coordinate basis, there is no distinction between upper and lower indices, and the metric tensor can be replaced with the Kronecker delta:
formula_20
Substituting the first equation into the stress-strain relation and summing over repeated indices gives
formula_21
where formula_22 is the trace of formula_10.
In this form, formula_17 and formula_16 can be identified with the first and second Lamé parameters.
An equivalent expression is
formula_23
where formula_24 is the bulk modulus, and
formula_25
are the components of the shear tensor formula_26.
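A minimal numerical sketch of the isotropic case (the Lamé parameters and the random strain below are arbitrary illustrative values, not taken from the text) builds the tensor from the Cartesian expression above and confirms that applying it to a symmetric strain reproduces the closed-form stress, along with the index symmetries discussed later.
```python
import numpy as np

lam, mu = 55.0, 26.0          # illustrative Lamé parameters, not taken from the text
d = np.eye(3)                 # Kronecker delta in an orthonormal Cartesian basis

# C_ijkl = lam * d_ij d_kl + mu * (d_ik d_jl + d_il d_kj)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,kj->ijkl', d, d)))

rng = np.random.default_rng(0)
E = rng.standard_normal((3, 3))
E = 0.5 * (E + E.T)                                  # an arbitrary symmetric strain
T = np.einsum('ijkl,kl->ij', C, E)                   # T_ij = C_ijkl E_kl
T_closed = lam * np.trace(E) * d + 2 * mu * E        # lambda (Tr E) g + 2 mu E
print(np.allclose(T, T_closed))                      # True

# The tensor also has the minor and major index symmetries:
print(np.allclose(C, C.transpose(1, 0, 2, 3)),
      np.allclose(C, C.transpose(0, 1, 3, 2)),
      np.allclose(C, C.transpose(2, 3, 0, 1)))       # True True True
```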
Cubic crystals.
The elasticity tensor of a cubic crystal has components
formula_27
where formula_28, formula_29, and formula_30 are unit vectors corresponding to the three mutually perpendicular axes of the crystal unit cell. The coefficients formula_16, formula_17, and formula_31 are scalars; because they are coordinate-independent, they are intrinsic material constants. Thus, a crystal with cubic symmetry is described by three independent elastic constants.
In an orthonormal Cartesian coordinate basis, there is no distinction between upper and lower indices, and formula_32 is the Kronecker delta, so the expression simplifies to
formula_33
Other crystal classes.
There are similar expressions for the components of formula_0 in other crystal symmetry classes. The number of independent elastic constants for several of these is given in table 1.
Properties.
Symmetries.
The elasticity tensor has several symmetries that follow directly from its defining equation formula_34. The symmetry of the stress and strain tensors implies that
formula_35
Usually, one also assumes that the stress derives from an elastic energy potential formula_36:
formula_37
which implies
formula_38
Hence, formula_0 must be symmetric under interchange of the first and second pairs of indices:
formula_39
The symmetries listed above reduce the number of independent components from 81 to 21. If a material has additional symmetries, then this number is further reduced.
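The count of 21 can be reproduced by a small combinatorial sketch (an illustration, not a derivation): group the 3⁴ index quadruples into classes identified by the minor and major symmetries, and count the classes.
```python
from itertools import product

def representative(i, j, k, l):
    """Canonical representative of (i, j, k, l) under the minor symmetries
    (i <-> j, k <-> l) and the major symmetry (ij) <-> (kl)."""
    p, q = tuple(sorted((i, j))), tuple(sorted((k, l)))
    return min(p + q, q + p)

classes = {representative(i, j, k, l) for i, j, k, l in product(range(3), repeat=4)}
print(len(classes))   # 21: the maximum number of independent components in 3D
```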
Transformations.
Under rotation, the components formula_5 transform as
formula_40
where formula_41 are the covariant components in the rotated basis, and formula_42
are the elements of the corresponding rotation matrix. A similar transformation rule holds for other linear transformations.
Invariants.
The components of formula_0 generally acquire different values under a change of basis. Nevertheless, for certain types of transformations,
there are specific combinations of components, called invariants, that remain unchanged. Invariants are defined with respect to a given set of transformations, formally known as a group operation. For example, an invariant with respect to the group of proper orthogonal transformations, called SO(3), is a quantity that remains constant under arbitrary 3D rotations.
formula_0 possesses two linear invariants and seven quadratic invariants with respect to SO(3). The linear invariants are
formula_43
and the quadratic invariants are
formula_44
These quantities are linearly independent, that is, none can be expressed as a linear combination of the others.
They are also complete, in the sense that there are no additional independent linear or quadratic invariants.
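The invariance of the two linear quantities can be checked numerically; the sketch below, with an arbitrary symmetric tensor and an arbitrary rotation (neither taken from the text), evaluates them before and after rotating the components.
```python
import numpy as np

rng = np.random.default_rng(1)

# A random tensor with the minor and major symmetries imposed by averaging
C = rng.standard_normal((3, 3, 3, 3))
C = 0.5 * (C + C.transpose(1, 0, 2, 3))
C = 0.5 * (C + C.transpose(0, 1, 3, 2))
C = 0.5 * (C + C.transpose(2, 3, 0, 1))

# A random proper rotation obtained from a QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]
C_rot = np.einsum('ip,jq,kr,ls,pqrs->ijkl', Q, Q, Q, Q, C)

for T in (C, C_rot):
    L1 = np.einsum('ijij->', T)     # C^{ij}_{ij}
    L2 = np.einsum('iijj->', T)     # C^{ii}_{jj}
    print(L1, L2)                   # identical on both lines: L1 and L2 are rotation invariants
```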
Decompositions.
A common strategy in tensor analysis is to decompose a tensor into simpler components that can be analyzed separately. For example, the
displacement gradient tensor formula_45 can be decomposed as
formula_46
where formula_47 is a rank-0 tensor (a scalar), equal to the trace of formula_48;
formula_26 is symmetric and trace-free; and formula_49 is antisymmetric. Component-wise,
formula_50
Here and later, symmetrization and antisymmetrization are denoted by formula_51 and formula_52, respectively. This decomposition is irreducible, in the sense of being invariant under rotations, and is an important tool in the conceptual development of continuum mechanics.
The elasticity tensor has rank 4, and its decompositions are more complex and varied than those of a rank-2 tensor. A few examples are described below.
M and N tensors.
This decomposition is obtained by symmetrization and antisymmetrization of the middle two indices:
formula_53
where
formula_54
A disadvantage of this decomposition is that formula_55 and formula_56 do not
obey all original symmetries of formula_5, as they are not symmetric under interchange of the first two indices. In addition, it is not irreducible, so it is not invariant under linear transformations such as rotations.
Irreducible representations.
An irreducible representation can be built by considering the notion of a totally symmetric tensor, which is invariant under the interchange of any two indices. A totally symmetric tensor formula_57 can be constructed from
formula_0 by summing over all formula_58 permutations of the indices
formula_59
where formula_60 is the set of all permutations of the four indices. Owing to the symmetries of formula_5, this sum reduces to
formula_61
The difference
formula_62
is an asymmetric tensor ("not" antisymmetric). The decomposition formula_63 can be shown to be unique and irreducible with respect to formula_60. In other words, any additional symmetrization operations on formula_57 or formula_64 will either leave it unchanged or evaluate to zero. It is also irreducible with respect to arbitrary linear transformations, that is, the general linear group formula_65.
However, this decomposition is not irreducible with respect to the group of rotations SO(3). Instead, formula_57 decomposes into three irreducible parts, and formula_64 into two:
formula_66
See Itin (2020) for explicit expressions in terms of the components of formula_0.
This representation decomposes the space of elasticity tensors into a direct sum of subspaces:
formula_67
with dimensions
formula_68
These subspaces are each isomorphic to a harmonic tensor space formula_69. Here, formula_69 is the space of 3D, totally symmetric, traceless tensors of rank formula_70. In particular, formula_71 and formula_72 correspond to formula_73, formula_74 and formula_75 correspond to formula_76, and formula_77 corresponds to formula_78.
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{C}"
},
{
"math_id": 1,
"text": "\\mathbf{Y}"
},
{
"math_id": 2,
"text": "\nT^{ij} = C^{ijkl} E_{kl}\n"
},
{
"math_id": 3,
"text": "T^{ij}"
},
{
"math_id": 4,
"text": "E_{kl}"
},
{
"math_id": 5,
"text": "C^{ijkl}"
},
{
"math_id": 6,
"text": "\\mathbf{F}"
},
{
"math_id": 7,
"text": "F_{ijkl}"
},
{
"math_id": 8,
"text": "\\mathbf{T}, \\mathbf{E}"
},
{
"math_id": 9,
"text": "\\mathbf{T}"
},
{
"math_id": 10,
"text": "\\mathbf{E}"
},
{
"math_id": 11,
"text": "\\mathbf{K}"
},
{
"math_id": 12,
"text": "\nE^{ij} = K^{ijkl} T_{kl}\n"
},
{
"math_id": 13,
"text": "\nK_{ijpq} C^{pqkl} = \\frac{1}{2} \\left(\\delta_{i}^{k} \\delta_{j}^{l} + \\delta_{i}^{l} \\delta_{j}^{k} \\right)\n"
},
{
"math_id": 14,
"text": "\\delta_{n}^{m}"
},
{
"math_id": 15,
"text": "\nC^{ijkl} = \\lambda \\!\\left( X \\right) g^{ij} g^{kl} + \\mu\\!\\left( X \\right) \\left(g^{ik} g^{jl} + g^{il} g^{kj} \\right)\n"
},
{
"math_id": 16,
"text": "\\lambda"
},
{
"math_id": 17,
"text": "\\mu"
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "\\mathbf{g}"
},
{
"math_id": 20,
"text": "\nC_{ijkl} = \\lambda \\!\\left( X \\right) \\delta_{ij} \\delta_{kl} + \\mu\\!\\left( X \\right) \\left(\\delta_{ik} \\delta_{jl} + \\delta_{il} \\delta_{kj} \\right) \\quad \\text{[Cartesian coordinates]}"
},
{
"math_id": 21,
"text": "\nT^{ij} = \\lambda \\!\\left( X \\right) \\cdot \\left(\\mathrm{Tr} \\, \\mathbf{E} \\right) g^{ij} + 2 \\mu\\!\\left( X \\right) E^{ij}\n"
},
{
"math_id": 22,
"text": "\\mathrm{Tr}\\, \\mathbf{E} \\equiv E^i_{\\,i}"
},
{
"math_id": 23,
"text": "\nT^{ij} = K \\!\\left( X \\right) \\cdot \\left(\\mathrm{Tr}\\, \\mathbf{E} \\right) g^{ij} + 2 \\mu\\!\\left( X \\right)\\Sigma^{ij}\n"
},
{
"math_id": 24,
"text": "K = \\lambda + (2/3) \\mu"
},
{
"math_id": 25,
"text": "\n\\Sigma^{ij} \\equiv E^{ij} - (1/3)\\left(\\mathrm{Tr}\\, \\mathbf{E} \\right) g^{ij}\n"
},
{
"math_id": 26,
"text": "\\mathbf{\\Sigma}"
},
{
"math_id": 27,
"text": "\n\\begin{align}\nC^{ijkl} &= \\lambda g^{ij} g^{kl} + \\mu \\left(g^{ik} g^{jl} + g^{il} g^{kj} \\right) \\\\ &+ \\alpha \\left(a^i a^j a^k a^l + b^i b^j b^k b^l + c^i c^j c^k c^l\\right)\n\\end{align}\n"
},
{
"math_id": 28,
"text": "\\mathbf{a}"
},
{
"math_id": 29,
"text": "\\mathbf{b}"
},
{
"math_id": 30,
"text": "\\mathbf{c}"
},
{
"math_id": 31,
"text": "\\alpha"
},
{
"math_id": 32,
"text": "g^{ij}"
},
{
"math_id": 33,
"text": "\n\\begin{align}\nC_{ijkl} &= \\lambda \\delta_{ij} \\delta_{kl} + \\mu \\left(\\delta_{ik} \\delta_{jl} + \\delta_{il} \\delta_{kj} \\right) \\\\ &+ \\alpha \\left(a_i a_j a_k a_l + b_i b_j b_k b_l + c_i c_j c_k c_l\\right)\n\\end{align}\n"
},
{
"math_id": 34,
"text": "T^{ij} = C^{ijkl} E_{kl}"
},
{
"math_id": 35,
"text": "\nC_{ijkl} = C_{jikl} \\qquad \\text{and} \\qquad C_{ijkl} = C_{ijlk},\n"
},
{
"math_id": 36,
"text": "U"
},
{
"math_id": 37,
"text": "\nT^{ij} = \\frac{\\partial U}{\\partial E_{ij}}\n"
},
{
"math_id": 38,
"text": "\nC_{ijkl} = \\frac{\\partial^2 U}{\\partial E_{ij}\\partial E_{kl}}\n"
},
{
"math_id": 39,
"text": "\nC_{ijkl} = C_{klij}\n"
},
{
"math_id": 40,
"text": "\nC'_{ijkl} = R_{ip} R_{jq} R_{kr} R_{ls} C^{pqrs}"
},
{
"math_id": 41,
"text": "C'_{ijkl}"
},
{
"math_id": 42,
"text": "R_{ij}"
},
{
"math_id": 43,
"text": "\n\\begin{align}\nL_1 &= C^{ij}_{\\,\\,\\,ij} \\\\\nL_2 &= C^{ii}_{\\,\\,\\,jj}\n\\end{align}\n"
},
{
"math_id": 44,
"text": "\n\\left\\{ L_1^2, \\, L_2^2, \\, L_1 L_2, \\, C_{ijkl} C^{ijkl}, \\,C_{iikl} C^{jjkl}, \\,C_{iikl} C^{jkjl}, \\,C_{kiil} C^{kjjl} \\right\\}\n"
},
{
"math_id": 45,
"text": "\\mathbf{W} = \\mathbf{\\nabla} \\mathbf{\\xi}"
},
{
"math_id": 46,
"text": "\n\\mathbf{W} = \\frac{1}{3} \\Theta \\mathbf{g} + \\mathbf{\\Sigma} + \\mathbf{R}\n"
},
{
"math_id": 47,
"text": "\\Theta"
},
{
"math_id": 48,
"text": "\\mathbf{W}"
},
{
"math_id": 49,
"text": "\\mathbf{R}"
},
{
"math_id": 50,
"text": "\n\\begin{align}\n\\Sigma^{ij} \\equiv W^{(ij)} &= \\frac{1}{2} \\left(W^{ij} + W^{ji} \\right) - \\frac{1}{3} \\left(\\mathrm{Tr}\\, \\mathbf{W} \\right) g^{ij} \\\\\nR^{ij} \\equiv W^{[ij]} &= \\frac{1}{2} \\left(W^{ij} - W^{ji} \\right)\n\\end{align}\n"
},
{
"math_id": 51,
"text": "(ij)"
},
{
"math_id": 52,
"text": "[ij]"
},
{
"math_id": 53,
"text": "\nC^{ijkl} = M^{ijkl} + N^{ijkl}\n"
},
{
"math_id": 54,
"text": "\n\\begin{align}\nM^{ijkl} \\equiv C^{i(jk)l} = \\frac{1}{2}\\left(C^{ijkl} + C^{ikjl} \\right) \\\\\nN^{ijkl} \\equiv C^{i[jk]l} = \\frac{1}{2}\\left(C^{ijkl} - C^{ikjl} \\right)\n\\end{align}\n"
},
{
"math_id": 55,
"text": "M^{ijkl}"
},
{
"math_id": 56,
"text": "N^{ijkl}"
},
{
"math_id": 57,
"text": "\\mathbf{S}"
},
{
"math_id": 58,
"text": "4! = 24"
},
{
"math_id": 59,
"text": "\n\\begin{align}\nS^{ijkl} &= \\frac{1}{4!}\\sum_{(i,j,k,l)\\in S_4} C^{ijkl} \\\\ &=\\frac{1}{4!}\\left(C^{ijkl} + C^{jikl}+ C^{ikjl} + \\ldots \\right)\n\\end{align}\n"
},
{
"math_id": 60,
"text": "\\mathbb{S}_4"
},
{
"math_id": 61,
"text": "\nS^{ijkl} = \\frac{1}{3}\\left(C^{ijkl} + C^{iklj} + C^{iljk} \\right)\n"
},
{
"math_id": 62,
"text": "\n A^{ijkl} \\equiv C^{ijkl} - S^{ijkl} = \\frac{1}{3}\\left(2 C^{ijkl} - C^{ilkj} - C^{iklj} \\right)\n "
},
{
"math_id": 63,
"text": "C^{ijkl} = S^{ijkl} + A^{ijkl}"
},
{
"math_id": 64,
"text": "\\mathbf{A}"
},
{
"math_id": 65,
"text": "G(3,\\mathbb{R})"
},
{
"math_id": 66,
"text": "\n\\begin{align}\nC^{ijkl} &= S^{ijkl} + A^{ijkl} \\\\\n&= \\left(^{(1)}\\!S^{ijkl} + \\, ^{(2)}\\!S^{ijkl} + \\, ^{(3)}\\!S^{ijkl} \\right) + \\, \\left(^{(1)}\\!A^{ijkl} + ^{(2)}\\!A^{ijkl} \\right)\n\\end{align}\n"
},
{
"math_id": 67,
"text": "\n\\mathcal{C} = \\left(^{(1)}\\!\\mathcal{C} \\oplus \\, ^{(2)}\\!\\mathcal{C} \\oplus \\, ^{(3)}\\!\\mathcal{C} \\right) \\oplus \\, \\left(^{(4)}\\!\\mathcal{C} \\oplus \\, ^{(5)}\\!\\mathcal{C} \\right)\n"
},
{
"math_id": 68,
"text": "\n21 = (1 \\oplus 5 \\oplus 9) \\oplus (1 \\oplus 5)\n"
},
{
"math_id": 69,
"text": "\\mathbb{H}_n(\\mathbb{R}^3)"
},
{
"math_id": 70,
"text": "n"
},
{
"math_id": 71,
"text": "^{(1)}\\!\\mathcal{C}"
},
{
"math_id": 72,
"text": "^{(4)}\\!\\mathcal{C}"
},
{
"math_id": 73,
"text": "\\mathbb{H}_1"
},
{
"math_id": 74,
"text": "^{(2)}\\!\\mathcal{C}"
},
{
"math_id": 75,
"text": "^{(5)}\\!\\mathcal{C}"
},
{
"math_id": 76,
"text": "\\mathbb{H}_2"
},
{
"math_id": 77,
"text": "^{(3)}\\!\\mathcal{C}"
},
{
"math_id": 78,
"text": "\\mathbb{H}_4"
}
]
| https://en.wikipedia.org/wiki?curid=7739252 |
77395174 | Pellet (steel industry) | Form of iron ore
Pellets are a processed form of iron ore utilized in the steel industry, specifically designed for direct application in blast furnaces or direct reduction plants. These pellets are spherical in shape, with diameters ranging from 8 to 18 millimeters.
The production of iron ore pellets involves a series of chemical and thermal processes aimed at enhancing the iron content of the ore to levels between 67% and 72%. This enrichment process not only increases the iron concentration but also imparts specific chemical and mechanical properties that are essential for efficient steel production.
History.
The pelletizing of powdered iron ores was first introduced at the end of the nineteenth century, utilizing tar as a binding agent, comprising 1% by weight. This method involved firing the mixture in a rotating drum to create pellets suitable for blast furnaces, while also facilitating the removal of undesirable elements such as sulfur and arsenic through the emitted fumes.
During this period, pellet sintering developed alongside grate sintering as an alternative process to address the agglomeration challenges faced by high-quality iron ore products. The concept of pellet agglomeration was initially patented by A. Anderson in Sweden in 1912, followed by a similar patent in Germany in 1913. The resultant product was named "GEROELL," derived from the German word for "rolling." It was noted that pellets produced through this method demonstrated faster reduction rates compared to calibrated ores and agglomerates created from the same feedstock. In 1926, an industrial pilot plant was constructed by Krupp in Rheinhausen to explore the potential of this pelletizing technology. However, the plant was later dismantled to make way for the installation of a large-scale grate sintering line, which emerged as a competing process in the industry.
Pellet sintering has remained a viable method for processing iron ore. In the United States, this technique was employed to process fine concentrates from the Mesabi Range during World War II. This was necessary as naturally rich iron ores (containing over 50% iron) were being depleted. The development of pelletizing fine magnetite ores, which are typically less than 44 μm in size and around 85% iron, began around 1943 with support from the University of Minnesota. The process was later adopted in Europe, particularly in Sweden, to facilitate the production of pre-reduced iron ore.
Pellet production saw substantial growth between 1960 and 1980 but eventually plateaued at approximately 300 million tons annually.
Production.
Pellets are produced directly at the extraction site by mining companies and are marketed as a distinct product, unlike agglomerates which are typically manufactured at blast furnace sites through the mixing of iron ores from various sources. Pellets are generally more robust and better suited to handling compared to agglomerates, which are relatively fragile. The production process for pellets can vary significantly depending on the local characteristics of the iron ore, and some facilities may include additional stages, such as arsenic removal. The key stages of pellet production, outlined below, ensure that the pellets meet specific quality standards and can withstand the demands of handling and transportation.
The ore concentrate is formed into pellets through a compaction process. This can be performed using various types of balling equipment, though balling discs are the most commonly employed. Before being subjected to sintering, the pellets are referred to as "green" or "raw" pellets, and their typical diameter ranges from 5 to 20 mm.
Following pellet formation, the green pellets are either sent to the consuming plant or directed to a firing furnace. Due to their inherent fragility, which persists despite the binder used, they are generally more suitable for firing on site than for shipment as green pellets. After firing, the pellets are cooled.
The firing process involves passing the pellets through a chain of contiguous furnaces, where they are heated to temperatures of up to 1,200°C. This can be achieved using different methods: a straight-grate process, in which the pellets travel along a single, uninterrupted chain, or a grate-kiln process, in which the chain is followed by a rotary kiln and a cooler. The required heat for this process is supplied by burners, which can either add fuel to the ore concentrate or facilitate the oxidation of the ore, depending on the specific type of ore being processed.
Benefits and limitations.
Benefits.
Pelletizing ore enhances the efficiency of blast furnaces and direct reduction plants by providing several advantages over raw iron ore:
Pellets generally contain a higher iron content than agglomerated ore, leading to increased plant productivity and reduced fuel consumption. They are also more durable and capable of withstanding repeated handling. Despite their higher cost—typically about 70% more than raw ore—the benefits they offer in terms of efficiency and performance justify the expense. In steelmaking, pellets are often mixed with sinter in varying proportions to optimize the process.
Similar to sinter, the high-temperature roasting and sintering of pellets effectively eliminate undesirable elements such as sulfur. It is also an efficient method for removing zinc, which can otherwise hinder the operation of blast furnaces. With a vaporization temperature of 907°C, zinc is effectively removed during the roasting process, making pelletizing a suitable method for this application.
Limitations.
Pellets are vulnerable to sulfur-induced damage during the reduction process in blast furnaces. Even low levels of sulfur dioxide (SO₂) can interfere with furnace operations, with effects observed at concentrations as low as 5 to 50 parts per million (ppm) in the reduction gas. The detailed mechanism behind this issue was only fully understood towards the end of the 20th century. Initially, sulfur accelerates the extraction of oxygen from the iron oxide, but this effect reverses once metallic iron begins to form, significantly slowing the oxygen extraction process. This unusual behavior is attributed to sulfur's strong affinity for the metallic iron that forms on the pellet surface, which inhibits the penetration of carbon.
Furthermore, the reaction between wustite (FeO) and carbon monoxide (CO) occurs not only on the surface of FeO but also beneath the surface of the reduced iron. Due to iron's superior absorption characteristics, a substantial portion of gas transport happens at the iron/iron oxide phase boundary. This process depends on the iron's ability to absorb sufficient carbon (carburization). If sulfur obstructs carbon absorption, reduction is limited to the surface of the iron oxide. This restriction results in the formation of elongated, fibrous iron crystals, as iron crystallization can only proceed in the direction of the reducing iron oxide. Consequently, the structure of the granules becomes reinforced and can expand to two or three times their original volume. This expansion, or "swelling," of the granules can lead to blockage or significant damage to the blast furnace, highlighting the challenges associated with using pellets in blast furnace operations.
Composition.
Pellets, similar to agglomerates, are classified based on their chemical properties as either acidic or basic. To determine the basicity index ("ic"), the following ratio of mass concentrations is used:
formula_0
This ratio helps in assessing the relative basicity of the pellets, which is important for optimizing their use in blast furnaces and other metallurgical processes.
In practice, a simplified basicity index ("i") is commonly used to classify pellets based on their chemical properties. This index is calculated as the ratio of calcium oxide (CaO) to silicon dioxide (SiO2): "i" = [CaO] / [SiO2].
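As a simple illustration of how these indices are used (the helper names are arbitrary, and the numerical inputs are only the example acid-pellet composition quoted below):
```python
def basicity_index(cao, mgo, sio2, al2o3):
    """Four-component basicity index i_c = (CaO + MgO) / (SiO2 + Al2O3),
    all given as mass concentrations (e.g. wt%)."""
    return (cao + mgo) / (sio2 + al2o3)

def simplified_basicity(cao, sio2):
    """Simplified index i = CaO / SiO2 used to classify acid vs. basic pellets."""
    return cao / sio2

# Typical acid-pellet composition quoted below: 2.2% SiO2 and 0.2% CaO
print(simplified_basicity(cao=0.2, sio2=2.2))   # ≈ 0.09, well below the 0.25 swelling threshold
```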
Pellets can contain high levels of hematite, but the proportion must be controlled. Excessive hematite can weaken the pellet structure during reduction, leading to the pellets breaking down into dust under the weight of stacked charges. This is due to the fact that a high hematite content can cause the pellets to disintegrate, compromising their integrity and usability in the reduction process.
Acid pellets.
Acid pellets are produced without the addition of additives, resulting in a specific chemical composition. Typically, the composition of acid pellets is as follows: 2.2% SiO2 and 0.2% CaO. In the United States during the 1990s, the typical characteristics of acid pellets were:
Unlike agglomerated ores, which may be acidified through various silicates in the binder used during pelletizing, pellets are less prone to disintegration due to their solid spherical shape. This design helps maintain their acidic composition and mechanical properties.
Acid pellets exhibit notable mechanical strength with a crush resistance exceeding 250 kg per pellet. However, their reducibility could be improved. Additionally, they are prone to swelling when exposed to lime, especially when the basicity index ("i" = CaO / SiO2) exceeds 0.25, which may potentially cause issues in a blast furnace.
Self-melting pellets.
Self-melting pellets, also known as basic pellets, are a type of iron ore pellet that was developed in the United States in the 1990s. These pellets are designed for use in blast furnaces and are produced by adding lime (calcium oxide) and magnesia (magnesium oxide) to iron ore concentrate, enhancing their metallurgical properties. Self-melting pellets typically have the following properties:
These pellets are recognized for their high compressive strength and ease of reduction, making them well-suited for blast furnace operations. The production process of self-melting pellets involves incorporating limestone into the iron ore concentrate. This inclusion affects the productivity of pellet plants due to the calcination step, which involves the endothermic decomposition of the limestone. As a result, the overall productivity of the pellet plant can decrease by approximately 10 to 15% compared to the production of acid pellets, which do not include lime. Self-melting pellets are appreciated for their enhanced performance in blast furnaces but require consideration of the trade-offs in production efficiency.
Pellets with low silica content.
These pellets are designed for use in direct reduction plants. The typical composition of the pellets includes 67.8% iron (Fe), 1.7% silicon dioxide (SiO2 ), 0.40% aluminum oxide (Al2O3), 0.50% calcium oxide (CaO), 0.30% magnesium oxide (MgO), and 0.01% phosphorus (P).
Low-silica pellets can self-fuse when doped with lime. A typical composition for these self-fusing pellets is 65.1% iron (Fe), 2.5% silicon dioxide (SiO2), 0.45% aluminum oxide (Al2O3 ), 2.25% calcium oxide (CaO), 1.50% magnesium oxide (MgO), and 0.01% phosphorus (P).
Other types of pellets.
To cater to specific customer needs, manufacturers have developed alternative pellet types that offer distinct properties and performance characteristics:
These alternative pellet types are designed to address different operational requirements and enhance the flexibility of iron-making processes.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i_c = \\frac{[CaO]+[MgO]} {[SiO_2]+[Al_2O_3]}"
}
]
| https://en.wikipedia.org/wiki?curid=77395174 |
7739570 | Wigner D-matrix | Irreducible representation of the rotation group SO
The Wigner D-matrix is a unitary matrix in an irreducible representation of the groups SU(2) and SO(3). It was introduced in 1927 by Eugene Wigner, and plays a fundamental role in the quantum mechanical theory of angular momentum. The complex conjugate of the D-matrix is an eigenfunction of the Hamiltonian of spherical and symmetric rigid rotors. The letter D stands for "Darstellung", which means "representation" in German.
Definition of the Wigner D-matrix.
Let "Jx", "Jy", "Jz" be generators of the Lie algebra of SU(2) and SO(3). In quantum mechanics, these three operators are the components of a vector operator known as "angular momentum". Examples are the angular momentum of an electron in an atom, electronic spin, and the angular momentum of a rigid rotor.
In all cases, the three operators satisfy the following commutation relations,
formula_0
where "i" is the purely imaginary number and Planck's constant ħ has been set equal to one. The Casimir operator
formula_1
commutes with all generators of the Lie algebra. Hence, it may be diagonalized together with Jz.
This defines the spherical basis used here. That is, there is a "complete set" of kets (i.e. orthonormal basis of joint eigenvectors labelled by quantum numbers that define the eigenvalues) with
formula_2
where "j" = 0, 1/2, 1, 3/2, 2, ... for SU(2), and "j" = 0, 1, 2, ... for SO(3). In both cases, "m"
−"j", −"j" + 1, ..., "j".
A 3-dimensional rotation operator can be written as
formula_3
where "α", "β", "γ" are Euler angles (characterized by the keywords: z-y-z convention, right-handed frame, right-hand screw rule, active interpretation).
The Wigner D-matrix is a unitary square matrix of dimension 2"j" + 1 in this spherical basis with elements
formula_4
where
formula_5
is an element of the orthogonal Wigner's (small) d-matrix.
That is, in this basis,
formula_6
is diagonal, like the "γ" matrix factor, but unlike the above "β" factor.
Wigner (small) d-matrix.
Wigner gave the following expression:
formula_7
The sum over "s" is over such values that the factorials are nonnegative, i.e. formula_8, formula_9.
"Note:" The d-matrix elements defined here are real. In the often-used z-x-z convention of Euler angles, the factor formula_10 in this formula is replaced by formula_11 causing half of the functions to be purely imaginary. The realness of the d-matrix elements is one of the reasons that the z-y-z convention, used in this article, is usually preferred in quantum mechanical applications.
The d-matrix elements are related to Jacobi polynomials formula_12 with nonnegative formula_13 and formula_14 Let
formula_15
If
formula_16
Then, with formula_17 the relation is
formula_18
where formula_19
It is also useful to consider the relations formula_20, where formula_21 and formula_22, which lead to:
formula_23
Properties of the Wigner D-matrix.
The complex conjugate of the D-matrix satisfies a number of differential properties that can be formulated concisely by introducing the following operators with formula_24
formula_25
which have quantum mechanical meaning: they are space-fixed rigid rotor angular momentum operators.
Further,
formula_26
which have quantum mechanical meaning: they are body-fixed rigid rotor angular momentum operators.
The operators satisfy the commutation relations
formula_27
and the corresponding relations with the indices permuted cyclically. The formula_28 satisfy "anomalous commutation relations" (have a minus sign on the right hand side).
The two sets mutually commute,
formula_29
and the total operators squared are equal,
formula_30
Their explicit form is,
formula_31
The operators formula_32 act on the first (row) index of the D-matrix,
formula_33
The operators formula_28 act on the second (column) index of the D-matrix,
formula_34
and, because of the anomalous commutation relation the raising/lowering operators are defined with reversed signs,
formula_35
Finally,
formula_36
In other words, the rows and columns of the (complex conjugate) Wigner D-matrix span irreducible representations of the isomorphic Lie algebras generated by formula_37 and formula_38.
An important property of the Wigner D-matrix follows from the commutation of
formula_39 with the time reversal operator
T,
formula_40
or
formula_41
Here, we used that formula_42 is anti-unitary (hence the complex conjugation after moving formula_43 from ket to bra), formula_44 and formula_45.
A further symmetry implies
formula_46
Orthogonality relations.
The Wigner D-matrix elements formula_47 form a set of orthogonal functions of the Euler angles formula_48 and formula_49:
formula_50
This is a special case of the Schur orthogonality relations.
Crucially, by the Peter–Weyl theorem, they further form a "complete" set.
The fact that formula_47 are matrix elements of a unitary transformation from one spherical basis formula_51 to another formula_52 is represented by the relations:
formula_53
formula_54
The group characters for SU(2) depend only on the rotation angle "β". Being class functions, they are independent of the axis of rotation,
formula_55
and consequently satisfy simpler orthogonality relations, through the Haar measure of the group,
formula_56
The completeness relation (worked out in the same reference, (3.95)) is
formula_57
whence, for formula_58
formula_59
Kronecker product of Wigner D-matrices, Clebsch-Gordan series.
The set of Kronecker product matrices
formula_60
forms a reducible matrix representation of the groups SO(3) and SU(2). Reduction into irreducible components is by the following equation:
formula_61
The symbol formula_62 is a Clebsch–Gordan coefficient.
Relation to spherical harmonics and Legendre polynomials.
For integer values of formula_63, the D-matrix elements with second index equal to zero are proportional
to spherical harmonics and associated Legendre polynomials, normalized to unity and with Condon and Shortley phase convention:
formula_64
This implies the following relationship for the d-matrix:
formula_65
A rotation of spherical harmonics formula_66 then is effectively a composition of two rotations,
formula_67
When both indices are set to zero, the Wigner D-matrix elements are given by ordinary Legendre polynomials:
formula_68
In the present convention of Euler angles, formula_69 is
a longitudinal angle and formula_70 is a colatitudinal angle (spherical polar angles
in the physical definition of such angles). This is one of the reasons that the "z"-"y"-"z"
convention is used frequently in molecular physics.
From the time-reversal property of the Wigner D-matrix, it follows immediately that
formula_71
There exists a more general relationship to the spin-weighted spherical harmonics:
formula_72
Connection with transition probability under rotations.
The absolute square of an element of the D-matrix,
formula_73
gives the probability that a system with spin formula_74 prepared in a state with spin projection formula_75 along
some direction will be measured to have a spin projection formula_76 along a second direction at an angle formula_70
to the first direction. The set of quantities formula_77 itself forms a real symmetric matrix that
depends only on the Euler angle formula_70, as indicated.
Remarkably, the eigenvalue problem for the formula_78 matrix can be solved completely:
formula_79
Here, the eigenvector, formula_80, is a scaled and shifted discrete Chebyshev polynomial, and the corresponding eigenvalue, formula_81, is the Legendre polynomial.
Relation to Bessel functions.
In the limit when formula_82 we have
formula_83
where formula_84 is the Bessel function and formula_85 is finite.
List of d-matrix elements.
Using the sign convention of Wigner et al., the d-matrix elements formula_86
for "j" = 1/2, 1, 3/2, and 2 are given below.
for "j" = 1/2
formula_87
for "j" = 1
formula_88
for "j" = 3/2
formula_89
for "j" = 2
formula_90
Wigner d-matrix elements with swapped lower indices are found with the relation:
formula_91
formula_92
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " [J_x,J_y] = i J_z,\\quad [J_z,J_x] = i J_y,\\quad [J_y,J_z] = i J_x, "
},
{
"math_id": 1,
"text": " J^2 = J_x^2 + J_y^2 + J_z^2 "
},
{
"math_id": 2,
"text": " J^2 |jm\\rangle = j(j+1) |jm\\rangle,\\quad J_z |jm\\rangle = m |jm\\rangle,"
},
{
"math_id": 3,
"text": "\\mathcal{R}(\\alpha,\\beta,\\gamma) = e^{-i\\alpha J_z}e^{-i\\beta J_y}e^{-i\\gamma J_z},"
},
{
"math_id": 4,
"text": "D^j_{m'm}(\\alpha,\\beta,\\gamma) \\equiv \\langle jm' | \\mathcal{R}(\\alpha,\\beta,\\gamma)| jm \\rangle =e^{-im'\\alpha } d^j_{m'm}(\\beta)e^{-i m\\gamma},"
},
{
"math_id": 5,
"text": "d^j_{m'm}(\\beta)= \\langle jm' |e^{-i\\beta J_y} | jm \\rangle = D^j_{m'm}(0,\\beta,0) "
},
{
"math_id": 6,
"text": " D^j_{m'm}(\\alpha,0,0) = e^{-im'\\alpha } \\delta_{m'm} "
},
{
"math_id": 7,
"text": "d^j_{m'm}(\\beta) =[(j+m')!(j-m')!(j+m)!(j-m)!]^{\\frac{1}{2}} \\sum_{s=s_{\\mathrm{min}}}^{s_{\\mathrm{max}}} \\left[\\frac{(-1)^{m'-m+s} \\left(\\cos\\frac{\\beta}{2}\\right)^{2j+m-m'-2s}\\left(\\sin\\frac{\\beta}{2}\\right)^{m'-m+2s}}{(j+m-s)!s!(m'-m+s)!(j-m'-s)!} \\right]."
},
{
"math_id": 8,
"text": "s_{\\mathrm{min}}=\\mathrm{max}(0,m-m')"
},
{
"math_id": 9,
"text": "s_{\\mathrm{max}}=\\mathrm{min}(j+m,j-m')"
},
{
"math_id": 10,
"text": "(-1)^{m'-m+s}"
},
{
"math_id": 11,
"text": "(-1)^s i^{m-m'},"
},
{
"math_id": 12,
"text": "P^{(a,b)}_k(\\cos\\beta)"
},
{
"math_id": 13,
"text": "a"
},
{
"math_id": 14,
"text": "b."
},
{
"math_id": 15,
"text": " k = \\min(j+m, j-m, j+m', j-m')."
},
{
"math_id": 16,
"text": "k = \\begin{cases}\n j+m: & a=m'-m;\\quad \\lambda=m'-m\\\\\n j-m: & a=m-m';\\quad \\lambda= 0 \\\\\n j+m': & a=m-m';\\quad \\lambda= 0 \\\\\n j-m': & a=m'-m;\\quad \\lambda=m'-m \\\\\n\\end{cases}"
},
{
"math_id": 17,
"text": "b=2j-2k-a,"
},
{
"math_id": 18,
"text": "d^j_{m'm}(\\beta) = (-1)^{\\lambda} \\binom{2j-k}{k+a}^{\\frac{1}{2}} \\binom{k+b}{b}^{-\\frac{1}{2}} \\left(\\sin\\frac{\\beta}{2}\\right)^a \\left(\\cos\\frac{\\beta}{2}\\right)^b P^{(a,b)}_k(\\cos\\beta),"
},
{
"math_id": 19,
"text": " a,b \\ge 0."
},
{
"math_id": 20,
"text": "a = |m'-m|, b = |m'+m|, \\lambda = \\frac{m-m'-|m-m'|}{2}, k = j - M"
},
{
"math_id": 21,
"text": "M = \\max(|m|,|m'|)"
},
{
"math_id": 22,
"text": "N = \\min(|m|,|m'|)"
},
{
"math_id": 23,
"text": "d^j_{m'm}(\\beta) =(-1)^{\\frac{m-m'-|m-m'|}{2}}\\left[ \\frac{(j+M)!(j-M)!}{(j+N)!(j-N)!}\\right]^{\\frac{1}{2}} \\left(\\sin\\frac{\\beta}{2}\\right)^{|m-m'|} \\left(\\cos\\frac{\\beta}{2}\\right)^{|m+m'|} P_{j-M}^{(|m-m'|,|m+m'|)}(\\cos \\beta)."
},
{
"math_id": 24,
"text": "(x, y, z) = (1, 2, 3),"
},
{
"math_id": 25,
"text": "\\begin{align}\n\\hat{\\mathcal{J}}_1 &= i \\left( \\cos \\alpha \\cot \\beta \\frac{\\partial}{\\partial \\alpha} + \\sin \\alpha {\\partial \\over \\partial \\beta} - {\\cos \\alpha \\over \\sin \\beta} {\\partial \\over \\partial \\gamma} \\right) \\\\\n\\hat{\\mathcal{J}}_2 &= i \\left( \\sin \\alpha \\cot \\beta {\\partial \\over \\partial \\alpha} - \\cos \\alpha {\\partial \\over \\partial \\beta} - {\\sin \\alpha \\over \\sin \\beta} {\\partial \\over \\partial \\gamma} \\right) \\\\\n\\hat{\\mathcal{J}}_3 &= - i {\\partial \\over \\partial \\alpha}\n\\end{align}"
},
{
"math_id": 26,
"text": "\\begin{align}\n\\hat{\\mathcal{P}}_1 &= i \\left( {\\cos \\gamma \\over \\sin \\beta}{\\partial \\over \\partial \\alpha } - \\sin \\gamma {\\partial \\over \\partial \\beta }- \\cot \\beta \\cos \\gamma {\\partial \\over \\partial \\gamma} \\right)\\\\\n\\hat{\\mathcal{P}}_2 &= i \\left( - {\\sin \\gamma \\over \\sin \\beta} {\\partial \\over \\partial \\alpha} - \\cos \\gamma\n {\\partial \\over \\partial \\beta} + \\cot \\beta \\sin \\gamma {\\partial \\over \\partial \\gamma} \\right) \\\\\n\\hat{\\mathcal{P}}_3 &= - i {\\partial\\over \\partial \\gamma}, \\\\\n\\end{align}"
},
{
"math_id": 27,
"text": " \\left[\\mathcal{J}_1, \\mathcal{J}_2\\right] = i \\mathcal{J}_3, \\qquad \\hbox{and}\\qquad \\left[\\mathcal{P}_1, \\mathcal{P}_2\\right] = -i \\mathcal{P}_3,"
},
{
"math_id": 28,
"text": "\\mathcal{P}_i"
},
{
"math_id": 29,
"text": "\\left[\\mathcal{P}_i, \\mathcal{J}_j\\right] = 0,\\quad i, j = 1, 2, 3,"
},
{
"math_id": 30,
"text": "\\mathcal{J}^2 \\equiv \\mathcal{J}_1^2+ \\mathcal{J}_2^2 + \\mathcal{J}_3^2 = \\mathcal{P}^2 \\equiv \\mathcal{P}_1^2+ \\mathcal{P}_2^2 + \\mathcal{P}_3^2."
},
{
"math_id": 31,
"text": "\\mathcal{J}^2= \\mathcal{P}^2 =-\\frac{1}{\\sin^2\\beta} \\left( \\frac{\\partial^2}{\\partial \\alpha^2} +\\frac{\\partial^2}{\\partial \\gamma^2} -2\\cos\\beta\\frac{\\partial^2}{\\partial\\alpha\\partial \\gamma} \\right)-\\frac{\\partial^2}{\\partial \\beta^2} -\\cot\\beta\\frac{\\partial}{\\partial \\beta}."
},
{
"math_id": 32,
"text": "\\mathcal{J}_i"
},
{
"math_id": 33,
"text": "\\begin{align}\n\\mathcal{J}_3 D^j_{m'm}(\\alpha,\\beta,\\gamma)^* &=m' D^j_{m'm}(\\alpha,\\beta,\\gamma)^* \\\\\n(\\mathcal{J}_1 \\pm i \\mathcal{J}_2) D^j_{m'm}(\\alpha,\\beta,\\gamma)^* &= \\sqrt{j(j+1)-m'(m'\\pm 1)} D^j_{m'\\pm 1, m}(\\alpha,\\beta,\\gamma)^* \n\\end{align}"
},
{
"math_id": 34,
"text": "\\mathcal{P}_3 D^j_{m'm}(\\alpha,\\beta,\\gamma)^* = m D^j_{m'm}(\\alpha,\\beta,\\gamma)^* ,"
},
{
"math_id": 35,
"text": "(\\mathcal{P}_1 \\mp i \\mathcal{P}_2) D^j_{m'm}(\\alpha,\\beta,\\gamma)^* = \\sqrt{j(j+1)-m(m\\pm 1)} D^j_{m', m\\pm1}(\\alpha,\\beta,\\gamma)^* ."
},
{
"math_id": 36,
"text": "\\mathcal{J}^2 D^j_{m'm}(\\alpha,\\beta,\\gamma)^* =\\mathcal{P}^2 D^j_{m'm}(\\alpha,\\beta,\\gamma)^* = j(j+1) D^j_{m'm}(\\alpha,\\beta,\\gamma)^*."
},
{
"math_id": 37,
"text": "\\{\\mathcal{J}_i\\}"
},
{
"math_id": 38,
"text": "\\{-\\mathcal{P}_i\\}"
},
{
"math_id": 39,
"text": " \\mathcal{R}(\\alpha,\\beta,\\gamma) "
},
{
"math_id": 40,
"text": "\\langle jm' | \\mathcal{R}(\\alpha,\\beta,\\gamma)| jm \\rangle = \\langle jm' | T^{ \\dagger} \\mathcal{R}(\\alpha,\\beta,\\gamma) T| jm \\rangle =(-1)^{m'-m} \\langle j,-m' | \\mathcal{R}(\\alpha,\\beta,\\gamma)| j,-m \\rangle^*,"
},
{
"math_id": 41,
"text": "D^j_{m'm}(\\alpha,\\beta,\\gamma) = (-1)^{m'-m} D^j_{-m',-m}(\\alpha,\\beta,\\gamma)^*."
},
{
"math_id": 42,
"text": "T "
},
{
"math_id": 43,
"text": "T^\\dagger "
},
{
"math_id": 44,
"text": " T | jm \\rangle = (-1)^{j-m} | j,-m \\rangle"
},
{
"math_id": 45,
"text": "(-1)^{2j-m'-m} = (-1)^{m'-m}"
},
{
"math_id": 46,
"text": "(-1)^{m'-m}D^{j}_{mm'}(\\alpha,\\beta,\\gamma)=D^{j}_{m'm}(\\gamma,\\beta,\\alpha)~. "
},
{
"math_id": 47,
"text": "D^j_{mk}(\\alpha,\\beta,\\gamma)"
},
{
"math_id": 48,
"text": "\\alpha, \\beta,"
},
{
"math_id": 49,
"text": "\\gamma"
},
{
"math_id": 50,
"text": "\\int_0^{2\\pi} d\\alpha \\int_0^\\pi d\\beta \\sin \\beta \\int_0^{2\\pi} d\\gamma \\,\\, D^{j'}_{m'k'}(\\alpha,\\beta,\\gamma)^\\ast D^j_{mk}(\\alpha, \\beta, \\gamma) = \\frac{8\\pi^2}{2j+1} \\delta_{m'm}\\delta_{k'k}\\delta_{j'j}."
},
{
"math_id": 51,
"text": " | lm \\rangle"
},
{
"math_id": 52,
"text": " \\mathcal{R}(\\alpha,\\beta,\\gamma) | lm \\rangle"
},
{
"math_id": 53,
"text": "\\sum_k D^j_{m'k}(\\alpha, \\beta, \\gamma)^* D^j_{mk}(\\alpha, \\beta, \\gamma) = \\delta_{m,m'},"
},
{
"math_id": 54,
"text": "\\sum_k D^j_{k m'}(\\alpha, \\beta, \\gamma)^* D^j_{km}(\\alpha, \\beta, \\gamma) = \\delta_{m,m'}."
},
{
"math_id": 55,
"text": "\\chi^j (\\beta)\\equiv \\sum_m D^j_{mm}(\\beta)=\\sum_m d^j_{mm}(\\beta) = \\frac{\\sin\\left (\\frac{(2j+1)\\beta}{2} \\right )}{\\sin \\left (\\frac{\\beta}{2} \\right )},"
},
{
"math_id": 56,
"text": "\\frac{1}{\\pi} \\int _0^{2\\pi} d\\beta \\sin^2 \\left (\\frac{\\beta}{2} \\right ) \\chi^j (\\beta) \\chi^{j'}(\\beta)= \\delta_{j'j}."
},
{
"math_id": 57,
"text": "\\sum_j \\chi^j (\\beta) \\chi^j (\\beta')= \\delta (\\beta -\\beta'),"
},
{
"math_id": 58,
"text": "\\beta' =0,"
},
{
"math_id": 59,
"text": "\\sum_j \\chi^j (\\beta) (2j+1)= \\delta (\\beta )."
},
{
"math_id": 60,
"text": "\n \\mathbf{D}^j(\\alpha,\\beta,\\gamma)\\otimes \\mathbf{D}^{j'}(\\alpha,\\beta,\\gamma)\n"
},
{
"math_id": 61,
"text": "\n D^j_{m k}(\\alpha,\\beta,\\gamma) D^{j'}_{m' k'}(\\alpha,\\beta,\\gamma) =\n \\sum_{J=|j-j'|}^{j+j'} \\langle j m j' m' | J \\left(m + m'\\right) \\rangle\n \\langle j k j' k' | J \\left(k + k'\\right) \\rangle\n D^J_{\\left(m + m'\\right) \\left(k + k'\\right)}(\\alpha,\\beta,\\gamma)\n"
},
{
"math_id": 62,
"text": "\\langle j_1 m_1 j_2 m_2 | j_3 m_3 \\rangle"
},
{
"math_id": 63,
"text": "l"
},
{
"math_id": 64,
"text": "\nD^{\\ell}_{m 0}(\\alpha,\\beta,\\gamma) = \\sqrt{\\frac{4\\pi}{2\\ell+1}} Y_{\\ell}^{m*} (\\beta, \\alpha ) = \\sqrt{\\frac{(\\ell-m)!}{(\\ell+m)!}} \\, P_\\ell^m ( \\cos{\\beta} ) \\, e^{-i m \\alpha }.\n"
},
{
"math_id": 65,
"text": "\nd^{\\ell}_{m 0}(\\beta) = \\sqrt{\\frac{(\\ell-m)!}{(\\ell+m)!}} \\, P_\\ell^m ( \\cos{\\beta} ).\n"
},
{
"math_id": 66,
"text": " \\langle \\theta, \\phi| \\ell m'\\rangle"
},
{
"math_id": 67,
"text": "\n \\sum^\\ell_{m'=-\\ell} Y_{\\ell}^ {m'} (\\theta, \\phi ) ~ D^{\\ell}_{m' ~m }(\\alpha,\\beta,\\gamma).\n"
},
{
"math_id": 68,
"text": "\n D^{\\ell}_{0,0}(\\alpha,\\beta,\\gamma) = d^{\\ell}_{0,0}(\\beta) = P_{\\ell}(\\cos\\beta).\n"
},
{
"math_id": 69,
"text": "\\alpha"
},
{
"math_id": 70,
"text": "\\beta"
},
{
"math_id": 71,
"text": "\n\\left( Y_{\\ell}^m \\right) ^* = (-1)^m Y_{\\ell}^{-m}.\n"
},
{
"math_id": 72,
"text": "\nD^{\\ell}_{m s}(\\alpha,\\beta,-\\gamma) =(-1)^s \\sqrt\\frac{4\\pi}{2{\\ell}+1} {}_sY_{\\ell}^m(\\beta,\\alpha) e^{is\\gamma}.\n"
},
{
"math_id": 73,
"text": "\n F_{mm'}(\\beta) = | D^j_{mm'}(\\alpha,\\beta,\\gamma) |^2,\n"
},
{
"math_id": 74,
"text": "j"
},
{
"math_id": 75,
"text": "m"
},
{
"math_id": 76,
"text": "m'"
},
{
"math_id": 77,
"text": "F_{mm'}"
},
{
"math_id": 78,
"text": "F"
},
{
"math_id": 79,
"text": "\n \\sum_{m' = -j}^j F_{mm'}(\\beta) f^j_{\\ell}(m') = P_{\\ell}(\\cos\\beta) f^j_{\\ell}(m) \\qquad (\\ell = 0, 1, \\ldots, 2j).\n"
},
{
"math_id": 80,
"text": "f^j_{\\ell}(m)"
},
{
"math_id": 81,
"text": "P_{\\ell}(\\cos\\beta)"
},
{
"math_id": 82,
"text": "\\ell \\gg m, m^\\prime"
},
{
"math_id": 83,
"text": "D^\\ell_{mm'}(\\alpha,\\beta,\\gamma) \\approx e^{-im\\alpha-im'\\gamma}J_{m-m'}(\\ell\\beta)"
},
{
"math_id": 84,
"text": "J_{m-m'}(\\ell\\beta)"
},
{
"math_id": 85,
"text": "\\ell\\beta"
},
{
"math_id": 86,
"text": "d^j_{m'm}(\\theta) "
},
{
"math_id": 87,
"text": "\\begin{align}\nd_{\\frac{1}{2},\\frac{1}{2}}^{\\frac{1}{2}} &= \\cos \\frac{\\theta}{2} \\\\[6pt]\nd_{\\frac{1}{2},-\\frac{1}{2}}^{\\frac{1}{2}} &= -\\sin \\frac{\\theta}{2}\n\\end{align}"
},
{
"math_id": 88,
"text": "\\begin{align}\nd_{1,1}^{1} &= \\frac{1}{2} (1+\\cos \\theta) \\\\[6pt]\nd_{1,0}^{1} &= -\\frac{1}{\\sqrt{2}} \\sin \\theta \\\\[6pt]\nd_{1,-1}^{1} &= \\frac{1}{2} (1-\\cos \\theta) \\\\[6pt]\nd_{0,0}^{1} &= \\cos \\theta\n\\end{align}"
},
{
"math_id": 89,
"text": "\\begin{align}\nd_{\\frac{3}{2}, \\frac{3}{2}}^{\\frac{3}{2}} &= \\frac{1}{2} (1+\\cos \\theta) \\cos \\frac{\\theta}{2} \\\\[6pt]\nd_{\\frac{3}{2}, \\frac{1}{2}}^{\\frac{3}{2}} &= -\\frac{\\sqrt{3}}{2} (1+\\cos \\theta) \\sin \\frac{\\theta}{2} \\\\[6pt]\nd_{\\frac{3}{2},-\\frac{1}{2}}^{\\frac{3}{2}} &= \\frac{\\sqrt{3}}{2} (1-\\cos \\theta) \\cos \\frac{\\theta}{2} \\\\[6pt]\nd_{\\frac{3}{2},-\\frac{3}{2}}^{\\frac{3}{2}} &= -\\frac{1}{2} (1-\\cos \\theta) \\sin \\frac{\\theta}{2} \\\\[6pt]\nd_{\\frac{1}{2}, \\frac{1}{2}}^{\\frac{3}{2}} &= \\frac{1}{2} (3\\cos \\theta - 1) \\cos \\frac{\\theta}{2} \\\\[6pt]\nd_{\\frac{1}{2},-\\frac{1}{2}}^{\\frac{3}{2}} &= -\\frac{1}{2} (3\\cos \\theta + 1) \\sin \\frac{\\theta}{2}\n\\end{align}"
},
{
"math_id": 90,
"text": "\\begin{align}\nd_{2,2}^{2} &= \\frac{1}{4}\\left(1 +\\cos \\theta\\right)^2 \\\\[6pt]\nd_{2,1}^{2} &= -\\frac{1}{2}\\sin \\theta \\left(1 + \\cos \\theta\\right) \\\\[6pt]\nd_{2,0}^{2} &= \\sqrt{\\frac{3}{8}}\\sin^2 \\theta \\\\[6pt]\nd_{2,-1}^{2} &= -\\frac{1}{2}\\sin \\theta \\left(1 - \\cos \\theta\\right) \\\\[6pt]\nd_{2,-2}^{2} &= \\frac{1}{4}\\left(1 -\\cos \\theta\\right)^2 \\\\[6pt]\nd_{1,1}^{2} &= \\frac{1}{2}\\left(2\\cos^2\\theta + \\cos \\theta-1 \\right) \\\\[6pt]\nd_{1,0}^{2} &= -\\sqrt{\\frac{3}{8}} \\sin 2 \\theta \\\\[6pt]\nd_{1,-1}^{2} &= \\frac{1}{2}\\left(- 2\\cos^2\\theta + \\cos \\theta +1 \\right) \\\\[6pt]\nd_{0,0}^{2} &= \\frac{1}{2} \\left(3 \\cos^2 \\theta - 1\\right)\n\\end{align}"
},
{
"math_id": 91,
"text": "d_{m', m}^j = (-1)^{m-m'}d_{m, m'}^j = d_{-m,-m'}^j."
},
{
"math_id": 92,
"text": "\\begin{align}\nd_{m',m}^{j}(\\pi) &= (-1)^{j-m} \\delta_{m',-m} \\\\[6pt]\nd_{m',m}^{j}(\\pi-\\beta) &= (-1)^{j+m'} d_{m',-m}^{j}(\\beta)\\\\[6pt]\nd_{m',m}^{j}(\\pi+\\beta) &= (-1)^{j-m} d_{m',-m}^{j}(\\beta)\\\\[6pt]\nd_{m',m}^{j}(2\\pi+\\beta) &= (-1)^{2j} d_{m',m}^{j}(\\beta)\\\\[6pt]\nd_{m',m}^{j}(-\\beta) &= d_{m,m'}^{j}(\\beta) = (-1)^{m'-m} d_{m',m}^{j}(\\beta)\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=7739570 |
773959 | Diatomaceous earth | Soft, siliceous sedimentary rock
Diatomaceous earth ( ), diatomite ( ), celite or kieselgur/kieselguhr is a naturally occurring, soft, siliceous sedimentary rock that can be crumbled into a fine white to off-white powder. It has a particle size ranging from more than 3 mm to less than 1 μm, but typically 10 to 200 μm. Depending on the granularity, this powder can have an abrasive feel, similar to pumice powder, and has a low density as a result of its high porosity. The typical chemical composition of oven-dried diatomaceous earth is 80–90% silica, with 2–4% alumina (attributed mostly to clay minerals), and 0.5–2% iron oxide.
Diatomaceous earth consists of the fossilized remains of diatoms, a type of hard-shelled microalgae. It is used as a filtration aid, mild abrasive in products including metal polishes and toothpaste, mechanical insecticide, absorbent for liquids, matting agent for coatings, reinforcing filler in plastics and rubber, anti-block in plastic films, porous support for chemical catalysts, cat litter, activator in coagulation studies, a stabilizing component of dynamite, a thermal insulator, and a soil for potted plants and trees as in the art of bonsai. It is also used in gas chromatography packed columns made with glass or metal as stationary phase.
Composition.
Each deposit of diatomaceous earth is different, with varying blends of pure diatomaceous earth combined with other natural clays and minerals. The diatoms in each deposit contain different amounts of silica, depending on the sedimentation conditions, on the presence of other sediments (clay, sand, volcanic ashes), and on the age of the deposit (diagenesis, silica (SiO2) dissolution/precipitation, diatoms tests ageing). The species of diatom may also differ among deposits. The species of diatom is dependent upon the age and paleoecology of the deposit. In turn, the shape of a diatom is determined by its species.
Many deposits throughout British Columbia, such as Red Lake Earth, are from the Miocene epoch and contain a species of diatom known as "Melosira granulata". These diatoms have a small globular shape. A deposit containing diatoms from this epoch can provide certain benefits over others. For example, diatoms from the Eocene epoch are not as effective in their ability to absorb fluids because as older diatoms recrystallize, their small pores become filled with silica.
Formation.
Diatomite forms by the accumulation of the amorphous silica (opal, ) remains of dead diatoms (microscopic single-celled algae) in lake sediment or marine sediments. The fossil remains consist of a pair of symmetrical shells or frustules. Marine diatomites are found in association with a wide variety of other rock types but lacustrine diatomites are almost always associated with volcanic rock. Diatomaceous chert consists of diatomite that has been cemented with silica.
Diatoms are able to extract silica from water that is less than 1% saturated in amorphous silica (saturation index (SI): -2). Their frustules remain undissolved because they are surrounded by an organic matrix. Clay minerals may also precipitate on the frustules and protect them from dissolution in sea water. When the diatom dies, the frustule is stripped of its organic layer and exposed to sea water. As a result, only 1% to 10% of frustules survive long enough to be buried under sediments and some of this is dissolved within the sediments. Only an estimated 0.05% to 0.15% of the original amount of silica produced by diatoms is preserved in the sedimentary record.
Discovery.
In 1836 or 1837, German peasant Peter Kasten discovered diatomaceous earth (German: "Kieselgur") when sinking a well on the northern slopes of the Haußelberg hill, on Lüneburg Heath in North Germany.
The extraction site on Lüneburg Heath, worked from 1863 to 1994, was at Neuohe, while the storage sites were:
The deposits are up to thick and are all of freshwater diatomaceous earth.
Until World War I, almost the entire worldwide production of diatomaceous earth was from this region.
Other deposits.
In Poland diatomaceous earth deposits are found in Jawornik, and are composed mostly of diatomaceous skeletons (frustules).
In Germany, diatomaceous earth was also extracted at Altenschlirf on the Vogelsberg (Upper Hesse) and at Klieken (Saxony-Anhalt).
There is a layer of diatomaceous earth more than thick in the nature reserve of Soos in the Czech Republic.
Deposits on the Isle of Skye, off the west coast of Scotland, were mined until 1960.
In Colorado and in Clark County, Nevada, United States, there are deposits that are up to several hundred meters thick in places. Marine deposits have been worked in the Sisquoc Formation in Santa Barbara County, California near Lompoc and along the Southern California coast. This is the world's largest deposit of diatomite. Additional marine deposits have been worked in Maryland, Virginia, Algeria and the MoClay of Denmark. Freshwater lake deposits occur in Nevada, Oregon, Washington and California. Lake deposits also occur in interglacial lakes in the eastern United States, in Canada and in Europe in Germany, France, Denmark and the Czech Republic. The worldwide association of diatomite deposits and volcanic deposits suggests that the availability of silica from volcanic ash may be necessary for thick diatomite deposits.
Diatomaceous earth is sometimes found on desert surfaces. Research has shown that the erosion of diatomaceous earth in such areas (such as the Bodélé Depression in the Sahara) is one of the most important sources of climate-affecting dust in the atmosphere.
The siliceous frustules of diatoms accumulate in fresh and brackish wetlands and lakes. Some peats and mucks contain a sufficient abundance of frustules such that they can be mined. Most of Florida's diatomaceous earths have been found in the muck of wetlands or lakes. The American Diatomite Corporation, from 1935 to 1946, refined a maximum of 145 tons per year from their processing plant near Clermont, Florida. Muck from several locations in Lake County, Florida was dried and burned (calcined) to produce the diatomaceous earth. It was formerly extracted from Lake Mývatn in Iceland.
The commercial deposits of diatomite are restricted to Tertiary or Quaternary periods. Older deposits from as early as the Cretaceous Period are known, but are of low quality.
Diatomite deposits rich in fossils have been located in New Zealand, but mining of the Foulden Maar deposits on an industrial scale, for conversion to animal feed, has drawn strong opposition.
Commercial form.
Diatomaceous earth is available commercially in several formats:
Usage.
Explosives.
In 1866, Alfred Nobel discovered that nitroglycerin could be made much more stable if absorbed in diatomite ("kieselguhr" in German). This allowed much safer transport and handling than pure nitroglycerin in its liquid form. Nobel patented this mixture as dynamite in 1867; the mixture is also called guhr dynamite in reference to the kieselguhr.
Filtration.
The Celle engineer Wilhelm Berkefeld recognized the ability of diatomaceous earth to filter, and developed tubular filters (known as filter candles) fired from diatomaceous earth. During the cholera epidemic in Hamburg in 1892, these Berkefeld filters were used successfully.
One form of diatomaceous earth is used as a filter medium, especially for swimming pools. It has a high porosity because it is composed of microscopically small, hollow particles. Diatomaceous earth (sometimes referred to by trademarked brand names such as Celite) is used in chemistry as a filtration aid, to increase flow rate, and filter very fine particles that would otherwise pass through or clog filter paper. It is also used to filter water, particularly in the drinking water treatment process and in fish tanks, and other liquids, such as beer and wine. It can also filter syrups, sugar, and honey without removing or altering their color, taste, or nutritional properties.
Abrasive.
The oldest use of diatomite is as a very mild abrasive and has been used in toothpaste, metal polishes, and some facial scrubs.
Pest control.
Diatomite is of value as an insecticide because of its abrasive and physico-sorptive properties. The fine powder adsorbs lipids from the waxy outer layer of the exoskeletons of many species of insects; this layer acts as a barrier that resists the loss of water vapour from the insect's body. Damaging the layer increases the evaporation of water from their bodies, so that they dehydrate, often fatally.
This also works against gastropods and is commonly employed in gardening to defeat slugs. However, since slugs inhabit humid environments, efficacy is very low. Diatomaceous earth is sometimes mixed with an attractant or other additives to increase its effectiveness.
The shape of the diatoms contained in a deposit has not been proven to affect their functionality when it comes to the adsorption of lipids; however, certain applications, such as that for slugs and snails, do work best when a particularly shaped diatom is used, suggesting that lipid adsorption is not the only factor involved. For example, in the case of slugs and snails, large, spiny diatoms work best to lacerate the epithelium of the mollusk. Diatom shells will work to some degree on the vast majority of animals that undergo ecdysis in shedding cuticle, such as arthropods or nematodes. It also may have other effects on lophotrochozoans, such as mollusks or annelids.
Medical-grade diatomite has been studied for its efficacy as a deworming agent in cattle; in both studies cited the groups being treated with diatomaceous earth did not fare any better than control groups. It is commonly used in lieu of boric acid and can be used to help control and possibly eliminate bed bugs, house dust mite, cockroach, ant, and flea infestations.
Diatomaceous earth is widely applied for insect control in grain storage. It is used to control cannibalistic behaviors in confused flour beetles, which infest flour storages.
In order to be effective as an insecticide, diatomaceous earth must be uncalcinated (i.e., it must not be heat-treated prior to application) and have a mean particle size below about 12 μm (i.e., food grade—see below).
Although considered to be relatively low-risk, pesticides containing diatomaceous earth are not exempt from regulation in the United States under the Federal Insecticide, Fungicide, and Rodenticide Act and must be registered with the Environmental Protection Agency.
Thermal.
Its thermal properties enable it to be used as the barrier material in some fire-resistant safes. It is also used in evacuated powder insulation for use with cryogenics. Diatomaceous earth powder is inserted into the vacuum space to aid in the effectiveness of vacuum insulation. It was used in the classical AGA cookers as a thermal heat barrier.
Catalyst support.
Diatomaceous earth also finds some use as a support for catalysts, generally serving to maximize a catalyst's surface area and activity. For example, nickel can be supported on the material—the combination is called Ni–Kieselguhr—to improve its activity as a hydrogenation catalyst.
Agriculture.
Natural freshwater diatomaceous earth is used in agriculture for grain storage as an anticaking agent, as well as an insecticide. It is approved by the Food and Drug Administration as a feed additive to prevent caking.
Some believe it may be used as a natural anthelmintic (dewormer), although studies have not shown it to be effective. Some farmers add it to their livestock and poultry feed to prevent the caking of feed. "Food-Grade Diatomaceous Earth" is widely available in agricultural feed supply stores.
Freshwater diatomite can be used as a growing medium in hydroponic gardens.
It is also used as a growing medium in potted plants, particularly as bonsai soil. Bonsai enthusiasts use it as a soil additive, or pot a bonsai tree in 100% diatomaceous earth. In vegetable gardening it is sometimes used as a soil conditioner, because like perlite, vermiculite, and expanded clay, it retains water and nutrients, while draining fast and freely, allowing high oxygen circulation within the growing medium.
Marker in livestock nutrition experiments.
Natural dried, not calcinated diatomaceous earth is regularly used in livestock nutrition research as a source of acid-insoluble ash (AIA), which is used as an indigestible marker. By measuring the content of AIA relative to nutrients in test diets and feces or digesta sampled from the terminal ileum (last third of the small intestine) the percentage of that nutrient digested can be calculated using the following equation:
<templatestyles src="Block indent/styles.css"/>formula_0
where:
<templatestyles src="Block indent/styles.css"/>N is the nutrient digestibility (%)
<templatestyles src="Block indent/styles.css"/>Nf is the amount of nutrients in the feces (%)
<templatestyles src="Block indent/styles.css"/>NF is the amount of nutrients in the feed (%)
<templatestyles src="Block indent/styles.css"/>Af is the amount of AIA in the feces (%)
<templatestyles src="Block indent/styles.css"/>AF is the amount of AIA in the feed (%)
Natural freshwater diatomaceous earth is preferred by many researchers over chromic oxide, which has been widely used for the same purpose, the latter being a known carcinogen and, therefore, a potential hazard to research personnel.
Construction.
Spent diatomaceous earth from the brewing process can be added to ceramic mass for the production of red bricks with higher open porosity.
Diatomaceous earth is considered a very prominent inorganic non-metallic material that can be used for the production of various ceramics, including the production of porous ceramics by low-temperature hydrothermal technology.
Home goods.
Diatomaceous earth is used in some home products where dryness or the ability to wick away moisture is critical. In particular, there are bath mats made of DE which absorb water from the bather, allowing it to spread through the material and rapidly evaporate. There are also spoons made of DE for scooping sugar and other hygroscopic kitchen ingredients.
Microbial degradation.
Certain species of bacteria in oceans and lakes can accelerate the rate of dissolution of silica in dead and living diatoms by using hydrolytic enzymes to break down the organic algal material.
Climatologic importance.
The Earth's climate is affected by dust in the atmosphere, so locating major sources of atmospheric dust is important for climatology. Recent research indicates that surface deposits of diatomaceous earth play an important role. Research shows that significant dust comes from the Bodélé Depression in Chad, where storms push diatomite gravel over dunes, generating dust by abrasion.
Safety considerations.
Inhalation of "crystalline" silica is harmful to the lungs, causing silicosis. "Amorphous" silica is considered to have low toxicity, but prolonged inhalation causes changes to the lungs. Diatomaceous earth is mostly amorphous silica but contains some crystalline silica, especially in the saltwater forms. In a 1978 study of workers, those exposed to natural diatomaceous earth for over five years had no significant lung changes while 40% of those exposed to the calcined form had developed pneumoconiosis. Today's common diatomaceous earth formulations are safer to use, as they are predominantly made up of amorphous silica and contain little or no crystalline silica.
The crystalline silica content of diatomaceous earth is regulated in the United States by the Occupational Safety and Health Administration (OSHA) and there are guidelines from the National Institute for Occupational Safety and Health that set maximum amounts allowable in the product (1%) and in the air near the breathing zone of workers, with a recommended exposure limit at 6 mg/m3 over an 8-hour workday. OSHA has set a permissible exposure limit for diatomaceous earth as 20 mppcf (80 mg/m3/%SiO2). At levels of 3,000 mg/m3, diatomaceous earth is immediately dangerous to life and health.
In the 1930s, long-term occupational exposure among workers in the cristobalite diatomaceous earth industry who were exposed to high levels of airborne crystalline silica over decades were found to have an increased risk of silicosis.
Diatomite produced for pool filters is treated with high heat (calcination) and a fluxing agent (soda ash), causing the formerly harmless amorphous silicon dioxide to assume its crystalline form.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N = \\left ( 1 - \\frac{N_f}{N_F} \\times \\frac{A_F}{A_f} \\right ) \\times 100 "
}
]
| https://en.wikipedia.org/wiki?curid=773959 |
7739701 | Semiprime ring | Generalizations of prime ideals and prime rings
In ring theory, a branch of mathematics, semiprime ideals and semiprime rings are generalizations of prime ideals and prime rings. In commutative algebra, semiprime ideals are also called radical ideals and semiprime rings are the same as reduced rings.
For example, in the ring of integers, the semiprime ideals are the zero ideal, along with those ideals of the form formula_0 where "n" is a square-free integer. So, formula_1 is a semiprime ideal of the integers (because 30 = 2 × 3 × 5, with no repeated prime factors), but formula_2 is not (because 12 = 22 × 3, with a repeated prime factor).
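To illustrate the integer case, the following sketch (helper names chosen here for illustration) tests whether "n"Z is a semiprime ideal by checking that "n" equals the product of its distinct prime factors, i.e. that "n" is square-free:
```python
def distinct_prime_factors(n):
    """Distinct prime factors of n > 1 by trial division."""
    factors, d = [], 2
    while d * d <= n:
        if n % d == 0:
            factors.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def is_semiprime_ideal(n):
    """True iff the ideal nZ (for n > 1) is semiprime, i.e. n is square-free."""
    prod = 1
    for p in distinct_prime_factors(n):
        prod *= p
    return prod == n

print(is_semiprime_ideal(30), is_semiprime_ideal(12))  # True False
```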
The class of semiprime rings includes semiprimitive rings, prime rings and reduced rings.
Most definitions and assertions in this article appear in and .
Definitions.
For a commutative ring "R", a proper ideal "A" is a semiprime ideal if "A" satisfies either of the following equivalent conditions:
The latter condition that the complement is "closed under powers" is analogous to the fact that complements of prime ideals are closed under multiplication.
As with prime ideals, this is extended to noncommutative rings "ideal-wise". The following conditions are equivalent definitions for a semiprime ideal "A" in a ring "R":
Here again, there is a noncommutative analogue of prime ideals as complements of m-systems. A nonempty subset "S" of a ring "R" is called an n-system if for any "s" in "S", there exists an "r" in "R" such that "srs" is in "S". With this notion, an additional equivalent point may be added to the above list:
The ring "R" is called a semiprime ring if the zero ideal is a semiprime ideal. In the commutative case, this is equivalent to "R" being a reduced ring, since "R" has no nonzero nilpotent elements. In the noncommutative case, the ring merely has no nonzero nilpotent right ideals. So while a reduced ring is always semiprime, the converse is not true.
General properties of semiprime ideals.
To begin with, it is clear that prime ideals are semiprime, and that for commutative rings, a semiprime primary ideal is prime.
While the intersection of prime ideals is not usually prime, it "is" a semiprime ideal. Shortly it will be shown that the converse is also true, that every semiprime ideal is the intersection of a family of prime ideals.
For any ideal "B" in a ring "R", we can form the following sets:
formula_3
The set formula_4 is the definition of the radical of "B" and is clearly a semiprime ideal containing "B", and in fact is the smallest semiprime ideal containing "B". The inclusion above is sometimes proper in the general case, but for commutative rings it becomes an equality.
With this definition, an ideal "A" is semiprime if and only if formula_5. At this point, it is also apparent that every semiprime ideal is in fact the intersection of a family of prime ideals. Moreover, this shows that the intersection of any two semiprime ideals is again semiprime.
By definition "R" is semiprime if and only if formula_6, that is, the intersection of all prime ideals is zero. This ideal formula_7 is also denoted by formula_8 and also called Baer's lower nilradical or the Baer-Mccoy radical or the prime radical of "R".
Semiprime Goldie rings.
A right Goldie ring is a ring that has finite uniform dimension (also called "finite rank") as a right module over itself, and satisfies the ascending chain condition on right annihilators of its subsets. Goldie's theorem states that the "semiprime" right Goldie rings are precisely those that have a semisimple Artinian right classical ring of quotients. The Artin–Wedderburn theorem then completely determines the structure of this ring of quotients.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n\\mathbb Z"
},
{
"math_id": 1,
"text": "30\\mathbb Z"
},
{
"math_id": 2,
"text": "12\\mathbb Z\\,"
},
{
"math_id": 3,
"text": "\\sqrt{B}:=\\bigcap\\{ P\\subseteq R \\mid B \\subseteq P, P \\mbox{ a prime ideal} \\}\\subseteq\\{x\\in R\\mid x^n\\in B \\mbox{ for some }n\\in\\mathbb{N}^+ \\} \\,"
},
{
"math_id": 4,
"text": "\\sqrt{B}"
},
{
"math_id": 5,
"text": "\\sqrt{A}=A"
},
{
"math_id": 6,
"text": "\\sqrt{\\{0\\}}=\\{0\\}"
},
{
"math_id": 7,
"text": "\\sqrt{\\{0\\}}"
},
{
"math_id": 8,
"text": "Nil_*(R)\\,"
}
]
| https://en.wikipedia.org/wiki?curid=7739701 |
7740082 | Parry–Sullivan invariant | In mathematics, the Parry–Sullivan invariant (or Parry–Sullivan number) is a numerical quantity of interest in the study of incidence matrices in graph theory, and of certain one-dimensional dynamical systems. It provides a partial classification of non-trivial irreducible incidence matrices.
It is named after the English mathematician Bill Parry and the American mathematician Dennis Sullivan, who introduced the invariant in a joint paper published in the journal "Topology" in 1975.
Definition.
Let "A" be an "n" × "n" incidence matrix. Then the Parry–Sullivan number of "A" is defined to be
formula_0
where "I" denotes the "n" × "n" identity matrix.
Properties.
It can be shown that, for nontrivial irreducible incidence matrices, flow equivalence is completely determined by the Parry–Sullivan number and the Bowen–Franks group.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{PS} (A) = \\det (I - A), "
}
]
| https://en.wikipedia.org/wiki?curid=7740082 |
77402391 | Geometric discrepancy | Geometric discrepancy theory is a sub-field of discrepancy theory, that deals with balancing geometric sets, such as intervals or rectangles. The general research question in this field is: given a set of "points" in a geometric space, and a set of "objects" in the same space, can we color each point in one of two different colors (e.g. black and white), such that each object contains roughly the same number of points of each color?
Formally, the "discrepancy" of an object is defined as the difference between the number of white points and the number of black points in that object; the objective is to color the points such that the maximum discrepancy of an object is as small as possible.
Intervals.
In the simplest geometric discrepancy setting, the set of objects is the set of all sub-intervals of the real interval [0,1]. In this setting, it is possible to attain discrepancy 1: simply color the points alternately black - white - black - white - etc. Then, the discrepancy of every interval is either 0 or 1.
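A small sketch of this argument is given below (function names and the random test set are illustrative): points are colored +1/-1 alternately in sorted order, and the discrepancy of every subinterval is checked to be at most 1.
```python
import random

def alternate_coloring(points):
    """Assign colors +1/-1 alternately to the points in sorted order."""
    return {p: (1 if i % 2 == 0 else -1) for i, p in enumerate(sorted(points))}

def max_interval_discrepancy(points, coloring):
    """Maximum |#white - #black| over all subintervals; it suffices to scan
    contiguous runs of the sorted points."""
    pts = sorted(points)
    best = 0
    for i in range(len(pts)):
        running = 0
        for j in range(i, len(pts)):
            running += coloring[pts[j]]
            best = max(best, abs(running))
    return best

points = [random.random() for _ in range(200)]
print(max_interval_discrepancy(points, alternate_coloring(points)))  # 1
```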
The problem becomes more challenging when the points are not available in advance, but arrive one by one, and each point should be colored immediately when it arrives. This setting is called the "Online Interval Discrepancy". Jiang, Kulkarni and Singla prove that (Sec. 3.2):
Their proof uses a reduction to the problem of Online Tree Balancing, which is a problem of discrepancy in which the set of objects is the set of sub-trees of a complete "m"-ary tree with height "h". For this problem, they prove that, if formula_3 for a sufficiently large constant "C", and "m" ≥ 100, then there is an online algorithm that attains discrepancy formula_4 (Sec. 2).
Rectangles and boxes.
Tusnady asked what is the discrepancy when the set of objects is the set of axes-parallel rectangles contained in the unit square.
When the set of objects is the set of all rectangles (possibly rotated), then:
Matousek studied the "d"-dimensional extension of Tusnady's problem. Improving previous results by Roth, Schmidt, Beck, Bohus, and Srinivasan, he proved an upper bound of formula_5 with a simple proof.
Stripes.
When the set of objects is the set of "stripes"—rectangles of the form [a,b]x[0,1] and [0,1]x[a,b], the setting is equivalent to the problem of "two permutations": given two permutations on a set of "n" elements, we should color each element either black or white, such that the discrepancy in each interval of each permutation is minimized (the two permutations are the order of the x coordinates and the order of the y coordinates of the points).
Jiang, Kulkarni and Singla study the online setting with stochastic point arrival, and prove that:
Convex polytopes.
Matousek and Nikolov studied a more general setting, where the set of objects is induced by dilations and translations of a fixed convex polytope. They proved upper and lower bounds on the discrepancy. The results are analogous to the results for rectangles and boxes.
Half-spaces.
When the set of objects is the set of half-spaces in the Euclidean "d"-dimensional space:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tilde{O}(\\sqrt{n})"
},
{
"math_id": 1,
"text": "\\Omega(\\sqrt{n})"
},
{
"math_id": 2,
"text": "O(n^{c/\\log{\\log{n}}})"
},
{
"math_id": 3,
"text": "h \\leq {\\log\\log n}/C"
},
{
"math_id": 4,
"text": "O(\\log^2 n)"
},
{
"math_id": 5,
"text": "O_d((\\log n)^{d+1/2} \\sqrt{\\log\\log n})"
},
{
"math_id": 6,
"text": "\\Omega(n^{1/2 - d/2})"
},
{
"math_id": 7,
"text": "C_d n^{1/2 - d/2}"
}
]
| https://en.wikipedia.org/wiki?curid=77402391 |
77402447 | Thermo-acoustic instability | Thermo-acoustic instability refers to an instabiltiy arising due to acoustics field and unsteady heat release process. This instability is very relevant in combustion instabilities in systems such as rocket engines, etc.
Rayleigh criterion.
A very simple mechanism of acoustic amplification was first identified by Lord Rayleigh in 1878. In simple terms, the Rayleigh criterion states that "amplification results if, on the average, heat addition occurs in phase with the pressure increases during the oscillation". That is, if formula_0 is the pressure perturbation (with respect to its mean value formula_1) and formula_2 is the rate of heat release per unit volume (with respect to its mean value formula_3), then the Rayleigh criterion says that acoustic amplification occurs if
formula_4
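As a simple illustration (not taken from the references), for harmonic perturbations formula_0 = P sin(ωt) and formula_2 = Q sin(ωt − φ) the time average above equals (PQ/2)cos φ, so amplification occurs only when the phase lag satisfies |φ| < π/2. The sketch below evaluates this average numerically; all parameter values are assumptions.
```python
import numpy as np

def rayleigh_index(P, Q, omega, phi, n=100000):
    """Time average of p'(t) * q'(t) over one acoustic period for
    p' = P sin(omega t) and q' = Q sin(omega t - phi)."""
    t = np.linspace(0.0, 2.0 * np.pi / omega, n, endpoint=False)
    return np.mean(P * np.sin(omega * t) * Q * np.sin(omega * t - phi))

omega = 2.0 * np.pi * 100.0          # assumed 100 Hz oscillation
for phi in (0.0, np.pi / 3, np.pi / 2, 2 * np.pi / 3):
    print(round(phi, 3), rayleigh_index(1.0, 1.0, omega, phi))
# positive (amplification) for phi < pi/2, ~0 at pi/2, negative (damping) beyond
```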
The Rayleigh criterion is used to explain many phenomena, such as singing flames in tubes and sound amplification in the Rijke tube. In complex systems, the Rayleigh criterion may not be strictly valid, as there exist many damping factors, such as viscous, wall, nozzle, relaxation, homogeneous and particle damping, as well as mean-flow effects, that are not accounted for in Rayleigh's analysis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p'"
},
{
"math_id": 1,
"text": "\\langle p\\rangle"
},
{
"math_id": 2,
"text": "\\dot q'"
},
{
"math_id": 3,
"text": "\\langle \\dot q\\rangle"
},
{
"math_id": 4,
"text": "\\langle p' \\dot q' \\rangle >0."
}
]
| https://en.wikipedia.org/wiki?curid=77402447 |
77402909 | Matalon–Matkowsky–Clavin–Joulin theory | Matalon–Matkowsky–Clavin–Joulin theory refers to a theoretical hydrodynamic model of a premixed flame with a large-amplitude flame wrinkling, developed independently by Moshe Matalon & Bernard J. Matkowsky and Paul Clavin & Guy Joulin. The theory, for the first time, calculated the burning rate of the curved flame that differs from the burning rate of the planar flame due to flame stretch, associated with the flame curvature and the strain imposed on the flame by the flow field.
Burning rate formula.
According to Matalon–Matkowsky–Clavin–Joulin theory, if formula_0 and formula_1 are the laminar burning speed and thickness of a planar flame (and formula_2 is the corresponding flame residence time, with formula_3 being the thermal diffusivity in the unburnt gas), then the burning speed formula_4 of the curved flame with respect to the unburnt gas is given by
formula_5
where formula_6 is the unit normal to the flame surface (pointing towards the burnt gas side), formula_7 is the flow velocity field evaluated at the flame surface, and formula_8 and formula_9 are the two Markstein numbers, associated respectively with the curvature term formula_10 and with the term formula_11 corresponding to the flow strain imposed on the flame.
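As a numerical illustration (not from the cited works), the formula above can be evaluated once the curvature formula_10, the strain formula_11 and the two Markstein numbers are prescribed; all numbers below are assumed, and the signs of the curvature and strain inputs depend on the orientation convention for formula_6.
```python
def stretched_burning_speed(S_L, delta_L, D_Tu, M_c, M_s, curvature, strain):
    """Burning speed S_T of a stretched flame from the linear relation above.
    curvature = div(n), strain = n . grad(v) . n (both prescribed here)."""
    tau_L = D_Tu / S_L**2                     # flame residence time
    return S_L * (1.0 + M_c * delta_L * curvature + M_s * tau_L * strain)

# illustrative parameter values (SI units)
S_L, delta_L, D_Tu = 0.4, 5.0e-4, 2.0e-5      # m/s, m, m^2/s
M_c, M_s = 4.0, 4.0                           # assumed Markstein numbers
print(stretched_burning_speed(S_L, delta_L, D_Tu, M_c, M_s,
                              curvature=-200.0, strain=100.0))
```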
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S_L"
},
{
"math_id": 1,
"text": "\\delta_L"
},
{
"math_id": 2,
"text": "\\tau_L=D_{T,u}/S_L^2"
},
{
"math_id": 3,
"text": "D_{T,u}"
},
{
"math_id": 4,
"text": "S_T"
},
{
"math_id": 5,
"text": "\\frac{S_T}{S_L} = 1 + \\mathcal{M}_c \\delta_L \\nabla \\cdot \\mathbf{n} + \\mathcal{M}_s \\tau_L \\mathbf{n}\\cdot \\nabla\\mathbf v\\cdot \\mathbf{n}"
},
{
"math_id": 6,
"text": "\\mathbf{n}"
},
{
"math_id": 7,
"text": "\\mathbf{v}"
},
{
"math_id": 8,
"text": "\\mathcal{M}_c"
},
{
"math_id": 9,
"text": "\\mathcal{M}_s"
},
{
"math_id": 10,
"text": "\\nabla \\cdot \\mathbf{n}"
},
{
"math_id": 11,
"text": "\\mathbf{n}\\cdot \\nabla\\mathbf v\\cdot \\mathbf{n}"
}
]
| https://en.wikipedia.org/wiki?curid=77402909 |
7740302 | Automatic sequence | Infinite sequence of terms characterized by a finite automaton
In mathematics and theoretical computer science, an automatic sequence (also called a "k"-automatic sequence or a "k"-recognizable sequence when one wants to indicate that the base of the numerals used is "k") is an infinite sequence of terms characterized by a finite automaton. The "n"-th term of an automatic sequence "a"("n") is a mapping of the final state reached in a finite automaton accepting the digits of the number "n" in some fixed base "k".
An automatic set is a set of non-negative integers "S" for which the sequence of values of its characteristic function χ"S" is an automatic sequence; that is, "S" is "k"-automatic if χ"S"("n") is "k"-automatic, where χ"S"("n") = 1 if "n" formula_0 "S" and 0 otherwise.
Definition.
Automatic sequences may be defined in a number of ways, all of which are equivalent. Four common definitions are as follows.
Automata-theoretic.
Let "k" be a positive integer, and let "D" = ("Q", Σ"k", δ, "q0", Δ, τ) be a deterministic finite automaton "with output", where
Extend the transition function δ from acting on single digits to acting on strings of digits by defining the action of δ on a string "s" consisting of digits "s"1"s"2..."s""t" as:
δ("q","s") = δ(δ("q", "s"1"s"2..."s""t"-1), "s""t").
Define a function "a" from the set of positive integers to the output alphabet Δ as follows:
"a"("n") = τ(δ("q0","s"("n"))),
where "s"("n") is "n" written in base "k". Then the sequence "a" = "a"(1)"a"(2)"a"(3)... is a "k"-automatic sequence.
An automaton reading the base "k" digits of "s"("n") starting with the most significant digit is said to be "direct reading", while an automaton starting with the least significant digit is "reverse reading". The above definition holds whether "s"("n") is direct or reverse reading.
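A direct transcription of this definition into code is straightforward; the sketch below (function names are illustrative) evaluates a "k"-automatic sequence from a transition table δ and output map τ with direct reading of the digits. The example machine is the two-state automaton tracking the parity of the number of ones in the binary expansion of "n", which generates the Thue–Morse sequence discussed below.
```python
def digits_base_k(n, k):
    """Most-significant-first base-k digits of n (returns [0] for n = 0)."""
    if n == 0:
        return [0]
    ds = []
    while n > 0:
        ds.append(n % k)
        n //= k
    return ds[::-1]

def dfao_value(n, k, delta, tau, q0=0):
    """n-th term of a k-automatic sequence given by a DFAO:
    delta[(state, digit)] is the transition map and tau[state] the output map."""
    q = q0
    for d in digits_base_k(n, k):
        q = delta[(q, d)]
    return tau[q]

# two-state machine: parity of the number of 1s in the binary expansion of n
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
tau = {0: 0, 1: 1}
print([dfao_value(n, 2, delta, tau) for n in range(16)])
# [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```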
Substitution.
Let formula_1 be a "k"-uniform morphism of a free monoid formula_2 and let formula_3 be a "coding" (that is, a formula_4-uniform morphism), as in the automata-theoretic case. If formula_5 is a fixed point of formula_1—that is, if formula_6—then formula_7 is a "k"-automatic sequence. Conversely, every "k"-automatic sequence is obtainable in this way. This result is due to Cobham, and it is referred to in the literature as "Cobham's little theorem".
"k"-kernel.
Let "k" ≥ 2. The "k-kernel" of the sequence "s"("n") is the set of subsequences
formula_8
In most cases, the "k"-kernel of a sequence is infinite. However, if the "k"-kernel is finite, then the sequence "s"("n") is "k"-automatic, and the converse is also true. This is due to Eilenberg.
It follows that a "k"-automatic sequence is necessarily a sequence on a finite alphabet.
Formal power series.
Let "u"("n") be a sequence over an alphabet Σ and suppose that there is an injective function β from Σ to the finite field F"q", where "q" = "p""n" for some prime "p". The associated formal power series is
formula_9
Then the sequence "u" is "q"-automatic if and only if this formal power series is algebraic over F"q"("X"). This result is due to Christol, and it is referred to in the literature as "Christol's theorem".
History.
Automatic sequences were introduced by Büchi in 1960, although his paper took a more logico-theoretic approach to the matter and did not use the terminology found in this article. The notion of automatic sequences was further studied by Cobham in 1972, who called these sequences "uniform tag sequences".
The term "automatic sequence" first appeared in a paper of Deshouillers.
Examples.
The following sequences are automatic:
Thue–Morse sequence.
The Thue–Morse sequence "t"("n") (OEIS: ) is the fixed point of the morphism 0 → 01, 1 → 10. Since the "n"-th term of the Thue–Morse sequence counts the number of ones modulo 2 in the base-2 representation of "n", it is generated by the two-state deterministic finite automaton with output pictured here, where being in state "q"0 indicates there are an even number of ones in the representation of "n" and being in state "q"1 indicates there are an odd number of ones.
Hence, the Thue–Morse sequence is 2-automatic.
Period-doubling sequence.
The "n"-th term of the period-doubling sequence "d"("n") (OEIS: ) is determined by the parity of the exponent of the highest power of 2 dividing "n". It is also the fixed point of the morphism 0 → 01, 1 → 00. Starting with the initial term "w" = 0 and iterating the 2-uniform morphism φ on "w" where φ(0) = 01 and φ(1) = 00, it is evident that the period-doubling sequence is the fixed-point of φ("w") and thus it is 2-automatic.
Rudin–Shapiro sequence.
The "n"-th term of the Rudin–Shapiro sequence "r"("n") (OEIS: ) is determined by the number of consecutive ones in the base-2 representation of "n". The 2-kernel of the Rudin–Shapiro sequence is
formula_10
Since the 2-kernel consists only of "r"("n"), "r"(2"n" + 1), "r"(4"n" + 3), and "r"(8"n" + 3), it is finite and thus the Rudin–Shapiro sequence is 2-automatic.
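Taking "r"("n") to be the parity of the number of (possibly overlapping) occurrences of "11" in the binary expansion of "n", the five 2-kernel relations above can be checked numerically, as in the following sketch:
```python
def r(n):
    """Parity of the number of overlapping occurrences of '11' in binary n."""
    b = format(n, 'b')
    return sum(1 for i in range(len(b) - 1) if b[i] == '1' == b[i + 1]) % 2

print(all(
    r(2 * n) == r(n)
    and r(4 * n + 1) == r(n)
    and r(8 * n + 7) == r(2 * n + 1)
    and r(16 * n + 3) == r(8 * n + 3)
    and r(16 * n + 11) == r(4 * n + 3)
    for n in range(10000)
))  # True
```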
Other sequences.
Both the Baum–Sweet sequence (OEIS: ) and the regular paperfolding sequence (OEIS: ) are automatic. In addition, the general paperfolding sequence with a periodic sequence of folds is also automatic.
Properties.
Automatic sequences exhibit a number of interesting properties. A non-exhaustive list of these properties is presented below.
Proving and disproving automaticity.
Given a candidate sequence formula_11, it is usually easier to disprove its automaticity than to prove it. By the "k"-kernel characterization of "k"-automatic sequences, it suffices to produce infinitely many distinct elements in the "k"-kernel formula_12 to show that formula_13 is not "k"-automatic. Heuristically, one might try to prove automaticity by checking the agreement of terms in the "k"-kernel, but this can occasionally lead to wrong guesses. For example, let
formula_14
be the Thue–Morse word. Let formula_13 be the word given by concatenating successive terms in the sequence of run-lengths of formula_15. Then formula_13 begins
formula_16.
It is known that formula_13 is the fixed point formula_17 of the morphism
formula_18
The word formula_13 is not 2-automatic, but certain elements of its 2-kernel agree for many terms. For example,
formula_19
but not for formula_20.
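The construction of formula_13 and its description as a fixed point can be checked on a prefix, as in the sketch below (helper names chosen for illustration): a prefix of the Thue–Morse word is generated, its run lengths are taken (dropping the last, possibly truncated, run), and the result is compared with the fixed point of "h".
```python
from itertools import groupby

def thue_morse(length):
    """Prefix of the Thue-Morse word via the parity of the binary digit sum."""
    return [bin(n).count('1') % 2 for n in range(length)]

t = thue_morse(1 << 16)
runs = [len(list(g)) for _, g in groupby(t)][:-1]   # drop last (truncated) run

h = {'1': '121', '2': '12221'}
s = '1'
while len(s) < len(runs):
    s = ''.join(h[c] for c in s)

print(all(int(s[i]) == runs[i] for i in range(len(runs))))  # True
```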
Given a sequence that is conjectured to be automatic, there are a few useful approaches to proving it actually is. One approach is to directly construct a deterministic automaton with output that gives the sequence. Let formula_21 be written in the alphabet formula_22, and let formula_23 denote the base-formula_24 expansion of formula_25. Then the sequence formula_11 is formula_24-automatic if and only if each of the fibres
formula_26
is a regular language. Checking regularity of the fibres can often be done using the pumping lemma for regular languages.
If formula_27 denotes the sum of the digits in the base-formula_24 expansion of formula_25 and formula_28 is a polynomial with non-negative integer coefficients, and if formula_29, formula_30 are integers, then the sequence
formula_31
is formula_24-automatic if and only if formula_32 or formula_33.
1-automatic sequences.
"k"-automatic sequences are normally only defined for "k" ≥ 2. The concept can be extended to "k" = 1 by defining a 1-automatic sequence to be a sequence whose "n"-th term depends on the unary notation for "n"; that is, (1)"n". Since a finite state automaton must eventually return to a previously visited state, all 1-automatic sequences are ultimately periodic.
Generalizations.
Automatic sequences are robust against variations to either the definition or the input sequence. For instance, as noted in the automata-theoretic definition, a given sequence remains automatic under both direct and reverse reading of the input sequence. A sequence also remains automatic when an alternate set of digits is used or when the base is negated; that is, when the input sequence is represented in base −"k" instead of in base "k". However, in contrast to using an alternate set of digits, a change of base may affect the automaticity of a sequence.
The domain of an automatic sequence can be extended from the natural numbers to the integers via "two-sided" automatic sequences. This stems from the fact that, given "k" ≥ 2, every integer can be represented uniquely in the form formula_34 where formula_35. Then a two-sided infinite sequence "a"("n")"n" formula_36 is (−"k")-automatic if and only if its subsequences "a"("n")n ≥ 0 and "a"(−"n")n ≥ 0 are "k"-automatic.
The alphabet of a "k"-automatic sequence can be extended from finite size to infinite size via "k"-regular sequences. The "k"-regular sequences can be characterized as those sequences whose "k"-kernel is finitely-generated. Every bounded "k"-regular sequence is automatic.
Logical approach.
For many 2-automatic sequences formula_11, the map formula_37 has the property that the first-order theory formula_38 is decidable. Since many non-trivial properties of automatic sequences can be written in first-order logic, it is possible to prove these properties mechanically by executing the decision procedure.
For example, the following properties of the Thue–Morse word can all be verified mechanically in this way:
The software Walnut, developed by Hamoon Mousavi, implements a decision procedure for deciding many properties of certain automatic words, such as the Thue–Morse word. This implementation is a consequence of the above work on the logical approach to automatic sequences.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\in"
},
{
"math_id": 1,
"text": "\\varphi"
},
{
"math_id": 2,
"text": "\\Sigma^*"
},
{
"math_id": 3,
"text": "\\tau"
},
{
"math_id": 4,
"text": "1"
},
{
"math_id": 5,
"text": "w"
},
{
"math_id": 6,
"text": "w = \\varphi(w)"
},
{
"math_id": 7,
"text": "s = \\tau(w)"
},
{
"math_id": 8,
"text": "K_{k}(s) = \\{s(k^e n + r) : e \\geq 0 \\text{ and } 0 \\leq r \\leq k^e - 1\\}."
},
{
"math_id": 9,
"text": " \\sum_{i \\geq 0} \\beta(u(i)) X^i . "
},
{
"math_id": 10,
"text": "\n\\begin{align}\nr(2n) &= r(n), \\\\\nr(4n+1) &= r(n), \\\\\nr(8n+7) &= r(2n+1), \\\\\nr(16n+3) &= r(8n+3), \\\\\nr(16n+11) &= r(4n+3).\n\\end{align}\n"
},
{
"math_id": 11,
"text": "s = (s_n)_{n \\ge 0}"
},
{
"math_id": 12,
"text": "K_k(s)"
},
{
"math_id": 13,
"text": "s"
},
{
"math_id": 14,
"text": "t = 011010011\\dots"
},
{
"math_id": 15,
"text": "t"
},
{
"math_id": 16,
"text": "s = 12112221\\dots."
},
{
"math_id": 17,
"text": "h^{\\omega}(1)"
},
{
"math_id": 18,
"text": "h(1) = 121, h(2) = 12221."
},
{
"math_id": 19,
"text": "s_{16n+1} = s_{64n+1} \\text{ for } 0 \\le n \\le 1864134"
},
{
"math_id": 20,
"text": "n = 1864135"
},
{
"math_id": 21,
"text": "(s_n)_{n \\ge 0}"
},
{
"math_id": 22,
"text": "\\Delta"
},
{
"math_id": 23,
"text": "(n)_k"
},
{
"math_id": 24,
"text": "k"
},
{
"math_id": 25,
"text": "n"
},
{
"math_id": 26,
"text": "I_k(s,d) := \\{(n)_k \\mid s_n = d\\}"
},
{
"math_id": 27,
"text": "s_k(n)"
},
{
"math_id": 28,
"text": "p(X)"
},
{
"math_id": 29,
"text": "k \\ge 2"
},
{
"math_id": 30,
"text": " m \\ge 1"
},
{
"math_id": 31,
"text": "(s_k(p(n)) \\pmod{m})_{n \\ge 0}"
},
{
"math_id": 32,
"text": "\\deg p \\le 1"
},
{
"math_id": 33,
"text": "m \\mid k-1"
},
{
"math_id": 34,
"text": " \\sum_{0 \\leq i \\leq r} a_{i}(-k)^{i} , "
},
{
"math_id": 35,
"text": "a_{i} \\in \\{0, \\dots, k-1\\}"
},
{
"math_id": 36,
"text": "\\in \\mathbb{Z}"
},
{
"math_id": 37,
"text": "n \\mapsto s_n"
},
{
"math_id": 38,
"text": "\\text{FO}(\\mathbb{N},+,0,1,n \\mapsto s_n)"
},
{
"math_id": 39,
"text": "cxcxc"
},
{
"math_id": 40,
"text": "c"
},
{
"math_id": 41,
"text": "x"
},
{
"math_id": 42,
"text": "y"
},
{
"math_id": 43,
"text": "x = wyw"
},
{
"math_id": 44,
"text": "(n)_2 \\notin 1(01^*0)^*10^*1"
},
{
"math_id": 45,
"text": "(n)_2"
}
]
| https://en.wikipedia.org/wiki?curid=7740302 |
77405256 | Fradkin tensor | The Fradkin tensor, or Jauch-Hill-Fradkin tensor, named after Josef-Maria Jauch, Edward Lee Hill and David M. Fradkin, is a conserved quantity used in the treatment of the isotropic multidimensional harmonic oscillator in classical mechanics. For the treatment of the quantum harmonic oscillator in quantum mechanics, it is replaced by the tensor-valued Fradkin operator.
The Fradkin tensor provides enough conserved quantities to make the oscillator's equations of motion maximally superintegrable. This implies that to determine the trajectory of the system, no differential equations need to be solved, only algebraic ones.
Similarly to the Laplace–Runge–Lenz vector in the Kepler problem, the Fradkin tensor arises from a hidden symmetry of the harmonic oscillator.
Definition.
Suppose the Hamiltonian of a harmonic oscillator is given by
formula_0
with formula_1 the momentum, formula_2 the mass, formula_3 the angular frequency, and formula_4 the position of the particle,
then the Fradkin tensor (up to an arbitrary normalisation) is defined as
formula_5
In particular, formula_6 is given by the trace: formula_7. The Fradkin tensor is thus a symmetric matrix, and for an formula_8-dimensional harmonic oscillator it has formula_9 independent entries, for example 5 in 3 dimensions.
The Fradkin tensor is orthogonal to the angular momentum formula_10, that is, formula_11, and its contraction with the position vector satisfies formula_12, where "E" is the energy. Together with the angular momentum, the components of the Fradkin tensor generate the hidden formula_13 symmetry algebra of the isotropic oscillator, with the Poisson-bracket relations
formula_14
where formula_15 is the Poisson bracket, formula_16 is the Kronecker delta, and formula_17 is the Levi-Civita symbol.
Proof of conservation.
In Hamiltonian mechanics, the time evolution of any function formula_18 defined on phase space is given by
formula_19,
so for the Fradkin tensor of the harmonic oscillator,
formula_20.
The Fradkin tensor is the conserved quantity associated to the transformation
formula_21
by Noether's theorem.
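The conservation can also be checked numerically. The sketch below is an illustration added here (it is not part of the cited treatment); it assumes unit mass and unit angular frequency, evaluates the Fradkin tensor along the exact trajectory of a three-dimensional isotropic oscillator, and confirms that its components stay constant to floating-point precision.
```python
import numpy as np

# Illustrative check that F_ij = p_i p_j / (2m) + (1/2) m w^2 x_i x_j is constant
# along a trajectory of the 3D isotropic harmonic oscillator (m = w = 1 assumed).
m, w = 1.0, 1.0

def fradkin(x, p):
    return np.outer(p, p) / (2 * m) + 0.5 * m * w**2 * np.outer(x, x)

x0 = np.array([1.0, -0.3, 0.7])   # arbitrary initial position
p0 = np.array([0.2, 0.5, -0.4])   # arbitrary initial momentum
F0 = fradkin(x0, p0)

for t in np.linspace(0.0, 10.0, 6):
    # Exact solution of the equations of motion.
    x = x0 * np.cos(w * t) + p0 / (m * w) * np.sin(w * t)
    p = p0 * np.cos(w * t) - m * w * x0 * np.sin(w * t)
    drift = np.max(np.abs(fradkin(x, p) - F0))
    print(f"t = {t:5.2f}   max |F_ij(t) - F_ij(0)| = {drift:.2e}")

# The trace of F reproduces the (likewise conserved) Hamiltonian H.
print("Tr F - H:", np.trace(F0) - (p0 @ p0 / (2 * m) + 0.5 * m * w**2 * x0 @ x0))
```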
Quantum mechanics.
In quantum mechanics, position and momentum are replaced by the position and momentum operators, and the Poisson brackets by the commutator. As such, the Hamiltonian becomes the Hamiltonian operator, the angular momentum the angular momentum operator, and the Fradkin tensor the Fradkin operator. All of the above properties continue to hold after making these replacements.
{
"math_id": 0,
"text": "H = \\frac{\\vec p^2}{2m} + \\frac{1}{2} m \\omega^2 \\vec x^2"
},
{
"math_id": 1,
"text": "\\vec p"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "\\omega"
},
{
"math_id": 4,
"text": "\\vec x"
},
{
"math_id": 5,
"text": "F_{ij} = \\frac{p_i p_j}{2m} + \\frac{1}{2} m \\omega^2 x_i x_j ."
},
{
"math_id": 6,
"text": "H"
},
{
"math_id": 7,
"text": "H = \\operatorname{Tr}(F)"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "\\tfrac{n(n+1)}{2} - 1"
},
{
"math_id": 10,
"text": "\\vec L = \\vec x \\times \\vec p"
},
{
"math_id": 11,
"text": "F_{ij} L_j = 0"
},
{
"math_id": 12,
"text": "x_i F_{ij} x_j = E\\vec x^2 - \\frac{\\vec L^2}{2m}"
},
{
"math_id": 13,
"text": "SU(3)"
},
{
"math_id": 14,
"text": "\\begin{align} \\{L_i, L_j\\} &= \\varepsilon_{ijk} L_k \\\\\n \\{L_i, F_{jk}\\} &= \\varepsilon_{ijn} F_{nk} + \\varepsilon_{ikn} F_{jn} \\\\\n \\{F_{ij},F_{kl}\\} &= \\frac{\\omega^2}{4} \\left(\\delta_{ik} \\varepsilon_{jln} + \\delta_{il}\\varepsilon_{jkn} + \\delta_{jk} \\varepsilon_{iln} + \\delta_{jl} \\varepsilon_{ikn}\\right) L_n\\,,\\end{align}"
},
{
"math_id": 15,
"text": "\\{\\cdot,\\cdot\\}"
},
{
"math_id": 16,
"text": "\\delta"
},
{
"math_id": 17,
"text": "\\varepsilon"
},
{
"math_id": 18,
"text": "A"
},
{
"math_id": 19,
"text": "\\frac{\\mathrm dA}{\\mathrm dt} = \\{A,H\\} = \\sum_k \\left(\\frac{\\partial A}{\\partial x_k} \\frac{\\partial H}{\\partial p_k} - \\frac{\\partial A}{\\partial p_k} \\frac{\\partial H}{\\partial x_k}\\right) + \\frac{\\partial A}{\\partial t}"
},
{
"math_id": 20,
"text": "\\frac{\\mathrm dF_{ij}}{\\mathrm dt} = \\frac{1}{2} \\omega^2 \\sum_k \\Big((x_j \\delta_{ik} + x_i \\delta_{jk}) p_k - (p_j \\delta_{ik} + p_i \\delta_{jk}) x_k \\Big) = 0 ."
},
{
"math_id": 21,
"text": "x_i \\to x_i' = x_i + \\frac 12 \\omega^{-1} \\varepsilon_{jk} \\left(\\dot x_j \\delta_{ik} + \\dot x_k \\delta_{ij}\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=77405256 |
7740682 | Magnetic domain | Region of a magnetic material in which the magnetization has uniform direction
A magnetic domain is a region within a magnetic material in which the magnetization is in a uniform direction. This means that the individual magnetic moments of the atoms are aligned with one another and they point in the same direction. When cooled below a temperature called the Curie temperature, the magnetization of a piece of ferromagnetic material spontaneously divides into many small regions called magnetic domains. The magnetization within each domain points in a uniform direction, but the magnetization of different domains may point in different directions. Magnetic domain structure is responsible for the magnetic behavior of ferromagnetic materials like iron, nickel, cobalt and their alloys, and ferrimagnetic materials like ferrite. This includes the formation of permanent magnets and the attraction of ferromagnetic materials to a magnetic field. The regions separating magnetic domains are called domain walls, where the magnetization rotates coherently from the direction in one domain to that in the next domain. The study of magnetic domains is called micromagnetics.
Magnetic domains form in materials which have magnetic ordering; that is, their dipoles spontaneously align due to the exchange interaction. These are the ferromagnetic, ferrimagnetic and antiferromagnetic materials. Paramagnetic and diamagnetic materials, in which the dipoles align in response to an external field but do not spontaneously align, do not have magnetic domains.
Development of domain theory.
Magnetic domain theory was developed by the French physicist Pierre-Ernest Weiss who, in 1906, suggested the existence of magnetic domains in ferromagnets. He suggested that a large number of atomic magnetic moments (typically 10^12–10^18) were aligned parallel. The direction of alignment varies from domain to domain in a more or less random manner, although certain crystallographic axes may be preferred by the magnetic moments; these are called easy axes. Weiss still had to explain the reason for the spontaneous alignment of atomic moments within a ferromagnetic material, and he came up with the so-called Weiss mean field. He assumed that a given magnetic moment in a material experienced a very high effective magnetic field He due to the magnetization of its neighbors. In the original Weiss theory the mean field was proportional to the bulk magnetization M, so that formula_0 where formula_1 is the mean field constant. However, this is not applicable to ferromagnets due to the variation of magnetization from domain to domain. In this case, the interaction field is formula_2, where formula_3 is the saturation magnetization at 0 K.
Later, the quantum theory made it possible to understand the microscopic origin of the Weiss field. The exchange interaction between localized spins favored a parallel (in ferromagnets) or an anti-parallel (in anti-ferromagnets) state of neighboring magnetic moments.
Domain structure.
Why domains form.
The reason a piece of magnetic material such as iron spontaneously divides into separate domains, rather than exist in a state with magnetization in the same direction throughout the material, is to minimize its internal energy. A large region of ferromagnetic material with a constant magnetization throughout will create a large magnetic field extending into the space outside itself "(diagram a)". This requires a lot of "magnetostatic energy" stored in the field. To reduce this energy, the sample can split into two domains, with the magnetization in opposite directions in each domain "(diagram b)". The magnetic field lines pass in loops in opposite directions through each domain, reducing the field outside the material. To reduce the field energy further, each of these domains can split also, resulting in smaller parallel domains with magnetization in alternating directions, with smaller amounts of field outside the material.
The domain structure of actual magnetic materials does not usually form by the process of large domains splitting into smaller ones as described here. When a sample is cooled below the Curie temperature, for example, the equilibrium domain configuration simply appears. But domains can split, and the description of domains splitting is often used to reveal the energy tradeoffs in domain formation.
Size of domains.
As explained above a domain which is too big is unstable, and will divide into smaller domains. But a small enough domain will be stable and will not split, and this determines the size of the domains created in a material. This size depends on the balance of several energies within the material. Each time a region of magnetization splits into two domains, it creates a domain wall between the domains, where magnetic dipoles (molecules) with magnetization pointing in different directions are adjacent. The exchange interaction which creates the magnetization is a force which tends to align nearby dipoles so they point in the same direction. Forcing adjacent dipoles to point in different directions requires energy. Therefore, a domain wall requires extra energy, called the domain wall energy, which is proportional to the area of the wall.
Thus the net amount that the energy is reduced when a domain splits is equal to the difference between the magnetic field energy saved, and the additional energy required to create the domain wall. The field energy is proportional to the cube of the domain size, while the domain wall energy is proportional to the square of the domain size. So as the domains get smaller, the net energy saved by splitting decreases. The domains keep dividing into smaller domains until the energy cost of creating an additional domain wall is just equal to the field energy saved. Then the domains of this size are stable. In most materials the domains are microscopic in size, around 10^−4 to 10^−6 m.
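The balance just described can be sketched with a toy calculation. In the sketch below (added for illustration only; the wall energy density and the stray-field coefficient are order-of-magnitude placeholders, not values from the sources), the energy per unit surface area of a slab with stripe domains of width "D" is modelled as a wall term proportional to 1/"D" plus a stray-field term proportional to "D", and the stable domain width follows from minimising their sum.
```python
import numpy as np

# Toy energy balance for stripe domains in a slab of thickness t (SI units).
#   E(D) = sigma_w * t / D  +  k_ms * D        (per unit surface area)
# sigma_w : domain-wall energy per unit wall area   (illustrative placeholder)
# k_ms    : magnetostatic / stray-field coefficient (illustrative placeholder)
# Minimising E gives D_eq = sqrt(sigma_w * t / k_ms), so D grows as sqrt(t).

sigma_w = 3e-3     # J/m^2, a typical order of magnitude for a wall energy
k_ms    = 1e5      # J/m^3, placeholder stray-field coefficient
t       = 1e-6     # m, slab thickness

D = np.logspace(-9, -5, 2000)              # candidate stripe widths
E = sigma_w * t / D + k_ms * D             # wall term + stray-field term

D_numeric  = D[np.argmin(E)]
D_analytic = np.sqrt(sigma_w * t / k_ms)
print(f"stable domain width ~ {D_numeric:.2e} m (closed form: {D_analytic:.2e} m)")
```
Only the form of the trade-off matters here; realistic coefficients depend on the material and the geometry of the sample.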
Magnetic anisotropy.
An additional way for the material to further reduce its magnetostatic energy is to form domains with magnetization at right angles to the other domains "(diagram c)", instead of just in opposing parallel directions. These domains, called "flux closure domains", allow the field lines to turn 180° within the material, forming closed loops entirely within the material, reducing the magnetostatic energy to zero. However, forming these domains incurs two additional energy costs. First, the crystal lattice of most magnetic materials has magnetic anisotropy, which means it has an "easy" direction of magnetization, parallel to one of the crystal axes. Changing the magnetization of the material to any other direction takes additional energy, called the "magnetocrystalline anisotropy energy".
Magnetostriction.
The other energy cost to creating domains with magnetization at an angle to the "easy" direction is caused by the phenomenon called magnetostriction. When the magnetization of a piece of magnetic material is changed to a different direction, it causes a slight change in its shape. The change in magnetic field causes the magnetic dipole molecules to change shape slightly, making the crystal lattice longer in one dimension and shorter in other dimensions. However, since the magnetic domain is "squished in" with its boundaries held rigid by the surrounding material, it cannot actually change shape. So instead, changing the direction of the magnetization induces tiny mechanical stresses in the material, requiring more energy to create the domain. This is called "magnetoelastic anisotropy energy".
To form these closure domains with "sideways" magnetization requires additional energy due to the aforementioned two factors. So flux closure domains will only form where the magnetostatic energy saved is greater than the sum of the "exchange energy" to create the domain wall, the magnetocrystalline anisotropy energy, and the magnetoelastic anisotropy energy. Therefore, most of the volume of the material is occupied by domains with magnetization either "up" or "down" along the "easy" direction, and the flux closure domains only form in small areas at the edges of the other domains where they are needed to provide a path for magnetic field lines to change direction "(diagram c, above)".
Grain structure.
The above describes magnetic domain structure in a perfect crystal lattice, such as would be found in a single crystal of iron. However most magnetic materials are polycrystalline, composed of microscopic crystalline grains. These grains are "not" the same as domains. Each grain is a little crystal, with the crystal lattices of separate grains oriented in random directions. In most materials, each grain is big enough to contain several domains. Each crystal has an "easy" axis of magnetization, and is divided into domains with the axis of magnetization parallel to this axis, in alternate directions.
"Magnetized" states.
It can be seen from the above discussion that, although on a microscopic scale almost all the magnetic dipoles in a piece of ferromagnetic material are lined up parallel to their neighbors in domains, creating strong "local" magnetic fields, energy minimization results in a domain structure that minimizes the "large-scale" magnetic field. In its lowest energy state, the magnetization of neighboring domains point in different directions, confining the field lines to microscopic loops between neighboring domains within the material, so the combined fields cancel at a distance. Therefore, a bulk piece of ferromagnetic material in its lowest energy state has little or no external magnetic field. The material is said to be "unmagnetized".
However, the domains can also exist in other configurations in which their magnetization mostly points in the same direction, creating an external magnetic field. Although these are not minimum energy configurations, due to a phenomenon where the domain walls become "pinned" to defects in the crystal lattice they can be "local" minimums of the energy, and therefore can be very stable. Applying an external magnetic field to the material can make the domain walls move, causing the domains aligned with the field to grow, and the opposing domains to shrink. When the external field is removed, the domain walls remain pinned in their new orientation and the aligned domains produce a magnetic field. This is what happens when a piece of ferromagnetic material is "magnetized" and becomes a permanent magnet.
Heating a magnet, subjecting it to vibration by hammering it, or applying a rapidly oscillating magnetic field from a degaussing coil, tends to pull the domain walls free from their pinned states, and they will return to a lower energy configuration with less external magnetic field, thus "demagnetizing" the material.
Landau-Lifshitz energy equation.
The contributions of the different internal energy factors described above are expressed by the free energy equation proposed by Lev Landau and Evgeny Lifshitz in 1935, which forms the basis of the modern theory of magnetic domains. The domain structure of a material is the one which minimizes the Gibbs free energy of the material. For a crystal of magnetic material, this is the Landau-Lifshitz free energy, "E", which is the sum of these energy terms:
formula_4
where
* "Eex" is the exchange energy, due to the exchange interaction between the magnetic dipoles, which is minimized when neighboring dipoles are aligned;
* "ED" is the magnetostatic energy, the self-energy of the demagnetizing (stray) field created by the magnetization;
* "Eλ" is the magnetoelastic anisotropy energy, due to magnetostriction and the resulting strain in the lattice;
* "Ek" is the magnetocrystalline anisotropy energy, the cost of magnetization pointing away from an easy axis of the crystal;
* "EH" is the Zeeman energy of the magnetization in an external applied magnetic field.
Some sources define a wall energy "EW" equal to the sum of the exchange energy and the magnetocrystalline anisotropy energy, which replaces "Eex" and "Ek" in the above equation.
A stable domain structure is a magnetization function M(x), considered as a continuous vector field, which minimizes the total energy "E" throughout the material. To find the minimums a variational method is used, resulting in a set of nonlinear differential equations, called "Brown's equations" after William Fuller Brown Jr. Although in principle these equations can be solved for the stable domain configurations M(x), in practice only the simplest examples can be solved. Analytic solutions do not exist, and numerical solutions calculated by the finite element method are computationally intractable because of the large difference in scale between the domain size and the wall size. Therefore, micromagnetics has evolved approximate methods which assume that the magnetization of dipoles in the bulk of the domain, away from the wall, all point in the same direction, and numerical solutions are only used near the domain wall, where the magnetization is changing rapidly.
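The approximation strategy described above can be illustrated with a minimal one-dimensional relaxation. The sketch below is a toy calculation added for illustration (it is not a full micromagnetic solver, and the exchange and anisotropy constants are arbitrary dimensionless values): it keeps only the exchange and uniaxial anisotropy terms of the energy for a chain of magnetization angles, pins the two ends in opposite directions, and relaxes the chain by gradient descent, which produces the familiar smooth wall profile between an "up" domain and a "down" domain.
```python
import numpy as np

# 1D toy model: the magnetization direction is an angle theta(x), and only the
# exchange and uniaxial-anisotropy contributions to the energy are kept:
#   energy density ~ A * (d theta / dx)^2 + K * sin^2(theta)
# Pinning theta = 0 on the left and theta = pi on the right forces a 180-degree
# domain wall to form in between.

A, K = 1.0, 1.0                       # illustrative exchange and anisotropy constants
N, dx = 400, 0.05                     # number of cells and cell size
theta = np.linspace(0.0, np.pi, N)    # initial guess: a linear ramp

for _ in range(50000):                # plain gradient descent on the total energy
    lap = np.zeros_like(theta)
    lap[1:-1] = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dx**2
    grad = -2 * A * lap + K * np.sin(2 * theta)     # functional derivative dE/dtheta
    theta[1:-1] -= 2e-4 * grad[1:-1]                # endpoints stay pinned

# The relaxed profile approaches theta(x) = 2*arctan(exp(x/delta)), delta = sqrt(A/K).
x = (np.arange(N) - N / 2) * dx
analytic = 2 * np.arctan(np.exp(x / np.sqrt(A / K)))
print("max deviation from the analytic wall profile:", np.max(np.abs(theta - analytic)))
```
The width of the resulting wall scales as the square root of the ratio of exchange to anisotropy, reflecting the competition between the two energy terms discussed above.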
Domain imaging techniques.
There are a number of microscopy methods that can be used to visualize the magnetization at the surface of a magnetic material, revealing the magnetic domains. Each method has a different application because not all domains are the same. In magnetic materials, domains can be circular, square, irregular, elongated, and striped, all of which have varied sizes and dimensions.
Magneto-optic Kerr effect (MOKE).
Large domains, within the range of 25–100 micrometers, can be easily seen by Kerr microscopy, which uses the magneto-optic Kerr effect: the rotation of the polarization of light reflected from a magnetized surface.
Lorentz microscopy.
Lorentz microscopy is a collection of transmission electron microscopy techniques used to study magnetic domain structures down to the nanoscale. Most common techniques include Fresnel mode, Foucault mode and low-angle electron diffraction (LAD) in parallel beam TEM mode, and differential phase contrast (DPC) in scanning TEM mode. Off-axis electron holography is a related technique used to observe magnetic structures by detecting nanoscale magnetic fields.
Magnetic force microscopy (MFM).
Another technique for viewing sub-microscopic domain structures down to a scale of a few nanometers is magnetic force microscopy. MFM is a form of atomic force microscopy that uses a magnetically coated probe tip to scan the sample surface.
Bitter method.
Bitter patterns, first observed by Francis Bitter, are a technique for imaging magnetic domains. The technique involves placing a small quantity of ferrofluid on the surface of a ferromagnetic material. The ferrofluid arranges itself along magnetic domain walls, which have higher magnetic flux than the regions of the material located within domains.
A modified Bitter technique has been incorporated into a widely used device, the Large Area Domain Viewer, which is particularly useful in the examination of grain-oriented silicon steels.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_e = \\alpha M"
},
{
"math_id": 1,
"text": " \\alpha "
},
{
"math_id": 2,
"text": "H_e = \\alpha M_s"
},
{
"math_id": 3,
"text": "M_s"
},
{
"math_id": 4,
"text": "E = E_{ex} + E_D + E_{\\lambda} + E_k + E_H\\,"
}
]
| https://en.wikipedia.org/wiki?curid=7740682 |
77408222 | Discrepancy of permutations | Discrepancy of permutations is a sub-field of discrepancy theory, that deals with balancing intervals induced by permutations of elements. There is a set of "n" elements, and there are m different permutations on this set. The general research question is: can we color each element in one of two different colors (e.g. black and white), such that in each permutation, every interval contains roughly the same number of elements of each color?
Formally, the "discrepancy" of an interval is defined as the difference between the number of white elements and the number of black elements in that interval; the objective is to color the elements such that the maximum discrepancy of an interval in each of the permutations is as small as possible.
Definitions.
Let "p"1, ...,"pm" be permutations of ["n"]. The "interval set" of a permutation is the set of all subsets of ["n"], that are adjacent to each other in the permutation. For example, if "n"=4 and one of the permutations is (1,2,3,4), then its interval set of contains e.g. the edges (1,2), (1,2,3), (2,3), (2,3,4), etc.
The discrepancy of the permutations "p"1, ...,"pm" is the minimum, over all black-white colorings of the integers in ["n"], of the maximum over all intervals, of the difference between the number of white and black integers in the interval.
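For small instances these quantities can be computed by brute force, which can help in checking the definitions. The sketch below is illustrative code (not from the cited papers); it encodes the two colors as +1 and −1, evaluates the maximum interval discrepancy of a coloring over a set of permutations, and finds the discrepancy by exhaustive search over all colorings.
```python
from itertools import product

def max_interval_discrepancy(perms, coloring):
    """Max over all permutations and all of their intervals of |#white - #black|.

    coloring[x] is +1 (white) or -1 (black); the intervals of a permutation are
    its contiguous runs of elements.
    """
    worst = 0
    for perm in perms:
        prefix_sums = [0]
        for x in perm:
            prefix_sums.append(prefix_sums[-1] + coloring[x])
        # |sum over an interval| is a difference of two prefix sums, so the
        # largest one equals max(prefix) - min(prefix).
        worst = max(worst, max(prefix_sums) - min(prefix_sums))
    return worst

def discrepancy(perms, n):
    """Minimum over all 2-colorings of [n] of the maximum interval discrepancy."""
    best = n   # trivial upper bound
    for colors in product((+1, -1), repeat=n):
        best = min(best, max_interval_discrepancy(perms, dict(enumerate(colors))))
    return best

perms = [(0, 1, 2, 3), (1, 3, 0, 2), (2, 0, 3, 1)]   # three permutations of a 4-element set
print(discrepancy(perms, 4))
```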
Offline colorings.
Jiang, Kulkarni and Singla study the online setting with stochastic point arrival, and prove that:
Online colorings.
Sometimes the elements are not available in advance, but arrive one by one, and each element should be colored immediately when it arrives. This online setting is challenging even for a single permutation. Jiang, Kulkarni and Singla call the setting with one permutation Online Interval Discrepancy. They prove that (Sec. 3.2):
The proof extends for the case of two permutations, which they call Online Stripe Discrepancy.
Applications.
Results in discrepancy of permutations have been used in the computation of Agreeable subsets, as well as in Online fair division.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 8 m\\log {n}"
},
{
"math_id": 1,
"text": "\\lceil (\\log_3{n})/3 + 1 \\rceil"
},
{
"math_id": 2,
"text": "(-\\log_3{n} + 2 d - 2)/3"
},
{
"math_id": 3,
"text": "\\tilde{O}(\\sqrt{n})"
},
{
"math_id": 4,
"text": "O(n^{c/\\log{\\log{n}}})"
},
{
"math_id": 5,
"text": "\\Omega(\\sqrt{n})"
}
]
| https://en.wikipedia.org/wiki?curid=77408222 |
77411704 | Generalized metric space | In mathematics, specifically in category theory, a generalized metric space is a generalization of a metric space in which the symmetry axiom and certain other axioms are dropped. Precisely, it is a category enriched over formula_0, the one-point compactification of the non-negative half-line of formula_1. The notion was introduced in 1973 by Lawvere, who noticed that a metric space can be viewed as a particular kind of category.
The categorical point of view is useful since by Yoneda's lemma, a generalized metric space can be embedded into a much larger category in which, for instance, one can construct the Cauchy completion of the space.
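For intuition (the example below is added here and is not taken from the cited references), distance functions that satisfy the triangle inequality and vanish on the diagonal but are not symmetric arise naturally as shortest-path costs in weighted directed graphs, with the value ∞ allowed for unreachable pairs; these are exactly the kind of structures the enriched-category definition accommodates.
```python
import math
from itertools import product

# Shortest-path costs in a weighted directed graph give a "generalized metric":
# d(x, x) = 0 and d(x, z) <= d(x, y) + d(y, z) hold, but d(x, y) can differ
# from d(y, x), and unreachable pairs get the distance infinity.
nodes = ("a", "b", "c")
edges = {("a", "b"): 1.0, ("b", "a"): 5.0, ("b", "c"): 2.0}   # no way back from c

d = {(x, y): 0.0 if x == y else edges.get((x, y), math.inf)
     for x, y in product(nodes, repeat=2)}
for k, i, j in product(nodes, repeat=3):       # Floyd-Warshall relaxation
    d[i, j] = min(d[i, j], d[i, k] + d[k, j])

print("d(a,b) =", d["a", "b"], "  d(b,a) =", d["b", "a"], "  (no symmetry)")
print("d(a,c) =", d["a", "c"], "  d(c,a) =", d["c", "a"], "  (infinite values allowed)")
assert all(d[x, z] <= d[x, y] + d[y, z] for x, y, z in product(nodes, repeat=3))
```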
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[0, \\infty]"
},
{
"math_id": 1,
"text": "\\mathbb{R}"
}
]
| https://en.wikipedia.org/wiki?curid=77411704 |
7741863 | Commutative magma | In mathematics, there exist magmas that are commutative but not associative. A simple example of such a magma may be derived from the children's game of rock, paper, scissors. Such magmas give rise to non-associative algebras.
A magma which is both commutative and associative is a commutative semigroup.
Example: rock, paper, scissors.
In the game of rock paper scissors, let formula_0, standing for the "rock", "paper" and "scissors" gestures respectively, and consider the binary operation formula_1 derived from the rules of the game as follows:
For all formula_2:
* If formula_3 and formula_4 beats formula_5 in the game, then formula_6
* formula_7; i.e., every formula_4 is idempotent.
So that for example:
* formula_8 "paper beats rock";
* formula_9 "scissors tie with scissors".
This results in the Cayley table:
formula_10
By definition, the magma formula_11 is commutative, but it is also non-associative, as shown by:
formula_12
but
formula_13
i.e.
formula_14
It is the simplest non-associative magma that is "conservative", in the sense that the result of any magma operation is one of the two values given as arguments to the operation.
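The Cayley table above can be checked mechanically. The sketch below (illustrative code written for this example) encodes the operation and verifies commutativity, idempotence and the failure of associativity.
```python
from itertools import product

# The rock-paper-scissors magma on M = {r, p, s}: for distinct gestures the
# result is the winning one, and every gesture ties with itself.
M = ("r", "p", "s")
BEATS = {("r", "s"), ("s", "p"), ("p", "r")}   # rock beats scissors, etc.

def op(x, y):
    if x == y:
        return x
    return x if (x, y) in BEATS else y

print("commutative:", all(op(x, y) == op(y, x) for x, y in product(M, repeat=2)))  # True
print("idempotent: ", all(op(x, x) == x for x in M))                               # True
print("associative:", all(op(op(x, y), z) == op(x, op(y, z))
                          for x, y, z in product(M, repeat=3)))                    # False
print("r*(p*s) =", op("r", op("p", "s")), " but (r*p)*s =", op(op("r", "p"), "s"))
```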
Applications.
The arithmetic mean, and generalized means of numbers or of higher-dimensional quantities, such as Frechet means, are often commutative but non-associative.
Commutative but non-associative magmas may be used to analyze genetic recombination.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M := \\{ r, p, s \\}"
},
{
"math_id": 1,
"text": "\\cdot : M \\times M \\to M"
},
{
"math_id": 2,
"text": "x, y \\in M"
},
{
"math_id": 3,
"text": "x \\neq y"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "x \\cdot y = y \\cdot x = x"
},
{
"math_id": 7,
"text": "x \\cdot x = x"
},
{
"math_id": 8,
"text": "r \\cdot p = p \\cdot r = p"
},
{
"math_id": 9,
"text": "s \\cdot s = s"
},
{
"math_id": 10,
"text": "\\begin{array}{c|ccc}\n\\cdot & r & p & s\\\\\n\\hline\nr & r & p & r\\\\\np & p & p & s\\\\\ns & r & s & s\n\\end{array}"
},
{
"math_id": 11,
"text": "(M, \\cdot)"
},
{
"math_id": 12,
"text": "r \\cdot (p \\cdot s) = r \\cdot s = r"
},
{
"math_id": 13,
"text": "(r \\cdot p) \\cdot s = p \\cdot s = s"
},
{
"math_id": 14,
"text": "r \\cdot (p \\cdot s) \\neq (r \\cdot p) \\cdot s"
}
]
| https://en.wikipedia.org/wiki?curid=7741863 |
774201 | Cooper test | Physical fitness test
The Cooper test, which was designed by Kenneth H. Cooper in 1968 for US military use, is a physical fitness test. In its original form, the point of the test is to run as far as possible within 12 minutes. Pacing is important, as the participant will not cover a maximal distance if they begin at a pace too close to an all-out sprint. The outcome is based on the distance the test person ran, their age and their sex.
It is an easy test to perform in larger groups. For athletes, the length of the run is considered to be that of a long distance run, since everything above 3 km is rated "long distance"—which means the runner will primarily use their "red", slow oxidative muscle cells.
For comparison the 2 miles (3,218.7 meters) world best is 7:54.10 set by Jakob Ingebrigtsen and the 5000 meters outdoor world record of Joshua Cheptegei is 12:35.36. With the same average speed, this would give 4768 metres for the Cooper test. The 2 miles world best for women is held by Meseret Defar with 8:58.58 and Faith Kipyegon's world record time for the outdoor 5000 meters is 14:05.20.
Interpretation of results.
The following is an example of the many tables that exist for the test:
VO2 max estimate.
The results can be correlated with VO2 max by inverting the linear regression values presented in the original publication.
Formula:
formula_0
where d12 is distance (in metres) covered in 12 minutes, alternatively
formula_1
where d(miles)12 is distance (in miles) covered in 12 minutes.
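For reference, the two equivalent formulas above can be evaluated directly. The sketch below (illustrative code; the example distance is arbitrary) implements both and shows that they agree to within the rounding of the published coefficients.
```python
def vo2max_from_metres(d12_m: float) -> float:
    """Estimated VO2 max (ml/kg/min) from the distance in metres covered in 12 minutes."""
    return (d12_m - 504.9) / 44.73

def vo2max_from_miles(d12_miles: float) -> float:
    """The same estimate from the distance in miles covered in 12 minutes."""
    return 35.97 * d12_miles - 11.29

distance_m = 2800.0                        # example: 2.8 km covered in 12 minutes
print(round(vo2max_from_metres(distance_m), 1), "ml/kg/min  (metres formula)")
print(round(vo2max_from_miles(distance_m / 1609.344), 1), "ml/kg/min  (miles formula)")
```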
Practical use.
For practical use, precise monitoring presents a challenge. Not all military bases have a running track, and tracking soldiers' laps and positions after 12 minutes is difficult. Testing is easier to administer when the distance is fixed and the finishing time measured. In his original book, Cooper also provided an alternate version of the test, based on the time to complete a 1.5 mile run.
Most armies and police agencies of the world use a fixed distance. This is not exactly a Cooper test but a reasonable practical compromise as long as the distance is of sufficient length to put a continuous load on the cardiovascular system for 10 or more minutes. For example, the British Army uses 1.5 miles, the Australian Army uses 2.4 kilometers, the US Army uses 2 miles and the US Marine Corps 3 miles. For each base the course is measured and local corrections (elevation, conditions, etc.) applied. Soldiers are sent off in waves, and timed over the finish line by some PTIs with a stopwatch.
For personal trainers, the Cooper Test, when carried out on a treadmill, is a reliable and repeatable method for measuring a client's progress.
As a standard test, it should be performed only under standard conditions:
The test formula given by Cooper is not considered to be useful for untrained pupils.
Regression analysis in a study of sedentary male subjects revealed a significant correlation (r = 0.93, P < 0.001) with directly measured VO2 max, yielding a modified formula:
formula_2
where d(kilometers)12 is distance (in kilometers) covered in 12 minutes.
Football referees.
The Cooper test was one of the most commonly used fitness tests to measure the fitness levels of both amateur and professional football referees, including referees from the FA (English Football Association). More recently, many countries have decided to stop relying on the Cooper Test, claiming that the Cooper test does not correlate well to a real football match, where players run short sprints rather than at a regular pace. Thus it may not truly indicate if a referee will be able to perform well in a football match. All FIFA referees are now required to pass the HI Intensity Fitness Test. National associations are gradually requiring some of their top-tier officials to do the HI Intensity Fitness Test also. Lower level referees are often given a choice to either perform the HI Intensity Fitness Test or the Cooper Test. Nevertheless, the recent trend seems to indicate that the Cooper Test is slowly being phased out.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{VO_2\\; max} = {d_{12} - 504.9 \\over 44.73}"
},
{
"math_id": 1,
"text": "\\mathrm{VO_2\\; max} = {35.97 \\cdot d(\\text{miles})_{12} - 11.29}"
},
{
"math_id": 2,
"text": "\\mathrm{VO_2\\; max} = {21.01 \\cdot d(\\text{kilometers})_{12} - 11.04}"
}
]
| https://en.wikipedia.org/wiki?curid=774201 |
77421883 | Orders of magnitude (magnetic moment) | This page lists examples of magnetic moments produced by various sources, grouped by orders of magnitude. The magnetic moment of an object is an intrinsic property and does not change with distance, and thus can be used to measure "how strong" a magnet is. For example, Earth possesses an enormous magnetic moment; however, we are very distant from its center and experience only a tiny magnetic flux density (measured in tesla) on its surface.
Knowing the magnetic moment of an object (formula_0) and the distance from its centre (formula_1), it is possible to calculate the magnetic flux density experienced (formula_2) using the following approximation:
formula_3,
where formula_4 is the constant of vacuum permeability. | [
{
"math_id": 0,
"text": "\\mathbf{m}"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "\\mathbf{B}"
},
{
"math_id": 3,
"text": "\\mathbf{B}\\approx\\mu_{o}\\frac{\\mathbf{m}}{2\\pi r^{3}}"
},
{
"math_id": 4,
"text": "\\mu_{o}"
}
]
| https://en.wikipedia.org/wiki?curid=77421883 |
774220 | Diamond anvil cell | Device for generating extremely high pressures
A diamond anvil cell (DAC) is a high-pressure device used in geology, engineering, and materials science experiments. It permits the compression of a small (sub-millimeter-sized) piece of material to extreme pressures, typically up to around 100–200 gigapascals, although it is possible to achieve pressures up to 770 gigapascals (7,700,000 bars or 7.7 million atmospheres).
The device has been used to recreate the pressure existing deep inside planets to synthesize materials and phases not observed under normal ambient conditions. Notable examples include the non-molecular ice X, polymeric nitrogen and metallic phases of xenon, lonsdaleite, and potentially metallic hydrogen.
A DAC consists of two opposing diamonds with a sample compressed between the polished culets (tips). Pressure may be monitored using a reference material whose behavior under pressure is known. Common pressure standards include ruby fluorescence, and various structurally simple metals, such as copper or platinum. The uniaxial pressure supplied by the DAC may be transformed into uniform hydrostatic pressure using a pressure-transmitting medium, such as argon, xenon, hydrogen, helium, paraffin oil or a mixture of methanol and ethanol. The pressure-transmitting medium is enclosed by a gasket and the two diamond anvils. The sample can be viewed through the diamonds and illuminated by X-rays and visible light. In this way, X-ray diffraction and fluorescence; optical absorption and photoluminescence; Mössbauer, Raman and Brillouin scattering; positron annihilation and other signals can be measured from materials under high pressure. Magnetic and microwave fields can be applied externally to the cell allowing nuclear magnetic resonance, electron paramagnetic resonance and other magnetic measurements. Attaching electrodes to the sample allows electrical and magnetoelectrical measurements as well as heating up the sample to a few thousand degrees. Much higher temperatures (up to 7000 K) can be achieved with laser-induced heating, and cooling down to millikelvins has been demonstrated.
Principle.
The operation of the diamond anvil cell relies on a simple principle:
formula_0
where p is the pressure, F the applied force, and A the area. Typical culet sizes for diamond anvils are 100–250 micrometres (μm), such that a very high pressure is achieved by applying a moderate force on a sample with a small area, rather than applying a large force on a large area. Diamond is a very hard and virtually incompressible material, thus minimising the deformation and failure of the anvils that apply the force.
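The principle can be made concrete with a quick calculation. The numbers in the sketch below are arbitrary but representative (they are not taken from a specific experiment), and the culet face is idealised as a circle; it shows how a force of a few hundred to a few thousand newtons on a sub-millimetre culet already corresponds to pressures of tens to hundreds of gigapascals.
```python
import math

def anvil_pressure(force_n: float, culet_diameter_m: float) -> float:
    """Idealised pressure (Pa) from p = F / A, treating the culet as a circular face."""
    area = math.pi * (culet_diameter_m / 2.0) ** 2
    return force_n / area

culet = 150e-6                          # 150 micrometre culet diameter (illustrative)
for force in (500.0, 1000.0, 2000.0):   # newtons (illustrative)
    p = anvil_pressure(force, culet)
    print(f"F = {force:6.0f} N  ->  p ~ {p / 1e9:6.1f} GPa")
```
In practice the achievable pressure also depends on the gasket, the anvil alignment and the elastic deformation of the diamonds, so this simple estimate only indicates the scale.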
History.
The study of materials at extreme conditions, high pressure and high temperature uses a wide array of techniques to achieve these conditions and probe the behavior of material while in the extreme environment. Percy Williams Bridgman, the great pioneer of high-pressure research during the first half of the 20th century, revolutionized the field of high pressures with his development of an opposed anvil device with small flat areas that were pressed one against the other with a lever-arm. The anvils were made of tungsten carbide (WC). This device could achieve pressure of a few gigapascals, and was used in electrical resistance and compressibility measurements.
The first diamond anvil cell was created in 1957-1958. The principles of the DAC are similar to the Bridgman anvils, but in order to achieve the highest possible pressures without breaking the anvils, they were made of the hardest known material: a single crystal diamond. The first prototypes were limited in their pressure range and there was not a reliable way to calibrate the pressure.
The diamond anvil cell became the most versatile pressure-generating device, and a single characteristic sets it apart from the other pressure devices: its optical transparency. This provided the early high pressure pioneers with the ability to directly observe the properties of a material while under pressure. With just the use of an optical microscope, phase boundaries, color changes and recrystallization could be seen immediately, while x-ray diffraction or spectroscopy required time to expose and develop photographic film. The potential for the diamond anvil cell was realized by Alvin Van Valkenburg while he was preparing a sample for IR spectroscopy and was checking the alignment of the diamond faces.
The diamond cell was created at the National Bureau of Standards (NBS) by Charles E. Weir, Ellis R. Lippincott, and Elmer N. Bunting. Within the group, each member focused on different applications of the diamond cell. Van Valkenburg focused on making visual observations, Weir on XRD, Lippincott on IR Spectroscopy. The group members were well experienced in each of their techniques before they began outside collaboration with university researchers such as William A. Bassett and Taro Takahashi at the University of Rochester.
During the first experiments using diamond anvils, the sample was placed on the flat tip of the diamond (the culet) and pressed between the diamond faces. As the diamond faces were pushed closer together, the sample would be pressed and extrude out from the center. Using a microscope to view the sample, it could be seen that a smooth pressure gradient existed across the sample with the outermost portions of the sample acting as a kind of gasket. The sample was not evenly distributed across the diamond culet but localized in the center due to the "cupping" of the diamond at higher pressures. This cupping phenomenon is the elastic stretching of the edges of the diamond culet, commonly referred to as the "shoulder height". Many diamonds were broken during the first stages of producing a new cell or any time an experiment is pushed to higher pressure. The NBS group was in a unique position where almost endless supplies of diamonds were available to them. Customs officials occasionally confiscated diamonds from people attempting to smuggle them into the country. Disposing of such valuable confiscated materials could be problematic given rules and regulations. One solution was simply to make such materials available to people at other government agencies if they could make a convincing case for their use. This became an unrivaled resource as other teams at the University of Chicago, Harvard University, and General Electric entered the high pressure field.
During the following decades DACs have been successively refined, the most important innovations being the use of gaskets and the ruby pressure calibration. The DAC evolved to be the most powerful lab device for generating static high pressure. The range of static pressure attainable today extends to 640 GPa, much higher than the estimated pressures at the Earth's center (~360 GPa).
Components.
There are many different DAC designs but all have four main components:
Force-generating device.
Relies on the operation of either a lever arm, tightening screws, or pneumatic or hydraulic pressure applied to a membrane. In all cases the force is uniaxial and is applied to the tables (bases) of the two anvils.
Two opposing diamond anvils.
Made of high gem quality, flawless diamonds, usually with 16 facets, they typically weigh 1⁄8 to 1⁄3 carat (25 to 70 mg). The culet (tip) is ground and polished to a hexadecagonal surface parallel to the table. The culets of the two diamonds face one another, and must be perfectly parallel in order to produce uniform pressure and to prevent dangerous strains. Specially selected anvils are required for specific measurements – for example, low diamond absorption and luminescence is required in corresponding experiments.
Gasket.
A gasket used in a diamond anvil cell experiment is a thin metal foil, typically 0.3 mm in thickness, which is placed in between the diamonds. Desirable materials for gaskets are strong, stiff metals such as rhenium or tungsten. Steel is frequently used as a cheaper alternative for low pressure experiments. The above-mentioned materials cannot be used in radial geometries where the x-ray beam must pass through the gasket. Since they are not transparent to X-rays, if X-ray illumination through the gasket is required, lighter materials such as beryllium, boron nitride, boron or diamond are used as a gasket. Gaskets are preindented by the diamonds and a hole is drilled in the center of the indentation to create the sample chamber.
Pressure-transmitting medium.
The pressure transmitting medium is the compressible fluid that fills the sample chamber and transmits the applied force to the sample. Hydrostatic pressure is preferred for high-pressure experiments because variation in strain throughout the sample can lead to distorted observations of different behaviors. In some experiments stress and strain relationships are investigated and the effects of non-hydrostatic forces are desired. A good pressure medium will remain a soft, compressible fluid to high pressure.
The full range of techniques that are available has been summarized in a tree diagram by William Bassett. The ability to utilize any and all of these techniques hinges on being able to look through the diamonds which was first demonstrated by visual observations.
Measuring pressure.
The two main pressure scales used in static high-pressure experiments are X-ray diffraction of a material with a known equation of state and measuring the shift in ruby fluorescence lines. The first began with NaCl, for which the compressibility had been determined from first principles in 1968. The major pitfall of this method of measuring pressure is that the use of X-rays is required. Many experiments do not require X-rays, and having to conduct both the intended experiment and a diffraction experiment is a major inconvenience. In 1971, the NBS high pressure group set out in pursuit of a spectroscopic method for determining pressure. It was found that the wavelength of ruby fluorescence emissions changes with pressure; this was easily calibrated against the NaCl scale.
Once pressure could be generated and measured, it quickly became a competition to see whose cells could reach the highest pressures. The need for a reliable pressure scale became more important during this race. Shock-wave data for the compressibilities of Cu, Mo, Pd, and Ag were available at this time and could be used to define equations of state up to Mbar pressures. Using these scales, these pressures were reported:
Both methods are continually refined and in use today. However, the ruby method is less reliable at high temperature. Well defined equations of state are needed when adjusting temperature and pressure, two parameters that affect the lattice parameters of materials.
Uses.
Prior to the invention of the diamond anvil cell, static high-pressure apparatus required large hydraulic presses which weighed several tons and required large specialized laboratories. The simplicity and compactness of the DAC meant that it could be accommodated in a wide variety of experiments. Some contemporary DACs can easily fit into a cryostat for low-temperature measurements, and for use with a superconducting electromagnet. In addition to being hard, diamonds have the advantage of being transparent to a wide range of the electromagnetic spectrum from infrared to gamma rays, with the exception of the far ultraviolet and soft X-rays. This makes the DAC a perfect device for spectroscopic experiments and for crystallographic studies using hard X-rays.
A variant of the diamond anvil, the hydrothermal diamond anvil cell (HDAC) is used in experimental petrology/geochemistry for the study of aqueous fluids, silicate melts, immiscible liquids, mineral solubility and aqueous fluid speciation at geologic pressures and temperatures. The HDAC is sometimes used to examine aqueous complexes in solution using the synchrotron light source techniques XANES and EXAFS. The design of HDAC is very similar to that of DAC, but it is optimized for studying liquids.
Innovative uses.
An innovative use of the diamond anvil cell is testing the sustainability and durability of life under high pressures, including the search for life on extrasolar planets. Testing portions of the theory of panspermia (a form of interstellar travel) is one application of DAC. When interstellar objects containing life-forms impact a planetary body, there is high pressure upon impact and the DAC can replicate this pressure to determine if the organisms could survive. Another reason the DAC is applicable for testing life on extrasolar planets is that planetary bodies that hold the potential for life may have incredibly high pressure on their surface.
In 2002, scientists at the Carnegie Institution of Washington examined the pressure limits of life processes. Suspensions of bacteria, specifically "Escherichia coli" and "Shewanella oneidensis", were placed in the DAC, and the pressure was raised to 1.6 GPa, which is more than 16,000 times Earth's surface pressure (985 hPa). After 30 hours, only about 1% of the bacteria survived. The experimenters then added a dye to the solution. If the cells survived the squeezing and were capable of carrying out life processes, specifically breaking down formate, the dye would turn clear. 1.6 GPa is such great pressure that during the experiment the DAC turned the solution into ice-IV, a room-temperature ice. When the bacteria broke down the formate in the ice, liquid pockets would form because of the chemical reaction. The bacteria were also able to cling to the surface of the DAC with their tails.
Skeptics debated whether breaking down formate is enough to consider the bacteria living. Art Yayanos, an oceanographer at the Scripps Institute of Oceanography in La Jolla, California, believes an organism should only be considered living if it can reproduce. Subsequent results from independent research groups have shown the validity of the 2002 work. This is a significant step that reiterates the need for a new approach to the old problem of studying environmental extremes through experiments. There is practically no debate whether microbial life can survive pressures up to 600 MPa, which has been shown over the last decade or so to be valid through a number of scattered publications.
Similar tests were performed with a low-pressure (0.1–600 MPa) diamond anvil cell, which has better imaging quality and signal collection. The studied microbes, "Saccharomyces cerevisiae" (baker's yeast), continued to grow at pressures of 15–50 MPa, and died at 200 MPa.
Single crystal X-ray diffraction.
Good single crystal X-ray diffraction experiments in diamond anvil cells require the sample stage to rotate on the vertical axis, omega. Most diamond anvil cells do not feature a large opening that would allow the cell to be rotated to high angles; a 60-degree opening is considered sufficient for most crystals, but larger angles are possible. The first cell to be used for single crystal experiments was designed by a graduate student at the University of Rochester, Leo Merrill. The cell was triangular with beryllium seats that the diamonds were mounted on; the cell was pressurized with screws and guide pins holding everything in place.
High-temperature techniques.
Heating in diamond-anvil cells is typically done by two means, external or internal heating. External heating is defined as heating the anvils and would include a number of resistive heaters that are placed around the diamonds or around the cell body. The complementary method does not change the temperature of the anvils and includes fine resistive heaters placed within the sample chamber and laser heating. The main advantage to resistive heating is the precise measurement of temperature with thermocouples, but the temperature range is limited by the properties of the diamond which will oxidize in air at 700 °C The use of an inert atmosphere can extend this range above 1000 °C. A tungsten ring-wire resistive heater inside a BX90 DAC filled with Ar gas was reported to reach 1400 °C. With laser heating the sample can reach temperature above 5000 °C, but the minimum temperature that can be measured when using a laser-heating system is ~1200 °C and the measurement is much less precise. Advances in resistive heating are closing the gap between the two techniques so that systems can be studied from room temperature to beyond 5700 °C with the combination of the two.
Laser heating.
The development of laser heating began only 8 years after Charles Weir, of the National Bureau of Standards (NBS), made the first diamond anvil cell and Alvin Van Valkenburg, NBS, realized the potential of being able to see the sample while under pressure. William Bassett and his colleague Taro Takahashi focused a laser beam on the sample while under pressure. The first laser heating system used a single 7 joule pulsed ruby laser that heated the sample to 3000 °C while at 260 kilobars. This was sufficient to convert graphite to diamond. The major flaws within the first system related to control and temperature measurement.
Temperature measurement was initially done by Bassett using an optical pyrometer to measure the intensity of the incandescent light from the sample. Colleagues at UC Berkeley were better able to utilize the black-body radiation and more accurately measure the temperature. The hot spot produced by the laser also created large thermal gradients between the portions of the sample that were hit by the focused laser and those that were not. The solution to this problem is ongoing, but advances have been made with the introduction of a double-sided approach.
Double-sided heating.
The use of two lasers to heat the sample reduces the axial temperature gradient, which allows thicker samples to be heated more evenly. In order for a double-sided heating system to be successful, it is essential that the two lasers are aligned so that they are both focused on the sample position. For in situ heating in diffraction experiments, the lasers need to be focused to the same point in space where the X-ray beam is focused.
Laser heating systems at synchrotron facilities.
The European Synchrotron Radiation Facility (ESRF), as well as many other synchrotron facilities, such as the three major synchrotron user facilities in the United States, all have beamlines equipped with laser heating systems. The respective beamlines with laser heating systems are at the ESRF ID27, ID18, and ID24; at the Advanced Photon Source (APS), 13-ID-D GSECARS and 16-ID-B HP-CAT; at the National Synchrotron Light Source, X17B3; and at the Advanced Light Source, 12.2.2. Laser heating has become a routine technique in high-pressure science, but the reliability of temperature measurement is still controversial.
Temperature measurement.
In the first experiments with laser heating, temperature came from a calibration of laser power made with known melting points of various materials. When using the pulsed ruby laser this was unreliable due to the short pulse. YAG lasers quickly became the standard, heating for a relatively long duration and allowing observation of the sample throughout the heating process. It was with the first use of YAG lasers that Bassett used an optical pyrometer to measure temperatures in the range of 1000 °C to 1600 °C. The first temperature measurements had a standard deviation of 30 °C from the brightness temperature, but due to the small sample size it was estimated to be 50 °C, with the possibility that the true temperature of the sample was 200 °C higher than the brightness measurement. Spectrometry of the incandescent light became the next method of temperature measurement used in Bassett's group. The energy of the emitted radiation could be compared to known black-body radiation spectra to derive a temperature. Calibration of these systems is done with published melting points or melting points as measured by resistive heating.
Gas loading.
Principle.
The pressure transmitting medium is an important component in any high-pressure experiment. The medium fills the space within the sample 'chamber' and applies the pressure being transmitted to the medium onto the sample. In a good high-pressure experiment, the medium should maintain a homogeneous distribution of pressure on the sample. In other words, the medium must stay hydrostatic to ensure uniform compressibility of the sample. Once a pressure transmitting medium has lost its hydrostaticity, a pressure gradient forms in the chamber that increases with increasing pressure. This gradient can greatly affect the sample, compromising results. The medium must also be inert, as to not interact with the sample, and stable under high pressures. For experiments with laser heating, the medium should have low thermal conductivity. If an optical technique is being employed, the medium should be optically transparent and for x-ray diffraction, the medium should be a poor x-ray scatterer – as to not contribute to the signal.
Some of the most commonly used pressure transmitting media have been sodium chloride, silicone oil, and a 4:1 methanol-ethanol mixture. Sodium chloride is easy to load and is used for high-temperature experiments because it acts as a good thermal insulator. The methanol-ethanol mixture displays good hydrostaticity to about 10 GPa and with the addition of a small amount of water can be extended to about 15 GPa.
For pressure experiments that exceed 10 GPa, noble gases are preferred. The extended hydrostaticity greatly reduces the pressure gradient in samples at high pressure. Noble gases, such as helium, neon, and argon are optically transparent, thermally insulating, have small X-ray scattering factors, and have good hydrostaticity at high pressures. Even after solidification, noble gases provide quasihydrostatic environments.
Argon is used for experiments involving laser heating because it is chemically insulating. Since it condenses at a temperature above that of liquid nitrogen, it can be loaded cryogenically. Helium and neon have low X-ray scattering factors and are thus used for collecting X-ray diffraction data. Helium and neon also have low shear moduli; minimizing strain on the sample. These two noble gases do not condense above that of liquid nitrogen and cannot be loaded cryogenically. Instead, a high-pressure gas loading system has been developed that employs a gas compression method.
Techniques.
In order to load a gas as a sample of pressure transmitting medium, the gas must be in a dense state, as to not shrink the sample chamber once pressure is induced. To achieve a dense state, gases can be liquefied at low temperatures or compressed. Cryogenic loading is a technique that uses liquefied gas as a means of filling the sample chamber. The DAC is directly immersed into the cryogenic fluid that fills the sample chamber. However, there are disadvantages to cryogenic loading. With the low temperatures indicative of cryogenic loading, the sample is subjected to temperatures that could irreversibly change it. Also, the boiling liquid could displace the sample or trap an air bubble in the chamber. It is not possible to load gas mixtures using the cryogenic method due to the different boiling points of most gases. Gas compression technique densifies the gases at room temperature. With this method, most of the problems seen with cryogenic loading are fixed. Also, loading gas mixtures becomes a possibility. The technique uses a vessel or chamber in which the DAC is placed and is filled with gas. Gases are pressurized and pumped into the vessel with a compressor. Once the vessel is filled and the desired pressure is reached the DAC is closed with a clamp system run by motor driven screws.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p = \\frac{F}{A},"
}
]
| https://en.wikipedia.org/wiki?curid=774220 |
77423961 | Oklab color space | Standard color space with color-opponent values
The Oklab color space is a uniform color space for device independent color designed to improve perceptual uniformity, hue and lightness prediction, color blending, and usability while ensuring numerical stability and ease of implementation. Introduced by Björn Ottosson in December 2020, Oklab and its cylindrical counterpart, Oklch, have been included in the CSS Color Level 4 and Level 5 drafts for device-independent web colors since December 2021. They are supported by recent versions of major web browsers and allow the specification of wide-gamut P3 colors.
Oklab's model is fitted with improved color appearance data: CAM16 data for lightness and chroma, and IPT data for hue. The new fit addresses issues such as unexpected hue and lightness changes in blue colors present in the CIELAB color space, simplifying the creation of color schemes and smoother color gradients.
Coordinates.
Oklab uses the same spatial structure as CIELAB, representing color using three components:
* "L" for perceived lightness;
* "a" for the green–red opponent axis (negative values toward green, positive values toward red);
* "b" for the blue–yellow opponent axis (negative values toward blue, positive values toward yellow).
Like CIELCh, Oklch represents colors using:
* "L" for perceived lightness;
* "C" for chroma, the distance from the achromatic (grey) axis;
* "h" for the hue angle.
Achromatic colors.
Neutral greys, pure black and the reference white are achromatic, that is, formula_0, formula_1, formula_2, and "h" is undefined. Assigning any real value to their hue component has no effect on conversions between color spaces.
While pure black is a coordinate singularity in Oklab and Oklch spaces, high-dynamic-range displays can display many chromatic colors (formula_3) at formula_4 (the lightness of the reference white) due to the extended gamut.
Color differences.
The perceptual color difference in Oklab is calculated as the Euclidean distance between the ("L", "a", "b") coordinates.
Conversions between color spaces.
Conversion to and from Oklch.
Like CIELCh, the Cartesian coordinates "a" and "b" are converted to the polar coordinates "C" and "h" as follows:
formula_5
And the polar coordinates are converted to the Cartesian coordinates as follows:
formula_6
Conversion from CIE XYZ.
Converting from CIE XYZ with a Standard Illuminant D65 involves:
* a linear map from XYZ to an LMS-like cone-response space: formula_7
* a cube-root non-linearity applied to each component: formula_8
* a second linear map to the "L", "a", "b" coordinates: formula_9
Given:
formula_10
Conversion from sRGB.
Converting from sRGB requires first converting from sRGB to CIE XYZ with a Standard Illuminant D65. As the last step of this conversion is a linear map from linear RGB to CIE XYZ, the reference implementation directly employs the multiplied matrix representing the composition of the two linear maps:
formula_11
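A minimal Python sketch of this direction of the conversion (illustrative, not the reference implementation; it assumes linear-sRGB input in [0, 1] and reuses the matrix above together with M2 from formula_10):
import numpy as np

M_SRGB_TO_LMS = np.array([
    [0.4122214708, 0.5363325363, 0.0514459929],
    [0.2119034982, 0.6806995451, 0.1073969566],
    [0.0883024619, 0.2817188376, 0.6299787005],
])
M2 = np.array([
    [0.2104542553,  0.7936177850, -0.0040720468],
    [1.9779984951, -2.4285922050,  0.4505937099],
    [0.0259040371,  0.7827717662, -0.8086757660],
])

def linear_srgb_to_oklab(rgb):
    # rgb: array-like of linear sRGB components; returns (L, a, b)
    lms = M_SRGB_TO_LMS @ np.asarray(rgb, dtype=float)
    lms_prime = np.cbrt(lms)          # componentwise cube root
    return M2 @ lms_prime

print(linear_srgb_to_oklab([1.0, 1.0, 1.0]))   # white: approximately (1, 0, 0)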
Conversion to CIE XYZ and sRGB.
Converting to CIE XYZ and sRGB simply involves applying the respective inverse functions in reverse order:
formula_12
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C = 0"
},
{
"math_id": 1,
"text": "a = 0"
},
{
"math_id": 2,
"text": "b = 0"
},
{
"math_id": 3,
"text": "C > 0 "
},
{
"math_id": 4,
"text": "L = 1"
},
{
"math_id": 5,
"text": "\n\\begin{align}\nC &= \\sqrt{a^2 + b^2} \\\\\nh &= \\operatorname{atan2}(b, a)\n\\end{align}\n"
},
{
"math_id": 6,
"text": "\n\\begin{align}\na &= C \\cos(h) \\\\\nb &= C \\sin(h)\n\\end{align}\n"
},
{
"math_id": 7,
"text": "\\begin{bmatrix} l \\\\ m \\\\ s \\end{bmatrix} = \\mathbf M_1 \\begin{bmatrix} X \\\\ Y \\\\ Z \\end{bmatrix}"
},
{
"math_id": 8,
"text": "\\begin{bmatrix} l' \\\\ m' \\\\ s' \\end{bmatrix} = \\begin{bmatrix} l^{1/3} \\\\ m^{1/3} \\\\ s^{1/3} \\end{bmatrix}"
},
{
"math_id": 9,
"text": "\\begin{bmatrix} L \\\\ a \\\\ b \\end{bmatrix} = \\mathbf M_2 \\begin{bmatrix} l' \\\\ m' \\\\ s' \\end{bmatrix}"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n\\mathbf M_1 &= \\begin{bmatrix}\n0.8189330101 & \\phantom{-}0.3618667424 & -0.1288597137 \\\\\n0.0329845436 & \\phantom{-}0.9293118715 & \\phantom{-}0.0361456387 \\\\\n0.0482003018 & \\phantom{-}0.2643662691 & \\phantom{-}0.6338517070\n\\end{bmatrix} \\\\\n\\mathbf M_2 &= \\begin{bmatrix}\n0.2104542553 & \\phantom{-}0.7936177850 & -0.0040720468 \\\\\n1.9779984951 & -2.4285922050 & \\phantom{-}0.4505937099 \\\\\n0.0259040371 & \\phantom{-}0.7827717662 & -0.8086757660\n\\end{bmatrix}\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\n\\begin{bmatrix} l \\\\ m \\\\ s \\end{bmatrix} =\n\\begin{bmatrix}\n0.4122214708 & 0.5363325363 & 0.0514459929 \\\\\n0.2119034982 & 0.6806995451 & 0.1073969566 \\\\\n0.0883024619 & 0.2817188376 & 0.6299787005\n\\end{bmatrix}\n\\begin{bmatrix} R_\\text{linear} \\\\ G_\\text{linear} \\\\ B_\\text{linear} \\end{bmatrix}\n"
},
{
"math_id": 12,
"text": "\n\\begin{align}\n\\begin{bmatrix} l' \\\\ m' \\\\ s' \\end{bmatrix} &= \\mathbf M_2^{-1} \\begin{bmatrix} L \\\\ a \\\\ b \\end{bmatrix} \\\\\n\\begin{bmatrix} l \\\\ m \\\\ s \\end{bmatrix} &= \\begin{bmatrix} \\left(l'\\right)^3 \\\\ \\left(m'\\right)^3 \\\\ \\left(s'\\right)^3 \\end{bmatrix} \\\\\n\\begin{bmatrix} X \\\\ Y \\\\ Z \\end{bmatrix} &= \\mathbf M_1^{-1} \\begin{bmatrix} l' \\\\ m' \\\\ s' \\end{bmatrix}\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=77423961 |
77424313 | Cauchy's estimate | In mathematics, specifically in complex analysis, Cauchy's estimate gives local bounds for the derivatives of a holomorphic function. These bounds are optimal.
Statement and consequence.
Let formula_0 be a holomorphic function on the open ball formula_1 in formula_2. If formula_3 is the sup of formula_4 over formula_1, then Cauchy's estimate says: for each integer formula_5,
formula_6
where formula_7 is the "n"-th complex derivative of formula_0; i.e., formula_8 and formula_9.
Moreover, taking formula_10 shows the above estimate cannot be improved.
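A small numerical illustration (not part of the source): for this extremal example, evaluating the derivative at the origin through Cauchy's integral formula reproduces the bound exactly.
import cmath, math

def nth_derivative(f, a, n, r, samples=4096):
    # f^(n)(a) via Cauchy's integral formula on the circle |w - a| = r
    acc = 0j
    for k in range(samples):
        theta = 2 * math.pi * k / samples
        w = a + r * cmath.exp(1j * theta)
        acc += f(w) / (w - a) ** (n + 1) * (1j * r * cmath.exp(1j * theta))
    acc *= 2 * math.pi / samples                  # d(theta)
    return math.factorial(n) * acc / (2j * math.pi)

n, r = 5, 1.0
f = lambda z: z ** n                              # sup of |f| over B(0, 1) is M = 1
print(abs(nth_derivative(f, 0j, n, r)), math.factorial(n) / r ** n)  # ~120 and 120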
As a corollary, for example, we obtain Liouville's theorem, which says a bounded entire function is constant (indeed, let formula_11 in the estimate.) Slightly more generally, if formula_0 is an entire function bounded by formula_12 for some constants formula_13 and some integer formula_14, then formula_0 is a polynomial.
Proof.
We start with Cauchy's integral formula applied to formula_0, which gives for formula_15 with formula_16,
formula_17
where formula_18. By the differentiation under the integral sign (in the complex variable), we get:
formula_19
Thus,
formula_20
Letting formula_21 finishes the proof. formula_22
Related estimate.
Here is a somehow more general but less precise estimate. It says: given an open subset formula_23, a compact subset formula_24 and an integer formula_5, there is a constant formula_25 such that for every holomorphic function formula_0 on formula_26,
formula_27
where formula_28 is the Lebesgue measure.
This estimate follows from Cauchy's integral formula (in the general form) applied to formula_29 where formula_30 is a smooth function that is formula_31 on a neighborhood of formula_32 and whose support is contained in formula_26. Indeed, shrinking formula_26, assume formula_26 is bounded and the boundary of it is piecewise-smooth. Then, since formula_33, by the integral formula,
formula_34
for formula_15 in formula_26 (since formula_32 can be a point, we cannot assume formula_15 is in formula_32). Here, the first term on the right is zero since the support of formula_35 lies in formula_26. Also, the support of formula_36 is contained in formula_37. Thus, after the differentiation under the integral sign, the claimed estimate follows.
As an application of the above estimate, we can obtain the Stieltjes–Vitali theorem, which says that a sequence of holomorphic functions on an open subset formula_23 that is bounded on each compact subset has a subsequence converging on each compact subset (necessarily to a holomorphic function since the limit satisfies the Cauchy–Riemann equations). Indeed, the estimate implies such a sequence is equicontinuous on each compact subset; thus, Ascoli's theorem and the diagonal argument give the claimed subsequence.
In several variables.
Cauchy's estimate is also valid for holomorphic functions in several variables. Namely, for a holomorphic function formula_0 on a polydisc formula_38, we have: for each multiindex formula_39,
formula_40
where formula_41, formula_42 and formula_43.
As in the one-variable case, this follows from Cauchy's integral formula in polydiscs. The related estimate and its consequence also continue to be valid in several variables, with the same proofs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "B(a, r)"
},
{
"math_id": 2,
"text": "\\mathbb C"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "|f|"
},
{
"math_id": 5,
"text": "n > 0"
},
{
"math_id": 6,
"text": "|f^{(n)}(a)| \\le \\frac{n!}{r^n} M"
},
{
"math_id": 7,
"text": "f^{(n)}"
},
{
"math_id": 8,
"text": "f' = \\frac{\\partial f}{\\partial z}"
},
{
"math_id": 9,
"text": "f^{(n)} = (f^{(n-1)})^'"
},
{
"math_id": 10,
"text": "f(z) = z^n, a = 0, r = 1"
},
{
"math_id": 11,
"text": "r \\to \\infty"
},
{
"math_id": 12,
"text": "A + B|z|^k"
},
{
"math_id": 13,
"text": "A, B"
},
{
"math_id": 14,
"text": "k > 0"
},
{
"math_id": 15,
"text": "z"
},
{
"math_id": 16,
"text": "| z - a | < r'"
},
{
"math_id": 17,
"text": "f(z) = \\frac{1}{2\\pi i} \\int_{|w-a| = r'} \\frac{f(w)}{w - z} \\, dw"
},
{
"math_id": 18,
"text": "r' < r"
},
{
"math_id": 19,
"text": "f^{(n)}(z) = \\frac{n!}{2\\pi i} \\int_{|w-a| = r'} \\frac{f(w)}{(w - z)^{n+1}} \\, dw."
},
{
"math_id": 20,
"text": "|f^{(n)}(a)| \\le \\frac{n!M}{2\\pi} \\int_{|w-a| = r'} \\frac{|dw|}{|w - a|^{n+1}} = \\frac{n!M}{{r'}^n}."
},
{
"math_id": 21,
"text": "r' \\to r"
},
{
"math_id": 22,
"text": "\\square"
},
{
"math_id": 23,
"text": "U \\subset \\mathbb{C}"
},
{
"math_id": 24,
"text": "K \\subset U"
},
{
"math_id": 25,
"text": "C"
},
{
"math_id": 26,
"text": "U"
},
{
"math_id": 27,
"text": "\\sup_{K} |f^{(n)}| \\le C \\int_U |f| \\, d\\mu"
},
{
"math_id": 28,
"text": "d\\mu"
},
{
"math_id": 29,
"text": "u =\\psi f"
},
{
"math_id": 30,
"text": "\\psi"
},
{
"math_id": 31,
"text": "=1"
},
{
"math_id": 32,
"text": "K"
},
{
"math_id": 33,
"text": "\\partial u / \\partial \\overline{z} = f \\partial \\psi / \\partial \\overline{z}"
},
{
"math_id": 34,
"text": "u(z) = \\frac{1}{2\\pi i} \\int_{\\partial U} \\frac{u(z)}{w - z} \\, dw + \\frac{1}{2\\pi i} \\int_U \\frac{f(w) \\partial \\psi/\\partial \\overline{w} (w)}{w - z} \\, dw \\wedge d\\overline{w}"
},
{
"math_id": 35,
"text": "u"
},
{
"math_id": 36,
"text": "\\partial \\psi/\\partial \\overline{w}"
},
{
"math_id": 37,
"text": "U - K"
},
{
"math_id": 38,
"text": "U = \\prod_1^n B(a_j, r_j) \\subset \\mathbb{C}^n"
},
{
"math_id": 39,
"text": "\\alpha \\in \\mathbb{N}^n"
},
{
"math_id": 40,
"text": "\\left |\\left(\\frac{\\partial}{\\partial z}^{\\alpha} f\\right) (a) \\right| \\le \\frac{\\alpha!}{r^{\\alpha}} \\sup_U |f|"
},
{
"math_id": 41,
"text": "a = (a_1, \\dots, a_n)"
},
{
"math_id": 42,
"text": "\\alpha! = \\prod {\\alpha}_j!"
},
{
"math_id": 43,
"text": "r^{\\alpha} = \\prod r_j^{\\alpha_j}"
}
]
| https://en.wikipedia.org/wiki?curid=77424313 |
77426205 | Sierpiński's theorem on metric spaces | In mathematics, Sierpiński's theorem is an isomorphism theorem concerning certain metric spaces, named after Wacław Sierpiński who proved it in 1920.
It states that any countable metric space without isolated points is homeomorphic to formula_0 (with its standard topology).
Examples.
As a consequence of the theorem, the metric space formula_1 (with its usual Euclidean distance) is homeomorphic to formula_0, which may seem counterintuitive. This is in contrast to, e.g., formula_2, which is not homeomorphic to formula_3. As another example, formula_4 is also homeomorphic to formula_0, again in contrast to the closed real interval formula_5, which is not homeomorphic to formula_3 (whereas the open interval formula_6 is). | [
{
"math_id": 0,
"text": "\\mathbb{Q}"
},
{
"math_id": 1,
"text": "\\mathbb{Q}^2"
},
{
"math_id": 2,
"text": "\\mathbb{R}^2"
},
{
"math_id": 3,
"text": "\\mathbb{R}"
},
{
"math_id": 4,
"text": "\\mathbb{Q} \\cap [0, 1]"
},
{
"math_id": 5,
"text": "[0, 1]"
},
{
"math_id": 6,
"text": "(0, 1)"
}
]
| https://en.wikipedia.org/wiki?curid=77426205 |
77428152 | Quantum double model | In condensed matter physics and quantum information theory, the quantum double model, proposed by Alexei Kitaev, is a lattice model that exhibits topological excitations. This model can be regarded as a lattice gauge theory, and it has applications in many fields, such as topological quantum computation, topological order, topological quantum memory, and quantum error-correcting codes. The name "quantum double" comes from the Drinfeld double construction for finite groups and Hopf algebras. The best-known example is the toric code model, which is the special case of the quantum double model obtained by taking the input group to be the cyclic group formula_0.
Kitaev quantum double model.
The input data for the Kitaev quantum double is a finite group formula_1. Consider a directed lattice formula_2; on each edge we place a Hilbert space formula_3 spanned by the group elements. There are four types of edge operators:
formula_4
formula_5
For each vertex connecting to formula_6 edges formula_7, there is a vertex operator
formula_8
Notice that each edge has an orientation: when formula_9 is the starting point of formula_10, the operator is set to formula_11; otherwise, it is set to formula_12.
For each face surrounded by formula_6 edges formula_7, there is a face operator
formula_13
Similar to the vertex operator, due to the orientation of the edge, when face formula_14 is on the right-hand side when traversing the positive direction of formula_15, we set formula_16; otherwise, we set formula_17 in the above expression. Also, note that the order of edges surrounding the face is assumed to be counterclockwise.
The lattice Hamiltonian of quantum double model is given by
formula_18
Both formula_19 and formula_20 are Hermitian projectors; they are stabilizers when the model is regarded as a quantum error-correcting code.
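A minimal numerical sketch (an illustration, not from the source) of these operators for the cyclic group Z_N: it builds the edge operators as matrices on the group algebra and checks that the vertex operator is a Hermitian projector. For simplicity every edge at the vertex is acted on by the left-multiplication operator L_+; using L_- on some edges, as the orientation prescribes, yields a projector in the same way.
import numpy as np
from functools import reduce

N = 2          # Z_2 reproduces the toric-code case; any N works
m = 3          # number of edges meeting at the vertex (illustrative)

def L_plus(g):
    # L_+^g |h> = |g + h mod N>, an N x N permutation matrix
    M = np.zeros((N, N))
    for h in range(N):
        M[(g + h) % N, h] = 1.0
    return M

def T_plus(g):
    # T_+^g |h> = delta_{g,h} |h>, a diagonal projector (used in face operators)
    M = np.zeros((N, N))
    M[g, g] = 1.0
    return M

def kron_all(mats):
    return reduce(np.kron, mats)

# Vertex operator A_v = (1/|G|) * sum_g  L^g(e_1) x ... x L^g(e_m)
A_v = sum(kron_all([L_plus(g)] * m) for g in range(N)) / N

assert np.allclose(A_v @ A_v, A_v)     # projector
assert np.allclose(A_v, A_v.T)         # Hermitian (real symmetric here)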
The topological excitations of the model are characterized by the representations of the quantum double of the finite group formula_1. The anyon types are given by the irreducible representations. For the lattice model, the topological excitations are created by ribbon operators.
The gapped boundary theory of quantum double model can be constructed based on subgroups of formula_1. There is a boundary-bulk duality for this model.
The topological excitations of the model are equivalent to those of the Levin-Wen string-net model whose input is the representation category of the finite group formula_1.
Hopf quantum double model.
The quantum double model can be generalized to the case where the input data is given by a C* Hopf algebra. In this case, the face and vertex operators are constructed using the comultiplication of Hopf algebra. For each vertex, the Haar integral of the input Hopf algebra is used to construct the vertex operator. For each face, the Haar integral of the dual Hopf algebra of the input Hopf algebra is used to construct the face operator.
The topological excitations are created by ribbon operators.
Weak Hopf quantum double model.
A more general case arises when the input data is chosen as a weak Hopf algebra, resulting in the weak Hopf quantum double model.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{Z}_2"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "\\Sigma"
},
{
"math_id": 3,
"text": "\\mathbb{C}[G]"
},
{
"math_id": 4,
"text": "L_+^g|h\\rangle =|gh\\rangle, L_-^g|h\\rangle =|hg^{-1}\\rangle,"
},
{
"math_id": 5,
"text": "T_+^g|h\\rangle =\\delta_{g,h}|h\\rangle, L_-^g|h\\rangle =T_{g^{-1},h}|h\\rangle."
},
{
"math_id": 6,
"text": "m"
},
{
"math_id": 7,
"text": "e_1,\\ldots,e_m"
},
{
"math_id": 8,
"text": "A_v=\\frac{1}{|G|}\\sum_{g\\in G}L^g(e_1)\\otimes\\ldots\\otimes L^g(e_m)."
},
{
"math_id": 9,
"text": "v"
},
{
"math_id": 10,
"text": "e_k"
},
{
"math_id": 11,
"text": "L_-"
},
{
"math_id": 12,
"text": "L_+"
},
{
"math_id": 13,
"text": "B_f=\\sum_{h_1\\cdots h_m=1_G}\\prod_{k=1}^m T^{h_k}(e_k)."
},
{
"math_id": 14,
"text": "f"
},
{
"math_id": 15,
"text": "e"
},
{
"math_id": 16,
"text": "T_+"
},
{
"math_id": 17,
"text": "T_{-}"
},
{
"math_id": 18,
"text": "H=-\\sum_v A_v-\\sum_f B_f."
},
{
"math_id": 19,
"text": "A_v"
},
{
"math_id": 20,
"text": "B_f"
}
]
| https://en.wikipedia.org/wiki?curid=77428152 |
77428270 | Hemitesseract | Abstract regular 4-polytope with 4 cubic cells
In abstract geometry, a hemitesseract is an abstract, regular 4-polytope, containing half the cells of a tesseract and existing in real projective space, RP3.
Realization.
It has four cubic cells, 12 square faces, 16 edges, and 8 vertices. It has an unexpected property that every cell is in contact with every other cell on two faces, and every cell contains all the vertices, which gives an example of an abstract polytope whose faces are not determined by their vertex sets.
As a graph.
From the point of view of graph theory, the skeleton is a cubic graph (the graph of the cube) with 4 diagonal central edges added.
It is also the complete bipartite graph K4,4, and the regular complex polygon 2{4}4, a generalized cross polytope.
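A quick computational check of this identification (an illustration, not from the source):
from itertools import product

verts = list(product((0, 1), repeat=3))
cube_edges = {frozenset((u, v)) for u in verts for v in verts
              if sum(a != b for a, b in zip(u, v)) == 1}              # 12 cube edges
diagonals = {frozenset((u, tuple(1 - x for x in u))) for u in verts}  # 4 antipodal edges
edges = cube_edges | diagonals
assert len(cube_edges) == 12 and len(diagonals) == 4 and len(edges) == 16

even = {v for v in verts if sum(v) % 2 == 0}   # one side of the bipartition
odd = set(verts) - even
# Every even-odd pair is an edge and no edge stays within one side: this is K4,4
assert edges == {frozenset((u, v)) for u in even for v in odd}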
As a configuration.
This configuration matrix represents the hemitesseract. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole hemitesseract. The nondiagonal numbers say how many of the column's element occur in or at the row's element. For example, the 2 in the first column of the second row indicates that there are 2 vertices in (i.e., at the extremes of) each edge; the 4 in the second column of the first row indicates that 4 edges meet at each vertex.
formula_0
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{bmatrix}\\begin{matrix}8 & 4 & 6 & 4 \\\\ 2 & 16 & 3 & 3 \\\\ 4 & 4 & 12 & 2 \\\\ 8 & 12 & 6 & 4 \\end{matrix}\\end{bmatrix}"
}
]
| https://en.wikipedia.org/wiki?curid=77428270 |
77429163 | De Rham theorem | In mathematics, more specifically in differential geometry, the de Rham theorem says that the ring homomorphism from the de Rham cohomology to the singular cohomology given by integration is an isomorphism.
The Poincaré lemma implies that the de Rham cohomology is the sheaf cohomology with the constant sheaf formula_0. Thus, for abstract reasons, the de Rham cohomology is isomorphic as a group to the singular cohomology. But the de Rham theorem gives a more explicit isomorphism between the two cohomologies, thus connecting analysis and topology more directly.
Statement.
The key part of the theorem is a construction of the de Rham homomorphism. Let "M" be a manifold. Then there is a map
formula_1
from the space of differential "p"-forms to the space of smooth singular "p"-cochains
given by
formula_2
Stokes' formula implies: formula_3; i.e., formula_4 is a chain map and so it induces:
formula_5
where these cohomologies are the cohomologies of formula_6 and formula_7, respectively. As it turns out, formula_8 is a ring homomorphism and is called the de Rham homomorphism.
Finally, the theorem says that the induced homomorphism formula_8 is an isomorphism (i.e., bijective).
There is also a variant of the theorem that says the de Rham cohomology of "M" is isomorphic as a ring with the Čech cohomology of it. This Čech version is essentially due to André Weil.
Singular-homology version.
There is also a version of the theorem involving singular homology instead of cohomology. It says the pairing
formula_9
induces a perfect pairing between the de Rham cohomology and the (smooth) singular homology; namely,
formula_10
is an isomorphism of vector spaces.
This theorem has the following consequence (familiar from calculus): a closed differential form is exact if and only if its integrals over arbitrary cycles are all zero. For a one-form, this means that a closed one-form formula_11 is exact (i.e., admits a potential function) if and only if formula_12 is independent of the choice of path formula_13. This is exactly a statement from calculus.
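A numerical illustration of this criterion (not from the source): on the punctured plane, the closed one-form (-y dx + x dy)/(x^2 + y^2) has a nonzero integral over the unit circle, so it admits no potential function there.
import math

def integrate_over_circle(samples=100000):
    # line integral of (-y dx + x dy)/(x^2 + y^2) along the unit circle
    total = 0.0
    for k in range(samples):
        t0 = 2 * math.pi * k / samples
        t1 = 2 * math.pi * (k + 1) / samples
        x0, y0 = math.cos(t0), math.sin(t0)
        x1, y1 = math.cos(t1), math.sin(t1)
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2      # midpoint rule on each chord
        r2 = xm * xm + ym * ym
        total += (-ym * (x1 - x0) + xm * (y1 - y0)) / r2
    return total

print(integrate_over_circle(), 2 * math.pi)   # both approximately 6.2831...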
Current version.
There is also a current (a differential form with distributional coefficients) version of the de Rham theorem, which says the singular cohomology can be computed as the cohomology of the complex of currents. This version is weaker in the sense that the isomorphism is not a ring homomorphism (since currents cannot be multiplied and so the space of currents is not a ring).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}"
},
{
"math_id": 1,
"text": "k : \\Omega^p(M) \\to S^p_{\\mathcal{C}^{\\infty}}(M)"
},
{
"math_id": 2,
"text": "\\omega \\mapsto \\left(\\sigma \\mapsto \\int_{\\sigma} \\omega \\right)."
},
{
"math_id": 3,
"text": "k \\circ d = \\partial \\circ k"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "[k] : \\operatorname{H}_{\\textrm{de Rham}}^*(M) \\to \\operatorname{H}^*_{\\mathrm{sing}}(M)"
},
{
"math_id": 6,
"text": "\\Omega^*(M)"
},
{
"math_id": 7,
"text": "S^*_{\\mathcal{C}^{\\infty}}(M)"
},
{
"math_id": 8,
"text": "[k]"
},
{
"math_id": 9,
"text": "(\\omega, \\sigma) \\mapsto \\int_{\\sigma} \\omega"
},
{
"math_id": 10,
"text": "\\operatorname{H}^*_{\\mathrm{deRham}}(M) \\to \\operatorname{H}_*^{\\mathrm{sing}}(M)^*, \\, [\\omega] \\mapsto \\left([\\sigma] \\mapsto \\int_{\\sigma} \\omega \\right)"
},
{
"math_id": 11,
"text": "\\omega"
},
{
"math_id": 12,
"text": "\\int_{\\gamma} \\omega"
},
{
"math_id": 13,
"text": "\\gamma"
}
]
| https://en.wikipedia.org/wiki?curid=77429163 |
77444635 | Fay-Riddell equation | The Fay-Riddell equation is a fundamental relation in the fields of aerospace engineering and hypersonic flow, which provides a method to estimate the stagnation point heat transfer rate on a blunt body moving at hypersonic speeds in dissociated air. The heat flux for a spherical nose is computed according to quantities at the wall and the edge of an equilibrium boundary layer.
formula_0
where formula_1 is the Prandtl number, formula_2 is the Lewis number, formula_3 is the stagnation enthalpy at the boundary layer's edge, formula_4 is the wall enthalpy, formula_5 is the enthalpy of dissociation, formula_6 is the air density, formula_7 is the dynamic viscosity, and formula_8 is the velocity gradient at the stagnation point. According to Newtonian hypersonic flow theory, the velocity gradient should be:
formula_9
where formula_10 is the nose radius, formula_11 is the pressure at the edge, and formula_12 is the free stream pressure. The equation was developed by James Fay and Francis Riddell in the late 1950s. Their work addressed the critical need for accurate predictions of aerodynamic heating to protect spacecraft during re-entry, and is considered to be a pioneering work in the analysis of chemically reacting viscous flow.
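A hedged Python sketch of evaluating these two relations (an illustration only; the numerical inputs below are placeholders rather than values from the source, and consistent SI units are assumed):
import math

def stagnation_velocity_gradient(R, p_e, p_inf, rho_e):
    # (du_e/dx)_s = (1/R) * sqrt(2 (p_e - p_inf) / rho_e)
    return math.sqrt(2.0 * (p_e - p_inf) / rho_e) / R

def fay_riddell_heat_flux(Pr, Le, rho_e, mu_e, rho_w, mu_w,
                          h0_e, h_w, h_D, due_dx):
    # equilibrium-boundary-layer form of the Fay-Riddell equation
    return (0.763 * Pr ** -0.6
            * (rho_e * mu_e) ** 0.4
            * (rho_w * mu_w) ** 0.1
            * math.sqrt(due_dx)
            * (h0_e - h_w)
            * (1.0 + (Le ** 0.52 - 1.0) * (h_D / h0_e)))

# Illustrative placeholder numbers only:
due_dx = stagnation_velocity_gradient(R=0.3, p_e=5.0e4, p_inf=50.0, rho_e=0.05)
q_w = fay_riddell_heat_flux(Pr=0.71, Le=1.4, rho_e=0.05, mu_e=5e-5,
                            rho_w=0.5, mu_w=2e-5, h0_e=2.0e7, h_w=1.0e6,
                            h_D=1.0e7, due_dx=due_dx)
print(q_w)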
Assumptions.
The Fay-Riddell equation is derived under several assumptions:
Extensions.
While the Fay-Riddell equation was derived for an equilibrium boundary layer, it is possible to extend the results to a chemically frozen boundary layer with either an equilibrium catalytic wall or a noncatalytic wall:
formula_13
Applications.
The Fay-Riddell equation is widely used in the design and analysis of thermal protection systems for re-entry vehicles. It provides engineers with a crucial tool for estimating the severe aerodynamic heating conditions encountered during atmospheric entry and for designing appropriate thermal protection measures. | [
{
"math_id": 0,
"text": " \\dot{q}_{w} = 0.763 \\cdot \\text{Pr}^{-0.6} (\\rho_{e} \\mu_{e})^{0.4} (\\rho_{w} \\mu_{w})^{0.1} \\sqrt{ \\left( \\frac{du_{e} }{dx} \\right)_{s} } (h_{0,e} - h_{w}) \\left[ 1 + ( \\text{Le}^{0.52} - 1 ) \\left( \\frac{ h_{D} }{ h_{0,e} } \\right) \\right] "
},
{
"math_id": 1,
"text": "\\text{Pr}"
},
{
"math_id": 2,
"text": "\\text{Le}"
},
{
"math_id": 3,
"text": "h_{0,e}"
},
{
"math_id": 4,
"text": "h_{w}"
},
{
"math_id": 5,
"text": "h_{D}"
},
{
"math_id": 6,
"text": "\\rho"
},
{
"math_id": 7,
"text": "\\mu"
},
{
"math_id": 8,
"text": "(du_{e}/dx)_{s}"
},
{
"math_id": 9,
"text": "\\left( \\frac{du_{e} }{dx} \\right)_{s} = \\frac{1}{R} \\sqrt{ \\frac{2(p_{e} - p_{\\infty})}{\\rho_{e}} }"
},
{
"math_id": 10,
"text": "R"
},
{
"math_id": 11,
"text": "p_{e}"
},
{
"math_id": 12,
"text": "p_{\\infty}"
},
{
"math_id": 13,
"text": "\\dot{q}_{w} = 0.763 \\cdot \\text{Pr}^{-0.6} (\\rho_{e} \\mu_{e})^{0.4} (\\rho_{w} \\mu_{w})^{0.1} \\sqrt{ \\left( \\frac{du_{e} }{dx} \\right)_{s} } \\times\n\\begin{cases}\n(h_{0,e} - h_{w}) \\left[ 1 + ( \\text{Le}^{0.63} - 1 ) \\left( \\frac{ h_{D} }{ h_{0,e} } \\right) \\right], & (\\text{Equilibrium Catalytic}) \\\\\n\\left( 1 - \\frac{ h_{D} }{ h_{0,e} } \\right), & (\\text{Noncatalytic})\n\\end{cases}"
}
]
| https://en.wikipedia.org/wiki?curid=77444635 |
774572 | Evolutionary game theory | Application of game theory to evolving populations in biology
Evolutionary game theory (EGT) is the application of game theory to evolving populations in biology. It defines a framework of contests, strategies, and analytics into which Darwinian competition can be modelled. It originated in 1973 with John Maynard Smith and George R. Price's formalisation of contests, analysed as strategies, and the mathematical criteria that can be used to predict the results of competing strategies.
Evolutionary game theory differs from classical game theory in focusing more on the dynamics of strategy change. This is influenced by the frequency of the competing strategies in the population.
Evolutionary game theory has helped to explain the basis of altruistic behaviours in Darwinian evolution. It has in turn become of interest to economists, sociologists, anthropologists, and philosophers.
History.
Classical game theory.
Classical non-cooperative game theory was conceived by John von Neumann to determine optimal strategies in competitions between adversaries. A contest involves players, all of whom have a choice of moves. Games can be a single round or repetitive. The approach a player takes in making their moves constitutes their strategy. Rules govern the outcome for the moves taken by the players, and outcomes produce payoffs for the players; rules and resulting payoffs can be expressed as decision trees or in a payoff matrix. Classical theory requires the players to make rational choices. Each player must consider the strategic analysis that their opponents are making to make their own choice of moves.
The problem of ritualized behaviour.
Evolutionary game theory started with the problem of how to explain ritualized animal behaviour in a conflict situation; "why are animals so 'gentlemanly or ladylike' in contests for resources?" The leading ethologists Niko Tinbergen and Konrad Lorenz proposed that such behaviour exists for the benefit of the species. John Maynard Smith considered that incompatible with Darwinian thought, where selection occurs at an individual level, so self-interest is rewarded while seeking the common good is not. Maynard Smith, a mathematical biologist, turned to game theory as suggested by George Price, though Richard Lewontin's attempts to use the theory had failed.
Adapting game theory to evolutionary games.
Maynard Smith realised that an evolutionary version of game theory does not require players to act rationally—only that they have a strategy. The results of a game show how good that strategy was, just as evolution tests alternative strategies for the ability to survive and reproduce. In biology, strategies are genetically inherited traits that control an individual's action, analogous with computer programs. The success of a strategy is determined by how good the strategy is in the presence of competing strategies (including itself), and of the frequency with which those strategies are used. Maynard Smith described his work in his book "Evolution and the Theory of Games".
Participants aim to produce as many replicas of themselves as they can, and the payoff is in units of fitness (relative worth in being able to reproduce). It is always a multi-player game with many competitors. Rules include replicator dynamics, in other words how the fitter players will spawn more replicas of themselves into the population and how the less fit will be culled, in a replicator equation. The replicator dynamics models heredity but not mutation, and assumes asexual reproduction for the sake of simplicity. Games are run repetitively with no terminating conditions. Results include the dynamics of changes in the population, the success of strategies, and any equilibrium states reached. Unlike in classical game theory, players do not choose their strategy and cannot change it: they are born with a strategy and their offspring inherit that same strategy.
Evolutionary games.
Models.
Evolutionary game theory encompasses Darwinian evolution, including competition (the game), natural selection (replicator dynamics), and heredity. Evolutionary game theory has contributed to the understanding of group selection, sexual selection, altruism, parental care, co-evolution, and ecological dynamics. Many counter-intuitive situations in these areas have been put on a firm mathematical footing by the use of these models.
The common way to study the evolutionary dynamics in games is through replicator equations. These show the growth rate of the proportion of organisms using a certain strategy and that rate is equal to the difference between the average payoff of that strategy and the average payoff of the population as a whole. Continuous replicator equations assume infinite populations, continuous time, complete mixing and that strategies breed true. Some attractors (all global asymptotically stable fixed points) of the equations are evolutionarily stable states. A strategy which can survive all "mutant" strategies is considered evolutionarily stable. In the context of animal behavior, this usually means such strategies are programmed and heavily influenced by genetics, thus making any player or organism's strategy determined by these biological factors.
Evolutionary games are mathematical objects with different rules, payoffs, and mathematical behaviours. Each "game" represents different problems that organisms have to deal with, and the strategies they might adopt to survive and reproduce. Evolutionary games are often given colourful names and cover stories which describe the general situation of a particular game. Representative games include hawk-dove, war of attrition, stag hunt, producer-scrounger, tragedy of the commons, and prisoner's dilemma. Strategies for these games include hawk, dove, bourgeois, prober, defector, assessor, and retaliator. The various strategies compete under the particular game's rules, and the mathematics are used to determine the results and behaviours.
Hawk dove.
The first game that Maynard Smith analysed is the classic "hawk dove" game. It was conceived to analyse Lorenz and Tinbergen's problem, a contest over a shareable resource. The contestants can be either a hawk or a dove. These are two subtypes or morphs of one species with different strategies. The hawk first displays aggression, then escalates into a fight until it either wins or is injured (loses). The dove first displays aggression, but if faced with major escalation runs for safety. If not faced with such escalation, the dove attempts to share the resource.
Given that the resource is given the value V, and the damage from losing a fight is given the cost C: if a hawk meets a hawk, each wins half the time and loses (with injury) half the time, for an average payoff of V/2 - C/2; if a hawk meets a dove, the hawk takes the whole resource V and the dove gets nothing; if a dove meets a dove, they share the resource and each receives V/2.
The actual payoff, however, depends on the probability of meeting a hawk or dove, which in turn is a representation of the percentage of hawks and doves in the population when a particular contest takes place. That, in turn, is determined by the results of all of the previous contests. If the cost of losing C is greater than the value of winning V (the normal situation in the natural world) the mathematics ends in an evolutionarily stable strategy (ESS), a mix of the two strategies where the population of hawks is V/C. The population regresses to this equilibrium point if any new hawks or doves make a temporary perturbation in the population.
The solution of the hawk dove game explains why most animal contests involve only ritual fighting behaviours in contests rather than outright battles. The result does not at all depend on "good of the species" behaviours as suggested by Lorenz, but solely on the implication of actions of so-called selfish genes.
War of attrition.
In the hawk dove game the resource is shareable, which gives payoffs to both doves meeting in a pairwise contest. Where the resource is not shareable, but an alternative resource might be available by backing off and trying elsewhere, pure hawk or dove strategies are less effective. If an unshareable resource is combined with a high cost of losing a contest (injury or possible death) both hawk and dove payoffs are further diminished. A safer strategy of lower cost display, bluffing and waiting to win, is then viable – a bluffer strategy. The game then becomes one of accumulating costs, either the costs of displaying or the costs of prolonged unresolved engagement. It is effectively an auction; the winner is the contestant who will swallow the greater cost while the loser gets the same cost as the winner but no resource. The resulting evolutionary game theory mathematics lead to an optimal strategy of timed bluffing.
This is because in the war of attrition any strategy that is unwavering and predictable is unstable, because it will ultimately be displaced by a mutant strategy which relies on the fact that it can best the existing predictable strategy by investing an extra small delta of waiting resource to ensure that it wins. Therefore, only a random unpredictable strategy can maintain itself in a population of bluffers. The contestants in effect choose an acceptable cost to be incurred related to the value of the resource being sought, effectively making a random bid as part of a mixed strategy (a strategy where a contestant has several, or even many, possible actions in their strategy). This implements a distribution of bids for a resource of specific value V, where the bid for any specific contest is chosen at random from that distribution. The distribution (an ESS) can be computed using the Bishop-Cannings theorem, which holds true for any mixed-strategy ESS. The distribution function in these contests was determined by Parker and Thompson to be:
formula_0
The result is that the cumulative population of quitters for any particular cost m in this "mixed strategy" solution is:
formula_1
as shown in the adjacent graph. The intuitive sense that greater values of resource sought leads to greater waiting times is borne out. This is observed in nature, as in male dung flies contesting for mating sites, where the timing of disengagement in contests is as predicted by evolutionary theory mathematics.
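A quick numerical check of these two expressions (illustrative only): bids drawn from the mixed-strategy density with mean V have the stated cumulative distribution of quitters.
import math, random

V = 3.0
bids = [random.expovariate(1.0 / V) for _ in range(200000)]   # density exp(-x/V)/V
for m in (1.0, 3.0, 9.0):
    empirical = sum(b <= m for b in bids) / len(bids)
    print(m, empirical, 1 - math.exp(-m / V))                 # empirical vs 1 - e^(-m/V)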
Asymmetries that allow new strategies.
In the war of attrition there must be nothing that signals the size of a bid to an opponent, otherwise the opponent can use the cue in an effective counter-strategy. There is however a mutant strategy which can better a bluffer in the war of attrition game if a suitable asymmetry exists, the bourgeois strategy. Bourgeois uses an asymmetry of some sort to break the deadlock. In nature one such asymmetry is possession of a resource. The strategy is to play a hawk if in possession of the resource, but to display then retreat if not in possession. This requires greater cognitive capability than hawk, but bourgeois is common in many animal contests, such as in contests among mantis shrimps and among speckled wood butterflies.
Social behaviour.
Games like hawk dove and war of attrition represent pure competition between individuals and have no attendant social elements. Where social influences apply, competitors have four possible alternatives for strategic interaction. This is shown on the adjacent figure, where a plus sign represents a benefit and a minus sign represents a cost.
Contests of selfish genes.
At first glance it may appear that the contestants of evolutionary games are the individuals present in each generation who directly participate in the game. But individuals live only through one game cycle, and instead it is the strategies that really contest with one another over the duration of these many-generation games. So it is ultimately genes that play out a full contest – selfish genes of strategy. The contesting genes are present in an individual and to a degree in all of the individual's kin. This can sometimes profoundly affect which strategies survive, especially with issues of cooperation and defection. William Hamilton, known for his theory of kin selection, explored many of these cases using game-theoretic models. Kin-related treatment of game contests helps to explain many aspects of the behaviour of social insects, the altruistic behaviour in parent-offspring interactions, mutual protection behaviours, and co-operative care of offspring. For such games, Hamilton defined an extended form of fitness – "inclusive fitness", which includes an individual's offspring as well as any offspring equivalents found in kin.
Hamilton went beyond kin relatedness to work with Robert Axelrod, analysing games of co-operation under conditions not involving kin where reciprocal altruism came into play.
Eusociality and kin selection.
Eusocial insect workers forfeit reproductive rights to their queen. It has been suggested that kin selection, based on the genetic makeup of these workers, may predispose them to altruistic behaviours. Most eusocial insect societies have haplodiploid sexual determination, which means that workers are unusually closely related.
This explanation of insect eusociality has, however, been challenged by a few highly-noted evolutionary game theorists (Nowak and Wilson) who have published a controversial alternative game theoretic explanation based on a sequential development and group selection effects proposed for these insect species.
Prisoner's dilemma.
A difficulty of the theory of evolution, recognised by Darwin himself, was the problem of altruism. If the basis for selection is at an individual level, altruism makes no sense at all. But universal selection at the group level (for the good of the species, not the individual) fails to pass the test of the mathematics of game theory and is certainly not the general case in nature. Yet in many social animals, altruistic behaviour exists. The solution to this problem can be found in the application of evolutionary game theory to the prisoner's dilemma game – a game which tests the payoffs of cooperating or in defecting from cooperation. It is the most studied game in all of game theory.
The analysis of the prisoner's dilemma is as a repetitive game. This affords competitors the possibility of retaliating for defection in previous rounds of the game. Many strategies have been tested; the best competitive strategies are general cooperation, with a reserved retaliatory response if necessary. The most famous and one of the most successful of these is tit-for-tat with a simple algorithm.
def tit_for_tat(last_move_by_opponent):
    """Defect if the opponent defected last round, else cooperate."""
    if last_move_by_opponent == "defect":
        return "defect"
    return "cooperate"
The pay-off for any single round of the game is defined by the pay-off matrix for a single round game (shown in bar chart 1 below). In multi-round games the different choices – co-operate or defect – can be made in any particular round, resulting in a certain round payoff. It is, however, the possible accumulated pay-offs over the multiple rounds that count in shaping the overall pay-offs for differing multi-round strategies such as tit-for-tat.
Example 1: The straightforward single round prisoner's dilemma game. The classic prisoner's dilemma game payoffs gives a player a maximum payoff if they defect and their partner co-operates (this choice is known as "temptation"). If, however, the player co-operates and their partner defects, they get the worst possible result (the suckers payoff). In these payoff conditions the best choice (a Nash equilibrium) is to defect.
Example 2: Prisoner's dilemma played repeatedly. The strategy employed is "tit-for-tat" which alters behaviours based on the action taken by a partner in the previous round – i.e. reward co-operation and punish defection. The effect of this strategy in accumulated payoff over many rounds is to produce a higher payoff for both players' co-operation and a lower payoff for defection. This removes the temptation to defect. The suckers payoff also becomes less, although "invasion" by a pure defection strategy is not entirely eliminated.
Routes to altruism.
Altruism takes place when one individual, at a cost (C) to itself, exercises a strategy that provides a benefit (B) to another individual. The cost may consist of a loss of capability or resource which helps in the battle for survival and reproduction, or an added risk to its own survival. Altruism strategies can arise through:
The evolutionarily stable strategy.
The evolutionarily stable strategy (ESS) is akin to the Nash equilibrium in classical game theory, but with mathematically extended criteria. Nash equilibrium is a game equilibrium where it is not rational for any player to deviate from their present strategy, provided that the others adhere to their strategies. An ESS is a state of game dynamics where, in a very large population of competitors, another mutant strategy cannot successfully enter the population to disturb the existing dynamic (which itself depends on the population mix). Therefore, a successful strategy (with an ESS) must be both effective against competitors when it is rare – to enter the previous competing population, and successful when later in high proportion in the population – to defend itself. This in turn means that the strategy must be successful when it contends with others exactly like itself.
An ESS is not:
The ESS state can be solved for by exploring either the dynamics of population change to determine an ESS, or by solving equations for the stable stationary point conditions which define an ESS. For example, in the hawk dove game we can look for whether there is a static population mix condition where the fitness of doves will be exactly the same as fitness of hawks (therefore both having equivalent growth rates – a static point).
Let the chance of meeting a hawk be p, so the chance of meeting a dove is (1-p).
Let Whawk equal the payoff for hawk:
Whawk = payoff in the chance of meeting a dove + payoff in the chance of meeting a hawk
Taking the payoff matrix results and plugging them into the above equation:
Whawk = V·(1-p) + (V/2-C/2)·p
Similarly for a dove:
Wdove = V/2·(1-p) + 0·p
so
Wdove = V/2·(1-p)
Equating the two fitnesses, hawk and dove:
V·(1-p) + (V/2-C/2)·p = V/2·(1-p)
... and solving for p:
p = V/C
So for this "static point", the population percentage that is an ESS solves to ESS(percent hawk) = V/C.
Similarly, using inequalities, it can be shown that an additional hawk or dove mutant entering this ESS state eventually results in less fitness for their kind – both a true Nash and an ESS equilibrium. This example shows that when the risks of contest injury or death (the cost C) is significantly greater than the potential reward (the benefit value V), the stable population will be mixed between aggressors and doves, and the proportion of doves will exceed that of the aggressors. This explains behaviours observed in nature.
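A minimal simulation sketch of this result (an illustration, not from the source; V and C below are arbitrary values with C > V): discrete-time replicator dynamics drive the hawk fraction to V/C.
V, C = 2.0, 6.0          # illustrative values with C > V, so the ESS is V/C = 1/3
p = 0.1                  # initial fraction of hawks
dt = 0.1

def fitness_hawk(p):
    return V * (1 - p) + (V - C) / 2 * p

def fitness_dove(p):
    return V / 2 * (1 - p)

for _ in range(2000):
    w_h, w_d = fitness_hawk(p), fitness_dove(p)
    w_bar = p * w_h + (1 - p) * w_d
    p += dt * p * (w_h - w_bar)      # replicator equation, Euler step

print(p, V / C)   # both approximately 0.333...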
Unstable games, cyclic patterns.
Rock paper scissors.
Rock paper scissors incorporated into an evolutionary game has been used for modelling natural processes in the study of ecology.
Using experimental economics methods, scientists have used RPS games to test human social evolutionary dynamical behaviours in laboratories. The social cyclic behaviours, predicted by evolutionary game theory, have been observed in various laboratory experiments.
Side-blotched lizard plays the RPS, and other cyclical games.
The first example of RPS in nature was seen in the behaviours and throat colours of a small lizard of western North America. The side-blotched lizard ("Uta stansburiana") is polymorphic with three throat-colour morphs that each pursue a different mating strategy: orange-throated males are the most aggressive and defend large territories holding many females; yellow-throated males are "sneakers" that mimic females and slip into the territories of orange males to mate; and blue-throated males hold smaller territories and closely guard a single mate, which thwarts the sneaker strategy.
However the blue throats cannot overcome the more aggressive orange throats. Later work showed that the blue males are altruistic to other blue males, with three key traits: they signal with blue color, they recognize and settle next to other (unrelated) blue males, and they will even defend their partner against orange, to the death. This is the hallmark of another game of cooperation that involves a green-beard effect.
The females in the same population have the same throat colours, and this affects how many offspring they produce and the size of the progeny, which generates cycles in density, yet another game – the "r-K" game. Here, "r" is the Malthusian parameter governing exponential growth, and "K" is the carrying capacity of the environment. Orange females have larger clutches and smaller offspring which do well at low density. Yellow & blue females have smaller clutches and larger offspring which do well at high density. This generates perpetual cycles tightly tied to population density. The idea of cycles due to density regulation of two strategies originated with rodent researcher Dennis Chitty, ergo these kinds of games lead to "Chitty cycles". There are games within games within games embedded in natural populations. These drive RPS cycles in the males with a periodicity of four years and "r-K" cycles in females with a two year period.
The overall situation corresponds to the rock, scissors, paper game, creating a four-year population cycle. The RPS game in male side-blotched lizards does not have an ESS, but it has a Nash equilibrium (NE) with endless orbits around the NE attractor. Following this Side-blotched lizard research, many other three-strategy polymorphisms have been discovered in lizards and some of these have RPS dynamics merging the male game and density regulation game in a single sex (males). More recently, mammals have been shown to harbour the same RPS game in males and "r-K" game in females, with coat-colour polymorphisms and behaviours that drive cycles. This game is also linked to the evolution of male care in rodents, and monogamy, and drives speciation rates. There are "r-K" strategy games linked to rodent population cycles (and lizard cycles).
When he read that these lizards were essentially engaged in a game with a rock-paper-scissors structure, John Maynard Smith is said to have exclaimed "They have read my book!".
Signalling, sexual selection and the handicap principle.
Aside from the difficulty of explaining how altruism exists in many evolved organisms, Darwin was also bothered by a second conundrum – why a significant number of species have phenotypical attributes that are patently disadvantageous to them with respect to their survival – and should by the process of natural selection be selected against – e.g. the massive inconvenient feather structure found in a peacock's tail. Regarding this issue Darwin wrote to a colleague "The sight of a feather in a peacock's tail, whenever I gaze at it, makes me sick." The mathematics of evolutionary game theory has not only explained the existence of altruism, but also explains the totally counterintuitive existence of the peacock's tail and other such biological encumbrances.
On analysis, problems of biological life are not at all unlike the problems that define economics – eating (akin to resource acquisition and management), survival (competitive strategy) and reproduction (investment, risk and return). Game theory was originally conceived as a mathematical analysis of economic processes and indeed this is why it has proven so useful in explaining so many biological behaviours. One important further refinement of the evolutionary game theory model that has economic overtones rests on the analysis of costs. A simple model of cost assumes that all competitors suffer the same penalty imposed by the game costs, but this is not the case. More successful players will be endowed with or will have accumulated a higher "wealth reserve" or "affordability" than less-successful players. This wealth effect in evolutionary game theory is represented mathematically by "resource holding potential (RHP)" and shows that the effective cost to a competitor with a higher RHP is not as great as for a competitor with a lower RHP. As a higher RHP individual is a more desirable mate in producing potentially successful offspring, it is only logical that with sexual selection RHP should have evolved to be signalled in some way by the competing rivals, and for this to work this signalling must be done "honestly". Amotz Zahavi has developed this thinking in what is known as the "handicap principle", where superior competitors signal their superiority by a costly display. As higher RHP individuals can properly afford such a costly display this signalling is inherently honest, and can be taken as such by the signal receiver. In nature this is illustrated by the costly plumage of the peacock. The mathematical proof of the handicap principle was developed by Alan Grafen using evolutionary game-theoretic modelling.
Coevolution.
Two types of dynamics:
A third, coevolutionary, dynamic, combines intra-specific and inter-specific competition. Examples include predator-prey competition and host-parasite co-evolution, as well as mutualism. Evolutionary game models have been created for pairwise and multi-species coevolutionary systems. The general dynamic differs between competitive systems and mutualistic systems.
In competitive (non-mutualistic) inter-species coevolutionary system the species are involved in an arms race – where adaptations that are better at competing against the other species tend to be preserved. Both game payoffs and replicator dynamics reflect this. This leads to a Red Queen dynamic where the protagonists must "run as fast as they can to just stay in one place".
A number of evolutionary game theory models have been produced to encompass coevolutionary situations. A key factor applicable in these coevolutionary systems is the continuous adaptation of strategy in such arms races. Coevolutionary modelling therefore often includes genetic algorithms to reflect mutational effects, while computers simulate the dynamics of the overall coevolutionary game. The resulting dynamics are studied as various parameters are modified. Because several variables are simultaneously at play, solutions become the province of multi-variable optimisation. The mathematical criteria of determining stable points are Pareto efficiency and Pareto dominance, a measure of solution optimality peaks in multivariable systems.
Carl Bergstrom and Michael Lachmann apply evolutionary game theory to the division of benefits in mutualistic interactions between organisms. Darwinian assumptions about fitness are modeled using replicator dynamics to show that the organism evolving at a slower rate in a mutualistic relationship gains a disproportionately high share of the benefits or payoffs.
Extending the model.
A mathematical model analysing the behaviour of a system needs initially to be as simple as possible to aid in developing a base understanding the fundamentals, or “first order effects”, pertaining to what is being studied. With this understanding in place it is then appropriate to see if other, more subtle, parameters (second order effects) further impact the primary behaviours or shape additional behaviours in the system. Following Maynard Smith's seminal work in evolutionary game theory, the subject has had a number of very significant extensions which have shed more light on understanding evolutionary dynamics, particularly in the area of altruistic behaviors. Some of these key extensions to evolutionary game theory are:
Spatial games.
Geographic factors in evolution include gene flow and horizontal gene transfer. Spatial game models represent geometry by putting contestants in a lattice of cells: contests take place only with immediate neighbours. Winning strategies take over these immediate neighbourhoods and then interact with adjacent neighbourhoods. This model is useful in showing how pockets of co-operators can invade and introduce altruism in the Prisoners Dilemma game, where Tit for Tat (TFT) is a Nash Equilibrium but NOT also an ESS. Spatial structure is sometimes abstracted into a general network of interactions. This is the foundation of evolutionary graph theory.
Effects of having information.
In evolutionary game theory as in conventional Game Theory the effect of Signalling (the acquisition of information) is of critical importance, as in Indirect Reciprocity in Prisoners Dilemma (where contests between the SAME paired individuals are NOT repetitive). This models the reality of most normal social interactions which are non-kin related. Unless a probability measure of reputation is available in Prisoners Dilemma only direct reciprocity can be achieved. With this information indirect reciprocity is also supported.
Alternatively, agents might have access to an arbitrary signal initially uncorrelated to strategy but becomes correlated due to evolutionary dynamics. This is the green-beard effect (see side-blotched lizards, above) or evolution of ethnocentrism in humans. Depending on the game, it can allow the evolution of either cooperation or irrational hostility.
From molecular to multicellular level, a signaling game model with information asymmetry between sender and receiver might be appropriate, such as in mate attraction or evolution of translation machinery from RNA strings.
Finite populations.
Many evolutionary games have been modelled in finite populations to see the effect this may have, for example in the success of mixed strategies.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p(x)=\\frac{e^{-x/V}}{V}."
},
{
"math_id": 1,
"text": "p(m)=1- e^{-m/V},"
}
]
| https://en.wikipedia.org/wiki?curid=774572 |
7747984 | Eddy diffusion | Mixing of fluids due to eddy currents
In fluid dynamics, eddy diffusion, eddy dispersion, or turbulent diffusion is a process by which fluid substances mix together due to eddy motion. These eddies can vary widely in size, from subtropical ocean gyres down to the small Kolmogorov microscales, and occur as a result of turbulence (or turbulent flow). The theory of eddy diffusion was first developed by Sir Geoffrey Ingram Taylor.
In laminar flows, material properties (salt, heat, humidity, aerosols etc.) are mixed by random motion of individual molecules. By a purely probabilistic argument, the net flux of molecules from high concentration area to low concentration area is higher than the flux in the opposite direction. This down-gradient flux equilibrates the concentration profile over time. This phenomenon is called molecular diffusion, and its mathematical aspect is captured by the diffusion equation.
In turbulent flows, on top of mixing by molecular diffusion, eddies stir the fluid. This causes fluid parcels from various initial positions, and thus various associated concentrations, to penetrate into fluid regions with different initial concentrations. This causes the fluid properties to homogenize on scales larger than those of the eddies responsible for stirring, in a very efficient way compared to individual molecular motion. In most macroscopic flows in nature, eddy diffusion is several orders of magnitude stronger than molecular diffusion. This sometimes leads to the latter being neglected when studying turbulent flows.
The problem with turbulent diffusion in the atmosphere and beyond is that there is no single model drawn from fundamental physics that explains all its significant aspects. There are two alternative approaches with non-overlapping areas of utility. According to the gradient transport theory, the diffusion flux at a fixed point in the fluid is proportional to the local concentration gradient. This theory is Eulerian in its nature, i.e. it describes fluid properties in a spatially fixed coordinate system (see Lagrangian and Eulerian specification of a fluid). In contrast, statistical diffusion theories follow the motion of fluid particles, and are thus Lagrangian. In addition, computational approaches may be classified as continuous-motion or discontinuous-motion theories, depending on whether they assume that particles move continuously or in discrete steps.
Historical developments.
The theory of eddy diffusion was originally developed, around the end of the 1910s, by G. I. Taylor and L. F. Richardson in England and by W. Schmidt in Austria as a direct generalization of the classical theory of molecular diffusion. They proposed the idea that the mass effect of the eddies is entirely similar to that of molecules except for a scale difference. This is described as the "gradient model" in a later section, the name derived from the fact that diffusion fluxes are proportional to the local gradient in concentration, just as for molecular diffusion.
Later research (1930s), mainly by O. G. Sutton, pointed out some problems of the original approach and put forward the idea that the difference between the eddy structure of a turbulent fluid and the molecular structure of a fluid at rest is more than one of scale.
During the following decades, a number of studies were carried out to experimentally probe the established theory on eddy diffusion, both for the atmosphere and the ocean/lake bodies, mostly finding agreement with the original theory. In particular, experiments on the diffusion of foreign material in a turbulent water stream, vertical structure of water in lake bodies, and lowest part of the atmosphere found experimental evidence that eddy diffusion is indeed stronger than molecular diffusion and generally obeys the theory originally developed by G. I. Taylor. Some counter-examples to the original gradient theory are given later in the article.
Active research is now focused on the contributions of eddy diffusion to both atmospheric and oceanic known processes. New models and theories were built on the foundation of the original theory to fully describe these processes. In particular, these studies include eddy diffusion mechanisms to explain processes from aerosols deposition to internal gravity waves in the upper atmosphere, from deep sea eddy diffusion and buoyancy to nutrient supply to the surface of the mixed layer in the Antarctic Circumpolar Current.
Mathematical formulation of eddy diffusion.
In this section a mathematical framework based on continuity equation is developed to describe the evolution of concentration profile over time, under action of eddy diffusion. Velocity and concentration field are decomposed into mean and fluctuating (eddy) components. It is then derived that the concentration flux due to eddies is given by covariance of fluctuations in velocity and concentration. This covariance is in principle unknown, which means that the evolution equation for concentration profile cannot be solved without making additional assumptions about the covariance. The next section then provides one such assumption (the gradient model) and thus links to the main result of this section. The one after that describes an entirely different statistical (and Lagrangian) approach to problem.
Consider a scalar field formula_0, formula_1 being a position in a fixed Cartesian coordinate system. The field measures the concentration of a passive conserved tracer species (could be a coloured dye in an experiment, salt in the sea, or water vapour in the air). The adjective "passive" means that, at least within some approximation, the tracer does not alter dynamic properties such as density or pressure in any way. It just moves with the flow without modifying it. This is not strictly true for many "tracers" in nature, such as water vapour or salt. "Conserved" means that there are no absolute sources or sinks, the tracer is only moved around by diffusion and advection.
Consider the conservation equation for formula_0. This is the generalized fluid continuity equation with a source term on the right hand side. The source corresponds to molecular diffusion (and not to any net creation/destruction of the tracer). The equation is written in the Eulerian view (it contains a partial time derivative):
formula_2
formula_3 is the coefficient of molecular diffusivity (mass diffusivity).
The objective is to find out how the laminar mean flow interacts with turbulent eddies, in particular what effect this has on transport of the tracer. In line with standard Reynolds decomposition, the concentration field can be divided into its mean and fluctuating components:
formula_4
Likewise for the velocity field:
formula_5
The mean term (in angular brackets) represents a laminar component of the flow. Note that the mean field is in general a function of space and time, and not just a constant. Average in this sense does not suggest averaging over all available data in space and time, but merely filtering out the turbulent motion. This means that the averaging domain is restricted to an extent that still smooths out the turbulence, but does not erase information about the mean flow itself. This assumes that the scales of eddies and mean flow can be separated, which is not always the case. One can get as close as possible to this by suitably choosing the range of averaging, or ideally doing an ensemble average if the experiment can be repeated. In short, the averaging procedure is not trivial in practice. In this section, the topic is treated theoretically, and it is assumed that such a suitable averaging procedure exists.
The fluctuating (primed) term has the defining property that it averages out, i.e. formula_6. It is used to describe the turbulence (eddies) that, among other things, stirs the fluid.
One can now proceed with Reynolds decomposition. Using the fact that formula_6 by definition, one can average the entire equation to eliminate all the turbulent fluctuations formula_7, except in non-linear terms (see Reynolds decomposition, Reynolds stress and Reynolds-averaged Navier–Stokes equations). The non-linear advective term becomes:
formula_8 Upon substitution into the conservation equation: formula_9
If one moves the third (turbulent) term of the left hand side to the right hand side (using formula_10), the result is: formula_11 This equation looks like the equation we started with, apart from (i) formula_12 and formula_13 being replaced by their laminar components, and (ii) the appearance of a new second term on the right hand side. This second term plays an analogous role to the Reynolds stress term in the Reynolds-averaged Navier–Stokes equations.
This was the Eulerian treatment. One can also study this problem from a Lagrangian point of view (absorbing some terms into the material derivative):
formula_14
Define a mean material derivative by:
formula_15
This is the material derivative associated with the mean flow (the advective term only contains the laminar part of formula_12). One can distribute the divergence term on the right hand side and use this definition of the material derivative: formula_16 This equation again looks like the Lagrangian equation that we started with, with the same caveats (i) and (ii) as in the Eulerian case, and with the derivative operator also defined in terms of the mean flow. The analysis that follows will return to the Eulerian picture.
The interpretation of eddy diffusivity is as follows. formula_17 is the flux of the passive tracer due to molecular diffusion. It is always down-gradient. Its divergence corresponds to the accumulation (if negative) or depletion (if positive) of the tracer concentration due to this effect. One can interpret the formula_18 term as a flux due to turbulent eddies stirring the fluid. Likewise, its divergence gives the accumulation/depletion of the tracer due to turbulent eddies. It is not yet specified whether this eddy flux should be down-gradient; see later sections.
One can also examine the concentration budget for a small fluid parcel of volume formula_19. Start from the Eulerian formulation and use the divergence theorem: formula_20 The three terms on the right hand side represent molecular diffusion, eddy diffusion, and advection with the mean flow, respectively. An issue arises in that there is no separate equation for formula_21. It is not possible to close the system of equations without coming up with a model for this term. The simplest way to achieve this is to assume that, just like the molecular diffusion term, it is also proportional to the gradient in the concentration formula_22 (see the section on gradient-based theories). See turbulence modeling for more.
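As an illustration of the quantities introduced above, the following minimal Python sketch estimates the eddy flux formula_21 from simultaneous records of one velocity component and the tracer concentration. The time series are synthetic and the use of a plain time average as the Reynolds averaging operator is an assumption made purely for illustration.
import numpy as np
# Minimal sketch (not from the source): estimate the eddy flux <u' phi'> from
# synthetic, simultaneous records of a velocity component and the tracer
# concentration, using a plain time average as the Reynolds averaging operator.
rng = np.random.default_rng(0)
n = 36000
u = 2.0 + 0.5 * rng.standard_normal(n)                         # velocity record, mean about 2
phi = 10.0 - 4.0 * (u - 2.0) + 0.2 * rng.standard_normal(n)    # tracer anti-correlated with u
u_prime = u - u.mean()                                         # fluctuating parts
phi_prime = phi - phi.mean()
eddy_flux = np.mean(u_prime * phi_prime)                       # estimate of <u' phi'>
print(u.mean(), phi.mean(), eddy_flux)                         # eddy_flux is negative for this synthetic record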
Gradient diffusion theory.
The simplest model of turbulent diffusion can be constructed by drawing an analogy with the probabilistic effect causing the down-gradient flow as a result of motion of individual molecules (molecular diffusion). Consider an inert, passive tracer dispersed in the fluid with an initial spatial concentration formula_23. Let there be a small fluid region with higher concentration of the tracer than its surroundings in every direction. It exchanges fluid (and with it the tracer) with its surroundings via turbulent eddies, which are fluctuating currents going back and forth in a seemingly random way. The eddies flowing to the region from its surroundings are statistically the same as those flowing from the region to its surroundings. This is because the tracer is "passive", so a fluid parcel with higher concentration has similar dynamical behaviour as a fluid parcel with lower concentration. The key difference is that those flowing outwards carry much more tracer than those flowing inwards, since the concentration inside the region is initially higher than outside. This can be quantified with a tracer flux. Flux has units of tracer amount per area per time, which is the same as tracer concentration times velocity. Local tracer accumulation rate formula_24 would then depend on the difference of outgoing and incoming fluxes. In our example, outgoing fluxes are larger than ingoing fluxes, producing a negative local accumulation (i.e. depletion) of the tracer. This effect would in general result in an equilibration of the initial profile formula_25 over time, regardless of what the initial profile might be. To be able to calculate this time evolution, one needs to know how to calculate the flux. This section explores the simplest hypothesis: flux is linearly related to the concentration difference (just as for molecular diffusion). This also comes as the most intuitive guess from the analysis just made. Flux is in principle a vector. This vector points in the direction of tracer transport, and in this case it would be parallel to formula_26. Hence the model is typically called gradient diffusion (or equivalently down-gradient diffusion).
A rough argument for gradient diffusion.
This subsection gives a simple, rough, heuristic argument explaining how the mathematics of gradient diffusion arises. A more rigorous and general treatment of the gradient model is offered in the next subsection, which builds directly on the section on the general mathematical treatment (which did not yet assume a gradient model at that early stage and left the covariance of fluctuations unspecified). For simplicity of notation, means are not indicated explicitly for now. Also, for now, neglect the molecular diffusivity formula_3, since it is usually significantly smaller than the eddy diffusivity and would steer attention away from the eddy mechanism.
Consider two neighbouring fluid parcels with their centers formula_27 apart. They contain volume concentrations formula_28 and formula_29 of an inert, passive tracer. Without loss of generality, let formula_30. Imagine that a single eddy of length scale formula_27 and velocity scale formula_31 is responsible for a continuous stirring of material between the two parcels. The tracer flux exchanged through the lateral boundary of the two parcels is labelled formula_32. The boundary is perpendicular to the formula_33-axis. The flux from parcel 1 to parcel 2 is then, at least by order of magnitude:
formula_34
This argument can be seen as a physically motivated dimensional analysis, since it uses solely the length and velocity scales of an eddy to estimate the tracer flux that it generates. If the entire studied domain (thought to contain a large number of such pairs formula_28 and formula_29) is much larger than the eddy length scale formula_27, one can approximate formula_35 over formula_27 as the derivative of concentration in a continuously varying medium:
formula_36
Based on similarity with Fick's law of diffusion one can interpret the term in parentheses as a diffusion coefficient formula_37 associated with this turbulent eddy, given by a product of its length and velocity scales.
formula_38
Using a one-dimensional form of the continuity equation formula_39, we can write:
formula_40
If formula_37 is assumed to be spatially homogeneous, it can be pulled out of the derivative and one gets a diffusion equation of the form:
formula_41
This is a prototypical example of parabolic partial differential equation. It is also known as heat equation. Its fundamental solution for a point source at formula_42 is:
formula_43
By comparison with Gaussian distribution, one can identify the variance as formula_44 and standard deviation as formula_45, a very typical time dependence for molecular diffusion or random walk.
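As a quick numerical check of this statement, the following sketch evaluates the fundamental solution above and confirms that its second moment grows as formula_44. The value of formula_37 and the grid are arbitrary illustrative choices, not values from the source.
import numpy as np
# Sketch: verify numerically that the fundamental solution of the eddy-diffusion
# (heat) equation spreads with variance 2*K*t. K and the grid are illustrative.
K = 10.0                                   # eddy diffusivity
x = np.linspace(-500.0, 500.0, 2001)
dx = x[1] - x[0]
def phi(x, t):
    # Fundamental solution for a point release at x = 0
    return np.exp(-x**2 / (4.0 * K * t)) / np.sqrt(4.0 * np.pi * K * t)
for t in (10.0, 100.0, 1000.0):
    variance = np.sum(x**2 * phi(x, t)) * dx   # second moment of the profile
    print(t, variance, 2.0 * K * t)            # the two numbers agree closely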
To conclude, this subsection described how an eddy can stir two surrounding regions of a fluid and how this behaviour gives rise to the mathematics described as the "gradient model", meaning that diffusive fluxes are aligned with the negative spatial gradient in concentration. It considered a very simple geometry, in which all variations happen along one axis. The argument used only order-of-magnitude scales of spatial separation and eddy velocity, so it was very rough. The next subsection offers a more rigorous treatment.
Interpretation from general equations.
This subsection builds on the section on general mathematical treatment, and observes what happens when a gradient assumption is inserted.
Recall the Reynolds-averaged concentration equation: formula_11 We make a gradient assumption similar to the one motivated in the subsection above with tracer length and velocity scales. However, the coefficient value need not be the same as in that subsection (where it was only specified by order of magnitude). The gradient hypothesis reads: formula_46
This allows the concentration equation to be rewritten as formula_47 This is again similar to the initial concentration equation, with transformations formula_48 and formula_49. It represents a generalization to Fick's second law (see Fick's laws of diffusion), in presence of turbulent diffusion and advection by the mean flow. That is the reason why down-gradient eddy diffusion models are often referred to as "Fickian", emphasizing this mathematical similarity. Note that the eddy diffusivity formula_37 can in general be a function of space and time, since its value is given by the pattern of eddies that can evolve in time and vary from place to place. Different assumptions made about formula_50 can lead to different models, with various trade-offs between observations and theory.
Sometimes, the term Fickian diffusion is reserved solely for the case when formula_37 is a true constant. formula_37 needs to be at least spatially uniform for it to be possible to write: formula_51
In this case, the sum of molecular and eddy diffusivity can be considered as a new effective viscosity, acting in qualitatively similar way to molecular diffusivity, but significantly increased in magnitude.
In the context of this article, the adjective "Fickian" can also be used as an equivalent to a gradient model, so a more general form like formula_50 is permissible. The terminology in scientific articles is not always consistent in this respect.
Shortcomings and counterexamples of the gradient model.
Gradient models were historically the first models of eddy diffusion. They are simple and mathematically convenient, but the underlying assumption on purely down-gradient diffusive flux is not universally valid. Here are a few experimental counter-examples:
These observations indicate that there exist mechanisms different from purely down-gradient diffusion, and that the qualitative analogy between molecular and eddy diffusion is not perfect. In the coming section on statistical models, a different way of looking at eddy diffusion is presented.
Statistical diffusion theory.
The statistical theory of fluid turbulence comprises a large body of literature and its results are applied in many areas of research, from meteorology to oceanography.
Statistical diffusion theory originated with G. I. Taylor's (1921) paper titled "Diffusion by continuous movements" and was later developed in his paper "Statistical theory of turbulence". The statistical approach to diffusion differs from gradient-based theories in that, instead of studying the transport at a fixed point in space, one makes use of the Lagrangian reference system, follows the particles in their motion through the fluid, and tries to determine from these the statistical properties needed to represent diffusion.
Taylor in particular argued that, at high Reynolds number, the spatial transport due to molecular diffusion can be neglected compared to the convective transport by the mean flow and turbulent motions. Neglecting the molecular diffusion, formula_13 is then conserved following a fluid particle and consequently the evolution of the mean field formula_53 can be determined from the statistics of the motion of fluid particles.
Lagrangian formulation.
Consider an unbounded turbulent flow in which a source at the time formula_54 sets the scalar field to some value:formula_55Here formula_56 is the position at time t of the fluid particle that originates from position formula_57 at time formula_54.
If molecular diffusion is neglected, formula_13 is conserved following a fluid particle. Then, the value of formula_13 at the initial and final points of the fluid particle trajectory are the same:formula_58Calculating the expectation of the last equation yields
formula_59
where formula_60 is the forward probability density function of particle position.
Dispersion from a point source.
For the case of a unit point source fixed at location formula_61, i.e., formula_62, the expectation value of formula_63 isformula_64This means that the mean conserved scalar field resulting from a point source is given by the probability density function of the particle position formula_60 of the fluid particles that originate at the source.
The simplest case to consider is dispersion from a point source, positioned at the origin (formula_65), in statistically stationary isotropic turbulence. In particular, consider an experiment where the isotropic turbulent velocity field has zero mean.
In this setting, one can derive the following results. Since the turbulent velocity field has zero mean, the mean particle position remains at the origin: formula_66. The displacement covariance is isotropic, formula_67, where formula_68 is the standard deviation of the particle displacement and formula_69 is the Kronecker delta. Introducing the Lagrangian velocity autocorrelation function formula_70, the variance grows as formula_71, where formula_72 is the r.m.s. turbulent velocity. The turbulent diffusivity formula_73 is then given by
formula_74
where formula_75 defines the Lagrangian integral time scale. For short times formula_76, since formula_77, the dispersion is ballistic: formula_78 and formula_79. For long times formula_80, the diffusivity approaches the constant value formula_81 and formula_82, so that the dispersion becomes diffusive, analogous to molecular diffusion or a random walk.
Langevin equation.
The simplest stochastic Lagrangian model is the Langevin equation, which provides a model for the velocity following the fluid particle. In particular, the Langevin equation for the fluid-particle velocity yields a complete prediction for turbulent dispersion. According to the equation, the Lagrangian velocity autocorrelation function is the exponential formula_83. With this expression for formula_84, the variance of the particle displacement can be obtained by integration to yieldformula_85According to the Langevin equation, each component of the fluid particle velocity is an Ornstein-Uhlenbeck process. It follows that the fluid particle position (i.e., the integral of the Ornstein-Uhlenbeck process) is also a Gaussian process. Thus, the mean scalar field predicted by the Langevin equation is the Gaussian distributionformula_86with formula_87 given by the previous equation.
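The following Monte-Carlo sketch simulates the Langevin model as an Ornstein-Uhlenbeck process for the particle velocity and compares the resulting displacement variance with the closed-form expression formula_85 above. All parameter values are invented for illustration, and the position update uses a simple Euler quadrature.
import numpy as np
# Monte-Carlo sketch of the Langevin (Ornstein-Uhlenbeck) model for the
# fluid-particle velocity; parameter values are illustrative only.
rng = np.random.default_rng(1)
u_rms, T_L = 0.5, 10.0                         # r.m.s. velocity u' and Lagrangian time scale
dt, n_steps, n_particles = 0.01, 5000, 20000
u = u_rms * rng.standard_normal(n_particles)   # start from the stationary velocity distribution
x = np.zeros(n_particles)
a = np.exp(-dt / T_L)                          # exact one-step OU decay factor
for _ in range(n_steps):
    u = a * u + u_rms * np.sqrt(1.0 - a**2) * rng.standard_normal(n_particles)
    x += u * dt                                # Euler quadrature of the particle position
t = n_steps * dt
var_theory = 2.0 * u_rms**2 * T_L * (t - T_L * (1.0 - np.exp(-t / T_L)))
print(x.var(), var_theory)                     # the two variances should be close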
Eddy diffusion in natural sciences.
Eddy diffusion in the ocean.
Molecular diffusion is negligible for the purposes of material transport across ocean basins. However, observations indicate that the oceans are under constant mixing. This is enabled by ocean eddies that range from Kolmogorov microscales to gyres spanning entire basins. The eddy activity that enables this mixing continuously dissipates energy, which is lost to the smallest scales of motion. This is balanced mainly by tides and wind stress, which act as energy sources that continuously compensate for the dissipated energy.
Vertical transport: overturning circulation and eddy-upwelling.
Apart from the layers in the immediate vicinity of the surface, most of the bulk of the ocean is stably stratified. In a few narrow, sporadic regions at high latitudes, surface water becomes unstable enough to sink deeply and constitute the deep, southward branch of the overturning circulation (see e.g. AMOC). Eddy diffusion, mainly in the Antarctic Circumpolar Current, then enables the return upward flow of these water masses. Upwelling also has a coastal component owing to Ekman transport, but the Antarctic Circumpolar Current is considered to be the dominant source of upwelling, responsible for roughly 80% of its overall intensity. Hence the efficiency of turbulent mixing in sub-Antarctic regions is the key element that sets the rate of the overturning circulation, and thus the transport of heat and salt across the global ocean.
Eddy diffusion also controls the upwelling of atmospheric carbon that dissolved in the upper ocean thousands of years earlier, and thus plays an important role in Earth's climate system. In the context of global warming caused by increased atmospheric carbon dioxide, the upwelling of these ancient (hence less carbon-rich) water masses, while present-day carbon-rich air is simultaneously dissolved and downwelled, causes a net accumulation of carbon in the ocean. This in turn moderates climate change, but causes issues such as ocean acidification.
Horizontal transport: plastics.
An example of horizontal transport that has received significant research interest in the 21st century is the transport of floating plastics. Over large distances, the most efficient transport mechanism is the wind-driven circulation. Convergent Ekman transport in subtropical gyres turns these into regions of increased floating plastic concentration (e.g. Great Pacific garbage patch).
In addition to the large-scale (deterministic) circulations, many smaller scale processes blur the overall picture of plastic transport. Sub-grid turbulent diffusion adds a stochastic nature to the movement. Numerical studies are often done involving large ensemble of floating particles to overcome this inherent stochasticity.
In addition, there are also more macroscopic eddies that are resolved in simulations and are better understood. For example, mesoscale eddies play an important role. Mesoscale eddies are slowly rotating vortices with diameters of hundreds of kilometers, characterized by Rossby numbers much smaller than unity. Anticyclonic eddies (rotating clockwise in the Northern Hemisphere) have an inward surface radial flow component that causes a net accumulation of floating particles in their centre. Mesoscale eddies are not only able to hold debris, but also to transport it across large distances owing to their westward drift. This has been shown for surface drifters, radioactive isotope markers, plankton, jellyfish, heat and salt. Sub-mesoscale vortices and ocean fronts are also important, but they are typically unresolved in numerical models, and contribute to the above-mentioned stochastic component of the transport.
Atmosphere.
The problem of diffusion in the atmosphere is often reduced to that of solving the original gradient based diffusion equation under the appropriate boundary conditions. This theory is often called the K theory, where the name comes from the diffusivity coefficient K introduced in the gradient based theory.
If K is considered to be constant, for example, the theory can be used to describe the diffusion of a passive scalar quantity formula_13, such as smoke, through the atmosphere.
For a stationary medium, in which the diffusion coefficients of formula_13 (not necessarily equal in the three directions) can vary with the three spatial coordinates, the more general gradient-based diffusion equation states, formula_88Considering a point source, the boundary conditions areformula_89with formula_90 at the source, such that formula_91, where formula_92 is the source strength (total amount of formula_93 released).
The solution of this problem is a Gaussian function. In particular, the solution for an instantaneous point source of formula_13, with strength formula_92, of an atmosphere in which formula_94 is constant, formula_95 and for which we consider a Lagrangian system of reference that moves with the mean wind formula_94:formula_96Integration of this instantaneous-point-source solution with respect to space yields equations for instantaneous volume sources (bomb bursts, for example). Integration of the instantaneous-point source equation with respect to time gives the continuous-point-source solutions.
Atmospheric Boundary Layer.
K theory has been applied when studying the dynamics of a scalar quantity formula_93 through the atmospheric boundary layer. The assumption of constant eddy diffusivity rarely holds here, and for this reason it is not possible to simply apply K theory as previously introduced.
Without loss of generality, consider a steady state, i.e. formula_97, and an infinite crosswind line source, for which, at formula_98formula_99Assuming that formula_100, i.e., the x-transport by the mean flow greatly outweighs the eddy flux in that direction, the gradient-based diffusion equation for the flux of a stationary medium formula_101 becomesformula_102This equation, together with the boundary conditionsformula_103(where, in particular, the last condition implies zero flux at the ground), has been the basis for many investigations. Different assumptions on the form of formula_104 yield different solutions. As an example, K theory is widely used in atmospheric turbulent diffusion (heat conduction from the earth's surface, momentum distribution) because the fundamental differential equation involved can be considerably simplified by eliminating one or more of the space coordinates. Having said that, in planetary-boundary-layer heat conduction, the source is a sinusoidal time function, and so the mathematical complexity of some of these solutions is considerable.
Shortcomings and advantages.
In general, K theory comes with some shortcomings. Calder studied the applicability of the diffusion equation to the atmospheric case and concluded that the standard K theory form cannot be generally valid. Monin refers to K theory as a semi-empirical theory of diffusion and points out that the basic nature of K theory must be kept in mind as the chain of deductions from the original equation grows longer and more involved.
That being said, K theory provides many useful, practical results. One of them is the study by Barad, in which K theory was applied to the complicated problem of diffusion of a bent-over stack plume in very stable atmospheres.
Note on stirring and mixing.
The verb "stirring" has a meaning distinct from "mixing". The former stands for a more large scale phenomenon, such as eddy diffusion, while the latter is sometimes used for more microscopic processes, such as molecular diffusion. They are often used interchangeably, including some scientific literature. "Mixing" is often used for the outcome of both, especially in less formal narration. It can be seen in the animation in the introductory section that eddy-induced stirring breaks down the black area to smaller and more chaotic spatial patterns, but nowhere does any shade of grey appear. Two fluids become more and more intertwined, but they do not mix due to eddy diffusion. In reality, as their interface becomes larger, molecular diffusion becomes more and more efficient and finishes the homogenization by actually mixing the molecules across the boundaries. This is a truly microscopically irreversible process. But even without molecular diffusion taking care of the last step, one can reasonably claim that spatial concentration is altered due to eddy diffusion. In practice, concentration is defined using a very small but finite control volume in which particles of the relevant species are counted. Averaging over such small control volume yields a useful measure of concentration. This procedure captures well the action of all eddies smaller than the size of the control volume. This allows to formulate equations describing eddy diffusion and its effect on concentration without the need to explicitly consider molecular diffusion.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi(\\vec{x},t)"
},
{
"math_id": 1,
"text": "\\vec{x}"
},
{
"math_id": 2,
"text": "\\frac{\\partial\\phi}{\\partial t} + \\nabla\\cdot(\\vec{u}\\phi) = K_0 \\nabla^2\\phi"
},
{
"math_id": 3,
"text": "K_0"
},
{
"math_id": 4,
"text": "\\phi(\\vec{x},t) = \\langle \\phi(\\vec{x},t)\\rangle + \\phi'(\\vec{x},t)"
},
{
"math_id": 5,
"text": "\\vec{u}(\\vec{x},t) = \\langle \\vec{u}(\\vec{x},t)\\rangle + \\vec{u}'(\\vec{x},t)"
},
{
"math_id": 6,
"text": "\\langle\\phi'\\rangle=0"
},
{
"math_id": 7,
"text": "\\phi'"
},
{
"math_id": 8,
"text": "\\begin{aligned}\n \\langle\\vec{u}\\phi\\rangle &= \\langle \\left( \\langle\\vec{u}\\rangle + \\vec{u}'\\right) \\left( \\langle\\phi\\rangle + \\phi' \\right) \\rangle \\\\\n &= \\langle\\vec{u}\\rangle\\langle\\phi\\rangle\\ + \\langle\\vec{u}'\\phi'\\rangle\\end{aligned}"
},
{
"math_id": 9,
"text": "\\frac{\\partial\\langle\\phi\\rangle}{\\partial t} + \\nabla\\cdot\\left( \\langle\\vec{u}\\rangle\\langle\\phi\\rangle\\ + \\langle\\vec{u}'\\phi'\\rangle \\right) = K_0 \\nabla^2\\langle\\phi\\rangle"
},
{
"math_id": 10,
"text": "\\nabla^2=\\nabla\\cdot\\nabla"
},
{
"math_id": 11,
"text": "\\frac{\\partial\\langle\\phi\\rangle}{\\partial t} + \\nabla\\cdot\\left( \\langle\\vec{u}\\rangle\\langle\\phi\\rangle\\right)=\n \\nabla\\cdot\\left(K_0\\nabla\\langle\\phi\\rangle - \\langle\\vec{u}'\\phi'\\rangle\\right)"
},
{
"math_id": 12,
"text": "\\vec{u}"
},
{
"math_id": 13,
"text": "\\phi"
},
{
"math_id": 14,
"text": "\\frac{D\\phi}{Dt} +\\phi\\nabla\\cdot\\vec{u} = K_0 \\nabla^2\\phi"
},
{
"math_id": 15,
"text": "\\frac{\\overline{D}}{\\overline{D}t} = \\frac{\\partial}{\\partial t} + \\langle\\vec{u}\\rangle\\cdot\\nabla"
},
{
"math_id": 16,
"text": "\\frac{\\overline{D}\\langle\\phi\\rangle}{\\overline{D}t}\n + \\langle\\phi\\rangle\\nabla\\cdot\\langle\\vec{u}\\rangle=\n \\nabla\\cdot\\left(K_0\\nabla\\langle\\phi\\rangle - \\langle\\vec{u}'\\phi'\\rangle\\right)"
},
{
"math_id": 17,
"text": "K_0\\nabla\\langle\\phi\\rangle"
},
{
"math_id": 18,
"text": "-\\langle\\vec{u}'\\phi'\\rangle"
},
{
"math_id": 19,
"text": "V"
},
{
"math_id": 20,
"text": "\\frac{\\partial}{\\partial t}\\int_V\\langle\\phi\\rangle\\text{d}V = \n \\oint K_0 \\nabla\\langle\\phi\\rangle\\cdot\\vec{n}\\text{d}A\n - \\oint \\langle\\phi'\\vec{u}'\\rangle\\cdot\\vec{n}\\text{d}A\n - \\oint \\langle\\phi\\rangle\\langle\\vec{u}\\rangle\\cdot\\vec{n}\\text{d}A"
},
{
"math_id": 21,
"text": "\\langle\\phi'\\vec{u}'\\rangle"
},
{
"math_id": 22,
"text": "\\langle \\phi \\rangle"
},
{
"math_id": 23,
"text": "\\phi(\\vec{x}, t=0)"
},
{
"math_id": 24,
"text": "\\frac{\\partial\\phi}{\\partial t}"
},
{
"math_id": 25,
"text": "\\phi(\\vec{x})"
},
{
"math_id": 26,
"text": "-\\nabla\\phi(\\vec{x})"
},
{
"math_id": 27,
"text": "\\Delta x"
},
{
"math_id": 28,
"text": "\\phi_1"
},
{
"math_id": 29,
"text": "\\phi_2"
},
{
"math_id": 30,
"text": "\\phi_2 > \\phi_1"
},
{
"math_id": 31,
"text": "U"
},
{
"math_id": 32,
"text": "J"
},
{
"math_id": 33,
"text": "x"
},
{
"math_id": 34,
"text": "\\begin{aligned}\n J &= \\phi_1 U - \\phi_2 U \\\\\n &= - U \\Delta \\phi \\\\\n &= -(U\\Delta x)\\frac{\\Delta\\phi}{\\Delta x}\\end{aligned}"
},
{
"math_id": 35,
"text": "\\Delta\\phi"
},
{
"math_id": 36,
"text": "J= -(U\\Delta x)\\frac{\\partial\\phi}{\\partial x}"
},
{
"math_id": 37,
"text": "K"
},
{
"math_id": 38,
"text": "J = -K\\frac{\\partial\\phi}{\\partial x}"
},
{
"math_id": 39,
"text": "\\frac{\\partial\\phi}{\\partial t} + \\frac{\\partial J}{\\partial x} = 0"
},
{
"math_id": 40,
"text": "\\frac{\\partial\\phi}{\\partial t} = \\frac{\\partial}{\\partial x}\\left(K\\frac{\\partial\\phi}{\\partial x}\\right)"
},
{
"math_id": 41,
"text": "\\frac{\\partial\\phi}{\\partial t} = K\\frac{\\partial^2\\phi}{\\partial x^2}"
},
{
"math_id": 42,
"text": "x=0"
},
{
"math_id": 43,
"text": "\\phi(x,t) = \\frac{1}{\\sqrt{4\\pi K t}}\\exp{\\left(-\\frac{x^2}{4Kt}\\right)}"
},
{
"math_id": 44,
"text": "\\sigma^2(t) = 2Kt"
},
{
"math_id": 45,
"text": "\\sigma(t)=\\sqrt{2Kt}\\sim t^{1/2}"
},
{
"math_id": 46,
"text": "\\langle\\phi'\\vec{u}'\\rangle = -K(\\vec{x}, t)\\nabla\\langle\\phi\\rangle"
},
{
"math_id": 47,
"text": "\\frac{\\partial\\langle\\phi\\rangle}{\\partial t} + \\nabla\\cdot\\left( \\langle\\vec{u}\\rangle\\langle\\phi\\rangle\\right)=\n \\nabla\\cdot\\left((K_0+K)\\nabla\\langle\\phi\\rangle\\right)"
},
{
"math_id": 48,
"text": "\\phi\\rightarrow\\langle\\phi\\rangle, \\vec{u}\\rightarrow\\langle\\vec{u}\\rangle"
},
{
"math_id": 49,
"text": "K_0 \\rightarrow K_0 + K"
},
{
"math_id": 50,
"text": "K(\\vec{x}, t)"
},
{
"math_id": 51,
"text": "\\frac{\\partial\\langle\\phi\\rangle}{\\partial t} + \\nabla\\cdot\\left( \\langle\\vec{u}\\rangle\\langle\\phi\\rangle\\right)=\n (K_0+K)\\nabla^2\\langle\\phi\\rangle"
},
{
"math_id": 52,
"text": "-\\nabla\\langle\\phi\\rangle"
},
{
"math_id": 53,
"text": "\\left\\langle \\phi\\right\\rangle"
},
{
"math_id": 54,
"text": "t_0"
},
{
"math_id": 55,
"text": "\\phi(\\vec{x}, t_0) = \\phi_0(\\vec{x})"
},
{
"math_id": 56,
"text": "\\vec{X}(t, \\vec{Y})"
},
{
"math_id": 57,
"text": "\\vec{Y}"
},
{
"math_id": 58,
"text": "\\phi(\\vec{X}(t, \\vec{Y}), t) = \\phi(\\vec{Y}, t_0) = \\phi_0(\\vec{Y})"
},
{
"math_id": 59,
"text": "\\left\\langle\\phi(\\vec{x}, t)\\right\\rangle = \\left\\langle\\phi_0(\\vec{Y}(t, \\vec{x})\\right\\rangle = \\int f_X(\\vec{x};t|\\vec{Y})\\phi_0(\\vec{Y})d\\vec{Y}"
},
{
"math_id": 60,
"text": "f_X"
},
{
"math_id": 61,
"text": "\\vec{Y_0}"
},
{
"math_id": 62,
"text": "\\phi_0(\\vec{x}) = \\delta(\\vec{x} - \\vec{Y_0})"
},
{
"math_id": 63,
"text": "\\phi(\\vec{x}, t)"
},
{
"math_id": 64,
"text": "\\left\\langle\\phi(\\vec{x}, t)\\right\\rangle = f_X(\\vec{x};t|Y_0)"
},
{
"math_id": 65,
"text": "Y_0 = 0"
},
{
"math_id": 66,
"text": "\\left\\langle\\vec{X}(t,0)\\right\\rangle = \\int_0^t \\left\\langle\\vec{U}(s,0)\\right\\rangle ds = 0"
},
{
"math_id": 67,
"text": "\\left\\langle X_i(t,0) X_j(t,0) \\right\\rangle = \\sigma_X^2(t)\\delta_{ij}"
},
{
"math_id": 68,
"text": "\\sigma_x(t)"
},
{
"math_id": 69,
"text": "\\delta_{ij}"
},
{
"math_id": 70,
"text": "\\rho(s)"
},
{
"math_id": 71,
"text": "\\sigma_X^2(t) = 2u'^2 \\int_0^t(t-s)\\rho(s)ds"
},
{
"math_id": 72,
"text": "u'"
},
{
"math_id": 73,
"text": "\\hat{\\Gamma}_T(t)"
},
{
"math_id": 74,
"text": "\\hat{\\Gamma}_T(t) = \\frac{1}{2}\\frac{d}{dt}\\sigma_X^2 = u'^2 \\int_0^t \\rho(s)ds"
},
{
"math_id": 75,
"text": "T_L = \\int_0^\\infty \\rho(s) ds"
},
{
"math_id": 76,
"text": "t \\ll T_L"
},
{
"math_id": 77,
"text": "\\rho(0)=1"
},
{
"math_id": 78,
"text": "\\sigma_X \\approx u' t"
},
{
"math_id": 79,
"text": "\\hat{\\Gamma}_T(t) \\approx u'^2 t"
},
{
"math_id": 80,
"text": "t \\gg T_L"
},
{
"math_id": 81,
"text": "\\Gamma_T = u'^2 T_L"
},
{
"math_id": 82,
"text": "\\sigma_X(t) \\approx \\sqrt{2u'^2T_Lt}"
},
{
"math_id": 83,
"text": "\\rho(s)= \\exp(-|s|/T_L) "
},
{
"math_id": 84,
"text": "\\rho(s) "
},
{
"math_id": 85,
"text": "\\sigma^2_X(t) = 2 u^2T_L[t-T_L(1-\\exp(-t/T_L))]"
},
{
"math_id": 86,
"text": "\\left\\langle \\phi(\\vec{x}, t) \\right\\rangle = (\\sigma_X\\sqrt{2\\pi})^{-3} \\exp(-x_ix_i/2\\sigma_X^2)"
},
{
"math_id": 87,
"text": "\\sigma_X(t) "
},
{
"math_id": 88,
"text": "\\frac{\\partial \\phi}{\\partial t} = \n\\frac{\\partial}{\\partial x} \\left(K_x \\frac{\\partial \\phi}{\\partial x} \\right) + \n\\frac{\\partial}{\\partial y} \\left(K_y \\frac{\\partial \\phi}{\\partial y} \\right) + \n\\frac{\\partial}{\\partial z} \\left(K_z \\frac{\\partial \\phi}{\\partial z} \\right)"
},
{
"math_id": 89,
"text": "\\begin{aligned}\n (1) \\quad & \\phi \\rightarrow 0 \\quad \\text{as} \\quad t \\rightarrow \\infty \\quad \\text{for} \\quad -\\infty < x < \\infty \\\\ \n (2) \\quad & \\phi \\rightarrow 0 \\quad \\text{as} \\quad t \\rightarrow 0 \\quad \\text{for} \\quad x \\neq 0\\end{aligned}"
},
{
"math_id": 90,
"text": "\\phi \\rightarrow \\infty"
},
{
"math_id": 91,
"text": "\\int_{-\\infty}^{\\infty} \\phi dx = \\Phi"
},
{
"math_id": 92,
"text": "\\Phi"
},
{
"math_id": 93,
"text": "\\phi"
},
{
"math_id": 94,
"text": "\\overline{u}"
},
{
"math_id": 95,
"text": "v = w = 0"
},
{
"math_id": 96,
"text": "\\frac{\\phi}{\\Phi} = \\frac{1}{(4 \\pi K t)^{1/2}}\\exp\\left(-\\frac{x^2}{4Kt}\\right)"
},
{
"math_id": 97,
"text": "\\partial \\phi / \\partial t = 0"
},
{
"math_id": 98,
"text": "z=0"
},
{
"math_id": 99,
"text": "\\frac{\\partial }{\\partial y} \\left( K_y \\frac{\\partial \\phi}{\\partial y} \\right) = 0"
},
{
"math_id": 100,
"text": "\\partial( K_x \\partial \\phi / \\partial x) /\\partial x \\ll \\overline{u} \\partial \\phi / \\partial x"
},
{
"math_id": 101,
"text": "q"
},
{
"math_id": 102,
"text": "\\overline{u} \\frac{\\partial \\phi}{\\partial x} = \\frac{\\partial}{\\partial z}\\left( K_z \\frac{\\partial \\phi}{\\partial z} \\right)"
},
{
"math_id": 103,
"text": "\\begin{aligned}\n (1) \\quad & \\phi \\rightarrow 0 \\quad \\text{as} \\quad z \\rightarrow \\infty \\\\\n (2) \\quad & \\phi \\rightarrow 0 \\quad \\text{as} \\quad x \\rightarrow 0 \\quad \\text{for all} \\quad z>0 \\quad \\text{but} \\quad \\phi \\rightarrow \\infty \\quad \\text{as} \\quad x \\rightarrow 0, \\quad z \\rightarrow 0 \\quad \\text{such that} \\quad \\lim_{x\\rightarrow 0}\\int_0^\\infty\\overline{u}\\phi dz = \\Phi \\\\\n (3) \\quad & K_z \\frac{\\partial \\phi}{\\partial z} \\quad \\text{as} \\quad z \\rightarrow 0 \\quad \\text{for all} \\quad x>0\\end{aligned}"
},
{
"math_id": 104,
"text": "K_z"
}
]
| https://en.wikipedia.org/wiki?curid=7747984 |
77480764 | Williamson theorem | In the context of linear algebra and symplectic geometry, the Williamson theorem concerns the diagonalization of positive definite matrices through symplectic matrices.
More precisely, given a strictly positive-definite formula_0 real symmetric matrix formula_1, the theorem ensures the existence of a real symplectic matrix formula_2, and a diagonal positive real matrix formula_3, such that formula_4where formula_5 denotes the 2x2 identity matrix.
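Numerically, the entries of formula_3 (the so-called symplectic eigenvalues of formula_1) can be obtained from the spectrum of the product of the symplectic form with formula_1, whose eigenvalues come in conjugate purely imaginary pairs. The NumPy-based helper below is offered as an illustrative sketch of this standard characterization, not as part of the theorem's statement; the function name and example matrix are assumptions.
import numpy as np
# Sketch: compute the symplectic eigenvalues (the diagonal of D) of a positive-definite
# 2n x 2n matrix M, using the fact that the eigenvalues of (J kron I_n) M are the
# purely imaginary pairs +/- i d_j.
def symplectic_eigenvalues(M):
    n = M.shape[0] // 2
    J = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Omega = np.kron(J, np.eye(n))              # the symplectic form J (x) I_n used above
    d = np.abs(np.linalg.eigvals(Omega @ M))   # each d_j appears twice
    return np.sort(d)[::2]
M = np.diag([1.0, 2.0, 3.0, 4.0])              # simple positive-definite example with n = 2
print(symplectic_eigenvalues(M))               # prints sqrt(1*3) and sqrt(2*4)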
Proof.
The derivation of the result hinges on a few basic observations:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2n\\times 2n"
},
{
"math_id": 1,
"text": "M\\in\\mathbb{R}^{2n\\times 2n}"
},
{
"math_id": 2,
"text": "S\\in\\mathbf{Sp}(2n,\\mathbb{R})"
},
{
"math_id": 3,
"text": "D\\in\\mathbb{R}^{n\\times n}"
},
{
"math_id": 4,
"text": "SMS^T = I_2\\otimes D \\equiv D\\oplus D,"
},
{
"math_id": 5,
"text": "I_2"
},
{
"math_id": 6,
"text": "M^{-1/2} (J\\otimes I_n) M^{-1/2}"
},
{
"math_id": 7,
"text": "J\\equiv\\begin{pmatrix}0&1\\\\-1&0\\end{pmatrix}"
},
{
"math_id": 8,
"text": "A\\in\\mathbb{R}^{2n\\times 2n}"
},
{
"math_id": 9,
"text": "O\\in\\mathbf{O}(2n)"
},
{
"math_id": 10,
"text": "OAO^T= I_2\\otimes \\Lambda"
},
{
"math_id": 11,
"text": "\\Lambda"
},
{
"math_id": 12,
"text": "A"
},
{
"math_id": 13,
"text": "O\\in\\mathbf O(2n)"
},
{
"math_id": 14,
"text": "S= (I\\otimes\\sqrt D)O M^{-1/2}"
},
{
"math_id": 15,
"text": "SMS^T=I_2\\otimes D"
},
{
"math_id": 16,
"text": "OM^{-1/2} (J\\otimes I_n) M^{-1/2}O^T=I_2\\otimes\\Lambda, "
},
{
"math_id": 17,
"text": "S(J\\otimes I_n)S^T=J\\otimes (D\\Lambda) ."
},
{
"math_id": 18,
"text": "D=\\Lambda^{-1}"
},
{
"math_id": 19,
"text": "S"
},
{
"math_id": 20,
"text": "S(J\\otimes I_n)S^T=J\\otimes I_n"
}
]
| https://en.wikipedia.org/wiki?curid=77480764 |
77481293 | Planar reentry equations | The planar reentry equations are the equations of motion governing the unpowered reentry of a spacecraft, based on the assumptions of planar motion and constant mass, in an Earth-fixed reference frame.
formula_0
where the quantities in these equations are: formula_1, the velocity; formula_2, the flight path angle (taken positive below the local horizontal, so that altitude decreases during entry); formula_3, the altitude; formula_4, the atmospheric density; formula_5, the ballistic coefficient; formula_6, the gravitational acceleration; formula_7, the radial distance from the planet's center, where formula_8 is the planetary radius; formula_9, the lift-to-drag ratio; and formula_10, the bank angle.
Simplifications.
Allen-Eggers solution.
Harry Allen and Alfred Eggers, based on their studies of ICBM trajectories, were able to derive an analytical expression for the velocity as a function of altitude. They made several assumptions: the vehicle is ballistic, i.e. non-lifting formula_11; the flight path angle formula_2 remains constant during entry; gravity is negligible compared to the aerodynamic drag; and the atmospheric density is an exponential function of altitude, formula_12, where formula_13 is the surface density and formula_14 is the scale height.
These assumptions are valid for hypersonic speeds, where the Mach number is greater than 5. Then the planar reentry equations for the spacecraft are:
formula_15
Rearranging terms and integrating from the atmospheric interface conditions at the start of reentry formula_16 leads to the expression:
formula_17
The term formula_18 is small and may be neglected, leading to the velocity:
formula_19
Allen and Eggers were also able to calculate the deceleration along the trajectory, in terms of the number of g's experienced formula_20, where formula_21 is the gravitational acceleration at the planet's surface. The altitude and velocity at maximum deceleration are:
formula_22
It is also possible to compute the maximum stagnation point convective heating with the Allen-Eggers solution and a heat transfer correlation; the Sutton-Graves correlation is commonly chosen. The heat rate formula_23 at the stagnation point, with units of Watts per square meter, is assumed to have the form:
formula_24
where formula_25 is the effective nose radius. The constant formula_26 for Earth. Then the altitude and value of peak convective heating may be found:
formula_27
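The closed-form Allen-Eggers results above are straightforward to evaluate numerically. In the sketch below, every entry-condition and vehicle parameter is an assumed illustrative value, not data from the source.
import numpy as np
# Evaluate the Allen-Eggers closed-form results; all parameter values are assumed for illustration.
rho0, H = 1.225, 7200.0               # surface density (kg/m^3) and scale height (m)
beta = 500.0                          # ballistic coefficient (kg/m^2)
gamma = np.radians(30.0)              # constant flight path angle
V_atm, g0 = 7500.0, 9.81              # entry velocity (m/s), surface gravity (m/s^2)
r_n, k = 0.5, 1.74153e-4              # nose radius (m), Sutton-Graves constant for Earth
def velocity(h):
    # Allen-Eggers velocity as a function of altitude h (m)
    return V_atm * np.exp(-rho0 * H / (2.0 * beta * np.sin(gamma)) * np.exp(-h / H))
h_nmax = H * np.log(rho0 * H / (beta * np.sin(gamma)))            # altitude of peak deceleration
n_max = V_atm**2 * np.sin(gamma) / (2.0 * g0 * np.e * H)          # peak deceleration in g's
q_max = k * np.sqrt(beta * np.sin(gamma) / (3.0 * H * r_n * np.e)) * V_atm**3  # peak heat rate (W/m^2)
print(h_nmax, n_max, q_max, velocity(h_nmax))                     # velocity(h_nmax) equals V_atm * exp(-1/2)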
Equilibrium glide condition.
Another commonly encountered simplification is a lifting entry with a shallow, slowly varying flight path angle. The velocity as a function of altitude can be derived from two assumptions: the flight path angle is shallow, so that formula_28, and it varies slowly, so that formula_29.
From these two assumptions, we may infer from the second equation of motion that:
formula_30 | [
{
"math_id": 0,
"text": " \\begin{cases} \\frac{dV}{dt} &= -\\frac{\\rho V^{2}}{2\\beta} + g \\sin \\gamma \\\\ \\frac{d\\gamma}{dt} &= -\\frac{V \\cos\\gamma}{r} - \\frac{\\rho V}{2\\beta} \\left( \\frac{L}{D} \\right) \\cos \\sigma + \\frac{g \\cos \\gamma}{V} \\\\ \\frac{dh}{dt} &= -V\\sin \\gamma \\end{cases} "
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "\\gamma > 0"
},
{
"math_id": 3,
"text": "h"
},
{
"math_id": 4,
"text": "\\rho"
},
{
"math_id": 5,
"text": "\\beta"
},
{
"math_id": 6,
"text": "g"
},
{
"math_id": 7,
"text": "r = r_{e} + h"
},
{
"math_id": 8,
"text": "r_{e}"
},
{
"math_id": 9,
"text": "L/D"
},
{
"math_id": 10,
"text": "\\sigma"
},
{
"math_id": 11,
"text": "(L = 0)"
},
{
"math_id": 12,
"text": "\\rho(h) = \\rho_{0}\\exp(-h/H)"
},
{
"math_id": 13,
"text": "\\rho_{0}"
},
{
"math_id": 14,
"text": "H"
},
{
"math_id": 15,
"text": "\\begin{cases} \\frac{dV}{dt} &= -\\frac{\\rho_{0}}{2\\beta}V^{2}e^{-h/H} \\\\\\frac{dh}{dt} &= -V \\sin \\gamma \\end{cases} \\implies \\frac{dV}{dh} = \\frac{\\rho_{0}}{2\\beta\\sin\\gamma}Ve^{-h/H}"
},
{
"math_id": 16,
"text": "(V_{\\text{atm}},h_{\\text{atm}})"
},
{
"math_id": 17,
"text": "\\frac{dV}{V} = \\frac{\\rho_{0}}{2\\beta\\sin \\gamma}e^{-h/H}dh \\implies \\log \\left( \\frac{V}{V_{\\text{atm}}} \\right) = -\\frac{\\rho_{0}H}{2\\beta \\sin\\gamma} \\left( e^{-h/H} - e^{-h_{\\text{atm}}/H} \\right)"
},
{
"math_id": 18,
"text": "\\exp(-h_{\\text{atm}}/H)"
},
{
"math_id": 19,
"text": "V(h) = V_{\\text{atm}} \\exp \\left( -\\frac{\\rho_{0}H}{2\\beta \\sin\\gamma} e^{-h/H} \\right)"
},
{
"math_id": 20,
"text": "n = g_{0}^{-1} (dV/dt)"
},
{
"math_id": 21,
"text": "g_{0}"
},
{
"math_id": 22,
"text": "h_{n_\\max} = H\\log\\left( \\frac{\\rho_{0}H}{\\beta \\sin \\gamma} \\right), \\quad V_{n_\\max} = V_{\\text{atm}}e^{-1/2} \\implies n_{\\max} = \\frac{V_{\\text{atm}}^{2} \\sin \\gamma}{2g_{0} e H}"
},
{
"math_id": 23,
"text": "\\dot{q}''"
},
{
"math_id": 24,
"text": "\\dot{q}'' = k\\left( \\frac{\\rho}{r_{n}} \\right)^{1/2}V^{3} \\sim \\text{W}/\\text{m}^{2}"
},
{
"math_id": 25,
"text": "r_{n}"
},
{
"math_id": 26,
"text": "k = 1.74153 \\times 10^{-4}"
},
{
"math_id": 27,
"text": "h_{\\dot{q}_{\\max}''} = -H \\log \\left( \\frac{\\beta \\sin \\gamma}{3H\\rho_{0} } \\right) \\implies \\dot{q}_{\\max}'' = k \\sqrt{ \\frac{\\beta \\sin\\gamma}{3Hr_{n}e} }V_{\\text{atm}}^{3}"
},
{
"math_id": 28,
"text": "\\cos\\gamma \\approx 1, \\sin\\gamma\\approx \\gamma"
},
{
"math_id": 29,
"text": "d\\gamma/dt \\approx 0"
},
{
"math_id": 30,
"text": "\\left[\\frac{1}{r} + \\frac{\\rho }{2\\beta} \\left( \\frac{L}{D} \\right) \\cos \\sigma \\right]V^{2} = g \\implies V(h) = \\sqrt{ \\frac{g r}{1 + \\frac{\\rho r}{2\\beta} \\left( \\frac{L}{D} \\right) \\cos \\sigma} }"
}
]
| https://en.wikipedia.org/wiki?curid=77481293 |
7750391 | Collision frequency | Collision frequency describes the rate of collisions between two atomic or molecular species in a given volume, per unit time. In an ideal gas, assuming that the species behave like hard spheres, the collision frequency between entities of species A and species B is:
formula_0
which has units of [volume][time]−1.
Here, formula_1 and formula_2 are the number densities of species A and B, formula_3 is the collision cross section formula_4, where formula_5 and formula_6 are the radii of A and B, formula_7 is the Boltzmann constant, formula_8 is the temperature, and formula_9 is the reduced mass formula_10.
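For a quick numerical evaluation of the hard-sphere expression above, a minimal Python sketch is given below; the function name and all example values (number densities, radii, masses, temperature) are assumed purely for illustration.
import numpy as np
# Hard-sphere collision frequency between species A and B, per unit volume and per unit time.
kB = 1.380649e-23   # Boltzmann constant, J/K
def collision_frequency(nA, nB, rA, rB, mA, mB, T):
    sigma = np.pi * (rA + rB)**2                  # collision cross section, m^2
    mu = mA * mB / (mA + mB)                      # reduced mass, kg
    return nA * nB * sigma * np.sqrt(8.0 * kB * T / (np.pi * mu))   # collisions per m^3 per s
Z = collision_frequency(nA=2.5e25, nB=2.5e25, rA=1.8e-10, rB=1.8e-10,
                        mA=4.65e-26, mB=5.31e-26, T=298.0)
print(Z)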
Collision in diluted solution.
In the case of equal-size particles at a concentration formula_11 in a solution of viscosity formula_12, an expression for the collision frequency formula_13, where formula_14 is the volume in question and formula_15 is the number of collisions per second, can be written as:
formula_16
where formula_17 is the Boltzmann constant and formula_18 is the concentration of particles.
Here the frequency is independent of particle size, a result noted as counter-intuitive. For particles of different size, more elaborate expressions can be derived for estimating formula_15. | [
{
"math_id": 0,
"text": " Z = N_\\text{A} N_\\text{B} \\sigma_\\text{AB} \\sqrt\\frac{8 k_\\text{B} T}{\\pi \\mu_\\text{AB}},"
},
{
"math_id": 1,
"text": "N_\\text{A}"
},
{
"math_id": 2,
"text": "N_\\text{B}"
},
{
"math_id": 3,
"text": " \\sigma_\\text{AB} "
},
{
"math_id": 4,
"text": " \\sigma_\\text{AB} = \\pi(r_\\text{A}+r_\\text{B})^2 "
},
{
"math_id": 5,
"text": " r_\\text{A} "
},
{
"math_id": 6,
"text": " r_\\text{B} "
},
{
"math_id": 7,
"text": "k_\\text{B}"
},
{
"math_id": 8,
"text": "T"
},
{
"math_id": 9,
"text": "\\mu_\\text{AB}"
},
{
"math_id": 10,
"text": " \\mu_\\text{AB} = \\frac{{m_\\text{A}}{m_\\text{B}}}{{m_\\text{A}} + {m_\\text{B}}} "
},
{
"math_id": 11,
"text": "n "
},
{
"math_id": 12,
"text": "\\eta"
},
{
"math_id": 13,
"text": "Z=V\\nu"
},
{
"math_id": 14,
"text": "V"
},
{
"math_id": 15,
"text": "\\nu"
},
{
"math_id": 16,
"text": " \\nu = \\frac{8 k_\\text{B} T}{3 \\eta} n, "
},
{
"math_id": 17,
"text": "k_B"
},
{
"math_id": 18,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=7750391 |
7751608 | Inertial wave | Inertial waves, also known as inertial oscillations, are a type of mechanical wave possible in rotating fluids. Unlike surface gravity waves commonly seen at the beach or in the bathtub, inertial waves flow through the interior of the fluid, not at the surface. Like any other kind of wave, an inertial wave is caused by a restoring force and characterized by its wavelength and frequency. Because the restoring force for inertial waves is the Coriolis force, their wavelengths and frequencies are related in a peculiar way. Inertial waves are transverse. Most commonly they are observed in atmospheres, oceans, lakes, and laboratory experiments. Rossby waves, geostrophic currents, and geostrophic winds are examples of inertial waves. Inertial waves are also likely to exist in the molten core of the rotating Earth.
Restoring force.
Inertial waves are restored to equilibrium by the Coriolis force, a result of rotation. To be precise, the Coriolis force arises (along with the centrifugal force) in a rotating frame to account for the fact that such a frame is always accelerating. Inertial waves, therefore, cannot exist without rotation. More complicated than tension on a string, the Coriolis force acts at a 90° angle to the direction of motion, and its strength depends on the rotation rate of the fluid. These two properties lead to the peculiar characteristics of inertial waves.
Characteristics.
Inertial waves are possible only when a fluid is rotating, and exist in the bulk of the fluid, not at its surface. Like light waves, inertial waves are transverse, which means that their vibrations occur perpendicular to the direction of wave travel. One peculiar geometrical characteristic of inertial waves is that their phase velocity, which describes the movement of the "crests" and "troughs" of the wave, is "perpendicular" to their group velocity, which is a measure of the propagation of energy.
Whereas a sound wave or an electromagnetic wave of any frequency is possible, inertial waves can exist only over the range of frequencies from zero to twice the rotation rate of the fluid. Moreover, the frequency of the wave is determined by its direction of travel. Waves traveling perpendicular to the axis of rotation have zero frequency and are sometimes called the geostrophic modes. Waves traveling parallel to the axis have maximum frequency (twice the rotation rate), and waves at intermediate angles have intermediate frequencies. In free space, an inertial wave can exist at "any" frequency between 0 and twice the rotation rate. A closed container, however, can impose restrictions on the possible frequencies of inertial waves, as it can for any kind of wave. Inertial waves in a closed container are often called inertial modes. In a sphere, for example, the inertial modes are forced to take on discrete frequencies, leaving gaps where no modes can exist.
Examples of inertial waves.
Any kind of fluid can support inertial waves: water, oil, liquid metals, air, and other gases. Inertial waves are observed most commonly in planetary atmospheres (Rossby waves, geostrophic winds) and in oceans and lakes (geostrophic currents), where they are responsible for much of the mixing that takes place. Inertial waves affected by the slope of the ocean floor are often called Rossby waves. Inertial waves can be observed in laboratory experiments or in industrial flows where a fluid is rotating. Inertial waves are also likely to exist in the liquid outer core of the Earth, and at least one group has claimed evidence of them. Similarly, inertial waves are likely in rotating astronomical flows like stars, accretion disks, planetary rings, and galaxies.
Mathematical description.
Fluid flow is governed by the Navier-Stokes equation for momentum. The flow velocity formula_0 of a fluid with viscosity formula_1 under pressure formula_2 and rotating at rate formula_3 changes over time formula_4 according to
formula_5
The first term on the right-hand side accounts for pressure, the second for viscous diffusion, and the third for the Coriolis force.
To be precise, formula_0 is the flow velocity as observed in the rotating frame of reference. Since a rotating frame of reference is accelerating (i.e. non-inertial frame), two additional (pseudo) forces (as mentioned above) emerge as a result of this coordinate transformation: the centrifugal force and the Coriolis force. In the equation above, the centrifugal force is included as a part of the generalized pressure formula_2, that is, formula_2 is related to the usual pressure formula_6, depending on the distance from the rotation axis formula_7, by
formula_8
In the case where the rotation rate is large, the Coriolis force and the centrifugal force become large compared to the other terms. Since they are small in comparison, the diffusion term and the "convective derivative" (the second term on the left) can be left out. Taking the curl of both sides and applying a few vector identities, the result is
formula_9
One class of solutions to this equation are waves that satisfy two conditions. First, if formula_10 is the wave vector,
formula_11
that is, the waves must be transverse, as mentioned above. Second, solutions are required to have a frequency formula_12 that satisfies the dispersion relation
formula_13
where formula_14 is the angle between the axis of rotation and the direction of the wave. These particular solutions are known as inertial waves.
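A short numerical check of this geometry is sketched below with illustrative values of the rotation vector and the wave vector: the group velocity, obtained as the gradient of the dispersion relation with respect to the wave vector, comes out perpendicular to the direction of phase propagation, as stated earlier.
import numpy as np
# Sketch: check that the inertial-wave group velocity is perpendicular to the wave vector.
Omega = np.array([0.0, 0.0, 1.0])     # rotation vector (rad/s), illustrative
k = np.array([3.0, 1.0, 2.0])         # wave vector (rad/m), illustrative
def omega(k):
    return 2.0 * np.dot(Omega, k) / np.linalg.norm(k)   # dispersion relation 2*Omega*cos(theta)
eps = 1e-6
c_g = np.array([(omega(k + eps * e) - omega(k - eps * e)) / (2 * eps) for e in np.eye(3)])
print(omega(k), np.dot(c_g, k))       # the dot product is ~0: group velocity perpendicular to k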
The dispersion relation looks much like the Coriolis term in the momentum equation—notice the rotation rate and the factor of two. It immediately implies the range of possible frequencies for inertial waves, as well as the dependence of their frequency on their direction. | [
{
"math_id": 0,
"text": "\\vec{u}"
},
{
"math_id": 1,
"text": "\\nu"
},
{
"math_id": 2,
"text": "P"
},
{
"math_id": 3,
"text": "\\Omega"
},
{
"math_id": 4,
"text": "t"
},
{
"math_id": 5,
"text": "\n\\frac{\\partial \\vec{u}}{\\partial t}\n+ (\\vec{u} \\cdot \\vec{\\nabla}) \\vec{u}\n= - \\frac{1}{\\rho} \\vec{\\nabla}P\n+ \\nu \\nabla^2 \\vec{u}\n- 2\\vec{\\Omega} \\times \\vec{u}.\n"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "\nP = p + \\frac{1}{2} \\rho r^2 \\Omega^2.\n"
},
{
"math_id": 9,
"text": "\n\\frac{\\partial}{\\partial t} \\nabla \\times \\vec{u} \n= 2 ( \\vec{\\Omega} \\cdot \\vec{\\nabla} ) \\vec{u}.\n"
},
{
"math_id": 10,
"text": "\\vec{k}"
},
{
"math_id": 11,
"text": "\n\\vec{u} \\cdot \\vec{k} = 0,\n"
},
{
"math_id": 12,
"text": "\\omega"
},
{
"math_id": 13,
"text": "\n\\omega = 2 \\hat{k} \\cdot \\vec{\\Omega} = 2 \\Omega \\cos{\\theta},\n"
},
{
"math_id": 14,
"text": "\\theta"
}
]
| https://en.wikipedia.org/wiki?curid=7751608 |
77517464 | Chance constrained programming | Chance Constrained Programming (CCP) is a mathematical optimization approach used to handle problems under uncertainty. It was first introduced by Charnes and Cooper in 1959 and further developed by Miller and Wagner in 1965. CCP is widely used in various fields, including finance, engineering, and operations research, to optimize decision-making processes where certain constraints need to be satisfied with a specified probability.
Theoretical Background.
Chance Constrained Programming involves the use of probability and confidence levels to handle uncertainty in optimization problems. It distinguishes between single and joint chance constraints: a single (individual) chance constraint requires one uncertain constraint to hold with a prescribed probability on its own, whereas a joint chance constraint requires a set of uncertain constraints to hold simultaneously with a prescribed probability.
Mathematical Formulation.
A general chance constrained optimization problem can be formulated as follows:
formula_0
Here, formula_1 is the objective function, formula_2 represents the equality constraints, formula_3 represents the inequality constraints, formula_4 represents the state variables, formula_5 represents the control variables, formula_6 represents the uncertain parameters, and formula_7 is the confidence level.
Common objective functions in CCP involve minimizing the expected value of a cost function, possibly combined with minimizing the variance of the cost function.
Solution Approaches.
To solve CCP problems, the stochastic optimization problem is often relaxed into an equivalent deterministic problem; the appropriate reformulation depends on the nature of the problem, for example on whether the constraints are linear and on the distribution of the uncertain parameters. One common case is sketched below.
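As an illustration of such a relaxation, consider a single linear chance constraint whose coefficient vector is Gaussian; under that assumption the chance constraint can be rewritten as a deterministic second-order cone constraint. The sketch below is a minimal example with invented numbers, not a general-purpose solver.
import numpy as np
from scipy.stats import norm
# Deterministic equivalent of the single linear chance constraint
#   Pr{ a^T x <= b } >= alpha,  with a ~ N(mu, Sigma) and alpha >= 0.5,
# namely  mu^T x + Phi^{-1}(alpha) * ||Sigma^{1/2} x||_2 <= b.
# All numerical values are invented for illustration.
mu = np.array([1.0, 2.0])
Sigma = np.array([[0.10, 0.02],
                  [0.02, 0.05]])
b, alpha = 10.0, 0.95
def chance_constraint_holds(x):
    L = np.linalg.cholesky(Sigma)                      # Sigma = L L^T
    return mu @ x + norm.ppf(alpha) * np.linalg.norm(L.T @ x) <= b
print(chance_constraint_holds(np.array([2.0, 3.0])))   # True for this candidate point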
Practical Applications.
Chance constrained programming is used in engineering for process optimization under uncertainty and production planning, and in finance for portfolio selection. It has been applied to renewable energy integration, generating flight trajectories for UAVs, and robotic space exploration.
Process Optimization Under Uncertainty.
CCP is used in chemical and process engineering to optimize operations considering uncertainties in operating conditions and model parameters. For example, in optimizing the design and operation of chemical plants, CCP helps in achieving desired performance levels while accounting for uncertainties in feedstock quality, demand, and environmental conditions.
Production Planning and Operations.
In production planning, CCP can optimize production schedules and resource allocation under demand uncertainty. A typical problem formulation involves maximizing profit while ensuring that production constraints are satisfied with a certain probability.
Chance-Constrained Portfolio Selection.
Chance-constrained portfolio selection is an approach to portfolio selection under loss aversion which is based on CCP. The goal is to maximize expected returns while ensuring that the portfolio's risk (e.g., variance or downside risk) stays within acceptable levels with a certain probability. This approach allows investors to consider the uncertainty in asset returns and make more informed investment decisions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\min f(x, u, \\xi)\n\\text{s.t. } g(x, u, \\xi) = 0,\n\\Pr\\{ h(x, u, \\xi) \\geq 0 \\} \\geq \\alpha\n"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "g"
},
{
"math_id": 3,
"text": "h"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "u"
},
{
"math_id": 6,
"text": "\\xi"
},
{
"math_id": 7,
"text": "\\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=77517464 |
77541366 | Sheath instability | Plasma instability in laboratory plasma
Sheath instability is an instability in plasma devices in which the ion sheath that forms near an electrode becomes unstable. It is analyzed by comparing its growth rate with that of the Rayleigh–Taylor instability.
formula_0
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega ^2 - (\\frac{ e}{ m })E_0 (L_n^-1 + \\frac {n_1 ^'}{n_0})-i{\\omega} v_0^' = 0"
}
]
| https://en.wikipedia.org/wiki?curid=77541366 |
77557393 | Normalization (machine learning) | Rescaling inputs to improve model training
<templatestyles src="Machine learning/styles.css"/>
In machine learning, normalization is a statistical technique with various applications. There are mainly two forms of normalization, data normalization and activation normalization. Data normalization, or feature scaling, is a general technique in statistics, and it includes methods that rescale input data so that they have well-behaved range, mean, variance, and other statistical properties. Activation normalization is specific to deep learning, and it includes methods that rescale the activation of hidden neurons inside a neural network.
Normalization is often used for faster training convergence, less sensitivity to variations in the input data, less overfitting, and better generalization to unseen data. Normalization techniques are often theoretically justified as reducing covariate shift, smoothing the optimization landscape, and increasing regularization, though they are mainly justified by empirical success.
Batch normalization.
Batch normalization (BatchNorm) operates on the activations of a layer for each mini-batch.
Consider a simple feedforward network, defined by chaining together modules:formula_0where each network module can be a linear transform, a nonlinear activation function, a convolution, etc. formula_1 is the input vector, formula_2 is the output vector from the first module, etc.
BatchNorm is a module that can be inserted at any point in the feedforward network. For example, suppose it is inserted just after formula_3, then the network would operate accordingly:formula_4The BatchNorm module does not operate over individual inputs. Instead, it must operate over one batch of inputs at a time.
Concretely, suppose we have a batch of inputs formula_5, fed all at once into the network. We would obtain in the middle of the network some vectorsformula_6The BatchNorm module computes the coordinate-wise mean and variance of these vectors:formula_7where formula_8 indexes the coordinates of the vectors, and formula_9 indexes the elements of the batch. In other words, we are considering the formula_8-th coordinate of each vector in the batch, and computing the mean and variance of this collection of numbers.
It then normalizes each coordinate to have zero mean and unit variance:
formula_10The formula_11 is a small positive constant such as formula_12 added to the variance for numerical stability, to avoid division by zero.
Finally, it applies a linear transform:formula_13Here, formula_14 and formula_15 are parameters inside the BatchNorm module. They are learnable parameters, typically trained by gradient descent.
The following code illustrates BatchNorm.
import numpy as np
def batchnorm(x, gamma, beta, epsilon=1e-8):
    # Mean and variance of each feature, computed over the batch dimension
    mu = np.mean(x, axis=0)        # shape (N,)
    sigma2 = np.var(x, axis=0)     # shape (N,)
    # Normalize the activations
    x_hat = (x - mu) / np.sqrt(sigma2 + epsilon)  # shape (B, N)
    # Apply the learnable linear transform
    y = gamma * x_hat + beta       # shape (B, N)
    return y
Interpretation.
formula_14 and formula_15 allow the network to learn to undo the normalization if that is beneficial.
Because a neural network can always be topped with a final linear transform layer, BatchNorm can be interpreted as removing the purely linear transformations, so that its layers focus on modelling the nonlinear aspects of data.
It is claimed in the original publication that BatchNorm works by reducing "internal covariate shift", though the claim has both supporters and detractors.
Special cases.
The original paper recommended using BatchNorm only after a linear transform, not after a nonlinear activation. That is, something like formula_16, not formula_17. Also, the bias formula_18 does not matter, since it will be canceled by the subsequent mean subtraction, so the input is of the form formula_19. That is, if a BatchNorm is preceded by a linear transform, then that linear transform's bias term is set to constant zero.
For convolutional neural networks (CNN), BatchNorm must preserve the translation invariance of CNN, which means that it must treat all outputs of the same kernel as if they are different data points within a batch.
Concretely, suppose we have a 2-dimensional convolutional layer defined byformula_20where formula_21 is the activation of the neuron at position formula_22 and channel formula_23 of layer formula_24, formula_25 is the kernel tensor, so that each entry formula_26 appearing in the sum depends only on formula_27 together with the output channel formula_23 (this is the translation invariance of the convolution), and formula_28 is the bias term for channel formula_23 of layer formula_24.
In order to preserve the translation invariance, BatchNorm treats all outputs from the same kernel in the same batch as additional data points within that batch.
That is, it is applied once per "kernel" formula_23 (equivalently, once per channel formula_23), not per "activation" formula_29:formula_30where formula_31 is the batch size, formula_32 is the height of the feature map, and formula_33 is the width of the feature map.
That is, even though there are only formula_34 data points in a batch, all formula_35 outputs from the kernel in this batch are treated equally.
Subsequently, normalization and the linear transform are also done per kernel:formula_36Similar considerations apply to BatchNorm for "n"-dimensional convolutions.
The following code illustrates BatchNorm for 2D convolutions:
import numpy as np
def batchnorm_cnn(x, gamma, beta, epsilon=1e-8):
    # Calculate the mean and variance for each channel.
    mean = np.mean(x, axis=(0, 1, 2), keepdims=True)
    var = np.var(x, axis=(0, 1, 2), keepdims=True)
    # Normalize the input tensor.
    x_hat = (x - mean) / np.sqrt(var + epsilon)
    # Scale and shift the normalized tensor.
    y = gamma * x_hat + beta
    return y
Layer normalization.
Layer normalization (LayerNorm) is a common competitor to BatchNorm. Unlike BatchNorm, which normalizes activations across the batch dimension for a given feature, LayerNorm normalizes across all the features within a single data sample. Compared to BatchNorm, LayerNorm's performance is not affected by batch size. It is a key component of Transformers.
For a given data input and layer, LayerNorm computes the mean (formula_37) and variance (formula_38) over all the neurons in the layer. Similar to BatchNorm, learnable parameters formula_14 (scale) and formula_15 (shift) are applied. It is defined by:formula_39where formula_40 and formula_41, and formula_42 ranges over the neurons in that layer.
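The following code illustrates LayerNorm in the same style as the BatchNorm code above; the batch layout of shape (B, D) and the per-feature scale and shift are illustrative assumptions.
import numpy as np

def layernorm(x, gamma, beta, epsilon=1e-8):
    # Mean and variance over the features of each individual sample
    mu = np.mean(x, axis=-1, keepdims=True)       # shape (B, 1)
    sigma2 = np.var(x, axis=-1, keepdims=True)    # shape (B, 1)
    # Normalize each sample independently of the rest of the batch
    x_hat = (x - mu) / np.sqrt(sigma2 + epsilon)  # shape (B, D)
    # Apply the learnable per-feature scale and shift
    y = gamma * x_hat + beta                      # shape (B, D)
    return y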
Examples.
For example, in CNN, a LayerNorm applies to all activations in a layer. In the previous notation, we haveformula_43notice that the batch index formula_44 is removed, while the channel index formula_45 is added.
In recurrent neural networks and Transformers, LayerNorm is applied individually to each timestep.
For example, if the hidden vector in an RNN at timestep formula_46 is formula_47 where formula_48 is the dimension of the hidden vector, then LayerNorm will be applied withformula_49where formula_50 and formula_51.
Root mean square layer normalization.
Root mean square layer normalization (RMSNorm) changes LayerNorm byformula_52Essentially it is LayerNorm where we enforce formula_53.
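The following code sketches RMSNorm; as in the definition above, no mean is subtracted and no epsilon is added, though in practice a small epsilon is often placed inside the square root for numerical stability.
import numpy as np

def rmsnorm(x, gamma, beta):
    # Root mean square over the features of each sample (no mean subtraction)
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True))  # shape (B, 1)
    x_hat = x / rms                                         # shape (B, D)
    # Apply the scale and shift
    y = gamma * x_hat + beta
    return y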
Other normalizations.
Weight normalization (WeightNorm) is a technique inspired by BatchNorm. It normalizes weight matrices in a neural network, rather than its neural activations.
Gradient normalization (GradNorm) normalizes gradient vectors during backpropagation.
CNN-specific normalization.
There are some activation normalization techniques that are only used for CNNs.
Local response normalization.
Local response normalization was used in AlexNet. It was applied in a convolutional layer, just after a nonlinear activation function. It was defined byformula_54where formula_55 is the activation of the neuron at location formula_56 and channel formula_8. In words, each pixel in a channel is suppressed by the activations of the same pixel in its adjacent channels.
The numbers formula_57 are hyperparameters picked by using a validation set.
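The following code sketches local response normalization for a single channels-last feature map of shape (H, W, N); the default hyperparameter values are the ones commonly quoted for AlexNet and are shown here only as an illustration.
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    # a has shape (H, W, N), with N channels
    H, W, N = a.shape
    b = np.empty_like(a)
    for i in range(N):
        # Sum of squared activations over the adjacent channels of channel i
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[:, :, lo:hi + 1] ** 2, axis=2)) ** beta
        b[:, :, i] = a[:, :, i] / denom
    return b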
Group normalization.
Group normalization (GroupNorm) is a technique only used for CNNs. It can be understood as the LayerNorm for CNN applied once per channel-group.
Suppose that at a layer formula_24 there are channels formula_58; we partition these channels into groups formula_59 and then apply LayerNorm to each group.
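The following code sketches GroupNorm for a channels-last batch of shape (B, H, W, C), assuming the number of channels is divisible by the chosen number of groups.
import numpy as np

def groupnorm(x, gamma, beta, num_groups, epsilon=1e-8):
    # x has shape (B, H, W, C); split the channels into num_groups equal groups
    B, H, W, C = x.shape
    xg = x.reshape(B, H, W, num_groups, C // num_groups)
    # Statistics are computed per sample and per group, over space and the group's channels
    mu = np.mean(xg, axis=(1, 2, 4), keepdims=True)
    var = np.var(xg, axis=(1, 2, 4), keepdims=True)
    x_hat = ((xg - mu) / np.sqrt(var + epsilon)).reshape(B, H, W, C)
    # Per-channel scale and shift, as in the other normalizations
    return gamma * x_hat + beta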
Instance normalization.
Instance normalization (InstanceNorm), or contrast normalization, is a technique first developed for neural style transfer, and is only used for CNNs. It can be understood as the LayerNorm for CNN applied once per channel, or equivalently, as group normalization where each group consists of a single channel:formula_60
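The following code sketches InstanceNorm for a channels-last batch of shape (B, H, W, C); statistics are computed per sample and per channel, matching the equations above.
import numpy as np

def instancenorm(x, gamma, beta, epsilon=1e-8):
    # Mean and variance per sample and per channel, over the spatial dimensions only
    mu = np.mean(x, axis=(1, 2), keepdims=True)   # shape (B, 1, 1, C)
    var = np.var(x, axis=(1, 2), keepdims=True)   # shape (B, 1, 1, C)
    x_hat = (x - mu) / np.sqrt(var + epsilon)
    # Per-channel scale and shift
    return gamma * x_hat + beta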
Adaptive instance normalization.
Adaptive instance normalization (AdaIN) is a variant of instance normalization, designed specifically for neural style transfer with CNN, not for CNN in general.
In the AdaIN method of style transfer, we take a CNN and two input images, one content and one style. Each image is processed through the same CNN, and at a certain layer formula_24, AdaIN is applied.
Let formula_61 be the activation in the content image, and formula_62 be the activation in the style image. Then, AdaIN first computes the mean and variance of the activations of the style image formula_62, then uses those as the formula_64 for InstanceNorm on formula_61. Note that formula_62 itself remains unchanged. Explicitly, we haveformula_65 | [
{
"math_id": 0,
"text": "x^{(0)} \\mapsto x^{(1)} \\mapsto x^{(2)} \\mapsto \\cdots"
},
{
"math_id": 1,
"text": "x^{(0)}"
},
{
"math_id": 2,
"text": "x^{(1)}"
},
{
"math_id": 3,
"text": "x^{(l)}"
},
{
"math_id": 4,
"text": "\\cdots \\mapsto x^{(l)} \\mapsto \\mathrm{BN}(x^{(l)}) \\mapsto x^{(l+1)} \\mapsto \\cdots "
},
{
"math_id": 5,
"text": "x^{(0)}_{(1)}, x^{(0)}_{(2)}, \\dots, x^{(0)}_{(B)} "
},
{
"math_id": 6,
"text": "x^{(l)}_{(1)}, x^{(l)}_{(2)}, \\dots, x^{(l)}_{(B)} "
},
{
"math_id": 7,
"text": "\n\\begin{aligned}\n\\mu^{(l)}_i &= \\frac 1B \\sum_{b=1}^B x^{(l)}_{(b), i} \\\\\n(\\sigma^{(l)}_i)^2 &= \\frac{1}{B} \\sum_{b=1}^B (x_{(b),i}^{(l)} - \\mu_i^{(l)})^2\n\\end{aligned}\n"
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": "b"
},
{
"math_id": 10,
"text": "\\hat{x}^{(l)}_{(b), i} = \\frac{x^{(l)}_{(b), i} - \\mu^{(l)}_i}{\\sqrt{(\\sigma^{(l)}_i)^2 + \\epsilon}}"
},
{
"math_id": 11,
"text": "\\epsilon"
},
{
"math_id": 12,
"text": "10^{-8}"
},
{
"math_id": 13,
"text": "y^{(l)}_{(b), i} = \\gamma_i \\hat{x}^{(l)}_{(b), i} + \\beta_i"
},
{
"math_id": 14,
"text": "\\gamma"
},
{
"math_id": 15,
"text": "\\beta"
},
{
"math_id": 16,
"text": "\\mathrm{BN}(Wx + b)"
},
{
"math_id": 17,
"text": "\\mathrm{BN}(\\phi(Wx + b))"
},
{
"math_id": 18,
"text": "b "
},
{
"math_id": 19,
"text": "\\mathrm{BN}(Wx)"
},
{
"math_id": 20,
"text": "x^{(l)}_{h, w, c} = \\sum_{h', w', c'} K^{(l)}_{h'-h, w'-w, c, c'} x_{h', w', c'}^{(l-1)} + b^{(l)}_c"
},
{
"math_id": 21,
"text": "x^{(l)}_{h, w, c}"
},
{
"math_id": 22,
"text": "(h, w)"
},
{
"math_id": 23,
"text": "c"
},
{
"math_id": 24,
"text": "l"
},
{
"math_id": 25,
"text": "K^{(l)}_{\\Delta h, \\Delta w, c, c'}"
},
{
"math_id": 26,
"text": "K^{(l)}_{h'-h, w'-w, c, c'}"
},
{
"math_id": 27,
"text": "\\Delta h, \\Delta w, c'"
},
{
"math_id": 28,
"text": "b^{(l)}_c"
},
{
"math_id": 29,
"text": "x^{(l+1)}_{h, w, c}"
},
{
"math_id": 30,
"text": "\n\\begin{aligned}\n\\mu^{(l)}_c &= \\frac{1}{BHW} \\sum_{b=1}^B \\sum_{h=1}^H \\sum_{w=1}^W x^{(l)}_{(b), h, w, c} \\\\\n(\\sigma^{(l)}_c)^2 &= \\frac{1}{BHW} \\sum_{b=1}^B \\sum_{h=1}^H \\sum_{w=1}^W (x_{(b), h, w, c}^{(l)} - \\mu_c^{(l)})^2\n\\end{aligned}\n"
},
{
"math_id": 31,
"text": "B"
},
{
"math_id": 32,
"text": "H"
},
{
"math_id": 33,
"text": "W"
},
{
"math_id": 34,
"text": "\nB\n"
},
{
"math_id": 35,
"text": "\nBHW\n"
},
{
"math_id": 36,
"text": "\n\\begin{aligned}\n\\hat{x}^{(l)}_{(b), h, w, c} &= \\frac{x^{(l)}_{(b), h, w, c} - \\mu^{(l)}_c}{\\sqrt{(\\sigma^{(l)}_c)^2 + \\epsilon}} \\\\\ny^{(l)}_{(b), h, w, c} &= \\gamma_c \\hat{x}^{(l)}_{(b), h, w, c} + \\beta_c\n\\end{aligned}\n"
},
{
"math_id": 37,
"text": "\\mu"
},
{
"math_id": 38,
"text": "\\sigma^2"
},
{
"math_id": 39,
"text": "\\hat{x_i} = \\frac{x_i - \\mu}{\\sqrt{\\sigma^2 + \\epsilon}}, \\quad y_i = \\gamma_i \\hat{x_i} + \\beta_i"
},
{
"math_id": 40,
"text": "\n\\mu = \\frac 1D \\sum_{i=1}^D x_i\n"
},
{
"math_id": 41,
"text": "\n\\sigma^2 = \\frac 1D \\sum_{i=1}^D (x_i - \\mu)^2\n"
},
{
"math_id": 42,
"text": "\ni\n"
},
{
"math_id": 43,
"text": "\\begin{aligned}\n\\mu^{(l)} &= \\frac{1}{HWC} \\sum_{h=1}^H \\sum_{w=1}^W\\sum_{c=1}^C x^{(l)}_{h, w, c} \\\\\n(\\sigma^{(l)})^2 &= \\frac{1}{HWC} \\sum_{h=1}^H \\sum_{w=1}^W\\sum_{c=1}^C (x_{h, w, c}^{(l)} - \\mu^{(l)})^2 \\\\\n\\hat{x}^{(l)}_{h,w,c} &= \\frac{\\hat{x}^{(l)}_{h,w,c} - \\mu^{(l)}}{\\sqrt{(\\sigma^{(l)})^2 + \\epsilon}} \\\\\ny^{(l)}_{h,w,c} &= \\gamma^{(l)} \\hat{x}^{(l)}_{h,w,c} + \\beta^{(l)}\n\\end{aligned}"
},
{
"math_id": 44,
"text": "\nb\n"
},
{
"math_id": 45,
"text": "\nc\n"
},
{
"math_id": 46,
"text": "\nt\n"
},
{
"math_id": 47,
"text": "\nx^{(t)} \\in \\mathbb{R}^{D}\n"
},
{
"math_id": 48,
"text": "\nD\n"
},
{
"math_id": 49,
"text": "\\hat{x_{i}}^{(t)} = \\frac{x_i^{(t)} - \\mu^{(t)}}{\\sqrt{(\\sigma^{(t)})^2 + \\epsilon}}, \\quad y_i^{(t)} = \\gamma_i \\hat{x_i}^{(t)} + \\beta_i"
},
{
"math_id": 50,
"text": "\n\\mu^{(t)} = \\frac 1D \\sum_{i=1}^D x_i^{(t)}\n"
},
{
"math_id": 51,
"text": "\n(\\sigma^{(t)})^2 = \\frac 1D \\sum_{i=1}^D (x_i^{(t)} - \\mu^{(t)})^2\n"
},
{
"math_id": 52,
"text": "\n\\hat{x_i} = \\frac{x_i}{\\sqrt{\\frac 1D \\sum_{i=1}^D x_i^2}}, \\quad y_i = \\gamma \\hat{x_i} + \\beta\n"
},
{
"math_id": 53,
"text": "\\mu, \\epsilon = 0"
},
{
"math_id": 54,
"text": "b_{x, y}^i=\\frac{a_{x, y}^i}{\\left(k+\\alpha \\sum_{j=\\max (0, i-n / 2)}^{\\min (N-1, i+n / 2)}\\left(a_{x, y}^j\\right)^2\\right)^\\beta}"
},
{
"math_id": 55,
"text": "a_{x,y}^i"
},
{
"math_id": 56,
"text": "(x,y)"
},
{
"math_id": 57,
"text": "k, n, \\alpha, \\beta"
},
{
"math_id": 58,
"text": "1, 2, \\dots, C"
},
{
"math_id": 59,
"text": "g_1, \\dots, g_G"
},
{
"math_id": 60,
"text": "\n\\begin{aligned}\n\\mu^{(l)}_c &= \\frac{1}{HW} \\sum_{h=1}^H \\sum_{w=1}^Wx^{(l)}_{h, w, c} \\\\\n(\\sigma^{(l)}_c)^2 &= \\frac{1}{HW} \\sum_{h=1}^H \\sum_{w=1}^W (x_{h, w, c}^{(l)} - \\mu^{(l)}_c)^2 \\\\\n\\hat{x}^{(l)}_{h,w,c} &= \\frac{\\hat{x}^{(l)}_{h,w,c} - \\mu^{(l)}_c}{\\sqrt{(\\sigma^{(l)}_c)^2 + \\epsilon}} \\\\\ny^{(l)}_{h,w,c} &= \\gamma^{(l)}_c \\hat{x}^{(l)}_{h,w,c} + \\beta^{(l)}_c\n\\end{aligned} \n"
},
{
"math_id": 61,
"text": "x^{(l), \\text{ content}}"
},
{
"math_id": 62,
"text": "x^{(l), \\text{ style}}"
},
{
"math_id": 63,
"text": "x'^{(l)}"
},
{
"math_id": 64,
"text": "\\gamma, \\beta"
},
{
"math_id": 65,
"text": "\n\\begin{aligned}\ny^{(l), \\text{ content}}_{h,w,c} &= \\sigma^{(l), \n\\text{ style}}_c \\left( \\frac{x^{(l), \\text{ content}}_{h,w,c} - \\mu^{(l), \\text{ content}}_c}{\\sqrt{(\\sigma^{(l), \\text{ content}}_c)^2 + \\epsilon}} \\right) + \\mu^{(l), \\text{ style}}_c.\n\\end{aligned}\n"
}
]
| https://en.wikipedia.org/wiki?curid=77557393 |
7755881 | Carleson measure | In mathematics, a Carleson measure is a type of measure on subsets of "n"-dimensional Euclidean space R"n". Roughly speaking, a Carleson measure on a domain Ω is a measure that does not vanish at the boundary of Ω when compared to the surface measure on the boundary of Ω.
Carleson measures have many applications in harmonic analysis and the theory of partial differential equations, for instance in the solution of Dirichlet problems with "rough" boundary. The Carleson condition is closely related to the boundedness of the Poisson operator. Carleson measures are named after the Swedish mathematician Lennart Carleson.
Definition.
Let "n" ∈ N and let Ω ⊂ R"n" be an open (and hence measurable) set with non-empty boundary ∂Ω. Let "μ" be a Borel measure on Ω, and let "σ" denote the surface measure on ∂Ω. The measure "μ" is said to be a Carleson measure if there exists a constant "C" > 0 such that, for every point "p" ∈ ∂Ω and every radius "r" > 0,
formula_0
where
formula_1
denotes the open ball of radius "r" about "p".
Carleson's theorem on the Poisson operator.
Let "D" denote the unit disc in the complex plane C, equipped with some Borel measure "μ". For 1 ≤ "p" < +∞, let "H""p"(∂"D") denote the Hardy space on the boundary of "D" and let "L""p"("D", "μ") denote the "L""p" space on "D" with respect to the measure "μ". Define the Poisson operator
formula_2
by
formula_3
Then "P" is a bounded linear operator if and only if the measure "μ" is Carleson.
Other related concepts.
The infimum of the set of constants "C" > 0 for which the Carleson condition
formula_4
holds is known as the Carleson norm of the measure "μ".
If "C"("R") is defined to be the infimum of the set of all constants "C" > 0 for which the restricted Carleson condition
formula_5
holds, then the measure "μ" is said to satisfy the vanishing Carleson condition if "C"("R") → 0 as "R" → 0. | [
{
"math_id": 0,
"text": "\\mu \\left( \\Omega \\cap \\mathbb{B}_{r} (p) \\right) \\leq C \\sigma \\left( \\partial \\Omega \\cap \\mathbb{B}_{r} (p) \\right),"
},
{
"math_id": 1,
"text": "\\mathbb{B}_{r} (p) := \\left\\{ x \\in \\mathbb{R}^{n} \\left| \\| x - p \\|_{\\mathbb{R}^{n}} < r \\right. \\right\\}"
},
{
"math_id": 2,
"text": "P : H^{p} (\\partial D) \\to L^{p} (D, \\mu)"
},
{
"math_id": 3,
"text": "P(f) (z) = \\frac{1}{2 \\pi} \\int_{0}^{2 \\pi} \\mathrm{Re} \\frac{e^{i t} + z}{e^{i t} - z} f(e^{i t}) \\, \\mathrm{d} \\mu(t)."
},
{
"math_id": 4,
"text": "\\forall r > 0, \\forall p \\in \\partial \\Omega, \\mu \\left( \\Omega \\cap \\mathbb{B}_{r} (p) \\right) \\leq C \\sigma \\left( \\partial \\Omega \\cap \\mathbb{B}_{r} (p) \\right)"
},
{
"math_id": 5,
"text": "\\forall r \\in (0, R), \\forall p \\in \\partial \\Omega, \\mu \\left( \\Omega \\cap \\mathbb{B}_{r} (p) \\right) \\leq C \\sigma \\left( \\partial \\Omega \\cap \\mathbb{B}_{r} (p) \\right)"
}
]
| https://en.wikipedia.org/wiki?curid=7755881 |
775658 | Ernst Ising | German physicist
Ernst Ising (; May 10, 1900 – May 11, 1998) was a German physicist, who is best remembered for the development of the Ising model. He was a professor of physics at Bradley University until his retirement in 1976.
Life.
Ernst Ising was born in Cologne in 1900. His parents were the merchant Gustav Ising and his wife Thekla Löwe. After school, he studied physics and mathematics at the University of Göttingen and the University of Hamburg. In 1922, he began researching ferromagnetism under the guidance of Wilhelm Lenz. He earned a Ph.D. in physics from the University of Hamburg in 1924, when he published his doctoral thesis (an excerpt or summary of the thesis was published as an article in a scientific journal in 1925, which has led many to believe that he published his full thesis in 1925). His doctoral thesis studied a problem suggested by his teacher, Wilhelm Lenz. He investigated the special case of a linear chain of magnetic moments, which are only able to take two positions, "up" and "down," and which are coupled by interactions between nearest neighbors. Mainly through subsequent studies by Rudolf Peierls, Hendrik Kramers, Gregory Wannier and Lars Onsager, the model proved successful in explaining phase transitions between ferromagnetic and paramagnetic states.
After earning his doctorate, Ernst Ising worked for a short time in business before becoming a teacher, in Salem, Strausberg and Crossen, among other places. In 1930, he married the economist Dr. Johanna Ehmer (February 2, 1902 – February 2, 2012; later known as Jane Ising and just barely becoming a supercentenarian). As a young German–Jewish scientist, Ising was barred from teaching and researching when Hitler came to power in 1933. In 1934, he found a position, first as a teacher and then as headmaster, at a Jewish school in Caputh near Potsdam for Jewish students who had been thrown out of public schools. Ernst and his wife Dr. Johanna Ising, née Ehmer, lived in Caputh near the famous summer residence of the Einstein family. In 1938, the school in Caputh was destroyed by the Nazis, and in 1939 the Isings fled to Luxembourg, where Ising earned money as a shepherd and railroad worker. After the German Wehrmacht occupied Luxembourg, Ernst Ising was forced to work for the army. In 1947, the Ising family emigrated to the United States. Though he became Professor of Physics at Bradley University in Peoria, Illinois, he never published again. Ising died at his home in Peoria in 1998, just one day after his 98th birthday.
Work.
The Ising model is defined on a discrete collection of variables called "spins", which can take on the value 1 or −1. The spins formula_0 interact in pairs, with energy that has one value when the two spins are the same, and a second value when the two spins are different.
The energy of the Ising model is defined to be:
formula_1
where the sum counts each pair of spins only once. Notice that the product of spins is either +1 if the two spins are the same ("aligned"), or −1 if they are different ("anti-aligned"). J is half the difference in energy between the two possibilities. Magnetic interactions seek to align spins relative to one another. Spins become randomized when thermal energy is greater than the strength of the interaction.
For each pair, if
formula_2 the interaction is called "ferromagnetic"
formula_3 the interaction is called "antiferromagnetic"
formula_4 the spins are "noninteracting"
A ferromagnetic interaction tends to align spins, and an antiferromagnetic tends to antialign them.
The spins can be thought of as living on a graph, where each node has exactly one spin, and each edge connects two spins with a nonzero value of J. If all the Js are equal, it is convenient to measure energy in units of J. Then a model is completely specified by the graph and the sign of J.
The "antiferromagnetic one-dimensional Ising model" has the energy function:
formula_5
where i runs over all the integers. This links each pair of nearest neighbors.
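As a minimal illustration, the energy of a finite chain with free ends can be computed as follows; the finite length and free boundary conditions are simplifying assumptions, since the sum above runs over all the integers.
def ising_chain_energy(spins):
    # spins is a sequence of +1/-1 values; nearest neighbors contribute S_i * S_{i+1}
    return sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

# A fully anti-aligned chain minimizes this antiferromagnetic energy
print(ising_chain_energy([1, -1, 1, -1, 1]))  # prints -4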
In his 1924 PhD thesis, Ising solved the model for the 1D case. In one dimension, the solution admits no phase transition. On the basis of this result, he incorrectly concluded that his model does not exhibit phase transition in any dimension.
It was only in 1949, 25 years after his Ph.D. thesis, that Ising learned of the importance his model had attained in the scientific literature. Today, about 800 papers are published each year that use the model to address problems in such diverse fields as neural networks, protein folding, biological membranes and social behavior.
The Ising model had significance as a historical step towards recurrent neural networks. Glauber in 1963 studied the Ising model evolving in time, as a process towards equilibrium (Glauber dynamics), adding in the component of time. Shun'ichi Amari in 1972 proposed to modify the weights of an Ising model by Hebbian learning rule as a model of associative memory, adding in the component of learning. This was popularized as the Hopfield network (1982).
Notes.
<templatestyles src="Reflist/styles.css" />
External links.
| [
{
"math_id": 0,
"text": "S_i"
},
{
"math_id": 1,
"text": "\nE = - \\sum_{ij} J_{ij} S_i S_j\n\\,"
},
{
"math_id": 2,
"text": "J_{ij} > 0 "
},
{
"math_id": 3,
"text": "J_{ij} < 0 "
},
{
"math_id": 4,
"text": "J_{ij} = 0 "
},
{
"math_id": 5,
"text": "\nE = \\sum_{i} S_{i} S_{i+1}\n\\,"
}
]
| https://en.wikipedia.org/wiki?curid=775658 |
77567392 | Low-gravity process engineering | Space manufacturing and fluid behavior
Low-gravity process engineering is a specialized field that focuses on the design, development, and optimization of industrial processes and manufacturing techniques in environments with reduced gravitational forces. This discipline encompasses a wide range of applications, from microgravity conditions experienced in Earth orbit to the partial gravity environments found on celestial bodies such as the Moon and Mars.
As humanity extends its reach beyond Earth, the ability to efficiently produce materials, manage fluids, and conduct chemical processes in reduced gravity becomes crucial for sustained space missions and potential colonization efforts. Furthermore, the unique conditions of microgravity offer opportunities for novel materials and pharmaceuticals that cannot be easily produced on Earth, potentially leading to groundbreaking advancements in various industries.
The historical context of low-gravity research dates back to the early days of space exploration. Initial experiments conducted during the Mercury and Gemini programs in the 1960s provided the first insights into fluid behavior in microgravity. Subsequent missions, including Skylab and the Space Shuttle program, expanded our understanding of materials processing and fluid dynamics in space. The advent of the International Space Station (ISS) in the late 1990s marked a significant milestone, providing a permanent microgravity laboratory for continuous research and development in low-gravity process engineering.
Fundamentals of low-gravity environments.
Low-gravity environments, encompassing both microgravity and reduced gravity conditions, exhibit unique characteristics that significantly alter physical phenomena compared to Earth's gravitational field. These environments are typically characterized by gravitational accelerations ranging from formula_0formula_1 to formula_2formula_1, where formula_1 represents Earth's standard gravitational acceleration formula_3.
Microgravity, often experienced in orbiting spacecraft, is characterized by the near absence of perceptible weight. In contrast, reduced gravity conditions, such as those on the Moon (formula_4) or Mars (formula_5), maintain a fractional gravitational pull relative to Earth.
These environments differ markedly from Earth's gravity in several key aspects:
Effects of low-gravity conditions on various physical processes.
Fluid dynamics.
In microgravity, fluid behavior is primarily governed by surface tension, viscous forces, and inertia. This leads to phenomena such as large stable liquid bridges, spherical droplet formation, and capillary flow dominance. The absence of buoyancy-driven convection alters mixing processes and phase separations, necessitating alternative methods for fluid management in space applications.
Heat transfer.
The lack of natural convection in microgravity significantly impacts heat transfer processes. Conduction and radiation become the primary modes of heat transfer, while forced convection must be induced artificially. This alteration affects cooling systems, boiling processes, and thermal management in spacecraft and space-based manufacturing.
Material behavior.
Low-gravity environments offer unique conditions for materials processing. The absence of buoyancy-driven convection and sedimentation allows for more uniform crystal growth and the formation of novel alloys and composites. Additionally, the reduced mechanical stresses in microgravity can lead to changes in material properties and behavior, influencing fields such as materials science and pharmaceutical research.
Challenges.
Low-gravity process engineering faces a number of challenges that require innovative solutions and adaptations of terrestrial technologies. These challenges stem from the unique physical phenomena observed in microgravity and reduced gravity environments.
Fluid management issues.
The absence of buoyancy and the dominance of surface tension in low-gravity environments significantly alter fluid behavior, presenting several challenges:
Heat transfer limitations.
The lack of natural convection in low-gravity environments poses significant challenges for heat transfer processes:
Material handling and containment difficulties.
Low-gravity environments present unique challenges in manipulating and containing materials:
Equipment design considerations.
Designing equipment for low-gravity operations requires addressing several unique factors.
Addressing these challenges requires interdisciplinary approaches, combining insights from fluid dynamics, heat transfer, materials science, and aerospace engineering. As research in low-gravity process engineering progresses, new solutions and technologies continue to emerge, expanding the possibilities for space-based manufacturing and resource utilization.
Key areas.
Fluid processing.
Multiphase flow behavior in microgravity differs substantially from terrestrial conditions. The absence of buoyancy-driven phase separation leads to complex flow patterns and phase distributions. These phenomena affect heat transfer, mass transport, and chemical reactions in multiphase systems, necessitating novel approaches to fluid management in space.
Boiling and condensation processes are fundamentally altered in microgravity. The lack of buoyancy affects bubble dynamics, heat transfer coefficients, and critical heat flux. Understanding these changes is crucial for designing efficient thermal management systems for spacecraft and space habitats.
Capillary flow and wetting phenomena become dominant in low-gravity environments. Surface tension forces drive fluid behavior, leading to unexpected liquid migrations and containment challenges. These effects are particularly important in the design of fuel tanks, life support systems, and fluid handling equipment for space applications.
Materials processing.
Materials processing in space offers unique opportunities for producing novel materials and improving existing manufacturing techniques.
Crystal growth in space benefits from the absence of gravity-induced convection and sedimentation. This environment allows for the growth of larger, more perfect crystals with fewer defects. Space-grown crystals have applications in electronics, optics, and pharmaceutical research.
Metallurgy and alloy formation in microgravity can result in materials with unique properties. The absence of buoyancy-driven convection allows for more uniform mixing of molten metals and the creation of novel alloys and composites that are difficult or impossible to produce on Earth.
Additive manufacturing in low-gravity environments presents both challenges and opportunities. While the absence of gravity can affect material deposition and layer adhesion, it also allows for the creation of complex structures without the need for support materials. This technology has potential applications in on-demand manufacturing of spare parts and tools for long-duration space missions.
Biotechnology applications.
Microgravity conditions offer unique advantages for various biotechnology applications.
Protein crystallization in space often results in larger, more well-ordered crystals compared to those grown on Earth. These high-quality crystals are valuable for structural biology studies and drug design. The microgravity environment reduces sedimentation and convection, allowing for more uniform crystal growth.
Cell culturing and tissue engineering benefit from the reduced mechanical stresses in microgravity. This environment allows for three-dimensional cell growth and the formation of tissue-like structures that more closely resemble in vivo conditions. Such studies contribute to our understanding of cellular biology and may lead to advancements in regenerative medicine.
Pharmaceutical production in space has the potential to yield purer drugs with improved efficacy. The absence of convection and sedimentation can lead to more uniform crystallization and particle formation, potentially enhancing drug properties.
Chemical engineering processes.
Chemical engineering processes in microgravity often exhibit different behaviors compared to their terrestrial counterparts.
Reaction kinetics in microgravity can be altered due to the absence of buoyancy-driven convection. This can lead to more uniform reaction conditions and potentially different reaction rates or product distributions.
Separation processes, such as distillation and extraction, face unique challenges in low-gravity environments. The lack of buoyancy affects phase separation and mass transfer, requiring novel approaches to achieve efficient separations. These challenges have led to the development of alternative separation technologies for space applications.
Catalysis in space presents opportunities for studying fundamental catalytic processes without the interfering effects of gravity. The absence of natural convection and sedimentation can lead to more uniform catalyst distributions and potentially different reaction pathways. This research may contribute to the development of more efficient catalysts for both space and terrestrial applications.
Experimental platforms and simulation techniques.
The study of low-gravity processes requires specialized platforms and techniques to simulate or create microgravity conditions. These methods range from ground-based facilities to orbital laboratories and computational simulations.
Drop towers and parabolic flights.
Drop towers provide short-duration microgravity environments by allowing experiments to free-fall in evacuated shafts. These facilities typically offer 2-10 seconds of high-quality microgravity. Notable examples include NASA's Glenn Research Center 2.2-Second Drop Tower and the 146-meter ZARM Drop Tower in Bremen, Germany.
Parabolic flights, often referred to as "vomit comets," create repeated periods of microgravity lasting 20-25 seconds by flying aircraft in parabolic arcs. These flights allow researchers to conduct hands-on experiments and test equipment destined for space missions.
Sounding rockets and suborbital flights.
Sounding rockets offer extended microgravity durations ranging from 3 to 14 minutes, depending on the rocket's apogee. These platforms are particularly useful for experiments requiring longer microgravity exposure than drop towers or parabolic flights can provide.
Suborbital flights, such as those planned by commercial spaceflight companies, present new opportunities for microgravity research. These flights can offer several minutes of microgravity time and the potential for frequent, cost-effective access to space-like conditions.
International space station facilities.
The International Space Station serves as a permanent microgravity laboratory, offering long-duration experiments in various scientific disciplines. Key research facilities on the ISS include:
These facilities enable researchers to conduct complex, long-term studies in a true microgravity environment, advancing our understanding of fundamental physical processes and developing new technologies for space exploration.
Computational fluid dynamics for low-gravity simulations.
Computational Fluid Dynamics (CFD) plays a crucial role in predicting and analyzing fluid behavior in low-gravity environments. CFD simulations complement experimental research by:
CFD models for low-gravity applications often require modifications to account for the dominance of surface tension forces and the absence of buoyancy-driven flows. Validation of these models typically involves comparison with experimental data from microgravity platforms.
As computational power increases, CFD simulations are becoming increasingly sophisticated, enabling more accurate predictions of complex multiphase flows and heat transfer processes in microgravity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "10^{-6}"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "10^{-2}"
},
{
"math_id": 3,
"text": "(9.81 m/s^2)"
},
{
"math_id": 4,
"text": "0.16g"
},
{
"math_id": 5,
"text": "0.37g"
}
]
| https://en.wikipedia.org/wiki?curid=77567392 |
7757177 | Rendleman–Bartter model | Short-rate model describing the evolution of interest rates
The Rendleman–Bartter model (Richard J. Rendleman, Jr. and Brit J. Bartter) in finance is a short-rate model describing the evolution of interest rates. It is a "one factor model" as it describes interest rate movements as driven by only one source of market risk. It can be used in the valuation of interest rate derivatives. It is a stochastic asset model.
The model specifies that the instantaneous interest rate follows a geometric Brownian motion:
formula_0
where "Wt" is a Wiener process modelling the random market risk factor. The drift parameter, formula_1, represents a constant expected instantaneous rate of change in the interest rate, while the standard deviation parameter, formula_2, determines the volatility of the interest rate.
This is one of the early models of the short-term interest rates, using the same stochastic process as the one already used to describe the dynamics of the underlying price in stock options. Its main disadvantage is that it does not capture the mean reversion of interest rates (their tendency to revert toward some value or range of values rather than wander without bounds in either direction).
Note that in 1979 Rendleman and Bartter also published a version of the binomial options pricing model for equity underlyings. ("Two-State Option Pricing". "Journal of Finance" 24: 1093-1110.)
{
"math_id": 0,
"text": "dr_t = \\theta r_t\\,dt + \\sigma r_t\\,dW_t"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "\\sigma"
}
]
| https://en.wikipedia.org/wiki?curid=7757177 |
775808 | Shallow donor | A shallow donor is a donor impurity that contributes an electron exhibiting hydrogen-like energy states with an altered (effective) mass, i.e. the long-range Coulomb potential of the ion core determines the energy levels. Essentially, the electron orbits the donor ion within the semiconductor material at approximately an effective Bohr radius. This is in contrast to deep-level donors, where the short-range potential, rather than effective-mass states, determines the energy levels. Shallow donors contribute additional energy states that can be used for conduction.
Overview.
Introducing impurities in a semiconductor which are used to set free additional electrons in its conduction band is called doping with donors. In a group IV semiconductor like silicon these are most often group V elements like arsenic or antimony. However, these impurities introduce new energy levels in the band gap affecting the band structure which may alter the electronic properties of the semiconductor to a great extent.
Having a shallow donor level means that these additional energy levels are not more than formula_0 (0.075 eV at room temperature) away from the lower conduction band edge. This allows us to treat the original semiconductor as unaffected in its electronic properties, with the impurity atoms only increasing the electron concentration. For the impurities to be treated as shallow donors, the donor concentration should not exceed approximately 10¹⁹ cm⁻³.
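As a quick numerical check of the threshold quoted above, taking "room temperature" as roughly 290 K (an assumption consistent with the 0.075 eV figure):
k_B = 8.617e-5   # Boltzmann constant in eV/K
T_room = 290     # room temperature in kelvin (assumed)
print(3 * k_B * T_room)  # approximately 0.075 eV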
Energy levels due to impurities deeper in the bandgap are called deep levels. | [
{
"math_id": 0,
"text": "3 k_b T"
}
]
| https://en.wikipedia.org/wiki?curid=775808 |
77581497 | 1953 Kemerovo mid-air collision | The 1953 Kemerovo mid-air collision was an aviation accident where two Li-2 aircraft collided over Kemerovo Oblast, Soviet Union, on May 27, 1953, resulting in the deaths of all 27 people aboard both aircraft.
Aircraft.
The Li-2 with tail number L4534 (serial number 18429005) from the 133rd Air Transport Unit of the East Siberian Territorial Directorate of Civil Aviation was manufactured on June 30, 1948, and at the time of the accident had logged 3,882 flight hours. On the day of the crash, it was operating flight 18 on the route Irkutsk—Krasnoyarsk—Novosibirsk, piloted by a crew consisting of the commander (PIC) Iosif Danilovich Stanishevsky, co-pilot Valentin Dmitrievich Maksimov, flight radio operator Yuri Yurievich Chernyshkov, and flight engineer Fedor Ivanovich Perevalov. On May 26 at 21:07 MSK, the aircraft departed from Irkutsk Airport and landed in Krasnoyarsk at 00:50 (May 27). After a one-hour stopover, it took off from Krasnoyarsk Airport at 01:50 and climbed to an altitude of 2,100 meters. The flight was conducted along the air corridor Krasnoyarsk—Kemerovo—Novosibirsk, carrying 16 passengers: 14 adults and 2 children.
The Li-2T with tail number A4031 (serial number 18432605) from the 6th Air Survey Unit of the West Siberian Territorial Directorate of Civil Aviation was manufactured on April 30, 1949, and had logged 1,268 flight hours at the time of the crash. On the day of the incident, it was tasked with performing an aerial photography mission in the Goose River area. The crew consisted of the commander (PIC) Alexander Nikolaevich Lomov, co-pilot Mikhail Konstantinovich Kiselev, navigator-aerial photographer Mikhail Petrovich Shchur, flight radio operator Nikolai Pavlovich Lyallin, flight engineer Evgraf Yakovlevich Kuznetsov, camera operator Zoya Pavlovna Volkova, and aerial photographer Nikolai Alexandrovich Pavlov. At 02:40 MSK, the aircraft took off from Novosibirsk Airport and climbed to the prescribed altitude of 1,800 meters before heading along the route Novosibirsk—Goose River—Kemerovo to the aerial photography area, located 45–60 kilometers north-northeast of Kemerovo Airport.
The Accident.
At the time, the weather was clear, with an air temperature of +10 °C, and visibility exceeding 15 kilometers. On the Krasnoyarsk—Kemerovo section, the commander of aircraft L4534 deviated from the route, resulting in a 47-kilometer deviation. Meanwhile, at 03:36, the crew of aircraft A4031 reported their arrival in the aerial photography area at the assigned altitude of 1,800 meters and began their work. Li-2 A4031 initially flew on a heading of 90° for 6 minutes and 40 seconds, then made a 36-second left turn to 180° and flew on a 270° heading for another 6 minutes and 40 seconds. It then performed a right turn and settled on a 90° heading, now flying directly into the rising sun.
Less than a minute later, in clear skies at 03:50 MSK, the two aircraft, flying directly toward each other (L4534 on a 270° course, A4031 on a 90° course), collided head-on in front of numerous witnesses 16.5 kilometers west of the village of Barzas (Kemerovsky District, Kemerovo Oblast). Aircraft A4031, flying slightly lower, sheared off the left wing of aircraft L4534 with its right propeller, cutting through the wing at the junction of the aileron and flap. The impact also tore through the upper part of A4031's fuselage, destroying the center wing section of the second aircraft. As a result, passengers began falling out of the L4534's cabin, and its left wing, tail unit, and parts of the fuselage were scattered over a 150 by 900-meter area of forest and clearing. The center wing section with engines, cockpit, and right wing crashed into the forest. Aircraft A4031 went into a dive and crashed into the forest 350 meters away, exploding on impact, although no fire ensued. All 27 people aboard the two aircraft (20 on L4534 and 7 on A4031) perished.
Investigation.
During the flight, the crew of Li-2 aircraft A4031 maintained communication with the Novosibirsk Aerodrome Dispatch Service and airfield dispatch service but did not contact the Kemerovo ADS. Meanwhile, the crew of Li-2 aircraft L4534 maintained communication with the Krasnoyarsk ADS, RDS, and Kemerovo ADS, but their communication with the Novosibirsk RDS was unsatisfactory. As a result, the exact location of the aircraft within their respective zones was unknown to the RDS. Numerous eyewitnesses on the ground saw the two aircraft heading toward each other at the same altitude, with neither crew taking any evasive action to avoid the collision.
It is worth noting that the 6th Air Survey Unit was originally part of the Main Directorate of Geodesy and Cartography and was transferred to the West Siberian Territorial Directorate of Civil Aviation in March 1953, just two months before the accident. When the unit was integrated, the command of the West Siberian TU GWF failed to adequately address the organization and safety of flights. Only flight training for the crews was conducted, with little attention given to aerial photography operations, which were entrusted to unit commander Klykov, who had only started working in the unit in March 1953 and was also unfamiliar with the specifics of aerial photography operations. Meanwhile, while civil and military aviation flights are stratified, meaning they operate at specific altitudes based on standard pressure settings, the situation is different for aerial survey aviation. These aircraft operate at altitudes dependent on the specified scale of the survey, which in turn is determined by the aerodrome elevation with adjustments for the terrain's relief in the survey area. However, requests for aerial survey operations at Novosibirsk Airport were submitted without specifying the boundaries of the area, and the actual flight altitudes over the survey area did not match the submitted requests. For example, the assignment for aircraft A4031 specified conducting a survey at a scale of 1:17,000. Simple calculations show that the actual flight altitude relative to the departure aerodrome should have been
formula_0, where 1,700 is the true altitude above the survey area's terrain, 290 is the average terrain elevation of the survey area above sea level, and 162 is the elevation of Novosibirsk Airport. Based on this, at standard pressure (760 mm Hg), the flight altitude should have been formula_1, where 745.5 is the pressure at Novosibirsk Airport. Since the pressure at Kemerovo Airport was higher, at 747.1 mm Hg, the flight altitude at standard pressure should have been formula_2, where 17.5 is the correction for the higher pressure at Kemerovo Airport. From these calculations, it is clear that before beginning the survey, aircraft A4031 should have climbed to an altitude of not 1,800 meters but 1,970 meters, meaning it should have gained an additional 170 meters. However, according to the barogram data, the aircraft actually climbed an additional 250 meters. It is noteworthy that neither the command of the West Siberian TU GWF, nor the Novosibirsk RDS and ADS, were aware of the correct method for determining survey altitude.
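The arithmetic above can be reproduced as follows; the factor of 11 is read here as the approximate altitude change in metres per millimetre of mercury near sea level, which is an assumption about the original calculation.
# Reproducing the survey-altitude calculation described above
true_height_above_terrain = 1700   # metres required for the 1:17,000 survey scale
terrain_elevation = 290            # average elevation of the survey area, metres
airport_elevation = 162            # elevation of Novosibirsk Airport, metres
p_standard, p_novosibirsk = 760, 745.5  # pressures in mm Hg
metres_per_mmhg = 11               # assumed approximate conversion near sea level
kemerovo_correction = 17.5         # correction for the higher pressure at Kemerovo, metres

altitude_above_airport = true_height_above_terrain + (terrain_elevation - airport_elevation)
altitude_standard_pressure = altitude_above_airport + (p_standard - p_novosibirsk) * metres_per_mmhg
required_altitude = altitude_standard_pressure - kemerovo_correction
print(altitude_above_airport, altitude_standard_pressure, required_altitude)  # 1828 1987.5 1970.0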
As for aircraft L4534, its commander, Stanishevsky, grossly violated the flight route on the Krasnoyarsk—Novosibirsk track by cutting a portion of the route near Kemerovo and not maintaining the prescribed flight level. Previously, in July 1952, Stanishevsky had been demoted to co-pilot for indiscipline and loss of orientation. He was reinstated in February 1953, despite retaining significant deficiencies in piloting, as well as displaying arrogance, stubbornness, and a lack of understanding.
At the Novosibirsk RDS, a radio direction finder was installed, but during the entire 35 minutes that aircraft L4534 was in its zone, it did not determine its location, leaving the Novosibirsk dispatcher unaware of the air traffic situation in the aerial photography area. The Kemerovo ADS dispatcher, when allowing aircraft L4534 to fly through its zone, was unaware of the aircraft's actual route and did not ascertain its precise location, although in clear weather, he could have observed the aircraft flying over the Kemerovo aerodrome. However, there was no overflight, as Commander Stanishevsky had cut the route to bypass the aerodrome. The Novosibirsk RDS did not warn the passing crews or the Kemerovo and Krasnoyarsk airports about flights taking place within its zone 50 kilometers north of Kemerovo Airport. Finally, the crews of both aircraft, despite the clear weather, did not monitor the air traffic situation and took no action to prevent the collision.
Commission's Conclusions.
The causes of the aircraft collision were:
Those responsible: both PICs and Klykov, who took a formal approach to ensuring flight safety.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1700+(290-162)=1828"
},
{
"math_id": 1,
"text": "1828+(760-745,5)*11=1987,5"
},
{
"math_id": 2,
"text": "1987,5-17,5=1970"
}
]
| https://en.wikipedia.org/wiki?curid=77581497 |
77582898 | Type IIA supergravity | Ten-dimensional supergravity
In supersymmetry, type IIA supergravity is the unique supergravity in ten dimensions with two supercharges of opposite chirality. It was first constructed in 1984 by a dimensional reduction of eleven-dimensional supergravity on a circle. The other supergravities in ten dimensions are type IIB supergravity, which has two supercharges of the same chirality, and type I supergravity, which has a single supercharge. In 1986 a deformation of the theory was discovered which gives mass to one of the fields and is known as massive type IIA supergravity. Type IIA supergravity plays a very important role in string theory as it is the low-energy limit of type IIA string theory.
History.
After supergravity was discovered in 1976 with pure 4D formula_0 supergravity, significant effort was devoted to understanding other possible supergravities that can exist with various numbers of supercharges and in various dimensions. The discovery of eleven-dimensional supergravity in 1978 led to the derivation of many lower dimensional supergravities through dimensional reduction of this theory. Using this technique, type IIA supergravity was first constructed in 1984 by three different groups, by F. Giani and M. Pernici, by I.C.G. Campbell and P. West, and by M. Huq and M. A. Namazie. In 1986 it was noticed by L. Romans that there exists a massive deformation of the theory. Type IIA supergravity has since been extensively used to study the low-energy behaviour of type IIA string theory. The terminology of type IIA, type IIB, and type I was coined by J. Schwarz, originally to refer to the three string theories that were known of in 1982.
Theory.
Ten dimensions admits both formula_0 and formula_1 supergravity, depending on whether there are one or two supercharges. Since the smallest spinorial representations in ten dimensions are Majorana–Weyl spinors, the supercharges come in two types formula_2 depending on their chirality, giving three possible supergravity theories. The formula_1 theory formed using two supercharges of opposite chiralities is denoted by formula_3 and is known as type IIA supergravity.
This theory contains a single multiplet, known as the ten-dimensional formula_1 nonchiral multiplet. The fields in this multiplet are formula_4, where formula_5 is the metric corresponding to the graviton, while the next three fields are the 3-, 2-, and 1-form gauge fields, with the 2-form being the Kalb–Ramond field. There is also a Majorana gravitino formula_6 and a Majorana spinor formula_7, both of which decompose into a pair of Majorana–Weyl spinors of opposite chiralities formula_8 and formula_9. Lastly, there is a scalar field formula_10.
This nonchiral multiplet can be decomposed into the ten-dimensional formula_0 multiplet formula_11, along with four additional fields formula_12. In the context of string theory, the bosonic fields in the first multiplet consist of NSNS fields, while the bosonic fields among the four additional fields are all RR fields. The fermionic fields are meanwhile in the NSR sector.
Algebra.
The superalgebra for formula_3 supersymmetry is given by
formula_13
formula_14
where all terms on the right-hand side besides the first one are the central charges allowed by the theory. Here formula_15 are the spinor components of the Majorana supercharges while formula_16 is the charge conjugation operator. Since the anticommutator is symmetric, the only matrices allowed on the right-hand side are ones that are symmetric in the spinor indices formula_17, formula_18. In ten dimensions formula_19 is symmetric only for formula_20 modulo formula_21, with the chirality matrix formula_22 behaving as just another formula_23 matrix, except with no index. Going only up to five-index matrices, since the rest are equivalent up to Poincare duality, yields the set of central charges described by the above algebra.
The various central charges in the algebra correspond to different BPS states allowed by the theory. In particular, the formula_24, formula_25 and formula_26 correspond to the D0, D2, and D4 branes. The formula_27 corresponds to the NSNS 1-brane, which is equivalent to the fundamental string, while formula_28 corresponds to the NS5-brane.
Action.
The type IIA supergravity action is given up to four-fermion terms by
formula_29
formula_30
formula_31
Here formula_32 and formula_33 where formula_34 corresponds to a formula_34-form gauge field. The 3-form gauge field has a modified field strength tensor formula_35 with this having a non-standard Bianchi identity of formula_36. Meanwhile, formula_37, formula_38, formula_39, and formula_40 are various fermion bilinears given by
formula_41
formula_42
formula_43
formula_44
The first line of the action has the Einstein–Hilbert action, the dilaton kinetic term, and the kinetic term for the field strength of the 2-form formula_45. It also contains the kinetic terms for the gravitino formula_6 and spinor formula_7, described by the Rarita–Schwinger action and Dirac action, respectively. The second line has the kinetic terms for the 1-form and 3-form gauge fields as well as a Chern–Simons term. The last line contains the cubic interaction terms between two fermions and a boson.
Supersymmetry transformations.
The supersymmetry variations that leave the action invariant are given up to three-fermion terms by
formula_46
formula_47
formula_48
formula_49
formula_50
formula_51
formula_52
They are useful for constructing the Killing spinor equations and finding the supersymmetric ground states of the theory since these require that the fermionic variations vanish.
Related theories.
Massive type IIA supergravity.
Since type IIA supergravity has p-form field strengths of even dimensions, it also admits a nine-form gauge field formula_53. But since formula_54 is a scalar and the free field equation is given by formula_55, this scalar must be a constant. Such a field therefore has no propagating degrees of freedom, but does have an energy density associated to it. Working only with the bosonic sector, the ten-form can be included in supergravity by modifying the original action to get massive type IIA supergravity
formula_56
where formula_57 is equivalent to the original type IIA supergravity up to the replacement of formula_58 and formula_59. Here formula_60 is known as the Romans mass and it acts as a Lagrange multiplier for formula_61. Often one integrates out this field strength tensor resulting in an action where formula_60 acts as a mass term for the Kalb–Ramond field.
Unlike in the regular type IIA theory, which has a vanishing scalar potential formula_62, massive type IIA has a nonvanishing scalar potential. While the formula_1 supersymmetry transformations appear to be realised, they are actually formally broken since the theory corresponds to a D8-brane background. A closely related theory is Howe–Lambert–West supergravity which is another massive deformation of type IIA supergravity, but one that can only be described at the level of the equations of motion. It is acquired by a compactification of eleven-dimensional MM theory on a circle.
Relation to 11D supergravity.
Compactification of eleven-dimensional supergravity on a circle and keeping only the zero Fourier modes that are independent of the compact coordinates results in type IIA supergravity. For eleven-dimensional supergravity with the graviton, gravitino, and a 3-form gauge field denoted by formula_63, then the 11D metric decomposes into the 10D metric, the 1-form, and the dilaton as
formula_64
Meanwhile, the 11D 3-form decomposes into the 10D 3-form formula_65 and the 10D 2-form formula_66. The ten-dimensional modified field strength tensor formula_67 directly arises in this compactification from formula_68.
Dimensional reduction of the fermions must generally be done in terms of the flat coordinates formula_69, where formula_70 is the 11D vielbein. In that case the 11D Majorana gravitino decomposes into the 10D Majorana gravitino and the Majorana fermion formula_71, although the exact identification is given by
formula_72
where this is chosen to make the supersymmetry transformations simpler. The ten-dimensional supersymmetry variations can also be directly acquired from the eleven-dimensional ones by setting formula_73.
Relation to type IIA string theory.
The low-energy effective field theory of type IIA string theory is given by type IIA supergravity. The fields correspond to the different massless excitations of the string, with the metric, 2-form formula_74, and dilaton being NSNS states that are found in all string theories, while the 3-form and 1-form fields correspond to the RR states of type IIA string theory. Corrections to the type IIA supergravity action come in two types, quantum corrections in powers of the string coupling formula_75, and curvature corrections in powers of formula_76. Such corrections often play an important role in type IIA string phenomenology. The type IIA superstring coupling constant formula_75 corresponds to the vacuum expectation value of formula_77, while the string length formula_78 is related to the gravitational coupling constant through formula_79.
When string theory is compactified to acquire four-dimensional theories, this is often done at the level of the low-energy supergravity. Reduction of type IIA on a Calabi–Yau manifold yields an formula_1 theory in four dimensions, while reduction on a Calabi–Yau orientifold further breaks the symmetry down to give the phenomenologically viable four-dimensional formula_0 supergravity. Type IIA supergravity is automatically anomaly free since it is a non-chiral theory.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal N=1"
},
{
"math_id": 1,
"text": "\\mathcal N=2"
},
{
"math_id": 2,
"text": "Q^\\pm"
},
{
"math_id": 3,
"text": "\\mathcal N=(1,1)"
},
{
"math_id": 4,
"text": "(g_{\\mu\\nu}, C_{\\mu\\nu\\rho},B_{\\mu\\nu},C_\\mu,\\psi_\\mu,\\lambda,\\phi)"
},
{
"math_id": 5,
"text": "g_{\\mu\\nu}"
},
{
"math_id": 6,
"text": "\\psi_\\mu"
},
{
"math_id": 7,
"text": "\\lambda"
},
{
"math_id": 8,
"text": "\\psi_\\mu = \\psi_\\mu^++\\psi_\\mu^-"
},
{
"math_id": 9,
"text": "\\lambda = \\lambda^++\\lambda^-"
},
{
"math_id": 10,
"text": "\\phi"
},
{
"math_id": 11,
"text": "(g_{\\mu\\nu}, B_{\\mu\\nu}, \\psi^+_\\mu, \\lambda^-, \\phi)"
},
{
"math_id": 12,
"text": "(C_{\\mu\\nu\\rho}, C_\\mu, \\psi_\\mu^-, \\lambda^+)"
},
{
"math_id": 13,
"text": "\n\\{Q_\\alpha, Q_\\beta\\} = (\\gamma^\\mu C)_{\\alpha \\beta}P_\\mu + (\\gamma_* C)_{\\alpha \\beta}Z + (\\gamma^\\mu \\gamma_* C)_{\\alpha \\beta}Z_\\mu +(\\gamma^{\\mu\\nu}C)_{\\alpha \\beta}Z_{\\mu\\nu} \n"
},
{
"math_id": 14,
"text": "\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ + (\\gamma^{\\mu\\nu\\rho\\sigma}\\gamma_*C)_{\\alpha \\beta}Z_{\\mu\\nu\\rho\\sigma} + (\\gamma^{\\mu\\nu\\rho\\sigma\\delta}C)_{\\alpha \\beta}Z_{\\mu\\nu\\rho\\sigma \\delta},\n"
},
{
"math_id": 15,
"text": "Q_\\alpha"
},
{
"math_id": 16,
"text": "C"
},
{
"math_id": 17,
"text": "\\alpha"
},
{
"math_id": 18,
"text": "\\beta"
},
{
"math_id": 19,
"text": "\\gamma^{\\mu_1\\cdots \\mu_p}C"
},
{
"math_id": 20,
"text": "p=1,2"
},
{
"math_id": 21,
"text": "4"
},
{
"math_id": 22,
"text": "\\gamma_*"
},
{
"math_id": 23,
"text": "\\gamma"
},
{
"math_id": 24,
"text": "Z"
},
{
"math_id": 25,
"text": "Z_{\\mu\\nu}"
},
{
"math_id": 26,
"text": "Z_{\\mu\\nu\\rho\\sigma}"
},
{
"math_id": 27,
"text": "Z_\\mu"
},
{
"math_id": 28,
"text": "Z_{\\mu\\nu\\rho\\sigma\\delta}"
},
{
"math_id": 29,
"text": "\nS_{IIA,\\text{bosonic}} = \\frac{1}{2\\kappa^2} \\int d^{10} x \\sqrt{-g} e^{-2\\phi}\\bigg[R + 4 \\partial_\\mu\n\\phi \\partial^\\mu \\phi -\\frac{1}{12}H_{\\mu\\nu\\rho}H^{\\mu\\nu\\rho} - 2 \\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho}D_\\nu \\psi_\\rho + 2 \\bar \\lambda \\gamma^\\mu D_\\mu \\lambda \\bigg]\n"
},
{
"math_id": 30,
"text": "\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ -\\frac{1}{4\\kappa^2}\\int d^{10}x \\sqrt{-g}\\big[\\tfrac{1}{2}F_{2,\\mu\\nu}F^{\\mu\\nu}_2+\\tfrac{1}{24}\\tilde F_{4,\\mu\\nu\\rho\\sigma}\\tilde F^{\\mu\\nu\\rho\\sigma}_4\\big]\n-\\frac{1}{4\\kappa^2}\\int B \\wedge F_4 \\wedge F_4\n"
},
{
"math_id": 31,
"text": "\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ +\\frac{1}{2\\kappa^2}\\int d^{10}x \\sqrt{-g} \\bigg[e^{-2\\phi}(2 \\chi^\\mu_1 \\partial_\\mu \\phi - \\tfrac{1}{6} H_{\\mu\\nu\\rho} \\chi_3^{\\mu\\nu\\rho} - 4 \\bar \\lambda \\gamma^{\\mu\\nu}D_\\mu \\psi_\\nu) - \\tfrac{1}{2}F_{2,\\mu\\nu} \\Psi_2^{\\mu\\nu} - \\tfrac{1}{24}\\tilde F_{4,\\mu\\nu\\rho\\sigma} \\Psi_4^{\\mu\\nu\\rho\\sigma} \\bigg].\n"
},
{
"math_id": 32,
"text": "H = dB"
},
{
"math_id": 33,
"text": "F_{p+1}=dC_p"
},
{
"math_id": 34,
"text": "p"
},
{
"math_id": 35,
"text": "\\tilde F_4 = F_4 -A_1\\wedge F_3"
},
{
"math_id": 36,
"text": "d\\tilde F_4 = -F_2\\wedge F_3"
},
{
"math_id": 37,
"text": "\\chi_1^\\mu"
},
{
"math_id": 38,
"text": "\\chi_3^{\\mu\\nu\\rho}"
},
{
"math_id": 39,
"text": "\\Psi_2^{\\mu\\nu}"
},
{
"math_id": 40,
"text": "\\Psi_4^{\\mu\\nu\\rho\\sigma}"
},
{
"math_id": 41,
"text": "\n\\chi^\\mu_1 = -2\\bar \\psi_\\nu \\gamma^\\nu \\psi^\\mu - 2\\bar \\lambda \\gamma^\\nu \\gamma^\\mu \\psi_\\nu,\n"
},
{
"math_id": 42,
"text": "\n\\chi^{\\mu\\nu\\rho}_3 = \\tfrac{1}{2}\\bar \\psi^\\alpha \\gamma_{[\\alpha}\\gamma^{\\mu\\nu\\rho}\\gamma_{\\beta]}\\gamma_* \\psi^\\beta + \\bar \\lambda \\gamma^{\\mu\\nu\\rho}{}_\\beta \\gamma_* \\psi^\\beta - \\tfrac{1}{2}\\bar \\lambda \\gamma_* \\gamma^{\\mu\\nu\\rho}\\lambda,\n"
},
{
"math_id": 43,
"text": "\n\\Psi^{\\mu\\nu}_2 = \\tfrac{1}{2}e^{-\\phi}\\bar \\psi^\\alpha\\gamma_{[\\alpha}\\gamma^{\\mu\\nu}\\gamma_{\\beta]}\\gamma_*\\psi^\\beta + \\tfrac{1}{2}e^{-\\phi}\\bar \\lambda \\gamma^{\\mu\\nu}\\gamma_\\beta \\gamma_*\\psi^\\beta + \\tfrac{1}{4}e^{-\\phi}\\bar \\lambda \\gamma^{\\mu\\nu}\\gamma_*\\lambda,\n"
},
{
"math_id": 44,
"text": "\n\\Psi^{\\mu\\nu\\rho\\sigma}_4 = \\tfrac{1}{2}e^{-\\phi}\\bar \\psi^\\alpha\\gamma_{[\\alpha}\\gamma^{\\mu\\nu\\rho\\sigma}\\gamma_{\\beta]}\\psi^\\beta + \\tfrac{1}{2}e^{-\\phi}\\bar \\lambda \\gamma^{\\mu\\nu\\rho\\sigma}\\gamma_\\beta\\psi^\\beta - \\tfrac{1}{4}e^{-\\phi}\\bar \\lambda \\gamma^{\\mu\\nu\\rho\\sigma}\\lambda.\n"
},
{
"math_id": 45,
"text": "B_{\\mu\\nu}"
},
{
"math_id": 46,
"text": "\n\\delta e_\\mu{}^a = \\bar \\epsilon \\gamma^a \\psi_\\mu,\n"
},
{
"math_id": 47,
"text": "\n\\delta \\psi_\\mu = (D_\\mu + \\tfrac{1}{8}H_{\\alpha \\beta\\mu}\\gamma^{\\alpha \\beta}\\gamma_*)\\epsilon + \\tfrac{1}{16}e^\\phi F_{\\alpha \\beta}\\gamma^{\\alpha \\beta}\\gamma_\\mu \\gamma_* \\epsilon + \\tfrac{1}{192}e^\\phi F_{\\alpha \\beta \\gamma \\delta}\\gamma^{\\alpha \\beta \\gamma \\delta}\\gamma_\\mu \\epsilon,\n"
},
{
"math_id": 48,
"text": "\n\\delta B_{\\mu\\nu} = 2\\bar \\epsilon \\gamma_{*}\\gamma_{[\\mu}\\psi_{\\nu]},\n"
},
{
"math_id": 49,
"text": "\n\\delta C_\\mu = -e^{-\\phi}\\bar \\epsilon\\gamma_* (\\psi_\\mu - \\tfrac{1}{2}\\gamma_\\mu \\lambda),\n"
},
{
"math_id": 50,
"text": "\n\\delta C_{\\mu\\nu\\rho} = -e^{-\\phi}\\bar \\epsilon\\gamma_{[\\mu\\nu}(3\\psi_{\\rho]}-\\tfrac{1}{2}\\gamma_{\\rho]}\\lambda) + 3 C_{[\\mu}\\delta B_{\\nu \\rho]},\n"
},
{
"math_id": 51,
"text": "\n\\delta \\lambda = ({\\partial\\!\\!\\!/} \\phi +\\tfrac{1}{12}H_{\\alpha \\beta \\gamma}\\gamma^{\\alpha \\beta \\gamma}\\gamma_*)\\epsilon + \\tfrac{3}{8}e^\\phi F_{\\alpha \\beta}\\gamma^{\\alpha \\beta}\\gamma_* \\epsilon + \\tfrac{1}{96}e^\\phi F_{\\alpha \\beta \\gamma \\delta}\\gamma^{\\alpha \\beta \\gamma \\delta}\\epsilon,\n"
},
{
"math_id": 52,
"text": "\n\\delta \\phi = \\tfrac{1}{2}\\bar \\epsilon \\lambda.\n"
},
{
"math_id": 53,
"text": "F_{10} = dC_9"
},
{
"math_id": 54,
"text": "\\star F_{10}"
},
{
"math_id": 55,
"text": "d\\star F_{10} = 0"
},
{
"math_id": 56,
"text": "\nS_{\\text{massive }IIA} = \\tilde S_{IIA} -\\frac{1}{4\\kappa^2}\\int d^{10}x \\sqrt{-g}M^2 + \\frac{1}{2\\kappa^2}\\int MF_{10},\n"
},
{
"math_id": 57,
"text": "\\tilde S_{IIA}"
},
{
"math_id": 58,
"text": "F_2 \\rightarrow F_2 +MB"
},
{
"math_id": 59,
"text": "F_4 \\rightarrow F_4+\\tfrac{1}{2}MB\\wedge B"
},
{
"math_id": 60,
"text": "M"
},
{
"math_id": 61,
"text": "F_{10}"
},
{
"math_id": 62,
"text": "V(\\phi)=0"
},
{
"math_id": 63,
"text": "(g_{MN}',\\psi_M', A_{MNR}')"
},
{
"math_id": 64,
"text": "\ng'_{MN} = e^{-2\\phi/3}\\begin{pmatrix} g_{\\mu\\nu}+e^{2\\phi}C_\\mu C_\\nu & -e^{2\\phi}C_\\mu \\\\ -e^{2\\phi} C_\\nu & e^{2\\phi}\\end{pmatrix}.\n"
},
{
"math_id": 65,
"text": "A_{\\mu\\nu\\rho}'\\rightarrow C_{\\mu\\nu\\rho}"
},
{
"math_id": 66,
"text": "A_{\\mu\\nu11}' \\rightarrow B_{\\mu\\nu}"
},
{
"math_id": 67,
"text": "\\tilde F_4"
},
{
"math_id": 68,
"text": "F'_{\\mu\\nu\\rho\\sigma} = e^{4\\phi/3}\\tilde F_{\\mu\\nu\\rho\\sigma}"
},
{
"math_id": 69,
"text": "\\psi_A' = e_A'^M\\psi_M"
},
{
"math_id": 70,
"text": "{e'}^M_A"
},
{
"math_id": 71,
"text": "\\psi_A'\\sim (\\psi_a, \\lambda)"
},
{
"math_id": 72,
"text": "\n\\psi_a' = e^{\\phi/6}(2\\psi_a - \\tfrac{1}{3}\\gamma_a \\lambda), \\ \\ \\ \\ \\ \\ \\ \\psi_{11}' = \\tfrac{2}{3}e^{\\phi/6}\\gamma_* \\lambda,\n"
},
{
"math_id": 73,
"text": "\\epsilon' = e^{-\\phi/6}\\epsilon"
},
{
"math_id": 74,
"text": "B"
},
{
"math_id": 75,
"text": "g_s"
},
{
"math_id": 76,
"text": "\\alpha'"
},
{
"math_id": 77,
"text": "e^\\phi"
},
{
"math_id": 78,
"text": "l_s = \\sqrt{\\alpha'}"
},
{
"math_id": 79,
"text": "2\\kappa^2 = (2\\pi)^7{\\alpha'}^4"
}
]
| https://en.wikipedia.org/wiki?curid=77582898 |
77583575 | Type IIB supergravity | Ten-dimensional supergravity
In supersymmetry, type IIB supergravity is the unique supergravity in ten dimensions with two supercharges of the same chirality. It was first constructed in 1983 by John Schwarz and independently by Paul Howe and Peter West at the level of its equations of motion. While it does not admit a fully covariant action due to the presence of a self-dual field, it can be described by an action if the self-duality condition is imposed by hand on the resulting equations of motion. The other types of supergravity in ten dimensions are type IIA supergravity, which has two supercharges of opposing chirality, and type I supergravity, which has a single supercharge. The theory plays an important role in modern physics since it is the low-energy limit of type IIB string theory.
History.
After supergravity was discovered in 1976, there was a concentrated effort to construct the various possible supergravities that were classified in 1978 by Werner Nahm. He showed that there exist three types of supergravity in ten dimensions, later named type I, type IIA and type IIB. While both type I and type IIA can be realised at the level of the action, type IIB does not admit a covariant action. Instead it was first fully described through its equations of motion, derived in 1983 by John Schwarz, and independently by Paul Howe and Peter West. In 1995 it was realised that one can effectively describe the theory using a pseudo-action where the self-duality condition is imposed as an additional constraint on the equations of motion. The main application of the theory is as the low-energy limit of type IIB strings, and so it plays an important role in string theory, type IIB moduli stabilisation, and the AdS/CFT correspondence.
Theory.
Ten-dimensional supergravity admits both formula_0 and formula_1 supergravities, which differ by the number of the Majorana–Weyl spinor supercharges that they possess. The type IIB theory has two supercharges of the same chirality, equivalent to a single Weyl supercharge, with it sometimes denoted as the ten-dimensional formula_2 supergravity. The field content of this theory is given by the ten dimensional formula_1 chiral supermultiplet formula_3. Here formula_4 is the metric corresponding to the graviton, while formula_5 are 4-form, 2-form, and 0-form gauge fields. Meanwhile, formula_6 is the Kalb–Ramond field and formula_7 is the dilaton. There is also a single left-handed Weyl gravitino formula_8, equivalent to two left-handed Majorana–Weyl gravitinos, and a single right-handed Weyl fermion formula_9, also equivalent to two right-handed Majorana–Weyl fermions.
Algebra.
The superalgebra for ten-dimensional formula_2 supersymmetry is given by
formula_10
formula_11
Here formula_12 with formula_13 are the two Majorana–Weyl supercharges of the same chirality. They therefore satisfy the projection relation formula_14 where formula_15 is the left-handed chirality projection operator and formula_16 is the ten-dimensional chirality matrix.
The formula_17 matrices allowed on the right-hand side are fixed by the fact that they must be representations of the formula_18 R-symmetry group of the type IIB theory, which only allows for formula_19, formula_20 and trace-free symmetric matrices formula_21. Since the anticommutator is symmetric under an exchange of the spinor and formula_22 indices, the maximally extended superalgebra can only have terms with the same chirality and symmetry property as the anticommutator. The terms are therefore a product of one of the formula_17 matrices with formula_23, where formula_24 is the charge conjugation operator. In particular, when the spinor matrix is symmetric, it multiplies formula_25 or formula_19 while when it is antisymmetric it multiplies formula_20. In ten dimensions formula_26 is symmetric for formula_27 modulo formula_28 and antisymmetric for formula_29 modulo formula_28. Since the projection operator formula_30 is a sum of the identity and a gamma matrix, this means that the symmetric combination works when formula_31 modulo formula_28 and the antisymmetric one when formula_32 modulo formula_28. This yields all the central charges found in the superalgebra up to Poincaré duality.
The central charges are each associated with various BPS states that are found in the theory. The formula_25 central charges correspond to the fundamental string and the D1 brane, formula_33 is associated with the D3 brane, while formula_34 and formula_35 give three 5-form charges. One is the D5-brane, another the NS5-brane, and the last is associated with the KK monopole.
Self-dual field.
For the supergravity multiplet to have an equal number of bosonic and fermionic degrees of freedom, the four-form formula_36 has to have 35 degrees of freedom. This is achieved when the corresponding field strength tensor is self-dual formula_37, eliminating half of the degrees of freedom that would otherwise be found in a 4-form gauge field.
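As a quick check of this counting (an illustrative sketch added here, using the standard light-cone counting of physical polarizations rather than anything stated above): a p-form gauge field in D dimensions carries binomial(D-2, p) on-shell components, and self-duality halves this.

```python
from math import comb

# On-shell (light-cone) components of a p-form gauge field in D dimensions:
# an antisymmetric p-index tensor over the D - 2 transverse directions.
def p_form_dof(D, p):
    return comb(D - 2, p)

unconstrained = p_form_dof(10, 4)   # 70 components for an unconstrained 4-form in D = 10
self_dual = unconstrained // 2      # self-duality of the 5-form field strength keeps half
print(unconstrained, self_dual)     # 70 35
```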
This presents a problem when constructing an action since the kinetic term for the self-dual 5-form field vanishes. The original way around this was to only work at the level of the equations of motion where self-duality is just another equation of motion. While it is possible to formulate a covariant action with the correct degrees of freedom by introducing an auxiliary field and a compensating gauge symmetry, the more common approach is to instead work with a pseudo-action where self-duality is imposed as an additional constraint on the equations of motion. Without this constraint the action cannot be supersymmetric since it does not have an equal number of fermionic and bosonic degrees of freedom. Unlike for example type IIA supergravity, type IIB supergravity cannot be acquired as a dimensional reduction of a theory in higher dimensions.
Pseudo-action.
The bosonic part of the pseudo-action for type IIB supergravity is given by
formula_38
formula_39
Here formula_40 and formula_41 are modified field strength tensors for the 2-form and 4-form gauge fields, with the resulting Bianchi identity for the 5-form being given by formula_42. The notation employed for the kinetic terms is formula_43. Self-duality formula_44 has to be imposed by hand onto the equations of motion, making this a pseudo-action rather than a regular action.
The first line in the action contains the Einstein–Hilbert action, the dilaton kinetic term, and the Kalb–Ramond field strength tensor formula_45. The first term on the second line has the appropriately modified field strength tensors for the three formula_5 gauge fields, while the last term is a Chern–Simons term. The action is written in the string frame which allows one to equate the fields to type IIB string states. In particular, the first line consists of kinetic terms for the NSNS fields, with these terms being identical to those found in type IIA supergravity. The first integral on the second line meanwhile consists of the kinetic term for the RR fields.
Global symmetry.
Type IIB supergravity has a global noncompact formula_46 symmetry. This can be made explicit by rewriting the action into the Einstein frame formula_47 and defining the axio-dilaton complex scalar field formula_48. Introducing the matrix
formula_49
and combining the two 3-form field strength tensors into a doublet formula_50, the action becomes
formula_51
This action is manifestly invariant under the transformation formula_52 which transforms the 3-forms formula_53 and the axio-dilaton as
formula_54
Both the metric and the self-dual field strength tensor are invariant under these transformations. The invariance of the 3-form field strength tensors follows from the fact that formula_55.
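This invariance is easy to verify numerically. The sketch below is an illustration added here (not part of the original derivation); it uses the convention of the transformation rule above, with Λ built from (a, b, c, d) as ((d, c), (b, a)), and treats the doublet as a pair of real numbers standing in for the 3-form components at a point.

```python
import numpy as np

rng = np.random.default_rng(0)

def M(tau):
    """Scalar matrix M_ij built from the axio-dilaton tau = C0 + i e^{-phi}."""
    return np.array([[abs(tau)**2, -tau.real],
                     [-tau.real,   1.0]]) / tau.imag

# Random SL(2,R) element: pick a, b, c and solve ad - bc = 1 for d.
a, b, c = rng.normal(size=3)
d = (1 + b * c) / a
Lam = np.array([[d, c],
                [b, a]])          # convention of the transformation rule above

tau = 0.3 + 1.7j                  # some axio-dilaton value (Im tau > 0)
F = rng.normal(size=2)            # stands in for the doublet (H, F_3) at a point

tau_p = (a * tau + b) / (c * tau + d)
F_p = Lam @ F

# The kinetic term M_ij F^i F^j is invariant, and the scalar matrix transforms
# as M -> (Lam^-1)^T M Lam^-1, as quoted above.
assert np.isclose(F @ M(tau) @ F, F_p @ M(tau_p) @ F_p)
Li = np.linalg.inv(Lam)
assert np.allclose(M(tau_p), Li.T @ M(tau) @ Li)
```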
Supersymmetry transformations.
The equations of motion acquired from the supergravity action are invariant under the following supersymmetry transformations
formula_56
formula_57
formula_58
formula_59
formula_60
formula_61
formula_62
Here formula_63 are the field strength tensors associated with the formula_64 gauge fields, including all their magnetic duals for formula_65, while formula_66. Additionally, formula_67 when formula_68 is even and formula_69 when it is odd.
Relation to string theory.
Type IIB supergravity is the low-energy limit of type IIB string theory. The fields of the supergravity in the string frame are directly related to the different massless states of the string theory. In particular, the metric, Kalb–Ramond field, and dilaton are NSNS fields, while the three formula_5 p-forms are RR fields. Meanwhile, the gravitational coupling constant is related to the Regge slope through formula_70.
The global formula_46 symmetry of the supergravity is not a symmetry of the full type IIB string theory since it would mix the formula_6 and formula_71 fields. This does not happen in the string theory since one of these is an NSNS field and the other an RR field, with these having different physics, such as the former coupling to strings but the latter not. The symmetry is instead broken to the discrete subgroup formula_72 which is believed to be a symmetry of the full type IIB string theory.
The quantum theory is anomaly free, with the gravitational anomalies cancelling exactly. In string theory the pseudo-action receives much-studied corrections that are classified into two types. The first are quantum corrections in terms of the string coupling and the second are string corrections in terms of the Regge slope formula_73. These corrections play an important role in many moduli stabilisation scenarios.
Dimensional reduction of type IIA and type IIB supergravities necessarily results in the same nine-dimensional formula_1 theory since only one superalgebra of this type exists in this dimension. This is closely linked to the T-duality between the corresponding string theories.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal N=1"
},
{
"math_id": 1,
"text": "\\mathcal N=2"
},
{
"math_id": 2,
"text": "\\mathcal N=(2,0)"
},
{
"math_id": 3,
"text": "(g_{\\mu\\nu}, B, C_4, C_2, C_0, \\psi_\\mu, \\lambda, \\phi)"
},
{
"math_id": 4,
"text": "g_{\\mu\\nu}"
},
{
"math_id": 5,
"text": "C_p"
},
{
"math_id": 6,
"text": "B"
},
{
"math_id": 7,
"text": "\\phi"
},
{
"math_id": 8,
"text": "\\psi_\\mu"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "\n\\{Q^i_\\alpha, Q^j_\\beta\\} = \\delta^{ij}(P\\gamma^\\mu C)_{\\alpha \\beta}P_\\mu + (P\\gamma^\\mu C)_{\\alpha \\beta}\\tilde Z_\\mu^{ij} + \\epsilon^{ij}(P\\gamma^{\\mu\\nu\\rho}C)_{\\alpha \\beta}Z_{\\mu\\nu\\rho}\n"
},
{
"math_id": 11,
"text": "\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ + \\delta^{ij}(P\\gamma^{\\mu\\nu\\rho\\sigma\\delta}C)_{\\alpha \\beta}Z_{\\mu\\nu\\rho\\sigma\\delta} + (P\\gamma^{\\mu\\nu\\rho\\sigma\\delta}C)_{\\alpha \\beta}(\\tilde Z)^{ij}_{\\mu\\nu\\rho\\sigma\\delta}.\n"
},
{
"math_id": 12,
"text": "Q^i_\\alpha"
},
{
"math_id": 13,
"text": "i=1,2"
},
{
"math_id": 14,
"text": "PQ^i_\\alpha = Q^i_\\alpha"
},
{
"math_id": 15,
"text": "P = \\tfrac{1}{2}(1 - \\gamma_*)"
},
{
"math_id": 16,
"text": "\\gamma_*"
},
{
"math_id": 17,
"text": "ij"
},
{
"math_id": 18,
"text": "\\text{SO}(2)"
},
{
"math_id": 19,
"text": "\\delta^{ij}"
},
{
"math_id": 20,
"text": "\\epsilon^{ij}"
},
{
"math_id": 21,
"text": "Z^{ij}"
},
{
"math_id": 22,
"text": "i,j"
},
{
"math_id": 23,
"text": "P\\gamma^{\\mu_1\\cdots \\mu_p}C"
},
{
"math_id": 24,
"text": "C"
},
{
"math_id": 25,
"text": "\\tilde Z^{ij}"
},
{
"math_id": 26,
"text": "\\gamma^{\\mu_1 \\cdots \\mu_p}C"
},
{
"math_id": 27,
"text": "p=1,2"
},
{
"math_id": 28,
"text": "4"
},
{
"math_id": 29,
"text": "p=3,0"
},
{
"math_id": 30,
"text": "P"
},
{
"math_id": 31,
"text": "p = 1"
},
{
"math_id": 32,
"text": "p=3"
},
{
"math_id": 33,
"text": "Z_{\\mu\\nu\\rho}"
},
{
"math_id": 34,
"text": "Z_{\\mu\\nu\\rho\\sigma\\delta}"
},
{
"math_id": 35,
"text": "\\tilde Z^{ij}_{\\mu\\nu\\rho\\sigma\\delta}"
},
{
"math_id": 36,
"text": "C_4"
},
{
"math_id": 37,
"text": "\\star \\tilde F_5= \\tilde F_5"
},
{
"math_id": 38,
"text": "\nS_{IIB,\\text{bosonic}} = \\frac{1}{2\\kappa^2} \\int d^{10} x \\sqrt{-g} e^{-2\\phi}\\bigg[R + 4 \\partial_\\mu\n\\phi \\partial^\\mu \\phi -\\frac{1}{2}|H|^2\\bigg]\n"
},
{
"math_id": 39,
"text": "\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ -\\frac{1}{4\\kappa^2}\\int d^{10}x \\sqrt{-g}\\big[|F_1|^2+|\\tilde F_3|^2+\\tfrac{1}{2}|\\tilde F_5|^2\\big]\n-\\frac{1}{4\\kappa^2}\\int C_4 \\wedge H \\wedge F_3.\n"
},
{
"math_id": 40,
"text": "\\tilde F_3 = F_3-C_0 \\wedge H"
},
{
"math_id": 41,
"text": "\\tilde F_5 = F_5 - \\tfrac{1}{2}C_2\\wedge H + \\tfrac{1}{2}B \\wedge F_3"
},
{
"math_id": 42,
"text": "d\\tilde F_5 = H\\wedge F_3"
},
{
"math_id": 43,
"text": "|F_p|^2 = \\tfrac{1}{p!} F_{\\mu_1\\cdots \\mu_p}F^{\\mu_1\\cdots \\mu_p}"
},
{
"math_id": 44,
"text": "\\tilde F_5 = \\star \\tilde F_5"
},
{
"math_id": 45,
"text": "H=dB"
},
{
"math_id": 46,
"text": "\\text{SL}(2,\\mathbb R)"
},
{
"math_id": 47,
"text": "g_{\\mu\\nu}\\rightarrow e^{\\phi/2}g_{\\mu\\nu}"
},
{
"math_id": 48,
"text": "\\tau = C_0+ie^{-\\phi}"
},
{
"math_id": 49,
"text": "\nM_{ij} = \\frac{1}{\\text{Im} \\ \\tau} \\begin{pmatrix} |\\tau|^2 & - \\text{Re} \\ \\tau \\\\ -\\text{Re} \\ \\tau & 1 \\end{pmatrix}\n"
},
{
"math_id": 50,
"text": "F_3^i = (H, F_3)"
},
{
"math_id": 51,
"text": "\nS_{IIB} = \\frac{1}{2\\kappa^2} \\int d^{10}x \\sqrt{-g}\\bigg[R - \\frac{\\partial_\\mu \\tau \\partial^\\mu \\bar \\tau}{2(\\text{Im} \\ \\tau)^2} -\\frac{1}{12}M_{ij}F_{3,\\mu\\nu\\rho}^i F_3^{j,\\mu\\nu\\rho} -\\frac{1}{4}|\\tilde F_5|^2\\bigg] - \\frac{\\epsilon_{ij}}{8\\kappa^2}\\int C_4 \\wedge F_3^i \\wedge F_3^j.\n"
},
{
"math_id": 52,
"text": "\\Lambda \\in \\text{SL}(2,\\mathbb R)"
},
{
"math_id": 53,
"text": "F^i_3\\rightarrow \\Lambda^i{}_j F_3^j"
},
{
"math_id": 54,
"text": "\n\\tau \\rightarrow \\frac{a\\tau+b}{c\\tau+d}, \\ \\ \\ \\ \\text{where} \\ \\ \\ \\Lambda = \\begin{pmatrix}d & c \\\\ b & a\\end{pmatrix}.\n"
},
{
"math_id": 55,
"text": "M \\rightarrow (\\Lambda^{-1})^TM\\Lambda^{-1}"
},
{
"math_id": 56,
"text": "\n\\delta e_\\mu{}^a = \\bar \\epsilon \\gamma^a \\psi_\\mu,\n"
},
{
"math_id": 57,
"text": "\n\\delta \\psi_\\mu = (D_\\mu \\epsilon - \\tfrac{1}{8}H_{\\alpha \\beta \\mu}\\gamma^{\\alpha \\beta}\\sigma^3)\\epsilon + \\tfrac{1}{16}e^\\phi \\sum_{n=1}^6\\frac{{F\\!\\!\\!/}^{(2n-1)}}{(2n-1)!}\\gamma_\\mu \\mathcal P_n \\epsilon,\n"
},
{
"math_id": 58,
"text": "\n\\delta B_{\\mu\\nu} = 2\\bar \\epsilon \\sigma^3 \\gamma_{[\\mu}\\psi_{\\nu]},\n"
},
{
"math_id": 59,
"text": "\n\\delta C^{(2n-2)}_{\\mu_1, \\cdots \\mu_{2n-2}} = -(2n-2)e^{-\\phi}\\bar \\epsilon \\mathcal P_n \\gamma_{[\\mu_1 \\cdots \\mu_{2n-3}}(\\psi_{\\mu_{2n-2}]}-\\tfrac{1}{2(2n-2)}\\gamma_{\\mu_{2n-2}]}\\lambda)\n"
},
{
"math_id": 60,
"text": "\n \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ + \\tfrac{1}{2}(2n-2)(2n-3)C^{(2n-4)}_{[\\mu_1\\cdots \\mu_{2n-4}}\\delta B_{\\mu_{2n-3}\\mu_{2n-2}]},\n"
},
{
"math_id": 61,
"text": "\n\\delta \\lambda = ({\\partial\\!\\!\\!/}\\phi - \\tfrac{1}{12}H_{\\mu \\nu \\rho}\\gamma^{\\mu\\nu\\rho}\\sigma^3)\\epsilon + \\tfrac{1}{4}e^\\phi \\sum_{n=1}^6 \\frac{n-3}{(2n-1)!}{F\\!\\!\\!/}^{(2n-1)}\\mathcal P_n \\epsilon,\n"
},
{
"math_id": 62,
"text": "\n\\delta \\phi = \\tfrac{1}{2} \\bar \\epsilon \\lambda.\n"
},
{
"math_id": 63,
"text": "F_{\\mu_1\\cdots \\mu_p}"
},
{
"math_id": 64,
"text": "C^{(p-1)}"
},
{
"math_id": 65,
"text": "p>5"
},
{
"math_id": 66,
"text": "{F\\!\\!\\!/}^{(p)} = F_{\\mu_1\\cdots \\mu_p}\\gamma^{\\mu_1 \\cdots \\mu_p}"
},
{
"math_id": 67,
"text": "\\mathcal P_n=\\sigma^1"
},
{
"math_id": 68,
"text": "n"
},
{
"math_id": 69,
"text": "\\mathcal P_n = i\\sigma^2"
},
{
"math_id": 70,
"text": "2\\kappa^2 = (2\\pi)^7 \\alpha'^4"
},
{
"math_id": 71,
"text": "C_2"
},
{
"math_id": 72,
"text": "\\text{SL}(2,\\mathbb Z)\\subset \\text{SL}(2,\\mathbb R)"
},
{
"math_id": 73,
"text": "\\alpha'"
}
]
| https://en.wikipedia.org/wiki?curid=77583575 |
77583924 | Type I supergravity | Ten-dimensional supergravity
In supersymmetry, type I supergravity is the theory of supergravity in ten dimensions with a single supercharge. It consists of a single supergravity multiplet and a single Yang–Mills multiplet. The full non-abelian action was first derived in 1983 by George Chapline and Nicholas Manton. Classically the theory can admit any gauge group, but a consistent quantum theory resulting in anomaly cancellation only exists if the gauge group is either formula_0 or formula_1. Both these supergravities are realised as the low-energy limits of string theories, in particular of type I string theory and of the two heterotic string theories.
History.
Supergravity was much studied during the 1980s as a candidate theory of nature. As part of this it was important to understand the various supergravities that can exist in different dimensions, with the possible supergravities being classified in 1978 by Werner Nahm. Type I supergravity was first written down in 1983, with Eric Bergshoeff, Mees de Roo, Bernard de Wit, and Peter van Nieuwenhuizen describing the abelian theory, and then George Chapline and Nicholas Manton extending this to the full non-abelian theory. An important development was made by Michael Green and John Schwarz in 1984 when they showed that only a handful of these theories are anomaly free, with additional work showing that only formula_0 and formula_1 result in a consistent quantum theory. The first case was known at the time to correspond to the low-energy limit of type I superstrings. Heterotic string theories were discovered the next year, with these having a low-energy limit described by type I supergravity with both gauge groups.
Theory.
Type I supergravity is the ten-dimensional supergravity with a single Majorana–Weyl spinor supercharge. Its field content consists of the formula_2 supergravity supermultiplet formula_3, together with the formula_2 Yang–Mills supermultiplet formula_4 with some associated gauge group. Here formula_5 is the metric, formula_6 is the two-form Kalb–Ramond field, formula_7 is the dilaton, and formula_8 is a Yang–Mills gauge field. Meanwhile, formula_9 is the gravitino, formula_10 is a dilatino, and formula_11 a gaugino, with all these being Majorana–Weyl spinors. The gravitino and gaugino have the same chirality, while the dilatino has the opposite chirality.
Algebra.
The superalgebra for type I supersymmetry is given by
formula_12
Here formula_13 is the supercharge with a fixed chirality formula_14, where formula_15 is the relevant projection operator. Meanwhile, formula_16 is the charge conjugation operator and formula_17 are the gamma matrices. The right-hand side must have the same chirality as the supercharges and must also be symmetric under an exchange of the spinor indices. The second term is the only central charge that is admissible under these constraints up to Poincaré duality. This is because in ten dimensions only formula_18 with formula_19 modulo formula_20 are symmetric matrices. The central charge corresponds to a 5-brane solution in the supergravity which is dual to the fundamental string in heterotic string theory.
Action.
The action for type I supergravity in the Einstein frame is given up to four-fermion terms by
formula_21
formula_22
formula_23
formula_24
formula_25
Here formula_26 is the gravitational coupling constant, formula_7 is the dilaton, and
formula_27
where formula_28 is the trace of the Yang–Mills Chern–Simons form given by
formula_29
The non-abelian field strength tensor corresponding to the gauge field formula_30 is denoted by formula_31. The spacetime index gamma-matrices are position-dependent fields formula_32. Meanwhile, formula_33 is the covariant derivative formula_34, while formula_35 and formula_36 is the spin connection.
Supersymmetry transformations.
The supersymmetry transformation rules are given up to three fermion terms by
formula_37
formula_38
formula_39
formula_40
formula_41
formula_42
formula_43
The supersymmetry parameter is denoted by formula_44. These transformation rules are useful for constructing the Killing spinor equations and finding supersymmetric ground states.
Anomaly cancellation.
At the classical level the supergravity can have an arbitrary gauge group; however, not all gauge groups are consistent at the quantum level. The Green–Schwarz anomaly cancellation mechanism is used to show when the gauge, mixed, and gravitational anomalies arising from hexagon diagrams vanish. In particular, the only anomaly free type I supergravity theories are ones with gauge groups of formula_0, formula_1, formula_45, and formula_46. It was later found that the latter two with abelian factors are inconsistent theories of quantum gravity. The two anomaly free theories both have ultraviolet completions to string theory, where the corresponding string theories can also be shown to be anomaly free at the string level.
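One piece of this statement that can be checked by simple counting (added here for illustration; the requirement that the gauge group have 496 generators is a standard part of the Green–Schwarz analysis but is not spelled out above) is that all four listed groups have the same dimension:

```python
# Number of generators of the four gauge groups singled out by anomaly cancellation.
def dim_so(n):              # dim SO(n) = n(n - 1)/2
    return n * (n - 1) // 2

dim_E8 = 248                # dimension of the exceptional group E8

dims = {
    "SO(32)":        dim_so(32),
    "E8 x E8":       2 * dim_E8,
    "E8 x U(1)^248": dim_E8 + 248,
    "U(1)^496":      496,
}
print(dims)                 # all four entries equal 496
```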
Relation to string theory.
Type I supergravity is the low-energy effective field theory of type I string theory and both heterotic string theories. In particular, type I string theory and formula_0 heterotic string theory reduce to type I supergravity with an formula_0 gauge group, while formula_1 heterotic string theory reduces to type I supergravity with an formula_1 gauge group. There are additional corrections that the supergravity receives in string theory, notably the Chern–Simons term becomes a linear combination of the Yang–Mills Chern–Simons three-form found at tree-level and a Lorentz Chern–Simons three-form formula_47. This latter three-form is a higher-derivative correction given by
formula_48,
where formula_49 is the spin connection. To maintain supersymmetry of the action when this term is included, additional higher-derivative corrections must be added to the action up to second order in formula_50.
In type I string theory, the gauge coupling constant is related to the ten-dimensional Yang–Mills coupling constant by formula_51, while the coupling constant is related to the string length formula_52 by formula_53. Meanwhile, in heterotic string theory the gravitational coupling constant is related to the string length by formula_54.
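As a consistency check (added here as a sketch; it assumes that g denotes the same normalization constant in the two relations quoted above), combining 2κ = l_s·g with g² = 4π(2πl_s)⁶ reproduces the value 2κ² = (2π)⁷α'⁴ quoted for the type II supergravities earlier in the document:

```python
import sympy as sp

alpha_p = sp.symbols("alpha_p", positive=True)    # Regge slope alpha'
l_s = sp.sqrt(alpha_p)                            # string length l_s = sqrt(alpha')
g = sp.sqrt(4 * sp.pi * (2 * sp.pi * l_s) ** 6)   # from g^2 = 4 pi (2 pi l_s)^6
kappa = l_s * g / 2                               # from 2 kappa = l_s g

# 2 kappa^2 should equal (2 pi)^7 alpha'^4
assert sp.simplify(2 * kappa**2 - (2 * sp.pi) ** 7 * alpha_p**4) == 0
```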
The fields in the Einstein frame are not the same as the fields corresponding to the string states. Instead one has to transform the action into the various string frames through a Weyl transformation and dilaton redefinition
formula_55
formula_56
S-duality between type I string theory and formula_0 heterotic string theory can be seen at the level of the action since the respective string frame actions are equivalent with the correct field redefinitions. Similarly, Hořava–Witten theory, which describes the duality between formula_1 heterotic string theory and M-theory, can also be seen at the level of the supergravity since compactification of eleven-dimensional supergravity on formula_57 yields formula_1 supergravity.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{SO}(32)"
},
{
"math_id": 1,
"text": "E_8\\times E_8"
},
{
"math_id": 2,
"text": "\\mathcal N=1"
},
{
"math_id": 3,
"text": "(g_{\\mu\\nu}, \\psi_\\mu, B, \\lambda, \\phi)"
},
{
"math_id": 4,
"text": "(A_\\mu^a, \\chi^a)"
},
{
"math_id": 5,
"text": "g_{\\mu\\nu}"
},
{
"math_id": 6,
"text": "B"
},
{
"math_id": 7,
"text": "\\phi"
},
{
"math_id": 8,
"text": "A_\\mu^a"
},
{
"math_id": 9,
"text": "\\psi_\\mu"
},
{
"math_id": 10,
"text": "\\lambda"
},
{
"math_id": 11,
"text": "\\chi^a"
},
{
"math_id": 12,
"text": "\n\\{Q_\\alpha, Q_\\beta\\} = (P\\gamma^\\mu C)_{\\alpha \\beta}P_\\mu + (P\\gamma^{\\mu\\nu\\rho\\sigma\\delta}C)_{\\alpha \\beta}Z_{\\mu\\nu\\rho\\sigma \\delta}.\n"
},
{
"math_id": 13,
"text": "Q_\\alpha"
},
{
"math_id": 14,
"text": "PQ_\\alpha = Q_\\alpha"
},
{
"math_id": 15,
"text": "P = \\tfrac{1}{2}(1\\pm \\gamma_*)"
},
{
"math_id": 16,
"text": "C"
},
{
"math_id": 17,
"text": "\\gamma^{\\mu}"
},
{
"math_id": 18,
"text": "P\\gamma^{\\mu_1 \\cdots \\mu_p}C"
},
{
"math_id": 19,
"text": "p=1"
},
{
"math_id": 20,
"text": "4"
},
{
"math_id": 21,
"text": "\nS = \\frac{1}{2\\kappa^2}\\int d^{10} x \\ e \\bigg[ R -2\\partial_\\mu \\phi \\partial^\\mu \\phi -\\tfrac{3}{4} e^{-2\\phi}H_{\\mu\\nu\\rho}H^{\\mu\\nu\\rho} - \\tfrac{\\kappa^2}{2g^2}e^{-\\phi}\\text{tr}(F_{\\mu\\nu}F^{\\mu\\nu})\n"
},
{
"math_id": 22,
"text": "\n\\ \\ \\ -\\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho}D_\\nu \\psi_\\rho -\\bar \\lambda \\gamma^\\mu D_\\mu \\lambda - \\text{tr}(\\bar \\chi \\gamma^\\mu D_\\mu \\chi)\n"
},
{
"math_id": 23,
"text": "\n\\ \\ \\ -\\sqrt 2\\bar \\psi_\\mu \\gamma^\\nu \\gamma^\\mu \\lambda \\partial_\\nu \\phi + \\tfrac{1}{8}e^{-\\phi}\\text{tr}(\\bar \\chi \\gamma^{\\mu\\nu\\rho}\\chi)H_{\\mu\\nu\\rho}\n"
},
{
"math_id": 24,
"text": "\n\\ \\ \\ -\\tfrac{\\kappa}{2g}e^{-\\phi/2}\\text{tr}[\\bar \\chi \\gamma^\\mu \\gamma^{\\nu\\rho}(\\psi_\\mu + \\tfrac{\\sqrt 2}{12}\\gamma_\\mu \\lambda)F_{\\nu\\rho}]\n"
},
{
"math_id": 25,
"text": "\n\\ \\ \\ + \\tfrac{1}{8}e^{-\\phi}(\\bar \\psi_\\mu \\gamma^{\\mu\\nu\\rho\\sigma\\delta}\\psi_\\delta + 6 \\bar \\psi^\\nu \\gamma^\\rho \\psi^\\sigma - \\sqrt 2 \\bar \\psi_\\mu \\gamma^{\\nu\\rho \\sigma}\\gamma^\\mu \\lambda)H_{\\nu\\rho\\sigma}\\bigg].\n"
},
{
"math_id": 26,
"text": "\\kappa^2"
},
{
"math_id": 27,
"text": "\nH_{\\mu\\nu\\rho} =\\partial_{[\\mu} B_{\\nu\\rho]} - \\tfrac{\\kappa^2}{g^2}\\omega_{\\text{YM},\\mu\\nu\\rho},\n"
},
{
"math_id": 28,
"text": "\\omega_{\\text{YM}}"
},
{
"math_id": 29,
"text": "\n\\omega_{\\text{YM}} = \\text{tr}(A \\wedge dA +\\tfrac{2}{3}A\\wedge A \\wedge A).\n"
},
{
"math_id": 30,
"text": "A_\\mu"
},
{
"math_id": 31,
"text": "F_{\\mu\\nu}"
},
{
"math_id": 32,
"text": "\\gamma_\\mu = e_\\mu^a \\gamma_a"
},
{
"math_id": 33,
"text": "D_\\mu"
},
{
"math_id": 34,
"text": "D_\\mu = \\partial_\\mu + \\tfrac{1}{4}\\omega_\\mu^{ab}\\gamma_{ab}"
},
{
"math_id": 35,
"text": "\\gamma_{ab} = \\gamma_a\\gamma_b"
},
{
"math_id": 36,
"text": "\\omega_\\mu^{ab}"
},
{
"math_id": 37,
"text": "\n\\delta e^a{}_\\mu = \\tfrac{1}{2}\\bar \\epsilon \\gamma^a\\psi_\\mu, \n"
},
{
"math_id": 38,
"text": "\n\\delta \\psi_\\mu = D_\\mu \\epsilon + \\tfrac{1}{32}e^{-\\phi}(\\gamma_\\mu{}^{\\nu\\rho\\sigma}-9\\delta^\\nu_\\mu \\gamma^{\\rho\\sigma})\\epsilon H_{\\nu\\rho\\sigma},\n"
},
{
"math_id": 39,
"text": "\n\\delta B_{\\mu\\nu} = \\tfrac{1}{2}e^\\phi \\bar \\epsilon (\\gamma_\\mu \\psi_\\nu - \\gamma_\\nu \\psi_\\mu - \\tfrac{1}{\\sqrt 2}\\gamma_{\\mu\\nu}\\lambda)+\\tfrac{\\kappa}{g}e^{\\phi/2}\\bar \\epsilon \\gamma_{[\\mu}\\text{tr}(\\chi A_{\\nu]}),\n"
},
{
"math_id": 40,
"text": "\n\\delta \\phi = -\\tfrac{1}{2\\sqrt 2}\\bar \\epsilon \\lambda,\n"
},
{
"math_id": 41,
"text": "\n\\delta \\lambda = -\\tfrac{\\kappa}{\\sqrt 2}{\\partial\\!\\!\\!/}\\phi + \\tfrac{1}{8\\sqrt 2}e^{-\\phi}\\gamma^{\\mu\\nu\\rho}\\epsilon H_{\\mu\\nu\\rho},\n"
},
{
"math_id": 42,
"text": "\n\\delta A_\\mu^a = \\tfrac{g}{2\\kappa}e^{\\phi/2}\\bar \\epsilon \\gamma_\\mu \\chi^a,\n"
},
{
"math_id": 43,
"text": "\n\\delta \\chi^a = -\\tfrac{\\kappa}{4g} e^{-\\phi/2}\\gamma^{\\mu\\nu}F^a_{\\mu\\nu}\\epsilon.\n"
},
{
"math_id": 44,
"text": "\\epsilon"
},
{
"math_id": 45,
"text": "E_8\\times \\text{U}(1)^{248}"
},
{
"math_id": 46,
"text": "\\text{U}(1)^{496}"
},
{
"math_id": 47,
"text": "\\omega_{\\text{YM}}\\rightarrow \\omega_{\\text{YM}}-\\omega_{\\text{L}}"
},
{
"math_id": 48,
"text": "\n\\omega_{\\text{L}} = \\text{tr}(\\omega\\wedge d\\omega + \\tfrac{2}{3}\\omega \\wedge \\omega \\wedge \\omega)\n"
},
{
"math_id": 49,
"text": "\\omega"
},
{
"math_id": 50,
"text": "\\kappa"
},
{
"math_id": 51,
"text": "g_{YM}^2 = g^2g_s"
},
{
"math_id": 52,
"text": "l_s = \\sqrt{\\alpha'}"
},
{
"math_id": 53,
"text": "g^2 = 4\\pi(2\\pi l_s)^6"
},
{
"math_id": 54,
"text": "2\\kappa = l_s g"
},
{
"math_id": 55,
"text": "\n\\text{Heterotic}: \\ \\ \\ \\ \\ \\ \\ \\ g_{\\mu\\nu}=e^{-\\phi_h/2}g_{h,\\mu\\nu}, \\ \\ \\ \\ \\ \\ \\phi = \\phi_h/2,\n"
},
{
"math_id": 56,
"text": "\n\\text{Type I}: \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ g_{\\mu\\nu} = e^{-\\phi_I/2}g_{I,\\mu\\nu}, \\ \\ \\ \\ \\ \\ \\ \\phi = -\\phi_I/2.\n"
},
{
"math_id": 57,
"text": "S^1/\\mathbb Z_2"
}
]
| https://en.wikipedia.org/wiki?curid=77583924 |
77591639 | 1 | Number
Natural number
1 (one, unit, unity) is a number, numeral, and glyph. 1 is the first and smallest positive integer of the infinite sequence of natural numbers. This fundamental property has led to its unique uses in other fields, ranging from science to sports, where it commonly denotes the first, leading, or top thing in a group. 1 is the unit of counting or measurement, a determiner for singular nouns, and a gender-neutral pronoun. Historically, the representation of 1 evolved from ancient Sumerian and Babylonian symbols to the modern Arabic numeral.
In mathematics, 1 is the multiplicative identity, meaning that any number multiplied by 1 equals the same number. 1 is by convention not considered a prime number; this was not universally accepted until the mid-20th century. In digital technology, 1 represents the "on" state in binary code, the foundation of computing. Philosophically, 1 symbolizes the ultimate reality or source of existence in various traditions.
In mathematics.
The number 1 is the first natural number after 0. Each natural number, including 1, is constructed by succession, that is, by adding 1 to the previous natural number. The number 1 is the multiplicative identity of the integers, real numbers, and complex numbers, that is, any number formula_0 multiplied by 1 remains unchanged (formula_1). As a result, 1 is its own square (formula_2) and square root (formula_3), and 1 raised to any power is always equal to 1. 1 is its own factorial (formula_4), and 0! is also 1; these are special cases of the empty product.
Different constructions of the natural numbers have different representations of 1. For example, in the original formulation of the Peano axioms, 1 serves as the starting point in the sequence of natural numbers. Peano later revised his axioms to state 1 is the successor of 0. In the Von Neumann cardinal assignment of natural numbers, numbers are defined as the set containing all preceding numbers, with 1 represented as the singleton {0}. 1 is both the first and second number in the Fibonacci sequence (0 being the zeroth) and is the first number in many other mathematical sequences.
The number 1 can be represented in decimal form by two recurring notations: 1.000..., where the digit 0 repeats infinitely after the decimal point, and 0.999..., which contains an infinite repetition of the digit 9 after the decimal point. The latter arises from the definition of decimal numbers as the limits of their summed components, such that "0.999..." and "1" represent exactly the same number.
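The equality can be made concrete by summing the geometric series behind that definition (a short worked derivation added here for illustration):

```latex
0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}}
\;=\; 9\cdot\frac{1/10}{1-1/10}
\;=\; 9\cdot\frac{1}{9}
\;=\; 1.
```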
The simplest way to represent the natural numbers is by the unary numeral system, as used in tallying. This is an example of a "base-1" number system, since only one mark – the tally itself – is needed, although base-1 is rarely used as a practical base for counting due to its difficult readability.
In many mathematical and engineering problems, numeric values are typically "normalized" to fall within the unit interval from 0 to 1, where 1 usually represents the maximum possible value in the range of parameters. For example, by definition, 1 is the probability of an event that is absolutely or almost certain to occur. Likewise, vectors are often normalized into unit vectors (i.e., vectors of magnitude one), because these often have more desirable properties. Functions are often normalized by the condition that they have integral one, maximum value one, or square integral one, depending on the application.
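A minimal illustration of vector normalization (the numbers are arbitrary and the snippet is an addition, not taken from the article):

```python
import numpy as np

v = np.array([3.0, 4.0])           # an arbitrary non-zero vector
unit_v = v / np.linalg.norm(v)     # unit vector: same direction, magnitude one

print(unit_v, np.linalg.norm(unit_v))   # [0.6 0.8] 1.0
```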
1 is the value of Legendre's constant, introduced in 1808 by Adrien-Marie Legendre to express the asymptotic behavior of the prime-counting function.
1 is the most common leading digit in many sets of data (occurring about 30% of the time), a consequence of Benford's law.
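The quoted figure is just Benford's law evaluated for a leading digit of 1 (worked out here for illustration):

```latex
P(d = 1) \;=\; \log_{10}\!\left(1 + \tfrac{1}{1}\right) \;=\; \log_{10} 2 \;\approx\; 0.301.
```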
1 is the only known Tamagawa number for all simply connected algebraic groups over a number field.
Primality.
Although 1 meets the naïve definition of a prime number, being evenly divisible only by 1 and itself (also 1), by modern convention it is regarded as neither a prime number nor a composite number. Some mathematicians of the Middle Ages and Renaissance considered 1 a prime number. The 18th century mathematician Christian Goldbach listed 1 as prime in his correspondence with Leonhard Euler, and many 19th century mathematicians still considered 1 to be prime. By the early 20th century, mathematicians started to agree that 1 should not be classified as a prime number. However, published lists of primes continued to include 1 as recently as 1956.
As a word.
Etymology.
"One" originates from the Old English word "an", derived from the Germanic root "*ainaz", from the Proto-Indo-European root "*oi-no-" (meaning "one, unique").
Modern usage.
Linguistically, "one" is a cardinal number used for counting and expressing the number of items in a collection of things. "One" is most commonly a determiner used with singular countable nouns, as in "one day at a time". The determiner has two senses: numerical one ("I have one apple") and singulative one ("one day I'll do it").
"One" is also a gender-neutral pronoun used to refer to an unspecified person or to people in general as in "one should take care of oneself".
Words that derive their meaning from "one" include "alone", which signifies "all one" in the sense of being by oneself, "none" meaning "not one", "once" denoting "one time", and "atone" meaning to become "at one" with someone. Combining "alone" with "only" (implying "one-like") leads to "lonely", conveying a sense of solitude. Other common numeral prefixes for the number 1 include uni- (e.g., unicycle, universe, unicorn), sol- (e.g., solo dance), derived from Latin, or mono- (e.g., monorail, monogamy, monopoly) derived from Greek.
The various glyphs used to represent the number one, including Arabic numerals (1), Roman numerals (I), and Chinese numerals (一), are logograms. These symbols directly represent the concept of 'one' without breaking it down into phonetic components.
Symbols and representation.
History.
Among the earliest known records of a numeral system is the Sumerian decimal-sexagesimal system, preserved on clay tablets dating from the first half of the third millennium BCE. The Archaic Sumerian numerals for 1 and 60 both consisted of horizontal semi-circular symbols. By c. 2350 BCE, the older Sumerian curviform numerals were replaced with cuneiform symbols, with 1 and 60 both represented by the same symbol. The Sumerian cuneiform system is a direct ancestor to the Eblaite and Assyro-Babylonian Semitic cuneiform decimal systems. Surviving Babylonian documents date mostly from the Old Babylonian (c. 1500 BCE) and the Seleucid (c. 300 BCE) eras. The Babylonian cuneiform script notation for numbers used the same symbol for 1 and 60 as in the Sumerian system.
The most commonly used glyph in the modern Western world to represent the number 1 is the Arabic numeral, a vertical line, often with a serif at the top and sometimes a short horizontal line at the bottom. It can be traced back to the Brahmic script of ancient India, as represented by Ashoka as a simple vertical line in his Edicts of Ashoka in c. 250 BCE. This script's numeral shapes were transmitted to Europe via the Maghreb and Al-Andalus during the Middle Ages.
Modern typefaces.
In modern typefaces, the shape of the character for the digit 1 is typically typeset as a "lining figure" with an ascender, such that the digit is the same height and width as a capital letter. However, in typefaces with text figures (also known as "Old style numerals" or "non-lining figures"), the glyph usually is of x-height and designed to follow the rhythm of the lowercase. In "old-style" typefaces (e.g., Hoefler Text), the glyph for the numeral 1 resembles a small caps version of I, featuring parallel serifs at the top and bottom, while the capital I retains a full-height form. This is a relic of the Roman numeral system, where I represents 1. Many older typewriters do not have a dedicated key for the numeral 1, requiring the use of the lowercase letter "l" or uppercase "I" as substitutes. The lower case "j" can be considered a swash variant of a lower-case Roman numeral "i", often employed for the final i of a "lower-case" Roman numeral. It is also possible to find historic examples of the use of "j" or "J" as a substitute for the Arabic numeral 1. In some countries, the serif at the top may be extended into a long upstroke as long as the vertical line. This variation can lead to confusion with the glyph used for seven in other countries, and so, to provide a visual distinction between the two, the digit 7 may be written with a horizontal stroke through the vertical line.
In technology.
In digital technology, data is represented by binary code, i.e., a base-2 numeral system with numbers represented by a sequence of 1s and 0s. Digitised data is represented in physical devices, such as computers, as pulses of electricity through switching devices such as transistors or logic gates where "1" represents the value for "on". As such, the numerical value of true is equal to 1 in many programming languages. In lambda calculus and computability theory, natural numbers are represented by Church encoding as functions, where the Church numeral for 1 is represented by the function formula_5 applied to an argument formula_6 once (1formula_7).
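A minimal sketch of Church encoding in Python (the helper names are illustrative only): the Church numeral 1 returns its function argument applied exactly once.

```python
# Church numerals as higher-order functions.
zero = lambda f: lambda x: x                       # apply f zero times
one = lambda f: lambda x: f(x)                     # apply f exactly once
succ = lambda n: lambda f: lambda x: f(n(f)(x))    # successor: one more application of f

# Interpreting a numeral with f = "+1" and x = 0 recovers the ordinary integer.
to_int = lambda n: n(lambda k: k + 1)(0)
print(to_int(one), to_int(succ(one)))              # 1 2
```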
In science.
Physics.
Dimensionless quantities are also known as quantities of dimension one.
In physics, selected physical constants are set to 1 in natural unit systems (for example, in Planck units the speed of light "c" = 1) to simplify the form of equations. In quantum mechanics, the normalization condition for wavefunctions requires the integral of a wavefunction's squared modulus to be equal to 1.
Chemistry.
Hydrogen, the first element of the periodic table and the most abundant element in the known universe, has an atomic number of 1. Group 1 of the periodic table consists of hydrogen and the alkali metals.
In philosophy and religion.
The number 1 is commonly regarded as a symbol of unity, often representing God or the universe in monotheistic traditions. The Pythagoreans considered the numbers to be plural and therefore did not classify 1 itself as a number, but as the origin of all numbers. In their number philosophy, where odd numbers were considered male and even numbers female, 1 was considered neutral, capable of transforming even numbers to odd and vice versa by addition. The Neopythagorean philosopher Nicomachus of Gerasa's number treatise, as recovered by Boethius in the Latin translation "Introduction to Arithmetic", affirmed that one is not a number, but the source of number. In the philosophy of Plotinus (and that of other neoplatonists), The One is the ultimate reality and source of all existence. Philo of Alexandria (20 BC – AD 50) regarded the number one as God's number, and the basis for all numbers.
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "1n = n1 = n"
},
{
"math_id": 2,
"text": "1^2=1"
},
{
"math_id": 3,
"text": "\\sqrt{1} = 1"
},
{
"math_id": 4,
"text": "1!=1"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "x"
},
{
"math_id": 7,
"text": "fx=fx"
}
]
| https://en.wikipedia.org/wiki?curid=77591639 |
7759194 | Cox–Ingersoll–Ross model | Stochastic model for the evolution of financial interest rates
In mathematical finance, the Cox–Ingersoll–Ross (CIR) model describes the evolution of interest rates. It is a type of "one factor model" (short-rate model) as it describes interest rate movements as driven by only one source of market risk. The model can be used in the valuation of interest rate derivatives. It was introduced in 1985 by John C. Cox, Jonathan E. Ingersoll and Stephen A. Ross as an extension of the Vasicek model, itself an Ornstein–Uhlenbeck process.
The model.
The CIR model describes the instantaneous interest rate formula_0 with a Feller square-root process, whose stochastic differential equation is
formula_1
where formula_2 is a Wiener process (modelling the random market risk factor) and formula_3, formula_4, and formula_5 are the parameters. The parameter formula_3 corresponds to the speed of adjustment to the mean formula_4, and formula_5 to volatility. The drift factor, formula_6, is exactly the same as in the Vasicek model. It ensures mean reversion of the interest rate towards the long run value formula_7, with speed of adjustment governed by the strictly positive parameter formula_8.
The standard deviation factor, formula_9, avoids the possibility of negative interest rates for all positive values of formula_8 and formula_7.
An interest rate of zero is also precluded if the condition
formula_10
is met. More generally, when the rate (formula_0) is close to zero, the standard deviation (formula_9) also becomes very small, which dampens the effect of the random shock on the rate. Consequently, when the rate gets close to zero, its evolution becomes dominated by the drift factor, which pushes the rate upwards (towards equilibrium).
In the case formula_11, the Feller square-root process can be obtained from the square of an Ornstein–Uhlenbeck process. It is ergodic and possesses a stationary distribution. It is used in the Heston model to model stochastic volatility.
The distribution of future values of a CIR process can be computed in closed form:
formula_12
where formula_13, and "Y" is a non-central chi-squared distribution with formula_14 degrees of freedom and non-centrality parameter formula_15. Formally the probability density function is:
formula_16
where formula_17, formula_18, formula_19, and formula_20 is a modified Bessel function of the first kind of order formula_21.
Due to mean reversion, as time becomes large, the distribution of formula_22 will approach a gamma distribution with the probability density of:
formula_23
where formula_24 and formula_25.
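For this gamma distribution the long-run moments follow directly from the shape and rate parameters just given (a short worked check added here): the stationary mean is the mean-reversion level and the stationary variance is b·σ²/(2a).

```latex
\operatorname{E}[r_\infty] = \frac{\alpha}{\beta} = \frac{2ab/\sigma^{2}}{2a/\sigma^{2}} = b,
\qquad
\operatorname{Var}[r_\infty] = \frac{\alpha}{\beta^{2}}
  = \frac{2ab/\sigma^{2}}{\left(2a/\sigma^{2}\right)^{2}}
  = \frac{b\sigma^{2}}{2a}.
```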
The continuous SDE can be discretized as follows
formula_30
which is equivalent to
formula_31
provided formula_32 is i.i.d. standard normal, i.e. n.i.i.d.(0,1). This equation can be used for a linear regression.
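A minimal simulation sketch based on this discretization (an illustration added here; the parameter values are arbitrary, and negative values produced by the raw Euler step are truncated at zero, which is a common but not unique fix):

```python
import numpy as np

def simulate_cir_euler(r0, a, b, sigma, dt, n_steps, rng=None):
    """Euler-Maruyama discretization of dr = a(b - r) dt + sigma sqrt(r) dW."""
    rng = rng or np.random.default_rng()
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        eps = rng.standard_normal()
        drift = a * (b - r[t]) * dt
        diffusion = sigma * np.sqrt(max(r[t], 0.0) * dt) * eps
        r[t + 1] = max(r[t] + drift + diffusion, 0.0)   # truncate to avoid negative rates
    return r

path = simulate_cir_euler(r0=0.03, a=0.5, b=0.04, sigma=0.1, dt=1 / 252, n_steps=2520)
print(path[-1])
```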
Simulation.
Stochastic simulation of the CIR process can be achieved using two variants: discretization of the SDE, as in the scheme above, or exact simulation, drawing directly from the closed-form non-central chi-squared transition distribution given earlier.
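A sketch of the exact variant (added here for illustration; parameter values are arbitrary), sampling the rate at a horizon T directly from the non-central chi-squared law given above via SciPy:

```python
import numpy as np
from scipy.stats import ncx2

def sample_cir_exact(r_t, a, b, sigma, T, size=1, rng=None):
    """Draw r_{t+T} given r_t exactly: r_{t+T} = Y / (2c), with Y ~ ncx2(df, nc)."""
    c = 2 * a / ((1 - np.exp(-a * T)) * sigma**2)
    df = 4 * a * b / sigma**2                  # degrees of freedom
    nc = 2 * c * r_t * np.exp(-a * T)          # non-centrality parameter
    y = ncx2.rvs(df, nc, size=size, random_state=rng)
    return y / (2 * c)

samples = sample_cir_exact(r_t=0.03, a=0.5, b=0.04, sigma=0.1, T=1.0, size=100_000)
print(samples.mean())   # close to the conditional mean r_t e^{-aT} + b(1 - e^{-aT})
```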
Bond pricing.
Under the no-arbitrage assumption, a bond may be priced using this interest rate process. The bond price is exponential affine in the interest rate:
formula_33
where
formula_34
formula_35
formula_36
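The pricing formula translates directly into code; the sketch below (added here, with illustrative parameter values and time measured in years) evaluates P(t,T) from the expressions for A and B above:

```python
import numpy as np

def cir_bond_price(r_t, t, T, a, b, sigma):
    """Zero-coupon bond price P(t,T) = A(t,T) exp(-B(t,T) r_t) in the CIR model."""
    h = np.sqrt(a**2 + 2 * sigma**2)
    tau = T - t
    denom = 2 * h + (a + h) * (np.exp(h * tau) - 1)
    A = (2 * h * np.exp((a + h) * tau / 2) / denom) ** (2 * a * b / sigma**2)
    B = 2 * (np.exp(h * tau) - 1) / denom
    return A * np.exp(-B * r_t)

print(cir_bond_price(r_t=0.03, t=0.0, T=5.0, a=0.5, b=0.04, sigma=0.1))
```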
Extensions.
The CIR model uses a special case of a basic affine jump diffusion, which still permits a closed-form expression for bond prices. Time varying functions replacing coefficients can be introduced in the model in order to make it consistent with a pre-assigned term structure of interest rates and possibly volatilities. The most general approach is in Maghsoodi (1996). A more tractable approach is in Brigo and Mercurio (2001b) where an external time-dependent shift is added to the model for consistency with an input term structure of rates.
A significant extension of the CIR model to the case of stochastic mean and stochastic volatility is given by Lin Chen (1996) and is known as Chen model. A more recent extension for handling cluster volatility, negative interest rates and different distributions is the so-called "CIR #" by Orlando, Mininni and Bufalo (2018, 2019, 2020, 2021, 2023) and a simpler extension focussing on negative interest rates was proposed by Di Francesco and Kamm (2021, 2022), which are referred to as the CIR- and CIR-- models.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r_t"
},
{
"math_id": 1,
"text": "dr_t = a(b-r_t)\\, dt + \\sigma\\sqrt{r_t}\\, dW_t,"
},
{
"math_id": 2,
"text": "W_t"
},
{
"math_id": 3,
"text": " a "
},
{
"math_id": 4,
"text": " b "
},
{
"math_id": 5,
"text": " \\sigma\\, "
},
{
"math_id": 6,
"text": " a(b-r_t)"
},
{
"math_id": 7,
"text": "b"
},
{
"math_id": 8,
"text": "a"
},
{
"math_id": 9,
"text": "\\sigma \\sqrt{r_t}"
},
{
"math_id": 10,
"text": "2 a b \\geq \\sigma^2 \\,"
},
{
"math_id": 11,
"text": "4 a b =\\sigma^2 \\,"
},
{
"math_id": 12,
"text": "r_{t+T} = \\frac{Y}{2c},"
},
{
"math_id": 13,
"text": "c=\\frac{2a}{(1 - e^{-aT})\\sigma^2}"
},
{
"math_id": 14,
"text": "\\frac{4ab}{\\sigma^2}"
},
{
"math_id": 15,
"text": "2 c r_te^{-aT}"
},
{
"math_id": 16,
"text": "f(r_{t+T};r_t,a,b,\\sigma)=c\\,e^{-u-v} \\left (\\frac{v}{u}\\right)^{q/2} I_{q}(2\\sqrt{uv}), "
},
{
"math_id": 17,
"text": "q = \\frac{2ab}{\\sigma^2}-1"
},
{
"math_id": 18,
"text": "u = c r_t e^{-aT}"
},
{
"math_id": 19,
"text": "v = c r_{t+T}"
},
{
"math_id": 20,
"text": "I_{q}(2\\sqrt{uv})"
},
{
"math_id": 21,
"text": "q"
},
{
"math_id": 22,
"text": "r_{\\infty}"
},
{
"math_id": 23,
"text": "f(r_\\infty;a,b,\\sigma)=\\frac{\\beta^\\alpha}{\\Gamma(\\alpha)}r_\\infty^{\\alpha-1}e^{-\\beta r_\\infty},"
},
{
"math_id": 24,
"text": "\\beta = 2a/\\sigma^2 "
},
{
"math_id": 25,
"text": " \\alpha = 2ab/\\sigma^2 "
},
{
"math_id": 26,
"text": "r_0"
},
{
"math_id": 27,
"text": "2 a b \\geq\\sigma^2"
},
{
"math_id": 28,
"text": "\\operatorname E[r_t\\mid r_0]=r_0 e^{-at} + b (1-e^{-at})"
},
{
"math_id": 29,
"text": "\\operatorname{Var}[r_t\\mid r_0]=r_0 \\frac{\\sigma^2}{a} (e^{- a t}-e^{-2a t}) + \\frac{b \\sigma^2}{2 a}(1-e^{- a t})^2."
},
{
"math_id": 30,
"text": " r_{t+\\Delta t}-r_t = a (b-r_t)\\,\\Delta t + \\sigma\\, \\sqrt{r_t \\Delta t} \\varepsilon_t, "
},
{
"math_id": 31,
"text": " \\frac{r_{t+\\Delta t}-r_t}{\\sqrt r_t} =\\frac{ab\\Delta t}{\\sqrt r_t}-a \\sqrt r_t\\Delta t + \\sigma\\, \\sqrt{\\Delta t} \\varepsilon_t, "
},
{
"math_id": 32,
"text": "\\varepsilon_t"
},
{
"math_id": 33,
"text": "P(t,T) = A(t,T) e^{-B(t,T) r_t}\\!"
},
{
"math_id": 34,
"text": "A(t,T) = \\left(\\frac{2h e^{(a+h)(T-t)/2}}{2h + (a+h)(e^{h(T-t)} -1)}\\right)^{2ab/\\sigma^2}"
},
{
"math_id": 35,
"text": "B(t,T) = \\frac{2(e^{h(T-t)}-1)}{2h+(a+h)(e^{h(T-t)}-1)}"
},
{
"math_id": 36,
"text": "h = \\sqrt{a^2+2\\sigma^2}"
}
]
| https://en.wikipedia.org/wiki?curid=7759194 |
7759491 | Crown-rump length | Ultrasound measurement of a foetus
Crown-rump length (CRL) is the measurement of the length of human embryos and fetuses from the top of the head (crown) to the bottom of the buttocks (rump). It is typically determined from ultrasound imagery and can be used to estimate gestational age.
Introduction.
The embryo and fetus float in the amniotic fluid inside the uterus of the mother, usually in a curved posture resembling the letter "C". The measurement can actually vary slightly if the fetus is temporarily stretching (straightening) its body. The measurement needs to be taken in the natural state, with an unstretched, "C"-shaped body. The measurement of CRL is useful in determining the gestational age (menstrual age starting from the first day of the last menstrual period) and thus the expected date of delivery (EDD). Different human fetuses grow at different rates and thus the gestational age is an approximation. Recent evidence has indicated that CRL growth (and thus the approximation of gestational age) may be influenced by maternal factors such as age, smoking, and folic acid intake. Early in pregnancy (around a gestational age of 8 weeks), it is accurate to within about ±5 days, but later in pregnancy, due to differing growth rates, the accuracy is lower. In that situation, other parameters can be used in addition to CRL. The length of the umbilical cord is approximately equal to the CRL throughout pregnancy.
Gestational age is not the same as fertilization age. It takes about 14 days from the first day of the last menstrual period for conception to take place and thus for the conceptus to form. The age from this point in time (conception) is called the fertilization age and is thus 2 weeks shorter than the gestational age. Thus a 6-week gestational age would be a 4-week fertilization age. Some authorities however casually interchange these terms and the reader is advised to be cautious. An average gestational period (duration of pregnancy from the first day of the last menstrual period up to delivery) is 280 days. On average, this is 9 months and 6 days.
Gestational age estimation.
A commonly used estimate of gestational age in weeks is (as described by Verburg et al.):
formula_0
Gestational age estimation in days is carried out according to the equations:
formula_1; and SD of GA = 2.39102 + (0.0193474 × CRL).
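These equations translate directly into code. The sketch below is an illustration added here; it assumes the CRL is given in millimetres, which is the usual convention for these formulas but is not stated explicitly above.

```python
import math

def gestational_age_weeks(crl_mm):
    """GA in weeks from crown-rump length, using the Verburg et al. relation above."""
    return crl_mm**0.2313 * math.exp(1.4653 + 0.001737 * crl_mm)

def gestational_age_days(crl_mm):
    """GA in days, together with the standard deviation of the estimate."""
    ga = 40.9 + 3.24585 * math.sqrt(crl_mm) + 0.348956 * crl_mm
    sd = 2.39102 + 0.0193474 * crl_mm
    return ga, sd

print(gestational_age_weeks(20.0))   # roughly 9 weeks for a 20 mm CRL
print(gestational_age_days(20.0))    # roughly 62 days, consistent with the weeks estimate
```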
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "GA_{weeks}=CRL^{0.2313}\\cdot e^{1.4653+0.001737\\cdot CRL}"
},
{
"math_id": 1,
"text": "GA_{days}=40.9+3.24585\\cdot CRL^{0.5}+0.348956\\cdot CRL"
}
]
| https://en.wikipedia.org/wiki?curid=7759491 |
77600421 | XLNet | A large language model developed by Google AI
XLNet is an autoregressive Transformer language model designed as an improvement over BERT, with 340M parameters and trained on 33 billion words. It was released on 19 June 2019 under the Apache 2.0 license. It achieved state-of-the-art results on a variety of natural language processing tasks, including language modeling, question answering, and natural language inference.
Architecture.
The main idea of XLNet is to model language autoregressively like the GPT models, but allow for "all possible permutations" of a sentence. Concretely, consider the following sentence: "My dog is cute." In standard autoregressive language modeling, the model would be tasked with predicting the probability of each word, conditioned on the previous words as its context:
We factorize the joint probability of a sequence of words formula_0 using the chain rule: formula_1
For example, the sentence "My dog is cute" is factorized as:
formula_2
Schematically, we can write it as
formula_3
However, for XLNet, the model is required to predict the words in a randomly generated order. Suppose we have sampled a randomly generated order 3241, then schematically, the model is required to perform the following prediction task:
formula_4
By considering all permutations, XLNet is able to capture longer-range dependencies and better model the bidirectional context of words.
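A small sketch of how a sampled factorization order turns into a sequence of prediction tasks (purely illustrative; the example sentence is the one above and the helper function is hypothetical):

```python
tokens = ["My", "dog", "is", "cute"]

def prediction_tasks(tokens, order):
    """For a factorization order, list (visible context, position to predict) pairs."""
    tasks = []
    revealed = set()
    for pos in order:
        context = {i: tokens[i] for i in sorted(revealed)}   # tokens predicted so far
        tasks.append((context, pos))                         # next, predict tokens[pos]
        revealed.add(pos)
    return tasks

order = [2, 1, 3, 0]   # the permutation 3-2-4-1 of the example, in 0-based indexing
for context, pos in prediction_tasks(tokens, order):
    print(f"predict position {pos} ({tokens[pos]!r}) given {context}")
```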
Two-Stream Self-Attention.
To implement permutation language modeling, XLNet uses a two-stream self-attention mechanism. The two streams are the "content stream" and the "query stream".
The content stream uses the causal mask formula_5 permuted by a random permutation matrix to formula_6.
The query stream uses the cross-attention mask formula_7, where the diagonal is subtracted away specifically to avoid the model "cheating" by looking at the content stream for what the current masked token is.
Like the causal masking for GPT models, this two-stream masked architecture allows the model to train on all tokens in one forward pass.
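A sketch of the two masks for a length-4 sequence (an illustration added here; conjugation by the permutation matrix is implemented as re-indexing of rows and columns, which is equivalent for permutation matrices, and the masks are meant to be added to attention scores, so the -inf entries are never multiplied):

```python
import numpy as np

T = 4
idx = np.arange(T)

# Causal mask: position i may attend to j <= i (0 = allowed, -inf = blocked).
M_causal = np.where(idx[:, None] >= idx[None, :], 0.0, -np.inf)

# Query-stream base mask: additionally block the diagonal, so a position
# cannot peek at its own content while it is being predicted.
M_query = M_causal.copy()
np.fill_diagonal(M_query, -np.inf)

perm = np.array([2, 1, 3, 0])                 # a sampled factorization order (0-based)

content_mask = M_causal[np.ix_(perm, perm)]   # corresponds to P M_causal P^{-1}
query_mask = M_query[np.ix_(perm, perm)]      # corresponds to P (M_causal - inf * I) P^{-1}

print(content_mask)
print(query_mask)
```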
Training.
Two models were released: XLNet-Base and XLNet-Large, whose architecture hyperparameters match those of BERT-Base and BERT-Large respectively.
It was trained on a dataset that amounted to 32.89 billion tokens after tokenization with SentencePiece. The dataset was composed of BooksCorpus, English Wikipedia, Giga5, ClueWeb 2012-B, and Common Crawl.
It was trained on 512 TPU v3 chips, for 5.5 days. At the end of training, it still under-fitted the data, meaning it could have achieved lower loss with more training. It took 0.5 million steps with an Adam optimizer, linear learning rate decay, and a batch size of 8192. | [
{
"math_id": 0,
"text": "x_1, \\ldots, x_T"
},
{
"math_id": 1,
"text": "\n\\Pr(x_1, \\ldots, x_T) = \\Pr(x_1) \\Pr(x_2 | x_1) \\Pr(x_3 | x_1, x_2) \\ldots \\Pr(x_T | x_1, \\ldots, x_{T-1}). \n"
},
{
"math_id": 2,
"text": "\n\\Pr(\\text{My}, \\text{dog}, \\text{is}, \\text{cute}) = \\Pr(\\text{My}) \\Pr(\\text{dog} | \\text{My}) \\Pr(\\text{is} | \\text{My}, \\text{dog}) \\Pr(\\text{cute} | \\text{My}, \\text{dog}, \\text{is}).\n"
},
{
"math_id": 3,
"text": "\n\\texttt{<MASK> } \\texttt{<MASK> } \\texttt{<MASK> } \\texttt{<MASK> } \\to \\text{My } \\texttt{<MASK> } \\texttt{<MASK> } \\texttt{<MASK> } \\to \\text{My dog }\\texttt{<MASK> } \\texttt{<MASK> } \\to \\text{My dog is }\\texttt{<MASK> } \\to \\text{My dog is cute}. \n"
},
{
"math_id": 4,
"text": "\\texttt{<MASK> } \\texttt{<MASK> } \\texttt{<MASK> } \\texttt{<MASK> } \n\\to \\texttt{<MASK> } \\texttt{<MASK> } \\text{is }\\texttt{<MASK> } \n\\to \\texttt{<MASK> } \\text{dog is }\\texttt{<MASK> } \n\\to \\texttt{<MASK> }\\text{dog is cute} \n\\to \\text{My dog is cute}"
},
{
"math_id": 5,
"text": "M_{\\text{causal}} = \\begin{bmatrix}\n0 & -\\infty & -\\infty & \\dots & -\\infty \\\\\n0 & 0 & -\\infty & \\dots & -\\infty \\\\\n0 & 0 & 0 & \\dots & -\\infty \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & 0 & \\dots & 0\n\\end{bmatrix}\n"
},
{
"math_id": 6,
"text": "P M_{\\text{causal}} P^{-1}\n"
},
{
"math_id": 7,
"text": "P (M_{\\text{causal}} - \\infty I) P^{-1}\n"
}
]
| https://en.wikipedia.org/wiki?curid=77600421 |
7760322 | Thermodynamic databases for pure substances | Thermodynamic properties list
Thermodynamic databases contain information about thermodynamic properties for substances, the most important being enthalpy, entropy, and Gibbs free energy. Numerical values of these thermodynamic properties are collected as tables or are calculated from thermodynamic datafiles. Data is expressed as temperature-dependent values for one mole of substance at the standard pressure of 101.325 kPa (1 atm), or 100 kPa (1 bar). Both of these definitions for the standard condition for pressure are in use.
Thermodynamic data.
Thermodynamic data is usually presented as a table or chart of function values for one mole of a substance (or in the case of the steam tables, one kg). A thermodynamic datafile is a set of equation parameters from which the numerical data values can be calculated. Tables and datafiles are usually presented at a standard pressure of 1 bar or 1 atm, but in the case of steam and other industrially important gases, pressure may be included as a variable. Function values depend on the state of aggregation of the substance, which must be defined for the value to have any meaning. The state of aggregation for thermodynamic purposes is the "standard state", sometimes called the "reference state", and defined by specifying certain conditions. The "normal" standard state is commonly defined as the most stable physical form of the substance at the specified temperature and a pressure of 1 bar or 1 atm. However, since any non-normal condition could be chosen as a standard state, it must be defined in the context of use. A "physical" standard state is one that exists for a time sufficient to allow measurements of its properties. The most common physical standard state is one that is stable thermodynamically (i.e., the normal one). It has no tendency to transform into any other physical state. If a substance can exist but is not thermodynamically stable (for example, a supercooled liquid), it is called a "metastable" state. A "non"-"physical" standard state is one whose properties are obtained by extrapolation from a physical state (for example, a solid superheated above the normal melting point, or an ideal gas at a condition where the real gas is non-ideal). Metastable liquids and solids are important because some substances can persist and be used in that state indefinitely. Thermodynamic functions that refer to conditions in the normal standard state are designated with a small superscript °. The relationship between certain physical and thermodynamic properties may be described by an equation of state.
Enthalpy, heat content and heat capacity.
It is very difficult to measure the absolute amount of any thermodynamic quantity involving the internal energy (e.g. enthalpy), since the internal energy of a substance can take many forms, each of which has its own typical temperature at which it begins to become important in thermodynamic reactions. It is therefore the "change" in these functions that is of most interest. The isobaric change in enthalpy "H" above the common reference temperature of 298.15 K (25 °C) is called the "high temperature heat content", the "sensible heat", or the "relative high-temperature enthalpy", and called henceforth the heat content. Different databases designate this term in different ways; for example "H"T-"H"298, "H"°-"H"°298, "H"°T-"H"°298 or "H"°-"H"°(Tr), where Tr means the reference temperature (usually 298.15 K, but abbreviated in heat content symbols as 298). All of these terms mean the molar heat content for a substance in its normal standard state above a reference temperature of 298.15 K. Data for gases is for the hypothetical ideal gas at the designated standard pressure. The SI unit for enthalpy is J/mol, and is a positive number above the reference temperature. The heat content has been measured and tabulated for virtually all known substances, and is commonly expressed as a polynomial function of temperature. The heat content of an ideal gas is independent of pressure (or volume), but the heat content of real gases varies with pressure, hence the need to define the state for the gas (real or ideal) and the pressure. Note that for some thermodynamic databases such as for steam, the reference temperature is 273.15 K (0 °C).
The "heat capacity" C is the ratio of heat added to the temperature increase. For an incremental isobaric addition of heat:
formula_0
"Cp" is therefore the slope of a plot of temperature vs. isobaric heat content (or the derivative of a temperature/heat content equation). The SI units for heat capacity are J/(mol·K).
Enthalpy change of phase transitions.
When heat is added to a condensed-phase substance, its temperature increases until a phase change temperature is reached. With further addition of heat, the temperature remains constant while the phase transition takes place. The amount of substance that transforms is a function of the amount of heat added. After the transition is complete, adding more heat increases the temperature. In other words, the enthalpy of a substance changes isothermally as it undergoes a physical change. The enthalpy change resulting from a phase transition is designated Δ"H". There are four types of enthalpy changes resulting from a phase transition. To wit:
* "Enthalpy of transformation". This applies to the transformations from one solid phase to another, such as the transformation from α-Fe (bcc ferrite) to formula_1-Fe (fcc austenite). The transformation is designated Δ"H"tr.
* "Enthalpy of fusion or melting". This applies to the transition of a solid to a liquid and is designated Δ"H"m.
* "Enthalpy of vaporization". This applies to the transition of a liquid to a vapor and is designated Δ"H"v.
* "Enthalpy of sublimation". This applies to the transition of a solid to a vapor and is designated Δ"H"s.
"Cp" is infinite at phase transition temperatures because the enthalpy changes isothermally. At the Curie temperature, "Cp" shows a sharp discontinuity while the enthalpy has a change in slope.
Values of Δ"H" are usually given for the transition at the normal standard state temperature for the two states, and if so, are designated with a superscript °. Δ"H" for a phase transition is a weak function of temperature. In some texts, the heats of phase transitions are called "latent" heats (for example, "latent heat of fusion").
Enthalpy change for a chemical reaction.
An enthalpy change occurs during a chemical reaction. For the special case of the formation of a compound from the elements, the change is designated Δ"H"form and is a weak function of temperature. Values of Δ"H"form are usually given where the elements and compound are in their normal standard states, and as such are designated "standard heats" of formation, denoted by a superscript °. The Δ"H"°form undergoes discontinuities at the phase transition temperatures of the constituent element(s) and the compound. The enthalpy change for any standard reaction is designated Δ"H"°rx.
Entropy and Gibbs energy.
The entropy of a system is another thermodynamic quantity that is not easily measured. However, using a combination of theoretical and experimental techniques, entropy can in fact be accurately estimated. At low temperatures, the Debye model leads to the result that the atomic heat capacity "C"v for solids should be proportional to "T"3, and that for perfect crystalline solids it should become zero at absolute zero. Experimentally, the heat capacity is measured at temperature intervals to as low a temperature as possible. Values of "C"p/T are plotted against T for the whole range of temperatures where the substance exists in the same physical state. The data are extrapolated from the lowest experimental temperature to 0 K using the Debye model. The third law of thermodynamics states that the entropy of a perfect crystalline substance becomes zero at 0 K. When "S"0 is zero, the area under the curve from 0 K to any temperature gives the entropy at that temperature. Even though the Debye model contains "C"v instead of "C"p, the difference between the two at temperatures near 0 K is so small as to be negligible.
The absolute value of entropy for a substance in its standard state at the reference temperature of 298.15 K is designated "S"°298. Entropy increases with temperature, and is discontinuous at phase transition temperatures. The change in entropy (Δ"S"°) at the normal phase transition temperature is equal to the heat of transition divided by the transition temperature. The SI units for entropy are J/(mol·K).
The standard entropy change for the formation of a compound from the elements, or for any standard reaction is designated Δ"S"°form or Δ"S"°rx. The entropy change is obtained by summing the absolute entropies of the products minus the sum of the absolute entropies of the reactants.
Like enthalpy, the Gibbs energy "G" has no intrinsic value, so it is the change in "G" that is of interest.
Furthermore, there is no change in "G" at phase transitions between substances in their standard states.
Hence, the main functional application of Gibbs energy from a thermodynamic database is its change in value during the formation of a compound from the standard-state elements, or for any standard chemical reaction (Δ"G"°form or Δ"G"°rx).
The SI units of Gibbs energy are the same as for enthalpy (J/mol).
Additional functions.
Compilations of thermochemical data may contain some additional thermodynamic functions. For example, the absolute enthalpy of a substance "H"("T") is defined in terms of its formation enthalpy and its heat content as follows:
formula_2
For an element, "H"("T") and ["H"T - "H"298] are identical at all temperatures because Δ"H"°form is zero, and of course at 298.15 K, "H"("T") = 0. For a compound:
formula_3
Similarly, the absolute Gibbs energy "G"("T") is defined by the absolute enthalpy and entropy of a substance:
formula_4
For a compound:
formula_5
Some tables may also contain the Gibbs energy function ("H"°298.15 – "G"°T)/"T" which is defined in terms of the entropy and heat content.
formula_6
The Gibbs energy function has the same units as entropy, but unlike entropy, exhibits no discontinuity at normal phase transition temperatures.
The log10 of the equilibrium constant "K"eq is often listed, which is calculated from the defining thermodynamic equation.
formula_7
Thermodynamic databases.
A thermodynamic database consists of sets of critically evaluated values for the major thermodynamic functions.
Originally, data was presented as printed tables at 1 atm and at certain temperatures, usually 100° intervals and at phase transition temperatures. Some compilations included polynomial equations that could be used to reproduce the tabular values. More recently, computerized databases are used which consist of the equation parameters and subroutines to calculate specific values at any temperature and prepare tables for printing. Computerized databases often include subroutines for calculating reaction properties and displaying the data as charts.
Thermodynamic data comes from many types of experiments, such as calorimetry, phase equilibria, spectroscopy, composition measurements of chemical equilibrium mixtures, and emf measurements of reversible reactions. A proper database takes all available information about the elements and compounds in the database, and assures that the presented results are "internally consistent". Internal consistency requires that all values of the thermodynamic functions are correctly calculated by application of the appropriate thermodynamic equations. For example, values of the Gibbs energy obtained from high-temperature equilibrium emf methods must be identical to those calculated from calorimetric measurements of the enthalpy and entropy values. The database provider must use recognized data analysis procedures to resolve differences between data obtained by different types of experiments.
All thermodynamic data is a non-linear function of temperature (and pressure), but there is no universal equation format for expressing the various functions. Here we describe a commonly used polynomial equation to express the temperature dependence of the heat content. A common six-term equation for the isobaric heat content is:
formula_8
Regardless of the equation format, the heat of formation of a compound at any temperature is Δ"H"°form at 298.15 K, plus the sum of the heat content parameters of the products minus the sum of the heat content parameters of the reactants. The "C"p equation is obtained by taking the derivative of the heat content equation.
formula_9
The entropy equation is obtained by integrating the "C"p/T equation:
formula_10
F' is a constant of integration obtained by inserting "S"° at any temperature "T". The Gibbs energy of formation of a compound is obtained from the defining equation Δ"G"°form = Δ"H"°form – T(Δ"S"°form), and is expressed as
formula_11
formula_12
For most substances, Δ"G"°form deviates only slightly from linearity with temperature, so over a short temperature span, the seven-term equation can be replaced by a three-term equation, whose parameter values are obtained by regression of tabular values.
formula_13
Depending on the accuracy of the data and the length of the temperature span, the heat content equation may require more or fewer terms. Over a very long temperature span, two equations may be used instead of one. It is unwise to extrapolate the equations to obtain values outside the range of experimental data used to derive the equation parameters.
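As an illustration of how such polynomial parameters are used, the following Python sketch (with the coefficient values left as placeholders to be taken from a datafile) evaluates the six-term heat content equation, its derivative "C"p, and the entropy obtained by integrating "C"p/"T":

```python
import math

def heat_content(T, A, B, C, D, E, F):
    """H_T - H_298 = A*T + B*T**2 + C/T + D*sqrt(T) + E*T**3 + F (six-term form above)."""
    return A*T + B*T**2 + C/T + D*math.sqrt(T) + E*T**3 + F

def heat_capacity(T, A, B, C, D, E):
    """Cp = A + 2*B*T - C/T**2 + 0.5*D/sqrt(T) + 3*E*T**2, the derivative of the heat content."""
    return A + 2*B*T - C/T**2 + 0.5*D/math.sqrt(T) + 3*E*T**2

def entropy(T, A, B, C, D, E, F_prime):
    """S_T obtained by integrating Cp/T; F_prime is the constant of integration."""
    return (A*math.log(T) + 2*B*T + 0.5*C/T**2 - D/math.sqrt(T)
            + 1.5*E*T**2 + F_prime)
```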
Thermodynamic datafiles.
The equation parameters and all other information required to calculate values of the important thermodynamic functions are stored in a thermodynamic datafile. The values are organized in a format that makes them readable by a thermodynamic calculation program or for use in a spreadsheet. For example, the Excel-based thermodynamic database FREED creates the following type of datafile, here for a standard pressure of 1 atm.
* Row 1. Molar mass of species, density at 298.15 K, Δ"H"°form 298.15, "S"°298.15. and the upper temperature limit for the file.
* Row 2. Number of "C"p equations required. Here, three because of three species phases.
* Row 3. Values of the five parameters for the first "C"p equation; temperature limit for the equation.
* Row 4. Values of the five parameters for the second "C"p equation; temperature limit for the equation.
* Row 5. Values of the five parameters for the third "C"p equation; temperature limit for the equation.
* Row 6. Number of "H"T - "H"298 equations required.
* Row 7. Values of the six parameters for the first "H"T - "H"298 equation; temperature limit for the equation, and Δ"H"°trans for the first phase change.
* Row 8. Values of the six parameters for the second "H"T - "H"298 equation; temperature limit for the equation, and Δ"H"°trans for the second phase change.
* Row 9. Values of the six parameters for the third "H"T - "H"298 equation; temperature limit for the equation, and Δ"H"°trans for the third phase change.
* Row 10. Number of Δ"H"°form equations required. Here five; three for species phases and two because one of the elements has a phase change.
* Row 11. Values of the six parameters for the first Δ"H"°form equation; temperature limit for the equation.
* Row 12. Values of the six parameters for the second Δ"H"°form equation; temperature limit for the equation.
* Row 13. Values of the six parameters for the third Δ"H"°form equation; temperature limit for the equation.
* Row 14. Values of the six parameters for the fourth Δ"H"°form equation; temperature limit for the equation.
* Row 15. Values of the six parameters for the fifth Δ"H"°form equation; temperature limit for the equation.
* Row 16. Number of Δ"G"°form equations required.
* Row 17. Values of the seven parameters for the first Δ"G"°form equation; temperature limit for the equation.
* Row 18. Values of the seven parameters for the second Δ"G"°form equation; temperature limit for the equation.
* Row 19. Values of the seven parameters for the third Δ"G"°form equation; temperature limit for the equation.
* Row 20. Values of the seven parameters for the fourth Δ"G"°form equation; temperature limit for the equation.
* Row 21. Values of the seven parameters for the fifth Δ"G"°form equation; temperature limit for the equation.
Most computerized databases will create a table of thermodynamic values using the values from the datafile. For MgCl2(c,l,g) at 1 atm pressure:
The table format is a common way to display thermodynamic data. The FREED table gives additional information in the top rows, such as the mass and amount composition and transition temperatures of the constituent elements. Transition temperatures for the constituent elements have dashes ------- in the first column in a blank row, such as at 922 K, the melting point of Mg. Transition temperatures for the substance have two blank rows with dashes, and a center row with the defined transition and the enthalpy change, such as the melting point of MgCl2 at 980 K. The datafile equations are at the bottom of the table, and the entire table is in an Excel worksheet. This is particularly useful when the data is intended for making specific calculations. | [
{
"math_id": 0,
"text": "C_P(T)=\\left \\{ \\lim_{\\Delta T \\to 0}\\frac{\\Delta H}{\\Delta T}\\right \\}=\\left ( \\frac{\\partial H}{\\partial T}\\right ) _p"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": " H(T) = \\Delta H^\\circ_{form,298} + [H_T - H_{298}] "
},
{
"math_id": 3,
"text": " \\Delta H^\\circ_{form} = H(T)compound - \\sum \\left \\{ H(T)elements \\right \\} "
},
{
"math_id": 4,
"text": " G(T) = H(T) - T \\times S(T) "
},
{
"math_id": 5,
"text": " \\Delta G^\\circ_{form} = G(T)compound - \\sum \\left \\{ G(T)elements \\right \\} "
},
{
"math_id": 6,
"text": " (H^\\circ_{298} - G^\\circ_T) / T = S^\\circ_T - (H_T - H_{298}) / T "
},
{
"math_id": 7,
"text": " \\log_{10} \\left ( K_{eq} \\right ) = -\\Delta G^\\circ_{form} /(19.1448T) "
},
{
"math_id": 8,
"text": " H_T - H_{298} = A(T) + B(T^2) + C(T^{-1}) + D(T^{0.5}) + E(T^3) + F \\, "
},
{
"math_id": 9,
"text": " C_P = A + 2B(T) - C(T^{-2}) + \\textstyle \\frac {1}{2} D(T^{-0.5}) + 3E(T^2) \\,"
},
{
"math_id": 10,
"text": " S^\\circ_T = A(\\ln T) + 2B(T) + \\textstyle \\frac {1}{2}C(T^{-2}) - D(T^{\\textstyle - \\frac {1}{2}}) + 1 \\textstyle \\frac {1}{2} E(T^2) + F'"
},
{
"math_id": 11,
"text": " \\Delta G^\\circ_{form} = ( \\Delta A - \\Delta F' )T - \\Delta A ( T \\ln T ) - \\Delta B ( T^2 ) + \\textstyle \\frac {1}{2} \\Delta C ( T^{-1} ) + 2 \\Delta D ( T^{ \\textstyle \\frac {1}{2} } ) "
},
{
"math_id": 12,
"text": " - \\textstyle \\frac {1}{2} \\Delta E (T^3) + \\Delta F + \\Delta H^\\circ_{form 298} "
},
{
"math_id": 13,
"text": " \\Delta G^\\circ_{form} = \\alpha T + \\beta (T \\ln T) + \\chi "
}
]
| https://en.wikipedia.org/wiki?curid=7760322 |
7760747 | Theoretical plate | A theoretical plate in many separation processes is a hypothetical zone or stage in which two phases, such as the liquid and vapor phases of a substance, establish an equilibrium with each other. Such equilibrium stages may also be referred to as an equilibrium stage, ideal stage, or a theoretical tray. The performance of many separation processes depends on having series of equilibrium stages and is enhanced by providing more such stages. In other words, having more theoretical plates increases the efficiency of the separation process be it either a distillation, absorption, chromatographic, adsorption or similar process.
Applications.
The concept of theoretical plates and trays or equilibrium stages is used in the design of many different types of separation.
Distillation columns.
The concept of theoretical plates in designing distillation processes has been discussed in many reference texts. Any physical device that provides good contact between the vapor and liquid phases present in industrial-scale distillation columns or laboratory-scale glassware distillation columns constitutes a "plate" or "tray". Since an actual, physical plate can never be a 100% efficient equilibrium stage, the number of actual plates is more than the required theoretical plates.
formula_0
where formula_1 is the number of actual, physical plates or trays, formula_2 is the number of theoretical plates or trays and formula_3 is the plate or tray efficiency.
So-called bubble-cap or valve-cap trays are examples of the vapor and liquid contact devices used in industrial distillation columns. Another example of vapor and liquid contact devices are the spikes in laboratory Vigreux fractionating columns.
The trays or plates used in industrial distillation columns are fabricated of circular steel plates and usually installed inside the column at intervals of about 60 to 75 cm (24 to 30 inches) up the height of the column. That spacing is chosen primarily for ease of installation and ease of access for future repair or maintenance.
An example of a very simple tray is a perforated tray. The desired contacting between vapor and liquid occurs as the vapor, flowing upwards through the perforations, comes into contact with the liquid flowing downwards through the perforations. In current modern practice, as shown in the adjacent diagram, better contacting is achieved by installing bubble-caps or valve caps at each perforation to promote the formation of vapor bubbles flowing through a thin layer of liquid maintained by a weir on each tray.
To design a distillation unit or a similar chemical process, the number of theoretical trays or plates (that is, hypothetical equilibrium stages), Nt, required in the process should be determined, taking into account a likely range of feedstock composition and the desired degree of separation of the components in the output fractions. In industrial continuous fractionating columns, Nt is determined by starting at either the top or bottom of the column and calculating material balances, heat balances and equilibrium flash vaporizations for each of the succession of equilibrium stages until the desired end product composition is achieved. The calculation process requires the availability of a great deal of vapor–liquid equilibrium data for the components present in the distillation feed, and the calculation procedure is very complex.
In an industrial distillation column, the Nt required to achieve a given separation also depends upon the amount of reflux used. Using more reflux decreases the number of plates required and using less reflux increases the number of plates required. Hence, the calculation of Nt is usually repeated at various reflux rates. Nt is then divided by the tray efficiency, E, to determine the actual number of trays or physical plates, Na, needed in the separating column. The final design choice of the number of trays to be installed in an industrial distillation column is then selected based upon an economic balance between the cost of additional trays and the cost of using a higher reflux rate.
There is a very important distinction between the theoretical plate terminology used in discussing conventional distillation trays and the theoretical plate terminology used in the discussions below of packed bed distillation or absorption or in chromatography or other applications. The theoretical plate in conventional distillation trays has no "height". It is simply a hypothetical equilibrium stage. However, the theoretical plate in packed beds, chromatography and other applications is defined as having a height.
The empirical formula known as Van Winkle's Correlation can be used to predict the Murphree plate efficiency for distillation columns separating binary systems.
Distillation and absorption packed beds.
Distillation and absorption separation processes using packed beds for vapor and liquid contacting have an equivalent concept referred to as the plate height or the height equivalent to a theoretical plate (HETP). HETP arises from the same concept of equilibrium stages as does the theoretical plate and is numerically equal to the absorption bed length divided by the number of theoretical plates in the absorption bed (and in practice is measured in this way).
formula_4
where formula_2 is the number of theoretical plates (also called the "plate count"), H is the total bed height and HETP is the height equivalent to a theoretical plate.
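The two relationships above lend themselves to a trivial calculation; the following Python sketch uses purely illustrative numbers (a hypothetical 10 m bed with an HETP of 0.5 m, and a 70% tray efficiency) rather than data for any particular column:

```python
def theoretical_plates(bed_height, hetp):
    """N_t = H / HETP for a packed bed."""
    return bed_height / hetp

def actual_plates(n_theoretical, efficiency):
    """N_a = N_t / E for a trayed column."""
    return n_theoretical / efficiency

# Example: a 10 m packed bed with HETP = 0.5 m needs 20 theoretical stages;
# a trayed column doing the same job at 70% tray efficiency needs about 29 trays.
n_t = theoretical_plates(10.0, 0.5)
n_a = actual_plates(n_t, 0.7)
```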
The material in packed beds can either be random dumped packing (1-3" wide) such as Raschig rings or structured sheet metal. Liquids tend to wet the surface of the packing and the vapors contact the wetted surface, where mass transfer occurs.
Chromatographic processes.
The theoretical plate concept was also adapted for chromatographic processes by Martin and Synge. The IUPAC's Gold Book provides a definition of the number of theoretical plates in a chromatography column.
The same equation applies in chromatography processes as for the packed bed processes, namely:
formula_4
In packed column chromatography, the HETP may also be calculated with the Van Deemter equation.
In capillary column chromatography HETP is given by the Golay equation.
Other applications.
The concept of theoretical plates or trays applies to other processes as well, such as capillary electrophoresis and some types of adsorption.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N_a = \\frac{N_t}{E}"
},
{
"math_id": 1,
"text": "N_a"
},
{
"math_id": 2,
"text": "N_t"
},
{
"math_id": 3,
"text": "E"
},
{
"math_id": 4,
"text": "N_t = \\frac{H}{\\mathrm{HETP}}"
}
]
| https://en.wikipedia.org/wiki?curid=7760747 |
77621259 | Closed chain inference | Proof strategy for showing a collection of statements are pairwise equivalent.
A closed chain inference is a mathematical proof technique with which the pairwise equivalence of several statements can be proven without having to prove all pairwise equivalences directly.
In order to prove that the statements formula_0 are each pairwise equivalent, proofs are given for the implications formula_1, formula_2, formula_3, formula_4 and formula_5.
The pairwise equivalence of the statements then results from the transitivity of the material conditional.
Example.
For formula_6 the proofs are given for formula_1, formula_2, formula_7 and formula_8. The equivalence of formula_9 and formula_10 results from a chain of conclusions that is no longer written out explicitly: formula_2 and formula_7 together give that formula_9 implies formula_10, while formula_8 and formula_1 together give that formula_10 implies formula_9.
That is, formula_11.
Motivation.
The technique primarily saves writing effort. By dispensing with the formally necessary chain of conclusions, only formula_12 direct proofs of implications formula_13 need to be provided, instead of formula_14 direct proofs. The difficulty for the mathematician is to find an ordering of the statements that allows for the most elegant direct proofs possible.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\varphi_1,\\ldots,\\varphi_n"
},
{
"math_id": 1,
"text": "\\varphi_1\\Rightarrow\\varphi_2"
},
{
"math_id": 2,
"text": "\\varphi_2\\Rightarrow\\varphi_3"
},
{
"math_id": 3,
"text": "\\dots"
},
{
"math_id": 4,
"text": "\\varphi_{n-1}\\Rightarrow\\varphi_n"
},
{
"math_id": 5,
"text": "\\varphi_{n}\\Rightarrow\\varphi_1"
},
{
"math_id": 6,
"text": "n=4"
},
{
"math_id": 7,
"text": "\\varphi_3\\Rightarrow\\varphi_4"
},
{
"math_id": 8,
"text": "\\varphi_4\\Rightarrow\\varphi_1"
},
{
"math_id": 9,
"text": "\\varphi_2"
},
{
"math_id": 10,
"text": "\\varphi_4"
},
{
"math_id": 11,
"text": "\\varphi_2\\Leftrightarrow \\varphi_4"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "\\varphi_i\\Rightarrow\\varphi_j"
},
{
"math_id": 14,
"text": "n(n-1)"
}
]
| https://en.wikipedia.org/wiki?curid=77621259 |
7762273 | Cheeger constant (graph theory) | Measure of whether or not a graph has a "bottleneck"
In mathematics, the Cheeger constant (also Cheeger number or isoperimetric number) of a graph is a numerical measure of whether or not a graph has a "bottleneck". The Cheeger constant as a measure of "bottleneckedness" is of great interest in many areas: for example, constructing well-connected networks of computers, card shuffling. The graph theoretical notion originated after the Cheeger isoperimetric constant of a compact Riemannian manifold.
The Cheeger constant is named after the mathematician Jeff Cheeger.
Definition.
Let G be an undirected finite graph with vertex set "V"("G") and edge set "E"("G"). For a collection of vertices "A" ⊆ "V"("G"), let ∂"A" denote the collection of all edges going from a vertex in A to a vertex outside of A (sometimes called the "edge boundary" of A):
formula_0
Note that the edges are unordered, i.e., formula_1. The Cheeger constant of G, denoted "h"("G"), is defined by
formula_2
The Cheeger constant is strictly positive if and only if G is a connected graph. Intuitively, if the Cheeger constant is small but positive, then there exists a "bottleneck", in the sense that there are two "large" sets of vertices with "few" links (edges) between them. The Cheeger constant is "large" if any possible division of the vertex set into two subsets has "many" links between those two subsets.
Example: computer networking.
In applications to theoretical computer science, one wishes to devise network configurations for which the Cheeger constant is high (at least, bounded away from zero) even when |"V"("G")| (the number of computers in the network) is large.
For example, consider a ring network of "N" ≥ 3 computers, thought of as a graph GN. Number the computers 1, 2, ..., "N" clockwise around the ring. Mathematically, the vertex set and the edge set are given by:
formula_3
Take A to be a collection of formula_4 of these computers in a connected chain:
formula_5
So,
formula_6
and
formula_7
This example provides an upper bound for the Cheeger constant "h"("GN"), which also tends to zero as "N" → ∞. Consequently, we would regard a ring network as highly "bottlenecked" for large N, and this is highly undesirable in practical terms. We would only need one of the computers on the ring to fail, and network performance would be greatly reduced. If two non-adjacent computers were to fail, the network would split into two disconnected components.
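For small graphs, the definition can be evaluated directly by exhaustive search over vertex subsets. The following Python sketch does this by brute force (exponential in the number of vertices, so only practical for small examples); for a cycle the chain of consecutive vertices used above is optimal, so it reproduces "h"("G"8) = 1/2 for the 8-computer ring:

```python
from itertools import combinations

def cheeger_constant(vertices, edges):
    """Brute-force h(G): minimise |dA| / |A| over all A with 0 < |A| <= |V|/2."""
    V = list(vertices)
    E = [frozenset(e) for e in edges]
    best = float("inf")
    for k in range(1, len(V) // 2 + 1):
        for A in combinations(V, k):
            A = set(A)
            boundary = sum(1 for e in E if len(e & A) == 1)  # edges leaving A
            best = min(best, boundary / len(A))
    return best

# Ring network with N = 8 computers: h(G_8) = 2 / 4 = 0.5
N = 8
ring_edges = [(i, i % N + 1) for i in range(1, N + 1)]
print(cheeger_constant(range(1, N + 1), ring_edges))
```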
Cheeger Inequalities.
The Cheeger constant is especially important in the context of expander graphs as it is a way to measure the edge expansion of a graph. The so-called Cheeger inequalities relate the eigenvalue gap of a graph with its Cheeger constant. More explicitly
formula_8
in which formula_9 is the maximum degree for the nodes in formula_10 and formula_11 is the spectral gap of the Laplacian matrix of the graph. The Cheeger inequality is a fundamental result and motivation for spectral graph theory.
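The inequalities can also be checked numerically; the NumPy sketch below (again only an illustration) computes the spectral gap of the Laplacian of the 8-vertex ring and verifies that it lies between the two Cheeger bounds, using the value "h" = 1/2 found above:

```python
import numpy as np

def ring_laplacian(N):
    """Graph Laplacian L = D - A of the N-cycle."""
    A = np.zeros((N, N))
    for i in range(N):
        A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
    return np.diag(A.sum(axis=1)) - A

N = 8
L = ring_laplacian(N)
lam = np.sort(np.linalg.eigvalsh(L))[1]      # spectral gap: second-smallest eigenvalue
h = 2 / (N // 2)                             # Cheeger constant of the N-cycle (see above)
delta = 2                                    # maximum degree of a cycle
assert 2 * h >= lam >= h**2 / (2 * delta)    # the Cheeger inequalities hold
```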
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\partial A := \\{ \\{x, y\\} \\in E(G) \\ : \\ x \\in A, y \\in V(G) \\setminus A \\}."
},
{
"math_id": 1,
"text": "\\{x, y\\} = \\{y, x\\}"
},
{
"math_id": 2,
"text": "h(G) := \\min \\left\\{\\frac{| \\partial A |}{| A |} \\ : \\ A \\subseteq V(G), 0 < | A | \\leq \\tfrac{1}{2} | V(G)| \\right\\}."
},
{
"math_id": 3,
"text": "\\begin{align}\nV(G_{N}) &= \\{ 1, 2, \\cdots, N-1, N \\} \\\\\nE(G_{N}) &= \\big\\{ \\{1, 2\\}, \\{2, 3\\}, \\cdots, \\{N - 1, N\\}, \\{N, 1\\} \\big\\}\n\\end{align}"
},
{
"math_id": 4,
"text": "\\left \\lfloor \\tfrac{N}{2} \\right \\rfloor"
},
{
"math_id": 5,
"text": "A = \\left \\{ 1, 2, \\cdots, \\left \\lfloor \\tfrac{N}{2} \\right \\rfloor \\right \\}."
},
{
"math_id": 6,
"text": "\\partial A = \\left \\{ \\left \\{ \\left \\lfloor \\tfrac{N}{2} \\right \\rfloor, \\left \\lfloor \\tfrac{N}{2} \\right \\rfloor + 1 \\right \\}, \\{N, 1\\} \\right \\},"
},
{
"math_id": 7,
"text": "\\frac{| \\partial A |}{| A |} = \\frac{2}{\\left \\lfloor \\tfrac{N}{2} \\right \\rfloor} \\to 0 \\mbox{ as } N \\to \\infty."
},
{
"math_id": 8,
"text": " 2h(G) \\geq \\lambda \\geq \\frac{h^2(G)}{2 \\Delta(G)} "
},
{
"math_id": 9,
"text": "\\Delta(G)"
},
{
"math_id": 10,
"text": "G"
},
{
"math_id": 11,
"text": "\\lambda"
}
]
| https://en.wikipedia.org/wiki?curid=7762273 |
7762521 | Fiber bundle construction theorem | Constructs a fiber bundle from a base space, fiber and a set of transition functions
In mathematics, the fiber bundle construction theorem is a theorem which constructs a fiber bundle from a given base space, fiber and a suitable set of transition functions. The theorem also gives conditions under which two such bundles are isomorphic. The theorem is important in the associated bundle construction where one starts with a given bundle and surgically replaces the fiber with a new space while keeping all other data the same.
Formal statement.
Let "X" and "F" be topological spaces and let "G" be a topological group with a continuous left action on "F". Given an open cover {"U""i"} of "X" and a set of continuous functions
formula_0
defined on each nonempty overlap, such that the "cocycle condition"
formula_1
holds, there exists a fiber bundle "E" → "X" with fiber "F" and structure group "G" that is trivializable over {"U""i"} with transition functions "t""ij".
Let "E"′ be another fiber bundle with the same base space, fiber, structure group, and trivializing neighborhoods, but transition functions "t"′"ij". If the action of "G" on "F" is faithful, then "E"′ and "E" are isomorphic if and only if there exist functions
formula_2
such that
formula_3
Taking "t""i" to be constant functions to the identity in "G", we see that two fiber bundles with the same base, fiber, structure group, trivializing neighborhoods, and transition functions are isomorphic.
A similar theorem holds in the smooth category, where "X" and "F" are smooth manifolds, "G" is a Lie group with a smooth left action on "F", and the maps "t""ij" are all smooth.
Construction.
The proof of the theorem is constructive. That is, it actually constructs a fiber bundle with the given properties. One starts by taking the disjoint union of the product spaces "U""i" × "F"
formula_4
and then forms the quotient by the equivalence relation
formula_5
The total space "E" of the bundle is "T"/~ and the projection π : "E" → "X" is the map which sends the equivalence class of ("i", "x", "y") to "x". The local trivializations
formula_6
are then defined by
formula_7
Associated bundle.
Let "E" → "X" a fiber bundle with fiber "F" and structure group "G", and let "F"′ be another left "G"-space. One can form an associated bundle "E"′ → "X" with a fiber "F"′ and structure group "G" by taking any local trivialization of "E" and replacing "F" by "F"′ in the construction theorem. If one takes "F"′ to be "G" with the action of left multiplication then one obtains the associated principal bundle. | [
{
"math_id": 0,
"text": "t_{ij} : U_i \\cap U_j \\to G"
},
{
"math_id": 1,
"text": "t_{ik}(x) = t_{ij}(x)t_{jk}(x) \\qquad \\forall x \\in U_i \\cap U_j \\cap U_k"
},
{
"math_id": 2,
"text": "t_i : U_i \\to G"
},
{
"math_id": 3,
"text": "t'_{ij}(x) = t_i(x)^{-1}t_{ij}(x)t_j(x) \\qquad \\forall x \\in U_i \\cap U_j."
},
{
"math_id": 4,
"text": "T = \\coprod_{i\\in I}U_i \\times F = \\{(i,x,y) : i\\in I, x\\in U_i, y\\in F\\}"
},
{
"math_id": 5,
"text": "(j,x,y) \\sim (i,x,t_{ij}(x)\\cdot y)\\qquad \\forall x\\in U_i \\cap U_j, y\\in F."
},
{
"math_id": 6,
"text": "\\phi_i : \\pi^{-1}(U_i) \\to U_i \\times F"
},
{
"math_id": 7,
"text": "\\phi_i^{-1}(x,y) = [(i,x,y)]."
}
]
| https://en.wikipedia.org/wiki?curid=7762521 |
77627375 | Berzins-Delahay equation | Electrochemical equation
In electrochemistry, the Berzins-Delahay equation is analogous to the Randles–Sevcik equation, except that it predicts the peak height (formula_0) of a linear potential scan when the reaction is electrochemically reversible, the reactants are soluble, and the products are deposited on the electrode with a thermodynamic activity of one.
formula_1
where formula_2 is the electrode area, formula_3 is the bulk concentration of the reactant, formula_4 is the number of electrons transferred, formula_5 is the Faraday constant, formula_6 is the diffusion coefficient of the reactant, formula_7 is the potential scan rate, formula_8 is the gas constant, and formula_9 is the absolute temperature.
Although this equation is derived under very simplistic assumptions that neglect the complex phenomenon of nucleation, the Berzins-Delahay equation often makes good predictions of formula_0. This is likely because nucleation processes have already been resolved by the time the current peak is reached, so that the fundamental assumptions of the derivation match the physical phenomena well. Corrections for these errant assumptions are available.
Derivation.
This equation is derived using the following governing equations and initial/boundary conditions:
formula_10
formula_11
formula_12
formula_13
where formula_14 is time, formula_15 is the distance from the electrode surface, formula_16 is the applied electrode potential, formula_17 is the initial potential, formula_18 is the formal potential of the redox couple, and formula_19 is a reference (standard-state) concentration that makes the argument of the logarithm dimensionless.
Uses.
The Berzins-Delahay equation is primarily used to measure the concentration or the diffusion coefficient of an analyte that participates in a reversible, deposition electrochemical reaction. To validate the application of this equation, one typically checks for a linear relationship between formula_0 and formula_20 and for peak potentials (formula_21) that are independent of formula_7. The characteristic shape of a deposition voltammogram, with a sharp reduction peak (negative current) followed by a decaying tail and a large oxidation peak that quickly decays to zero current, is also required to verify that the reaction has soluble reactants and deposited products.
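As a simple illustration of how the equation is applied, the Python sketch below evaluates formula_0 for a set of assumed, purely illustrative parameter values (electrode area, concentration, diffusion coefficient and scan rate); it is not tied to any particular experiment:

```python
import math

F = 96485.332      # Faraday constant, C/mol
R = 8.314462       # gas constant, J/(mol K)

def berzins_delahay_peak_current(A, C, n, D, v, T=298.15):
    """Peak current i_p = 0.6105 * A * C * sqrt((n F)^3 * D * v / (R T)).

    Consistent electrochemical units are assumed, e.g. A in cm^2, C in mol/cm^3,
    D in cm^2/s and v in V/s, giving i_p in amperes.
    """
    return 0.6105 * A * C * math.sqrt((n * F) ** 3 * D * v / (R * T))

# Illustrative (assumed) values: 0.1 cm^2 electrode, 1 mM analyte (1e-6 mol/cm^3),
# n = 1, D = 1e-5 cm^2/s, scan rate 0.1 V/s -> roughly 4e-5 A.
print(berzins_delahay_peak_current(A=0.1, C=1e-6, n=1, D=1e-5, v=0.1))
```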
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "i_p"
},
{
"math_id": 1,
"text": "i_p = 0.6105AC\\sqrt{\\frac{(nF)^3Dv}{RT}}"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "F"
},
{
"math_id": 6,
"text": "D"
},
{
"math_id": 7,
"text": "v"
},
{
"math_id": 8,
"text": "R"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "\\frac{\\partial C}{\\partial t} = D \\frac{\\partial^2 C}{\\partial x^2}\n"
},
{
"math_id": 11,
"text": "C(x,0)=C^*"
},
{
"math_id": 12,
"text": "\\lim_{x\\rightarrow\\infty}C(x,t)=C^*"
},
{
"math_id": 13,
"text": "E=E_i +vt = E^{0'} + \\frac{RT}{nF}\\ln\\left(\\frac{C(0,t)}{C^0}\\right)"
},
{
"math_id": 14,
"text": "t"
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "E"
},
{
"math_id": 17,
"text": "E_i"
},
{
"math_id": 18,
"text": "E^{0'}"
},
{
"math_id": 19,
"text": "C^0"
},
{
"math_id": 20,
"text": "\\sqrt{v}"
},
{
"math_id": 21,
"text": "E_p"
}
]
| https://en.wikipedia.org/wiki?curid=77627375 |
7763362 | 5-manifold | Manifold of dimension five
In mathematics, a 5-manifold is a 5-dimensional topological manifold, possibly with a piecewise linear or smooth structure.
Non-simply connected 5-manifolds are impossible to classify, as this is harder than solving the word problem for groups. Simply connected compact 5-manifolds were first classified by Stephen Smale and then in full generality by Dennis Barden, while another proof was later given by Aleksey V. Zhubr. This turns out to be easier than the 3- or 4-dimensional case: the 3-dimensional case is the Thurston geometrisation conjecture, and the 4-dimensional case was solved by Michael Freedman (1982) in the topological case, but is a very hard unsolved problem in the smooth case.
In dimension 5, the smooth classification of simply connected manifolds is governed by classical algebraic topology. Namely, two simply connected, smooth 5-manifolds are diffeomorphic if and only if there exists an isomorphism of their second homology groups with integer coefficients, preserving the linking form and the second Stiefel–Whitney class. Moreover, any such isomorphism in second homology is induced by some diffeomorphism. It is undecidable if a given 5-manifold is homeomorphic to formula_0, the 5-sphere.
Examples.
Here are some examples of smooth, closed, simply connected 5-manifolds:
* formula_0, the 5-sphere.
* formula_1, the product of the 2-sphere and the 3-sphere.
* formula_2, the total space of the nontrivial formula_3-bundle over formula_4.
* The Wu manifold formula_5.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S^5"
},
{
"math_id": 1,
"text": "S^2\\times S^3"
},
{
"math_id": 2,
"text": "S^2\\widetilde{\\times} S^3"
},
{
"math_id": 3,
"text": "S^3"
},
{
"math_id": 4,
"text": "S^2"
},
{
"math_id": 5,
"text": "\\operatorname{SU}(3)/\\operatorname{SO}(3)"
}
]
| https://en.wikipedia.org/wiki?curid=7763362 |
77639699 | Georgian Mathematical Union | Mathematical society in Georgia
The Georgian Mathematical Union or GMU (Georgian: , სმკ) is a mathematical society founded in Georgia in 1923.
The GMU has its headquarters at the Andrea Razmadze Mathematical Institute in Tbilisi.
It is a founding member of the European Mathematical Society and is recognised by the International Mathematical Union.
The Georgian Mathematical Union describes its mission as "to promote mathematical sciences, especially among the young generation, and to establish and strengthen contacts with colleagues from abroad and with international professional organisations."
Since 2010 the GMU has held an annual conference in Batumi; the 13th conference was held as a hybrid event in September 2023.
History.
The predecessor of the Georgian Mathematical Union was the Georgian Mathematical Society, which was founded on 21 February 1923.
Among the founders were Georgian mathematicians Nikoloz Muskhelishvili, Giorgi Nikoladze, Archil Kharadze, and Andrea Razmadze, who served as the society's first president.
At the 1924 International Congress of Mathematicians in Toronto the society became a member of the International Mathematical Union.
Razmadze died in 1929, and the society went dormant until being reformed in 1962 by Viktor Kupradze, Boris Khvedelidze, Levan Gokieli and Archil Kharadze.
The reformed society still dates its formation to 1923.
The Georgian Mathematical Society was a founding member of the European Mathematical Society in 1990, and once more joined the IMU in 1991.
In 1994 its name was changed to the Georgian Mathematical Union.
Logo.
The logo of the Georgian Mathematical Union contains the outline of Georgia, with its national flag superimposed, and surrounded by four equations.
Presidents.
The past presidents of the Georgian Mathematical Union are:
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta u + k^2u=0"
}
]
| https://en.wikipedia.org/wiki?curid=77639699 |
7764195 | Rademacher's theorem | Mathematical theorem
In mathematical analysis, Rademacher's theorem, named after Hans Rademacher, states the following: If U is an open subset of R"n" and "f": "U" → R"m" is Lipschitz continuous, then f is differentiable almost everywhere in U; that is, the points in U at which f is "not" differentiable form a set of Lebesgue measure zero. Differentiability here refers to infinitesimal approximability by a linear map, which in particular asserts the existence of the coordinate-wise partial derivatives.
Sketch of proof.
The one-dimensional case of Rademacher's theorem is a standard result in introductory texts on measure-theoretic analysis. In this context, it is natural to prove the more general statement that any single-variable function of bounded variation is differentiable almost everywhere. (This one-dimensional generalization of Rademacher's theorem fails to extend to higher dimensions.)
One of the standard proofs of the general Rademacher theorem was found by Charles Morrey. In the following, let u denote a Lipschitz-continuous function on R"n". The first step of the proof is to show that, for any fixed unit vector v, the v-directional derivative of u exists almost everywhere. This is a consequence of a special case of the Fubini theorem: a measurable set in R"n" has Lebesgue measure zero if its restriction to every line parallel to v has (one-dimensional) Lebesgue measure zero. Considering in particular the set in R"n" where the v-directional derivative of u fails to exist (which must be proved to be measurable), the latter condition is met due to the one-dimensional case of Rademacher's theorem.
The second step of Morrey's proof establishes the linear dependence of the v-directional derivative of u upon "v". This is based upon the following identity:
formula_0
Using the Lipschitz assumption on u, the dominated convergence theorem can be applied to replace the two difference quotients in the above expression by the corresponding v-directional derivatives. Then, based upon the known linear dependence of the v-directional derivative of ζ upon v, the same can be proved of u via the fundamental lemma of calculus of variations.
At this point in the proof, the gradient (defined as the n-tuple of partial derivatives) is guaranteed to exist almost everywhere; for each v, the dot product with v equals the v-directional derivative almost everywhere (although perhaps on a smaller set). Hence, for any countable collection of unit vectors "v"1, "v"2, ..., there is a single set E of measure zero such that the gradient and each "v""i"-directional derivative exist everywhere on the complement of E, and are linked by the dot product. By selecting "v"1, "v"2, ... to be dense in the unit sphere, it is possible to use the Lipschitz condition to prove the existence of "every" directional derivative everywhere on the complement of E, together with its representation as the dot product of the gradient with the direction.
Morrey's proof can also be put into the context of generalized derivatives. Another proof, also via a reduction to the one-dimensional case, uses the technology of approximate limits.
Applications.
Rademacher's theorem can be used to prove that, for any "p" ≥ 1, the Sobolev space W1,"p"(Ω) is preserved under a bi-Lipschitz transformation of the domain, with the chain rule holding in its standard form. With appropriate modification, this also extends to the more general Sobolev spaces W"k","p"(Ω).
Rademacher's theorem is also significant in the study of geometric measure theory and rectifiable sets, as it allows the analysis of first-order differential geometry, specifically tangent planes and normal vectors. Higher-order concepts such as curvature remain more subtle, since their usual definitions require more differentiability than is achieved by the Rademacher theorem. In the presence of convexity, second-order differentiability is achieved by the Alexandrov theorem, the proof of which can be modeled on that of the Rademacher theorem. In some special cases, the Rademacher theorem is even used as part of the proof.
Generalizations.
Alberto Calderón proved the more general fact that if Ω is an open bounded set in R"n" then every function in the Sobolev space W1,"p"(Ω) is differentiable almost everywhere, provided that "p" > "n". Calderón's theorem is a relatively direct corollary of the Lebesgue differentiation theorem and Sobolev embedding theorem. Rademacher's theorem is a special case, due to the fact that any Lipschitz function on Ω is an element of the space W1,∞(Ω).
There is a version of Rademacher's theorem that holds for Lipschitz functions from a Euclidean space into an arbitrary metric space in terms of metric differentials instead of the usual derivative.
References.
<templatestyles src="Reflist/styles.css" />
Sources
<templatestyles src="Refbegin/styles.css" />
External links.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_{\\mathbf{R}^n}\\frac{u(x+h\\nu)-u(x)}{h}\\zeta(z)\\,d\\mathcal{L}^n(x)=-\\int_{\\mathbf{R}^n}\\frac{\\zeta(x)-\\zeta(x-h\\nu)}{h}u(x)\\,d\\mathcal{L}^n(x)."
}
]
| https://en.wikipedia.org/wiki?curid=7764195 |
7764228 | Spinning dust | In astronomy, spinning dust emission is a mechanism proposed to explain anomalous microwave emission from the Milky Way. The emission could arise from the electric dipole of very rapidly spinning (10–60 GHz) extremely small (nanometer) dust grains as suggested by Bruce T. Draine and Alex Lazarian in 1998, most likely polycyclic aromatic hydrocarbons. The anomalous emission was first discovered as a by-product of Cosmic Microwave Background observations which make very sensitive measurements of the microwave sky which have to identify and remove contamination from the galaxy. The smallest dust grains are thought to have only hundreds of atoms.
History.
Anomalous microwave emission was first seen as a surprising statistical correlation of microwave sky variations with far infrared (FIR) emission (Kogut et al. 1996, Leitch et al. 1997). This signal traced the warm galactic dust emission, which was unexpected because the infrared dust signal extrapolated to microwave frequencies should have been at least an order of magnitude lower than that seen. Kogut et al. had correlated COBE Differential Microwave Radiometer observations at centimeter wavelengths with DIRBE dust emission at 140 μm, while Leitch et al. had correlated Owens Valley Radio Observatory ring observations at 14.5 and 32 GHz with IRAS 100 μm emission. The suggestion at the time was that the correlation was due to free-free (bremsstrahlung) emission from ionized gas around young hot stars, which form in these dusty regions.
Physics.
Small dust grains are likely to have a permanent electric dipole moment μ due to the intrinsic dipole moment of molecules within the grain and uneven charge distribution. Interactions with the ambient gas and radiation field cause the grains to spin. Denoting by ω their (vector) angular velocity, and assuming the grains to be spherical for simplicity (so they are not wobbling), Larmor's formula gives the power radiated by such a rotating dipole:
formula_0
where μ⊥ is the component of μ perpendicular to ω. This power is radiated at the rotation frequency, ν = ω/(2 π). In order to compute the emissivity due to this radiation process, one requires the probability distribution function "f(ω)" for the rotation rate ω, as well as the number density of dust grains. Both functions depend on the grain size, and so does the dipole moment μ. The distribution function "f(ω)" depends on the ambient gas temperature, density and ionization fraction and on the local radiation field. Refined models (Hoang et al. 2010) account for the wobbling of non-spherical dust grains, in which case the angular velocity is not constant for a given angular momentum and power is radiated at multiple frequencies.
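For a rough sense of scale, the Larmor formula above can be evaluated directly; the following Python sketch uses assumed, illustrative values (a grain dipole moment of about 1 debye and a 30 GHz rotation frequency) in Gaussian/CGS units, and is not a fit to any observation:

```python
import math

c = 2.99792458e10          # speed of light, cm/s (Gaussian/CGS units)

def spinning_dipole_power(mu_perp, nu):
    """Larmor power P = (2/3) * omega^4 * mu_perp^2 / c^3, radiated at the
    rotation frequency nu = omega / (2 pi).  mu_perp in statC*cm (esu),
    nu in Hz, result in erg/s."""
    omega = 2.0 * math.pi * nu
    return (2.0 / 3.0) * omega**4 * mu_perp**2 / c**3

# Illustrative (assumed) numbers: a dipole moment of ~1 debye
# (1 D = 1e-18 statC*cm) spinning at 30 GHz.
print(spinning_dipole_power(1e-18, 30e9))   # ~3e-23 erg/s per grain for these values
```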
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nP(\\omega) = \\frac23 \\frac{\\ddot{\\boldsymbol{\\mu}}^2}{c^3} = \\frac23 \\frac{\\omega^4 \\mu_{\\bot}^2}{c^3}, \n"
}
]
| https://en.wikipedia.org/wiki?curid=7764228 |
7765005 | Fenske equation |
The Fenske equation in continuous fractional distillation is an equation used for calculating the minimum number of theoretical plates required for the separation of a binary feed stream by a fractionation column that is being operated at total reflux (i.e., no overhead product distillate is being withdrawn from the column).
The equation was derived in 1932 by Merrell Fenske, a professor who served as the head of the chemical engineering department at the Pennsylvania State University from 1959 to 1969.
When designing large-scale, continuous industrial distillation towers, it is very useful to first calculate the minimum number of theoretical plates required to obtain the desired overhead product composition.
Common versions of the Fenske equation.
This is one of the many different but equivalent versions of the Fenske equation valid only for binary mixtures:
formula_0
where:
* formula_1 is the minimum number of theoretical plates required at total reflux (of which the reboiler is one),
* formula_2 is the mole fraction of the more volatile component in the overhead distillate,
* formula_3 is the mole fraction of the more volatile component in the bottoms product, and
* formula_4 is the average relative volatility of the more volatile component to the less volatile component.
For a multi-component mixture the following formula holds.
For ease of expression, the more volatile and the less volatile components are commonly referred to as the light key (LK) and the heavy key (HK), respectively. Using that terminology, the above equation may be expressed as:
formula_5
or also:
formula_6
If the relative volatility of the light key to the heavy key is constant from the column top to the column bottom, then formula_7 is simply formula_8. If the relative volatility is not constant from top to bottom of the column, then the following approximation may be used:
formula_9
where:
* formula_10 is the relative volatility of the light key to the heavy key at the top of the column, and
* formula_11 is the relative volatility of the light key to the heavy key at the bottom of the column.
The above forms of the Fenske equation can be modified for use in the total reflux distillation of multi-component feeds. It is also helpful in solving liquid–liquid extraction problems, because an extraction system can also be represented as a series of equilibrium stages and relative solubility can be substituted for relative volatility.
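As an illustration, the minimum plate count is straightforward to evaluate; the Python sketch below uses invented numbers (95 mol% light key in the distillate, 5 mol% in the bottoms, and relative volatilities of 2.5 and 2.0 at the top and bottom of the column) rather than data for any real column:

```python
import math

def fenske_min_plates(x_d, x_b, alpha_avg):
    """Minimum number of theoretical plates at total reflux for a binary split."""
    return math.log((x_d / (1.0 - x_d)) * ((1.0 - x_b) / x_b)) / math.log(alpha_avg)

def geometric_mean_volatility(alpha_top, alpha_bottom):
    """Approximation for a relative volatility that varies over the column."""
    return math.sqrt(alpha_top * alpha_bottom)

# Illustrative numbers: 95 mol% light key overhead, 5 mol% in the bottoms,
# relative volatilities of 2.5 at the top and 2.0 at the bottom.
alpha = geometric_mean_volatility(2.5, 2.0)
print(fenske_min_plates(0.95, 0.05, alpha))   # about 7.3 theoretical plates
```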
Another form of the Fenske equation.
A derivation of another form of the Fenske equation for use in gas chromatography is available on the U.S. Naval Academy's web site. Using Raoult's law and Dalton's Law for a series of condensation and evaporation cycles (i.e., equilibrium stages), the following form of the Fenske equation is obtained:
formula_12
where:
* formula_13 is the mole fraction of component "n" in the vapor (distillate) after formula_1 condensation and evaporation cycles (equilibrium stages),
* formula_14 is the mole fraction of component "n" in the original liquid, and
* formula_15 is the vapor pressure of pure component "n".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ N = \\frac{\\log \\, \\bigg[ \\Big(\\frac{X_d}{1-X_d}\\Big)\\Big(\\frac{1-X_b}{X_b} \\Big) \\bigg]}{\\log \\, \\alpha_{avg}} "
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "X_d"
},
{
"math_id": 3,
"text": "X_b"
},
{
"math_id": 4,
"text": "\\alpha_{avg}"
},
{
"math_id": 5,
"text": "\\ N = \\frac{\\log \\, \\bigg[ \\Big(\\frac{LK_d}{HK_d}\\Big)\\Big(\\frac{HK_b}{LK_b} \\Big) \\bigg]}{\\log \\, \\alpha_{avg}} "
},
{
"math_id": 6,
"text": "\\ N = \\frac{\\log \\, \\bigg[ \\Big(\\frac{LK_d}{1-LK_d}\\Big)\\Big(\\frac{1-LK_b}{LK_b} \\Big) \\bigg]}{\\log \\, \\alpha_{avg}} "
},
{
"math_id": 7,
"text": "\\alpha_{avg.}"
},
{
"math_id": 8,
"text": "\\alpha"
},
{
"math_id": 9,
"text": "\\alpha_{avg.} = \\sqrt {(\\alpha_t)(\\alpha_b)}"
},
{
"math_id": 10,
"text": "\\alpha_t"
},
{
"math_id": 11,
"text": "\\alpha_b"
},
{
"math_id": 12,
"text": "\\ \\frac{Z_a}{Z_b} = \\frac{X_a}{X_b} \\left (\\frac{P^0_a}{P^0_b} \\right) ^N "
},
{
"math_id": 13,
"text": "Z_n"
},
{
"math_id": 14,
"text": "X_n"
},
{
"math_id": 15,
"text": "{P^0_n}"
}
]
| https://en.wikipedia.org/wiki?curid=7765005 |
77650360 | Neural differential equation | In machine learning, a neural differential equation is a differential equation whose right-hand side is parametrized by the weights "θ" of a neural network. In particular, a neural ordinary differential equation (neural ODE) is an ordinary differential equation of the form
formula_0
Connection with residual neural networks.
A neural ODE can be interpreted as a residual neural network with a continuum of layers rather than a discrete number of layers. Applying the Euler method with a unit time step to a neural ODE yields the forward propagation equation of a residual neural network:
formula_1
with ℓ being the ℓ-th layer of this residual neural network. While the forward propagation of a residual neural network is done by applying a sequence of transformations starting at the input layer, the forward propagation computation of a neural ODE is done by solving a differential equation. More precisely, the output formula_2 associated to the input formula_3 of the neural ODE is obtained by solving the initial value problem
formula_4
and assigning the value formula_5 to formula_2.
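The forward pass described above can be sketched in a few lines of plain NumPy. The example below is a minimal illustration rather than a library implementation: the small randomly initialized MLP stands in for "f""θ", and in practice the weights would be trained and a higher-order ODE solver would normally replace the explicit Euler loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small MLP playing the role of f_theta(h, t); the architecture is illustrative.
W1, b1 = rng.normal(size=(16, 3)) * 0.1, np.zeros(16)   # input: (h1, h2, t)
W2, b2 = rng.normal(size=(2, 16)) * 0.1, np.zeros(2)

def f_theta(h, t):
    z = np.tanh(W1 @ np.append(h, t) + b1)
    return W2 @ z + b2

def neural_ode_forward(h_in, t_end=1.0, steps=100):
    """Solve dh/dt = f_theta(h, t), h(0) = h_in, with the explicit Euler method.

    With a unit time step this reduces to the residual-network update
    h_{l+1} = h_l + f_theta(h_l, l)."""
    h, dt = np.array(h_in, dtype=float), t_end / steps
    for k in range(steps):
        h = h + dt * f_theta(h, k * dt)
    return h   # h(T), the output of the network

print(neural_ode_forward([1.0, -0.5]))
```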
Universal differential equations.
In physics-informed contexts where additional information is known, neural ODEs can be combined with an existing first-principles model to build a physics-informed neural network model called a universal differential equation (UDE). For instance, a UDE version of the Lotka-Volterra model can be written as
formula_6
where the terms formula_7 and formula_8 are correction terms parametrized by neural networks.
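A minimal sketch of such a universal differential equation is shown below (illustrative only: the Lotka-Volterra parameters, the tiny untrained networks standing in for formula_7 and formula_8, and the fixed-step Euler integrator are assumptions; fitting the network parameters is not shown):
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, gamma, delta = 1.5, 1.0, 3.0, 1.0               # illustrative Lotka-Volterra parameters

# Tiny untrained networks standing in for the correction terms
Wf, bf, vf = 0.1 * rng.normal(size=(5, 2)), np.zeros(5), 0.1 * rng.normal(size=5)
Wg, bg, vg = 0.1 * rng.normal(size=(5, 2)), np.zeros(5), 0.1 * rng.normal(size=5)

def f_theta(x, y):
    return vf @ np.tanh(Wf @ np.array([x, y]) + bf)

def g_theta(x, y):
    return vg @ np.tanh(Wg @ np.array([x, y]) + bg)

def ude_rhs(x, y):
    dx = alpha * x - beta * x * y + f_theta(x, y)            # known first-principles terms plus learned correction
    dy = -gamma * y + delta * x * y + g_theta(x, y)
    return dx, dy

# Forward simulation with a simple Euler scheme (training of the network parameters is not shown)
x, y, dt = 1.0, 1.0, 0.01
for _ in range(1000):
    dx, dy = ude_rhs(x, y)
    x, y = x + dt * dx, y + dt * dy
print(x, y)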
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\mathrm{d} \\mathbf{h}(t)}{\\mathrm{d} t}=f_\\theta(\\mathbf{h}(t), t) "
},
{
"math_id": 1,
"text": "\\mathbf{h}_{\\ell+1} = f_{\\theta}(\\mathbf{h}_{\\ell}, \\ell) + \\mathbf{h}_{\\ell},"
},
{
"math_id": 2,
"text": "\\mathbf{h}_{\\text{out}} "
},
{
"math_id": 3,
"text": "\\mathbf{h}_{\\text{in}} "
},
{
"math_id": 4,
"text": "\\frac{\\mathrm{d} \\mathbf{h}(t)}{\\mathrm{d} t}=f_\\theta(\\mathbf{h}(t), t), \\quad \\mathbf{h}(0)=\\mathbf{h}_{\\text{in}}, "
},
{
"math_id": 5,
"text": "\\mathbf{h}(T) "
},
{
"math_id": 6,
"text": "\\begin{align}\n \\frac{dx}{dt} &= \\alpha x - \\beta x y + f_{\\theta}(x(t),y(t)), \\\\\n \\frac{dy}{dt} &= - \\gamma y + \\delta x y + g_{\\theta}(x(t),y(t)),\n\\end{align}"
},
{
"math_id": 7,
"text": "f_{\\theta} "
},
{
"math_id": 8,
"text": "g_{\\theta} "
}
]
| https://en.wikipedia.org/wiki?curid=77650360 |
7765817 | Borsuk's conjecture | Can every bounded subset of Rn be partitioned into (n+1) smaller diameter sets?
The Borsuk problem in geometry, for historical reasons incorrectly called Borsuk's conjecture, is a question in discrete geometry. It is named after Karol Borsuk.
Problem.
In 1932, Karol Borsuk showed that an ordinary 3-dimensional ball in Euclidean space can be easily dissected into 4 solids, each of which has a smaller diameter than the ball, and generally that an n-dimensional ball can be covered with "n" + 1 compact sets of diameters smaller than the ball. At the same time he proved that n subsets are not enough in general. The proof is based on the Borsuk–Ulam theorem. That led Borsuk to a general question:
<templatestyles src="Template:Blockquote/styles.css" />
The following question remains open: Can every bounded subset E of the space formula_0 be partitioned into ("n" + 1) sets, each of which has a smaller diameter than E?
The question was answered in the positive in the following cases:
The problem was finally solved in 1993 by Jeff Kahn and Gil Kalai, who showed that the general answer to Borsuk's question is no. They claim that their construction shows that "n" + 1 pieces do not suffice for "n" = 1325 and for each "n" > 2014. However, as pointed out by Bernulf Weißbach, the first part of this claim is in fact false. But after improving a suboptimal conclusion within the corresponding derivation, one can indeed verify one of the constructed point sets as a counterexample for "n" = 1325 (as well as all higher dimensions up to 1560).
Their result was improved in 2003 by Hinrichs and Richter, who constructed finite sets for "n" ≥ 298, which cannot be partitioned into "n" + 11 parts of smaller diameter.
In 2013, Andriy V. Bondarenko showed that Borsuk's conjecture is false for all "n" ≥ 65. Shortly afterwards, Thomas Jenrich derived a 64-dimensional counterexample from Bondarenko's construction, giving the best bound up to now.
Apart from finding the minimum number n of dimensions such that the number of pieces "α"("n") > "n" + 1, mathematicians are interested in finding the general behavior of the function "α"("n"). Kahn and Kalai show that in general (that is, for n sufficiently large), one needs formula_1 many pieces. They also quote the upper bound by Oded Schramm, who showed that for every ε, if n is sufficiently large, formula_2. The correct order of magnitude of "α"("n") is still unknown. However, it is conjectured that there is a constant "c" > 1 such that "α"("n") > "cn" for all "n" ≥ 1.
Note.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^n"
},
{
"math_id": 1,
"text": "\\alpha(n) \\ge (1.2)^\\sqrt{n}"
},
{
"math_id": 2,
"text": "\\alpha(n) \\le \\left(\\sqrt{3/2} + \\varepsilon\\right)^n"
}
]
| https://en.wikipedia.org/wiki?curid=7765817 |
776619 | Ocean gyre | Any large system of circulating ocean surface currents
World map of the five major ocean gyres: the North Atlantic, South Atlantic, Indian Ocean, North Pacific, and South Pacific gyres.
In oceanography, a gyre () is any large system of circulating ocean surface currents, particularly those involved with large wind movements. Gyres are caused by the Coriolis effect; planetary vorticity, horizontal friction and vertical friction determine the circulatory patterns from the "wind stress curl" (torque).
"Gyre" can refer to any type of vortex in an atmosphere or a sea, even one that is human-created, but it is most commonly used in terrestrial oceanography to refer to the major ocean systems.
Gyre formation.
The largest ocean gyres are wind-driven, meaning that their locations and dynamics are controlled by the prevailing global wind patterns: easterlies at the tropics and westerlies at the midlatitudes. These wind patterns result in a wind stress curl that drives Ekman pumping in the subtropics (resulting in downwelling) and Ekman suction in subpolar regions (resulting in upwelling). Ekman pumping results in an increased sea surface height at the center of the gyre and anticyclonic geostrophic currents in subtropical gyres. Ekman suction results in a depressed sea surface height and cyclonic geostrophic currents in subpolar gyres.
Wind-driven ocean gyres are asymmetrical, with stronger flows on their western boundary and weaker flows throughout their interior. The weak interior flow that is typical over most of the gyre is a result of the conservation of potential vorticity. In the shallow water equations (applicable for basin-scale flow as the horizontal length scale is much greater than the vertical length scale), potential vorticity is a function of relative (local) vorticity formula_0 (zeta), planetary vorticity formula_1, and the depth formula_2, and is conserved with respect to the material derivative:
formula_3
In the case of the subtropical ocean gyre, Ekman pumping results in water piling up in the center of the gyre, compressing water parcels. This results in a decrease in formula_2, so by the conservation of potential vorticity the numerator formula_4 must also decrease. It can be further simplified by realizing that, in basin-scale ocean gyres, the relative vorticity formula_0 is small, meaning that local changes in vorticity cannot account for the decrease in formula_2. Thus, the water parcel must change its planetary vorticity formula_1 accordingly. The only way to decrease the planetary vorticity is by moving the water parcel equatorward, so throughout the majority of subtropical gyres there is a weak equatorward flow. Harald Sverdrup quantified this phenomenon in his 1947 paper, "Wind Driven Currents in a Baroclinic Ocean", in which the (depth-integrated) Sverdrup balance is defined as:
formula_5
Here, formula_6 is the meridional mass transport (positive north), formula_7 is the Rossby parameter, formula_8 is the water density, and formula_9 is the vertical Ekman velocity due to wind stress curl (positive up). It can be clearly seen in this equation that for a negative Ekman velocity (e.g., Ekman pumping in subtropical gyres), meridional mass transport (Sverdrup transport) is negative (south, equatorward) in the northern hemisphere (formula_10). Conversely, for a positive Ekman velocity (e.g., Ekman suction in subpolar gyres), Sverdrup transport is positive (north, poleward) in the northern hemisphere.
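The equatorward tendency implied by the potential vorticity argument can be made concrete with a back-of-the-envelope calculation (the starting latitude, layer depth, and depth change below are arbitrary illustrative values):
import math

Omega = 7.292e-5                # Earth's rotation rate (rad/s)
lat0 = 30.0                     # starting latitude in degrees north (illustrative)
H0, H1 = 1000.0, 990.0          # layer depth before and after the column is squashed by Ekman pumping (m)

f0 = 2 * Omega * math.sin(math.radians(lat0))   # planetary vorticity at the starting latitude

# With relative vorticity ~ 0, conservation of (zeta + f)/H means f/H stays constant,
# so the new planetary vorticity must shrink in proportion to the depth: f1 = f0 * H1/H0.
f1 = f0 * H1 / H0
lat1 = math.degrees(math.asin(f1 / (2 * Omega)))

print(f"f decreases from {f0:.3e} to {f1:.3e} per second")
print(f"so the parcel must drift from {lat0:.2f} N to about {lat1:.2f} N, i.e. equatorward")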
Western intensification.
As the Sverdrup balance argues, subtropical ocean gyres have a weak equatorward flow and subpolar ocean gyres have a weak poleward flow over most of their area. However, there must be some return flow that goes against the Sverdrup transport in order to preserve mass balance. In this respect, the Sverdrup solution is incomplete, as it has no mechanism in which to predict this return flow. Contributions by both Henry Stommel and Walter Munk resolved this issue by showing that the return flow of gyres is done through an intensified western boundary current. Stommel's solution relies on a frictional bottom boundary layer which is not necessarily physical in a stratified ocean (currents do not always extend to the bottom).
Munk's solution instead relies on friction between the return flow and the sidewall of the basin. This allows for two cases: one with the return flow on the western boundary (western boundary current) and one with the return flow on the eastern boundary (eastern boundary current). A qualitative argument for the presence of western boundary current solutions over eastern boundary current solutions can be found again through the conservation of potential vorticity. Considering again the case of a subtropical northern hemisphere gyre, the return flow must be northward. In order to move northward (an increase in planetary vorticity formula_1), there must be a source of positive relative vorticity to the system. The relative vorticity in the shallow-water system is:
formula_11
Here formula_12 is again the meridional velocity and formula_13 is the zonal velocity. In the sense of a northward return flow, the zonal component is neglected and only the meridional velocity is important for relative vorticity. Thus, this solution requires that formula_14 in order to increase the relative vorticity and have a valid northward return flow in the northern hemisphere subtropical gyre.
Due to friction at the boundary, the velocity of flow must go to zero at the sidewall before reaching some maximum northward velocity within the boundary layer and decaying to the southward Sverdrup transport solution far away from the boundary. Thus, the condition that formula_14 can only be satisfied through a western boundary frictional layer, as the eastern boundary frictional layer forces formula_15. One can make similar arguments for subtropical gyres in the southern hemisphere and for subpolar gyres in either hemisphere and see that the result remains the same: the return flow of an ocean gyre is always in the form of a western boundary current.
The western boundary current must transport on the same order of water as the interior Sverdrup transport in a much smaller area. This means western boundary currents are much stronger than interior currents, a phenomenon called "western intensification".
Gyre distribution.
Subtropical gyres.
There are five major subtropical gyres across the world's oceans: the North Atlantic Gyre, the South Atlantic Gyre, the Indian Ocean Gyre, the North Pacific Gyre, and the South Pacific Gyre. All subtropical gyres are anticyclonic, meaning that in the northern hemisphere they rotate clockwise, while the gyres in the southern hemisphere rotate counterclockwise. This is due to the Coriolis force. Subtropical gyres typically consist of four currents: a westward flowing equatorial current, a poleward flowing, narrow, and strong western boundary current, an eastward flowing current in the midlatitudes, and an equatorward flowing, weaker, and broader eastern boundary current.
North Atlantic Gyre.
The North Atlantic Gyre is located in the northern hemisphere in the Atlantic Ocean, between the Intertropical Convergence Zone (ITCZ) in the south and Iceland in the north. The North Equatorial Current brings warm waters west towards the Caribbean and defines the southern edge of the North Atlantic Gyre. Once these waters reach the Caribbean they join the warm waters in the Gulf of Mexico and form the Gulf Stream, a western boundary current. This current then heads north and east towards Europe, forming the North Atlantic Current. The Canary Current flows south along the western coast of Europe and north Africa, completing the gyre circulation. The center of the gyre is the Sargasso Sea, which is characterized by the dense accumulation of "Sargassum" seaweed.
South Atlantic Gyre.
The South Atlantic Gyre is located in the southern hemisphere in the Atlantic Ocean, between the Intertropical Convergence Zone in the north and the Antarctic Circumpolar Current to the south. The South Equatorial Current brings water west towards South America, forming the northern boundary of the South Atlantic gyre. Here, the water moves south in the Brazil Current, the western boundary current of the South Atlantic Gyre. The Antarctic Circumpolar Current forms both the southern boundary of the gyre and the eastward component of the gyre circulation. Eventually, the water reaches the west coast of Africa, where it is brought north along the coast as a part of the eastern boundary Benguela Current, completing the gyre circulation. The Benguela Current experiences the Benguela Niño event, an Atlantic Ocean analogue to the Pacific Ocean's El Niño, and is correlated with a reduction in primary productivity in the Benguela upwelling zone.
Indian Ocean Gyre.
The Indian Ocean Gyre, located in the Indian Ocean, is, like the South Atlantic Gyre, bordered by the Intertropical Convergence Zone in the north and the Antarctic Circumpolar Current to the south. The South Equatorial Current forms the northern boundary of the Indian Ocean Gyre as it flows west along the equator towards the east coast of Africa. At the coast of Africa, the South Equatorial Current is split by Madagascar into the Mozambique Current, flowing south through the Mozambique Channel, and the East Madagascar Current, flowing south along the east coast of Madagascar, both of which are western boundary currents. South of Madagascar the two currents join to form the Agulhas Current. The Agulhas Current flows south until it joins the Antarctic Circumpolar Current, which flows east at the southern edge of the Indian Ocean Gyre. Due to the African continent not extending as far south as the Indian Ocean Gyre, some of the water in the Agulhas Current "leaks" into the Atlantic Ocean, with potentially important effects for global thermohaline circulation. The gyre circulation is completed by the north flowing West Australian Current, which forms the eastern boundary of the gyre.
North Pacific Gyre.
The North Pacific Gyre, one of the largest ecosystems on Earth, is bordered to the south by the Intertropical Convergence Zone and extending north to roughly 50°N. At the southern boundary of the North Pacific Gyre, the North Equatorial Current flows west along the equator towards southeast Asia. The Kuroshio Current is the western boundary current of the North Pacific Gyre, flowing northeast along the coast of Japan. At roughly 50°N, the flow turns east and becomes the North Pacific Current. The North Pacific Current flows east, eventually bifurcating near the west coast of North America into the northward flowing Alaska Current and the southward flowing California Current. The Alaska Current is the eastern boundary current of the subpolar Alaska Gyre, while the California Current is the eastern boundary current that completes the North Pacific Gyre circulation. Within the North Pacific Gyre is the Great Pacific garbage patch, an area of increased plastic waste concentration.
South Pacific Gyre.
The South Pacific Gyre, like its northern counterpart, is one of the largest ecosystems on Earth with an area that accounts for around 10% of the global ocean surface area. Within this massive area is Point Nemo, the location on Earth that is farthest away from all continental landmass (2,688 km away from the closest land). The remoteness of this gyre complicates sampling, and as a result the gyre has historically been undersampled in oceanographic datasets. At the northern boundary of the South Pacific Gyre, the South Equatorial Current flows west towards southeast Asia and Australia. There, it turns south as it flows in the East Australian Current, a western boundary current. The Antarctic Circumpolar Current again returns the water to the east. The flow turns north along the western coast of South America in the Humboldt Current, the eastern boundary current that completes the South Pacific Gyre circulation. Like the North Pacific Gyre, the South Pacific Gyre has an elevated concentration of plastic waste near the center, termed the South Pacific garbage patch. Unlike the North Pacific garbage patch, which was first described in 1988, the South Pacific garbage patch was discovered much more recently in 2016 (a testament to the extreme remoteness of the South Pacific Gyre).
Subpolar gyres.
Subpolar gyres form at high latitudes (around 60°). Circulation of surface wind and ocean water is cyclonic, counterclockwise in the northern hemisphere and clockwise in the southern hemisphere, around a low-pressure area, such as the persistent Aleutian Low and the Icelandic Low. The wind stress curl in this region drives the Ekman suction, which creates an upwelling of nutrient-rich water from the lower depths.
Subpolar circulation in the southern hemisphere is dominated by the Antarctic Circumpolar Current, due to the lack of large landmasses breaking up the Southern Ocean. There are minor gyres in the Weddell Sea and the Ross Sea, the Weddell Gyre and Ross Gyre, which circulate in a clockwise direction.
North Atlantic Subpolar Gyre.
The North Atlantic Subpolar Gyre, located in the North Atlantic Ocean, is characterized by a counterclockwise rotation of surface waters. It plays a crucial role in the global oceanic conveyor belt system, influencing climate and marine ecosystems. The gyre is driven by the convergence of warm, salty waters from the south and cold, fresher waters from the north. As these waters meet, the warm, dense water sinks beneath the lighter, colder water, initiating a complex circulation pattern. The North Atlantic Subpolar Gyre has significant implications for climate regulation, as it helps redistribute heat and nutrients throughout the North Atlantic, influencing weather patterns and supporting diverse marine life. Additionally, changes in the gyre's strength and circulation can impact regional climate variability and may be influenced by broader climate change trends.
The Atlantic Meridional Overturning Circulation (AMOC) is a key component of the global climate system through its transport of heat and freshwater. The North Atlantic Subpolar Gyre is in a region where the AMOC is actively developed and shaped through mixing and water mass transformation. It is a region where large amounts of heat transported northward by the ocean are released into the atmosphere, thereby modifying the climate of northwest Europe. The North Atlantic Subpolar Gyre has a complex topography with a series of basins in which the large-scale circulation is characterized by cyclonic boundary currents and interior recirculation. The North Atlantic Current develops out of the Gulf Stream extension and turns eastward, crossing the Atlantic in a wide band between about 45°N and 55°N creating the southern border of the North Atlantic Subpolar Gyre. There are several branches of the North Atlantic Current, and they flow into an eastern intergyral region in the Bay of Biscay, the Rockall Trough, the Iceland Basin, and the Irminger Sea. Part of the North Atlantic Current flows into the Norwegian Sea, and some recirculate within the boundary currents of the subpolar gyre.
Ross Gyre.
The Ross Gyre is located in the Southern Ocean surrounding Antarctica, just outside of the Ross Sea. This gyre is characterized by a clockwise rotation of surface waters, driven by the combined influence of wind, the Earth's rotation, and the shape of the seafloor. The gyre plays a crucial role in the transport of heat, nutrients, and marine life in the Southern Ocean, affecting the distribution of sea ice and influencing regional climate patterns.
The Ross Sea, Antarctica, is a region where the mixing of distinct water masses and complex interactions with the cryosphere lead to the production and export of dense water, with global-scale impacts. The gyre circulation controls the proximity of the warm waters of the Antarctic Circumpolar Current to the Ross Sea continental shelf, where they may drive ice shelf melting and raise sea level. The deepening of sea level pressures over the Southeast Pacific/Amundsen-Bellingshausen Seas generates a cyclonic circulation cell that reduces sea surface heights north of the Ross Gyre via Ekman suction. The relative reduction of sea surface heights to the north facilitates a northeastward expansion of the outer boundary of the Ross Gyre. Further, the gyre is intensified by a westward ocean stress anomaly over its southern boundary. The ensuing southward Ekman transport anomaly raises sea surface heights over the continental shelf and accelerates the westward throughflow by increasing the cross-slope pressure gradient. The sea level pressure center may have a greater impact on the Ross Gyre transport or the throughflow, depending on its location and strength. The gyre has significant effects on the Southern Ocean, where interactions between waters of the Antarctic margin, the Antarctic Circumpolar Current, and the intervening gyres, together with a strong seasonal sea ice cover, play a major role in the climate system.
The Ross Sea is the southernmost sea on Earth and hosts the United States' McMurdo Station and Italy's Zucchelli Station. Even though this gyre is located near two of the most prominent research stations for Antarctic study, the Ross Gyre remains one of the least sampled gyres in the world.
Weddell Gyre.
The Weddell Gyre is located in the Southern Ocean surrounding Antarctica, just outside of the Weddell Sea. It is characterized by a clockwise rotation of surface waters, influenced by the combined effects of winds, the Earth's rotation, and the seafloor's topography. Like the Ross Gyre, the Weddell Gyre plays a critical role in the movement of heat, nutrients, and marine life in the Southern Ocean. Insights into the behavior and variability of the Weddell Gyre are crucial for comprehending the interaction between ocean processes in the southern hemisphere and their implications for the global climate system.
This gyre is formed by interactions between the Antarctic Circumpolar Current and the Antarctic Continental Shelf. The Weddell Gyre (WG) is one of the main oceanographic features of the Southern Ocean south of the Antarctic Circumpolar Current which plays an influential role in global ocean circulation as well as gas exchange with the atmosphere. The WG is situated in the Atlantic sector of the Southern Ocean, south of 55–60°S and roughly between 60°W and 30°E (Deacon, 1979). It stretches over the Weddell abyssal plain, where the Weddell Sea is situated, and extends east into the Enderby abyssal plain.
Beaufort Sea Gyre.
The anti-cyclonic Beaufort Gyre is the dominant circulation of the Canada Basin and the largest freshwater reservoir in the Arctic Ocean's western and northern sectors. The gyre is characterized by a large-scale, quasi-permanent, clockwise (anticyclonic) rotation of surface waters within the Beaufort Sea. This gyre functions as a critical mechanism for the transport of heat, nutrients, and sea ice within the Arctic region, thus influencing the physical and biological characteristics of the marine environment. Negative wind stress curl over the region, mediated by the sea ice pack, leads to Ekman pumping, downwelling of isopycnal surfaces, and storage of ~20,000 km3 of freshwater in the upper few hundred meters of the ocean. The gyre gains energy from winds in the south and loses energy in the north over a mean annual cycle. The strong atmospheric circulation in the autumn, combined with significant areas of open water, demonstrates the effect that wind stress has directly on the surface geostrophic currents. The Beaufort Gyre and the Transpolar Drift are interconnected through their shared role in transporting sea ice across the Arctic Ocean. Their influence on the distribution of freshwater has broad implications for global sea level rise and climate dynamics.
Biogeochemistry of Gyres.
Depending on their location around the world, gyres can be regions of high biological productivity or low productivity. Each gyre has a unique ecological profile but can be grouped by region due to dominating characteristics. Generally, productivity is greater for cyclonic gyres (e.g., subpolar gyres) that drive upwelling through Ekman suction and lesser for anticyclonic gyres (e.g., subtropical gyres) that drive downwelling through Ekman pumping, but this can differ between seasons and regions.
Subtropical gyres are sometimes described as "ocean deserts" or "biological deserts", in reference to arid land deserts where little life exists. Due to their oligotrophic characteristics, warm subtropical gyres have some of the least productive waters per unit surface area in the ocean. The downwelling of water that occurs in subtropical gyres takes nutrients deeper in the ocean, removing them from surface waters. Organic particles can also be removed from surface waters through gravitational sinking, where the particle is too heavy to remain suspended in the water column. However, since subtropical gyres cover 60% of the ocean surface, their relatively low production per unit area is made up for by covering massive areas of the Earth. This means that, despite being areas of relatively low productivity and low nutrients, they play a large role in contributing to the overall amount of ocean production.
In contrast to subtropical gyres, subpolar gyres can have a lot of biological activity due to Ekman suction upwelling driven by wind stress curl. Subpolar gyres in the North Atlantic have a "bloom and crash" pattern following seasonal and storm patterns. The highest productivity in the North Atlantic occurs in boreal spring when there are long days and high levels of nutrients. This is different to the subpolar North Pacific, where almost no phytoplankton bloom occurs and patterns of respiration are more consistent through time than in the North Atlantic.
Nutrient availability.
Primary production in the ocean is heavily dependent on the presence of nutrients and the availability of sunlight. Here, nutrients refers to nitrogen, nitrate, phosphate, and silicate, all important nutrients in biogeochemical processes that take place in the ocean. A commonly accepted method for relating different nutrient availabilities to each other in order to describe chemical processes is the Redfield, Ketchum, and Richards (RKR) equation. This equation describes the process of photosynthesis and respiration and the ratios of the nutrients involved.
The RKR Equation for Photosynthesis and Respiration:
<chem>106CO2 +16HNO3 +H3PO4 +122H2O ->(CH2O)106(NH3)16H3PO4 +138O2</chem>
With the correct ratios of nutrients on the left side of the RKR equation and sunlight, photosynthesis takes place to produce plankton (primary production) and oxygen. Typically, the limiting nutrients to production are nitrogen and phosphorus with nitrogen being the most limiting.
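As a simple illustration of how the RKR ratios are used (the nutrient concentrations below are invented example values), the 16:1 nitrogen-to-phosphorus ratio indicates which nutrient a phytoplankton community would exhaust first:
# Illustrative use of the RKR/Redfield N:P ratio (16:1) to judge which nutrient runs out first;
# the concentrations below are invented example values.
rkr_n_to_p = 16 / 1

nitrate_umol = 10.0     # hypothetical nitrate concentration (umol/kg)
phosphate_umol = 1.0    # hypothetical phosphate concentration (umol/kg)

if nitrate_umol / phosphate_umol < rkr_n_to_p:
    print("nitrogen is exhausted first (nitrogen-limited), as is typical of subtropical gyres")
else:
    print("phosphorus is exhausted first (phosphorus-limited)")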
Lack of nutrients in the surface waters of subtropical gyres is related to the strong downwelling and sinking of particles that occurs in these areas as mentioned earlier. However, nutrients are still present in these gyres. These nutrients can come from not only vertical transport, but also lateral transport across gyre fronts. This lateral transport helps make up for the large loss of nutrients due to downwelling and particle sinking. However, the major source of nitrate in the nitrate-limited subtropical gyres is a result of biological, not physical, factors. Nitrogen in subtropical gyres is produced primarily by nitrogen-fixing bacteria, which are common throughout most of the oligotrophic waters of subtropical gyres. These bacteria transform atmospheric nitrogen into bioavailable forms.
High-nutrient, low-chlorophyll regions.
The Alaskan Gyre and Western Subarctic Gyre are iron-limited environments rather than nitrogen- or phosphorus-limited ones. This region relies on dust blowing off the state of Alaska and other landmasses nearby to supply iron. Because it is limited by iron instead of nitrogen or phosphorus, the region is known as a high-nutrient, low-chlorophyll region. Iron limitation in high-nutrient, low-chlorophyll regions results in water that is rich in other nutrients because they have not been removed by the small populations of plankton that live there.
Seasonality in the North Atlantic Subpolar Gyre.
The North Atlantic Subpolar Gyre is an important part of the ocean's carbon dioxide drawdown mechanism. The photosynthesis of phytoplankton communities in this area seasonally depletes surface waters of carbon dioxide, removing it through primary production. This primary production occurs seasonally, with the highest amounts happening in summer. Generally, spring is an important time for photosynthesis as the light limitation imposed during winter is lifted and there are high levels of nutrients available. However, in the North Atlantic Subpolar Gyre, spring productivity is low in comparison to expected levels. It is hypothesized that this low productivity arises because phytoplankton use light less efficiently than they do in the summer months.
Trophic levels.
Ocean gyres typically contain 5–6 trophic levels. The limiting factor for the number of trophic levels is the size of the phytoplankton, which are generally small in nutrient limited gyres. In low oxygen zones, oligotrophs are a large percentage of the phytoplankton.
At the intermediate level, small fishes and squid (especially "ommastrephidae") dominate the nektonic biomass. They are important for the transport of energy from low trophic levels to high trophic levels. In some gyres, "ommastrephidae" are a major part of many animals' diets and can support the existence of large marine life.
Indigenous knowledge of ocean patterns.
Indigenous Traditional Ecological Knowledge recognizes that Indigenous people, as the original caretakers, hold unique relationships with the land and waters. These relationships make TEK difficult to define, as Traditional Knowledge means something different to each person, each community, and each caretaker. The United Nations Declaration on the Rights of Indigenous Peoples begins by reminding readers that “respect for Indigenous knowledge, cultures and traditional practices contributes to sustainable and equitable development and proper management of the environment”. Attempts to collect and store this knowledge have been made over the past twenty years. Initiatives such as The Indigenous Knowledge Social Network (SIKU) https://siku.org/, the Igliniit project, and the Wales Inupiaq Sea Ice Directory have made strides in the inclusion and documentation of indigenous people's thoughts on global climate, oceanographic, and social trends.
One example involves ancient Polynesians and how they discovered and then travelled throughout the Pacific Ocean from modern day Polynesia to Hawaii and New Zealand. Known as wayfinding, navigators would use the stars, winds, and ocean currents to know where they were on the ocean and where they were headed. These navigators were intimately familiar with Pacific currents that create the North Pacific gyre and this way of navigating continues today.
Another example involves the Māori people, who came from Polynesia and are an indigenous group in New Zealand. Their way of life and culture has strong connections to the ocean. The Māori believe that the sea is the source of all life and is an energy, called Tangaroa. This energy can manifest in many different ways, such as strong ocean currents, calm seas, or turbulent storms. The Māori have a rich oral history of navigation within the Southern Ocean and Antarctic Ocean and a deep understanding of their ice and ocean patterns. A current research project is aimed at consolidating these oral histories. Efforts are being made to integrate TEK with Western science in marine and ocean research in New Zealand. Additional research efforts aim to collate indigenous oral histories and incorporate indigenous knowledge into climate change adaptation practices in New Zealand that will directly affect the Māori and other indigenous communities.
Climate change.
Ocean circulation redistributes heat and water resources and therefore shapes regional climate. For example, the western branches of the subtropical gyres flow from lower latitudes towards higher latitudes, bringing relatively warm and moist air to the adjacent land and contributing to a mild and wet climate (e.g., East China, Japan). In contrast, the eastern boundary currents of the subtropical gyres stream from higher latitudes towards lower latitudes, corresponding to a relatively cold and dry climate (e.g., California).
Currently, the cores of the subtropical gyres sit around 30° in both hemispheres, but their positions have not always been there. Satellite observations of sea surface height and sea surface temperature suggest that the world's major ocean gyres have been slowly moving towards higher latitudes over the past few decades. This feature agrees with climate model predictions under anthropogenic global warming. Paleoclimate reconstructions also suggest that during past cold climate intervals, i.e., ice ages, some of the western boundary currents (the western branches of the subtropical ocean gyres) were closer to the equator than their modern positions. This evidence implies that global warming is very likely to push the large-scale ocean gyres towards higher latitudes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\zeta"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "H"
},
{
"math_id": 3,
"text": "{D \\over Dt}\\left (\\frac{{\\zeta + f}}{H} \\right ) = 0"
},
{
"math_id": 4,
"text": "\\zeta + f"
},
{
"math_id": 5,
"text": "fV_g = \\beta\\rho w_E "
},
{
"math_id": 6,
"text": "V_g"
},
{
"math_id": 7,
"text": "\\beta"
},
{
"math_id": 8,
"text": "\\rho"
},
{
"math_id": 9,
"text": "w_E"
},
{
"math_id": 10,
"text": "f>0"
},
{
"math_id": 11,
"text": " \\zeta = {\\partial v \\over \\partial x} - {\\partial u \\over \\partial y}"
},
{
"math_id": 12,
"text": "v"
},
{
"math_id": 13,
"text": "u"
},
{
"math_id": 14,
"text": " \\partial v / \\partial x > 0"
},
{
"math_id": 15,
"text": " \\partial v / \\partial x < 0"
}
]
| https://en.wikipedia.org/wiki?curid=776619 |
77662984 | Law (mathematics) | A mathematical statement which always holds true
In mathematics, a law is a formula that is always true within a given context. Laws describe a relationship between two or more terms or expressions (which may contain variables), usually using equality or inequality, or between formulas themselves, for instance, in mathematical logic. For example, the formula formula_0 is true for all real numbers "a", and is therefore a law. Laws over an equality are called identities. For example, formula_1 and formula_2 are identities. Mathematical laws are distinguished from scientific laws, which are based on observations and try to describe or predict a range of natural phenomena. The more significant laws are often called theorems.
Notable Examples.
Geometric laws.
Triangle inequality: for any triangle, the length of any one side is less than or equal to the sum of the lengths of the other two sides; writing the side lengths as "a", "b" and "c", this reads
formula_3
with equality only in the degenerate case of a triangle with zero area. In Euclidean geometry and some other geometries, the triangle inequality is a theorem about vectors and vector lengths (norms):
formula_4
where the length of the third side has been replaced by the length of the vector sum u + v. When u and v are real numbers, they can be viewed as vectors in formula_5, and the triangle inequality expresses a relationship between absolute values. Another classical geometric law is the Pythagorean theorem: for a right triangle with legs of lengths "a" and "b" and hypotenuse of length "c", the side lengths satisfy formula_6
Trigonometric identities.
Geometrically, trigonometric identities are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered in this article.
These identities are useful whenever expressions involving trigonometric functions need to be simplified. Another important application is the integration of non-trigonometric functions: a common technique which involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity.
One of the most prominent examples of trigonometric identities involves the equation formula_7 which is true for all real values of formula_8. On the other hand, the equation
formula_9
is only true for certain values of formula_8, not all. For example, this equation is true when formula_10 but false when formula_11.
Another group of trigonometric identities concerns the so-called addition/subtraction formulas (e.g. the double-angle identity formula_12, the addition formula for formula_13), which can be used to break down expressions of larger angles into those with smaller constituents.
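The distinction between an identity and an equation that only holds for some values can be checked numerically; the short Python script below (an illustrative check, not a proof) evaluates the identities above at a few arbitrary angles:
import math

for theta in (0.0, 0.7, 2.0, -1.3):
    pythagorean = abs(math.sin(theta) ** 2 + math.cos(theta) ** 2 - 1.0) < 1e-12
    double_angle = abs(math.sin(2 * theta) - 2 * math.sin(theta) * math.cos(theta)) < 1e-12
    print(theta, pythagorean, double_angle)          # both identities hold for every theta tested

# cos(theta) = 1 is not an identity: it holds for theta = 0 but fails for theta = 2
print(math.isclose(math.cos(0.0), 1.0), math.isclose(math.cos(2.0), 1.0))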
Algebraic laws.
Cauchy–Schwarz inequality: An upper bound on the inner product between two vectors in an inner product space in terms of the product of the vector norms. It is considered one of the most important and widely used inequalities in mathematics.
The Cauchy–Schwarz inequality states that for all vectors formula_14 and formula_15 of an inner product space
formula_16
where formula_17
is the inner product. Examples of inner products include the real and complex dot product; see the examples in inner product. Every inner product gives rise to a Euclidean formula_18 norm, called the canonical or induced norm, where the norm of a vector formula_14 is denoted and defined by
formula_19
where formula_20 is always a non-negative real number (even if the inner product is complex-valued). By taking the square root of both sides of the above inequality, the Cauchy–Schwarz inequality can be written in its more familiar form in terms of the norm: the absolute value of the inner product of formula_14 and formula_15 is at most the product of their norms.
Moreover, the two sides are equal if and only if formula_14 and formula_15 are linearly dependent.
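As a numerical illustration (random vectors and the real dot product; again a check rather than a proof), the inequality can be verified as follows:
import random

random.seed(0)
u = [random.uniform(-1, 1) for _ in range(5)]
v = [random.uniform(-1, 1) for _ in range(5)]

inner = sum(a * b for a, b in zip(u, v))        # real dot product <u, v>
norm_u = sum(a * a for a in u) ** 0.5           # induced norm ||u||
norm_v = sum(b * b for b in v) ** 0.5           # induced norm ||v||

print(abs(inner) <= norm_u * norm_v)            # |<u, v>| <= ||u|| ||v||; equality only if u and v are linearly dependent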
Logical laws.
De Morgan's laws: in propositional logic and Boolean algebra, formula_21 and formula_22 where formula_23 is the negation ("not") operator, formula_24 is the conjunction ("and") operator, formula_25 is the disjunction ("or") operator, and formula_26 denotes logical equivalence.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a^2 \\geq 0"
},
{
"math_id": 1,
"text": "(a+b)^2 = a^2 + 2ab + b^2"
},
{
"math_id": 2,
"text": "\\cos^2\\theta + \\sin^2\\theta =1"
},
{
"math_id": 3,
"text": "c \\leq a + b,"
},
{
"math_id": 4,
"text": "\\|\\mathbf u + \\mathbf v\\| \\leq \\|\\mathbf u\\| + \\|\\mathbf v\\| ,"
},
{
"math_id": 5,
"text": "\\R^1"
},
{
"math_id": 6,
"text": "a^2 + b^2 = c^2 ."
},
{
"math_id": 7,
"text": "\\sin^2 \\theta + \\cos^2 \\theta = 1,"
},
{
"math_id": 8,
"text": "\\theta"
},
{
"math_id": 9,
"text": "\\cos\\theta = 1"
},
{
"math_id": 10,
"text": "\\theta = 0,"
},
{
"math_id": 11,
"text": "\\theta = 2"
},
{
"math_id": 12,
"text": "\\sin(2\\theta) = 2\\sin\\theta \\cos\\theta"
},
{
"math_id": 13,
"text": "\\tan(x + y)"
},
{
"math_id": 14,
"text": "\\mathbf{u}"
},
{
"math_id": 15,
"text": "\\mathbf{v}"
},
{
"math_id": 16,
"text": "\\left \\vert \\langle \\bold u, \\bold v \\rangle \\right \\vert^2 \\leq \\langle \\bold u, \\bold u \\rangle \\cdot \\langle \\bold v, \\bold v \\rangle"
},
{
"math_id": 17,
"text": "\\langle \\cdot, \\cdot \\rangle"
},
{
"math_id": 18,
"text": "l_2"
},
{
"math_id": 19,
"text": "\\|\\mathbf{u}\\| := \\sqrt{\\langle \\mathbf{u}, \\mathbf{u} \\rangle},"
},
{
"math_id": 20,
"text": "\\langle \\mathbf{u}, \\mathbf{u} \\rangle"
},
{
"math_id": 21,
"text": "\\neg(P\\lor Q)\\iff(\\neg P)\\land(\\neg Q),"
},
{
"math_id": 22,
"text": "\\neg(P\\land Q)\\iff(\\neg P)\\lor(\\neg Q)"
},
{
"math_id": 23,
"text": "\\neg"
},
{
"math_id": 24,
"text": "\\land"
},
{
"math_id": 25,
"text": "\\lor"
},
{
"math_id": 26,
"text": "\\iff"
}
]
| https://en.wikipedia.org/wiki?curid=77662984 |
7766542 | Microarray analysis techniques | Microarray analysis techniques are used in interpreting the data generated from experiments on DNA (Gene chip analysis), RNA, and protein microarrays, which allow researchers to investigate the expression state of a large number of genes – in many cases, an organism's entire genome – in a single experiment. Such experiments can generate very large amounts of data, allowing researchers to assess the overall state of a cell or organism. Data in such large quantities is difficult – if not impossible – to analyze without the help of computer programs.
Introduction.
Microarray data analysis is the final step in reading and processing data produced by a microarray chip. Samples undergo various processes, including purification and scanning using the microchip, which then produces a large amount of data that requires processing via computer software. The analysis involves several distinct steps, as outlined below. Changing any one of the steps will change the outcome of the analysis, so the MAQC Project was created to identify a set of standard strategies. Companies exist that use the MAQC protocols to perform a complete analysis.
Techniques.
Most microarray manufacturers, such as Affymetrix and Agilent, provide commercial data analysis software alongside their microarray products. There are also open source options that utilize a variety of methods for analyzing microarray data.
Aggregation and normalization.
Comparing two different arrays or two different samples hybridized to the same array generally involves making adjustments for systematic errors introduced by differences in procedures and dye intensity effects. Dye normalization for two color arrays is often achieved by local regression. LIMMA provides a set of tools for background correction and scaling, as well as an option to average on-slide duplicate spots. A common method for evaluating how well normalized an array is, is to plot an MA plot of the data. MA plots can be produced using programs and languages such as R and MATLAB.
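For example, the M (log-ratio) and A (average log-intensity) values behind an MA plot can be computed along the following lines — a minimal Python/NumPy sketch with simulated two-channel intensities, offered only as an illustration since the tools named above (R, MATLAB, LIMMA) are what are normally used:
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
red = rng.lognormal(mean=7, sigma=1, size=5000)                # simulated Cy5 intensities
green = red * rng.lognormal(mean=0, sigma=0.3, size=5000)      # simulated Cy3 intensities

M = np.log2(red) - np.log2(green)             # log-ratio between the two channels
A = 0.5 * (np.log2(red) + np.log2(green))     # average log-intensity

plt.scatter(A, M, s=2)
plt.axhline(0, color="red")                   # a well-normalized array scatters around M = 0
plt.xlabel("A (average log2 intensity)")
plt.ylabel("M (log2 ratio)")
plt.show()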
Raw Affy data contains about twenty probes for the same RNA target. Half of these are "mismatch spots", which do not precisely match the target sequence. These can theoretically measure the amount of nonspecific binding for a given target. Robust Multi-array Average (RMA) is a normalization approach that does not take advantage of these mismatch spots but still must summarize the perfect matches through median polish. The median polish algorithm, although robust, behaves differently depending on the number of samples analyzed. Quantile normalization, also part of RMA, is one sensible approach to normalize a batch of arrays in order to make further comparisons meaningful.
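A bare-bones version of quantile normalization (ignoring ties and the probe-summarization steps of RMA; the intensity matrix below is simulated) can be sketched as follows:
import numpy as np

def quantile_normalize(X):
    # Force every column (array) of X to share the same empirical distribution
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)        # rank of each value within its own column
    reference = np.mean(np.sort(X, axis=0), axis=1)          # reference distribution: row means of the sorted columns
    return reference[ranks]                                  # replace each value by the reference value of its rank

rng = np.random.default_rng(1)
X = rng.lognormal(mean=6, sigma=1, size=(1000, 4))           # 1000 probes x 4 arrays, simulated
Xn = quantile_normalize(X)
print(np.allclose(np.sort(Xn, axis=0)[:, 0], np.sort(Xn, axis=0)[:, 1]))   # the normalized columns now share a distribution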
The current Affymetrix MAS5 algorithm, which uses both perfect match and mismatch probes, continues to enjoy popularity and do well in head to head tests.
Factor analysis for Robust Microarray Summarization (FARMS) is a model-based technique for summarizing array data at perfect match probe level. It is based on a factor analysis model for which a Bayesian maximum a posteriori method optimizes the model parameters under the assumption of Gaussian measurement noise. According to the Affycomp benchmark FARMS outperformed all other summarizations methods with respect to sensitivity and specificity.
Identification of significant differential expression.
Many strategies exist to identify array probes that show an unusual level of over-expression or under-expression. The simplest one is to call "significant" any probe that differs by an average of at least twofold between treatment groups. More sophisticated approaches are often related to t-tests or other mechanisms that take both effect size and variability into account. Curiously, the p-values associated with particular genes do not reproduce well between replicate experiments, and lists generated by straight fold change perform much better. This represents an extremely important observation, since the point of performing experiments has to do with predicting general behavior. The MAQC group recommends using a fold change assessment plus a non-stringent p-value cutoff, further pointing out that changes in the background correction and scaling process have only a minimal impact on the rank order of fold change differences, but a substantial impact on p-values.
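In practice this often amounts to computing, for every probe, a fold change between the two groups together with a t-test p-value, roughly as in the sketch below (simulated log2 expression values; the thresholds are arbitrary examples, not recommended cutoffs):
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(loc=8, scale=1, size=(10000, 4))        # log2 expression, 4 control arrays
treated = rng.normal(loc=8, scale=1, size=(10000, 4))        # log2 expression, 4 treated arrays
treated[:200] += 1.5                                         # spike in 200 truly changed probes

log2_fold_change = treated.mean(axis=1) - control.mean(axis=1)
t_stat, p_value = stats.ttest_ind(treated, control, axis=1)

# MAQC-style call: rank by fold change and keep a non-stringent p-value cutoff
significant = (np.abs(log2_fold_change) >= 1.0) & (p_value < 0.05)
print(int(significant.sum()), "probes called significant")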
Clustering.
Clustering is a data mining technique used to group genes having similar expression patterns. Hierarchical clustering, and k-means clustering are widely used techniques in microarray analysis.
Hierarchical clustering.
Hierarchical clustering is a statistical method for finding relatively homogeneous clusters. Hierarchical clustering consists of two separate phases. Initially, a distance matrix containing all the pairwise distances between the genes is calculated. Pearson's correlation and Spearman's correlation are often used as dissimilarity estimates, but other methods, like Manhattan distance or Euclidean distance, can also be applied. Given the number of distance measures available and their influence in the clustering algorithm results, several studies have compared and evaluated different distance measures for the clustering of microarray data, considering their intrinsic properties and robustness to noise. After calculation of the initial distance matrix, the hierarchical clustering algorithm either (A) joins iteratively the two closest clusters starting from single data points (agglomerative, bottom-up approach, which is the more commonly used), or (B) partitions clusters iteratively starting from the complete set (divisive, top-down approach). After each step, a new distance matrix between the newly formed clusters and the other clusters is recalculated. Hierarchical cluster analysis methods include single linkage (nearest-neighbour method), average linkage (e.g., UPGMA), and complete linkage (furthest-neighbour method).
Different studies have already shown empirically that the Single linkage clustering algorithm produces poor results when employed to gene expression microarray data and thus should be avoided.
K-means clustering.
K-means clustering is an algorithm for grouping genes or samples based on pattern into "K" groups. Grouping is done by minimizing the sum of the squares of distances between the data and the corresponding cluster centroid. Thus the purpose of K-means clustering is to classify data based on similar expression. K-means clustering algorithm and some of its variants (including k-medoids) have been shown to produce good results for gene expression data (at least better than hierarchical clustering methods). Empirical comparisons of k-means, k-medoids, hierarchical methods and, different distance measures can be found in the literature.
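Both clustering approaches are available in standard libraries; a minimal sketch (simulated expression matrix, arbitrary parameter choices) might look like the following:
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc=m, scale=0.5, size=(50, 6)) for m in (0, 2, 4)])   # 150 genes x 6 samples

# Agglomerative hierarchical clustering with a correlation-based dissimilarity and average linkage
Z = linkage(X, method="average", metric="correlation")
hier_labels = fcluster(Z, t=3, criterion="maxclust")

# K-means with K = 3, minimizing within-cluster sums of squared distances to the centroids
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print(len(set(hier_labels)), len(set(km_labels)))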
Pattern recognition.
Commercial systems for gene network analysis such as Ingenuity and Pathway studio create visual representations of differentially expressed genes based on current scientific literature. Non-commercial tools such as FunRich, GenMAPP and Moksiskaan also aid in organizing and visualizing gene network data procured from one or several microarray experiments. A wide variety of microarray analysis tools are available through Bioconductor written in the R programming language. The frequently cited SAM module and other microarray tools are available through Stanford University. Another set is available from Harvard and MIT.
Specialized software tools for statistical analysis to determine the extent of over- or under-expression of a gene in a microarray experiment relative to a reference state have also been developed to aid in identifying genes or gene sets associated with particular phenotypes. One such method of analysis, known as Gene Set Enrichment Analysis (GSEA), uses a Kolmogorov-Smirnov-style statistic to identify groups of genes that are regulated together. This third-party statistics package offers the user information on the genes or gene sets of interest, including links to entries in databases such as NCBI's GenBank and curated databases such as Biocarta and Gene Ontology. Protein complex enrichment analysis tool (COMPLEAT) provides similar enrichment analysis at the level of protein complexes. The tool can identify dynamic protein complex regulation under different conditions or time points. Related systems, PAINT and SCOPE, perform statistical analyses on gene promoter regions, identifying over- and under-representation of previously identified transcription factor response elements. Another statistical analysis tool is Rank Sum Statistics for Gene Set Collections (RssGsc), which uses rank sum probability distribution functions to find gene sets that explain experimental data. A further approach is contextual meta-analysis, i.e. finding out how a gene cluster responds to a variety of experimental contexts. Genevestigator is a public tool to perform contextual meta-analysis across contexts such as anatomical parts, stages of development, and response to diseases, chemicals, stresses, and neoplasms.
Significance analysis of microarrays (SAM).
Significance analysis of microarrays (SAM) is a statistical technique, established in 2001 by Virginia Tusher, Robert Tibshirani and Gilbert Chu, for determining whether changes in gene expression are statistically significant. With the advent of DNA microarrays, it is now possible to measure the expression of thousands of genes in a single hybridization experiment. The data generated is considerable, and a method for sorting out what is significant and what isn't is essential. SAM is distributed by Stanford University in an R-package.
SAM identifies statistically significant genes by carrying out gene-specific t-tests and computes a statistic "dj" for each gene "j", which measures the strength of the relationship between gene expression and a response variable. This analysis uses non-parametric statistics, since the data may not follow a normal distribution. The response variable describes and groups the data based on experimental conditions. In this method, repeated permutations of the data are used to determine if the expression of any gene is significantly related to the response. The use of permutation-based analysis accounts for correlations in genes and avoids parametric assumptions about the distribution of individual genes. This is an advantage over other techniques (e.g., ANOVA and Bonferroni), which assume equal variance and/or independence of genes.
Running SAM.
The number of permutations is set by the user when imputing correct values for the data set to run SAM.
Response formats.
Types:
Algorithm.
SAM calculates a test statistic for relative difference in gene expression based on permutation analysis of expression data and calculates a false discovery rate. The principal calculations of the program are illustrated below.
The "so" constant is chosen to minimize the coefficient of variation of "di". "ri" is equal to the expression levels ("x") for gene "i" under "y" experimental conditions.
formula_0
Fold changes (t) are specified to guarantee that genes called significant change by at least a pre-specified amount. This means that the absolute value of the average expression levels of a gene under each of two conditions must be greater than the fold change (t) to be called positive and less than the inverse of the fold change (t) to be called negative.
The SAM algorithm can be stated as follows: compute a statistic "di" for each gene "i"; compute the expected values of these statistics from repeated permutations of the response labels; call significant those genes whose observed "di" deviates from its permutation-based expectation by more than a user-chosen threshold; and estimate the false discovery rate for that threshold from the permuted data sets.
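A heavily simplified, permutation-based sketch of this procedure is given below for the two-class case (illustrative only: the exact form of the SAM statistic, the choice of the fudge factor "s0", and the thresholding rule differ in the published method and its R implementation; the data are simulated):
import numpy as np

def sam_like(x, y, delta=1.0, n_perm=100, seed=0):
    # Very simplified two-class, SAM-style analysis of a genes-by-samples matrix x with 0/1 labels y
    rng = np.random.default_rng(seed)

    def d_stat(labels):
        a, b = x[:, labels == 1], x[:, labels == 0]
        r = a.mean(axis=1) - b.mean(axis=1)                                   # relative difference between the groups
        s = np.sqrt(a.var(axis=1, ddof=1) / a.shape[1] + b.var(axis=1, ddof=1) / b.shape[1])
        s0 = np.median(s)                                                     # crude stand-in for the fudge factor s0
        return r / (s + s0)

    d_obs = np.sort(d_stat(y))
    d_perm = np.sort([d_stat(rng.permutation(y)) for _ in range(n_perm)], axis=1)
    d_exp = d_perm.mean(axis=0)                                               # expected order statistics under permutation

    called = np.abs(d_obs - d_exp) > delta                                    # genes deviating from the expected line by more than delta
    false_calls = np.median((np.abs(d_perm - d_exp) > delta).sum(axis=1))     # median number of falsely called genes
    fdr = false_calls / max(int(called.sum()), 1)                             # false discovery rate, as in the formula above
    return int(called.sum()), fdr

rng = np.random.default_rng(4)
x = rng.normal(size=(2000, 8))
x[:100, 4:] += 2.0                                   # 100 truly changed genes in the second group
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(sam_like(x, y))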
Error correction and quality control.
Quality control.
Entire arrays may have obvious flaws detectable by visual inspection, pairwise comparisons to arrays in the same experimental group, or by analysis of RNA degradation. Results may improve by removing these arrays from the analysis entirely.
Background correction.
Depending on the type of array, signal related to nonspecific binding of the fluorophore can be subtracted to achieve better results. One approach involves subtracting the average signal intensity of the area between spots. A variety of tools for background correction and further analysis are available from TIGR, Agilent (GeneSpring), and Ocimum Bio Solutions (Genowiz).
Spot filtering.
Visual identification of local artifacts, such as printing or washing defects, may likewise suggest the removal of individual spots. This can take a substantial amount of time depending on the quality of array manufacture. In addition, some procedures call for the elimination of all spots with an expression value below a certain intensity threshold.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{False \\ discovery \\ rate \\ (FDR) = \\frac{Median \\ (or \\ 90^{th} \\ percentile) \\ of \\ \\# \\ of \\ falsely \\ called \\ genes}{Number \\ of \\ genes \\ called \\ significant}}"
}
]
| https://en.wikipedia.org/wiki?curid=7766542 |
7766835 | Anamorphism | In computer programming, an anamorphism is a function that generates a sequence by repeated application of the function to its previous result. You begin with some value A and apply a function f to it to get B. Then you apply f to B to get C, and so on until some terminating condition is reached. The anamorphism is the function that generates the list of A, B, C, etc. You can think of the anamorphism as unfolding the initial value into a sequence.
The above layman's description can be stated more formally in category theory: the anamorphism of a coinductive type denotes the assignment of a coalgebra to its unique morphism to the final coalgebra of an endofunctor. These objects are used in functional programming as "unfolds".
The categorical dual (aka opposite) of the anamorphism is the catamorphism.
Anamorphisms in functional programming.
In functional programming, an anamorphism is a generalization of the concept of "unfolds" on coinductive lists. Formally, anamorphisms are generic functions that can corecursively construct a result of a certain type and which is parameterized by functions that determine the next single step of the construction.
The data type in question is defined as the greatest fixed point "ν X . F X" of a functor "F". By the universal property of final coalgebras, there is a unique coalgebra morphism "A → ν X . F X" for any other "F"-coalgebra "a : A → F A". Thus, one can define functions from a type "A" into a coinductive datatype by specifying a coalgebra structure "a" on "A".
Example: Potentially infinite lists.
As an example, the type of potentially infinite lists (with elements of a fixed type "value") is given as the fixed point "[value] = ν X . value × X + 1", i.e. a list consists either of a "value" and a further list, or it is empty. A (pseudo-)Haskell-Definition might look like this:
data [value] = (value:[value]) | []
It is the fixed point of the functor codice_0, where:
data Maybe a = Just a | Nothing
data F value x = Maybe (value, x)
One can easily check that indeed the type codice_1 is isomorphic to codice_2, and thus codice_1 is the fixed point.
The "anamorphism" for lists (then usually known as "unfold") would build a (potentially infinite) list from a state value. Typically, the unfold takes a state value codice_4 and a function codice_5 that yields either a pair of a value and a new state, or a singleton to mark the end of the list. The anamorphism would then begin with a first seed, compute whether the list continues or ends, and in case of a nonempty list, prepend the computed value to the recursive call to the anamorphism.
A Haskell definition of an unfold, or anamorphism for lists, called codice_6, is as follows:
ana :: (state -> Maybe (value, state)) -> state -> [value]
ana f stateOld = case f stateOld of
  Nothing                -> []
  Just (value, stateNew) -> value : ana f stateNew
We can now implement quite general functions using "ana", for example a countdown:
f :: Int -> Maybe (Int, Int)
f current = let oneSmaller = current - 1
            in if oneSmaller < 0
                 then Nothing
                 else Just (oneSmaller, oneSmaller)
This function will decrement an integer and output it at the same time, until it is negative, at which point it will mark the end of the list. Correspondingly, codice_7 will compute the list codice_8.
Anamorphisms on other data structures.
An anamorphism can be defined for any recursive type, according to a generic pattern, generalizing the second version of "ana" for lists.
For example, the unfold for the tree data structure
data Tree a = Leaf a | Branch (Tree a) a (Tree a)
is as follows
ana :: (b -> Either a (b, a, b)) -> b -> Tree a
ana unspool x = case unspool x of
  Left a          -> Leaf a
  Right (l, x, r) -> Branch (ana unspool l) x (ana unspool r)
To better see the relationship between the recursive type and its anamorphism, note that codice_9 and codice_10 can be defined thus:
The analogy with codice_6 appears by renaming codice_12 in its type:
anaList :: (list_a -> Maybe (a, list_a)) -> (list_a -> List a)
anaTree :: (tree_a -> Either a (tree_a, a, tree_a)) -> (tree_a -> Tree a)
With these definitions, the argument to the constructor of the type has the same type as the return type of the first argument of codice_6, with the recursive mentions of the type replaced with codice_12.
History.
One of the first publications to introduce the notion of an anamorphism in the context of programming was the paper "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire", by Erik Meijer "et al.", which was in the context of the Squiggol programming language.
Applications.
Functions like codice_15 and codice_16 are examples of anamorphisms. codice_15 takes a pair of lists, say ['a','b','c'] and [1,2,3] and returns a list of pairs [('a',1),('b',2),('c',3)]. codice_18 takes a thing, x, and a function, f, from such things to such things, and returns the infinite list that comes from repeated application of f, i.e. the list [x, (f x), (f (f x)), (f (f (f x))), ...].
zip (a:as) (b:bs) = if (as==[]) || (bs ==[]) -- || means 'or'
                    then [(a,b)]
                    else (a,b):(zip as bs)
zip _      _      = [] -- if either list is already empty, the result is empty
iterate f x = x:(iterate f (f x))
To see that these are indeed anamorphisms, we can implement both with our generic unfold, codice_6, instead of explicit recursion:
zip2 = ana unsp
    where
    unsp (a:as, b:bs) = Just ((a,b), (as,bs))
    unsp _            = Nothing
iterate2 f = ana (\x -> Just (x, f x))
In a language like Haskell, even the abstract functions codice_20, codice_21 and codice_6 are merely defined terms, as we have seen from the definitions given above.
Anamorphisms in category theory.
In category theory, anamorphisms are the categorical dual of catamorphisms (and catamorphisms are the categorical dual of anamorphisms).
That means the following.
Suppose ("A", "fin") is a final "F"-coalgebra for some endofunctor "F" of some category into itself.
Thus, "fin" is a morphism from "A" to "FA", and since it is assumed to be final we know that whenever ("X", "f") is another "F"-coalgebra (a morphism "f" from "X" to "FX"), there will be a unique homomorphism "h" from ("X", "f") to ("A", "fin"), that is a morphism "h" from "X" to "A" such that "fin . h = Fh . f".
Then for each such "f" we denote by ana f that uniquely specified morphism "h".
In other words, we have the following defining relationship, given some fixed "F", "A", and "fin" as above:
formula_0 if and only if formula_1
Notation.
A notation for ana "f" found in the literature is formula_2. The brackets used are known as lens brackets, after which anamorphisms are sometimes referred to as "lenses".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h = \\mathrm{ana}\\ f"
},
{
"math_id": 1,
"text": "\\mathrm{fin}\\circ h = Fh \\circ f"
},
{
"math_id": 2,
"text": "[\\!(f)\\!]"
}
]
| https://en.wikipedia.org/wiki?curid=7766835 |
77668849 | Mean payoff game | Mathematical game played on a graph
In game theory, a mean payoff game is a zero-sum game played on the vertices of a weighted directed graph. The game is played as follows: at the start of the game, a token is placed on one of the vertices of the graph. Each vertex is assigned to either the Maximizer or the Minimizer. The player that controls the vertex the token currently occupies chooses one outgoing edge, along which the token moves next; in doing so, the Minimizer pays the Maximizer the number on that edge. Then, again, the player controlling the vertex the token has reached chooses where it goes next, and this continues indefinitely. The objective of the Maximizer is to maximize their long-term average payoff, and the Minimizer has the opposite objective.
Formal definition.
A mean payoff game consists of a graph formula_0 and a function formula_1, where formula_2 is the set of vertices, which are partitioned between the players, and where formula_3 is the weight of an edge. Often, the graph is assumed to be "sinkless", which means that every vertex has at least one outgoing edge. A "play" is a possible outcome of the game: an infinite walk on the graph, which can be written as a sequence of edges formula_4, where the head of formula_5 equals the tail of formula_6. The objective value of the game can then be written as follows:
formula_7
A strategy for the Maximizer is a function formula_8, where formula_9 is the set of finite walks that start at the initial vertex and end at some vertex formula_10, which returns an outgoing edge of the end vertex formula_11. A strategy formula_12 for the Minimizer can be defined analogously. If both players fix a strategy, say they pick strategies formula_13 and formula_12, then the outcome of the game is fixed, and the resulting play is the path formula_14.
One of the fundamental results for mean payoff games is that they are "positionally determined." This means, in our case, that the game has a unique "value", that each player has a strategy attaining this value, and that such a strategy can be chosen to be "positional", i.e. depending only on the current vertex the token is on. In formulas, the following equation holds for the value formula_15:
formula_16
Solving mean payoff games.
Solving a mean payoff game can mean several things, although in practice an algorithm for one of these problems typically also yields the others: computing the value of the game (from a given initial vertex, or from every vertex), deciding whether this value is at least a given threshold such as zero, or computing optimal positional strategies for both players.
It is a major open problem in computer science whether there exists a polynomial-time algorithm for solving any of the above problems. These problems are among the few known to be contained in both of the classes NP and coNP but not known to be in P. Currently, the fastest algorithm is a randomized strategy improvement algorithm, which runs in time formula_17, where formula_18 is the number of Maximizer vertices. The best deterministic algorithms run in time formula_19, where now formula_20 is the number of edges and formula_18 the total number of vertices.
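As an illustration of the value iteration idea of Zwick and Paterson, the following minimal Haskell sketch iterates the optimal total payoff over a bounded number of steps and divides by the number of steps; the graph encoding, the names and the use of floating-point weights are assumptions made here for concreteness, and a complete solver would additionally need to recover the exact value from this approximation:
import qualified Data.Map as M

data Owner = MaxPlayer | MinPlayer deriving (Eq, Show)

-- Each vertex is mapped to its owner and its outgoing edges (successor, weight);
-- the graph is assumed to be sinkless, as in the definition above.
type Game = M.Map Int (Owner, [(Int, Double)])

-- One round of value iteration: v_k(u) is the best over edges (u, t) of w(u, t) + v_{k-1}(t),
-- a maximum at Maximizer vertices and a minimum at Minimizer vertices.
step :: Game -> M.Map Int Double -> M.Map Int Double
step game v = M.map best game
  where
    best (owner, edges) =
      let candidates = [ w + M.findWithDefault 0 t v | (t, w) <- edges ]
      in case owner of
           MaxPlayer -> maximum candidates
           MinPlayer -> minimum candidates

-- v_k(u) / k approaches the mean payoff value from vertex u as k grows.
approxValue :: Game -> Int -> Int -> Double
approxValue game k u = (iterate (step game) v0 !! k) M.! u / fromIntegral k
  where v0 = M.map (const 0) game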
Three of the most well-known algorithms for solving mean payoff games are the following (each of which has its own slight variants):
Related games and problems.
The problem of solving parity games can be reduced in polynomial time to solving mean payoff games. Solving mean payoff games can be shown to be polynomial-time equivalent to many core problems concerning tropical linear programming. Another game closely related to the mean payoff game is the energy game, in which the Maximizer tries to maximize the smallest cumulative sum within the play instead of the long-term average.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G=(V_{Max}\\cup V_{Min},E)"
},
{
"math_id": 1,
"text": "w:E\\to\\mathbb{R}"
},
{
"math_id": 2,
"text": "V=V_{Max}\\cup V_{Min}"
},
{
"math_id": 3,
"text": "w(e)"
},
{
"math_id": 4,
"text": "\\pi = e_1, e_2, e_3, \\ldots"
},
{
"math_id": 5,
"text": "e_i"
},
{
"math_id": 6,
"text": "e_{i+1}"
},
{
"math_id": 7,
"text": "O(\\pi)=\\liminf_{t\\to\\infty}\\frac{1}{t}\\sum_{i=1}^{t}w(e_i)"
},
{
"math_id": 8,
"text": "\\sigma:FW_{Max}\\to E "
},
{
"math_id": 9,
"text": "FW_{v} "
},
{
"math_id": 10,
"text": "v\\in V_{Max} "
},
{
"math_id": 11,
"text": "v "
},
{
"math_id": 12,
"text": "\\tau "
},
{
"math_id": 13,
"text": "\\sigma "
},
{
"math_id": 14,
"text": "\\pi_{\\sigma\\tau} "
},
{
"math_id": 15,
"text": "V(G,w) "
},
{
"math_id": 16,
"text": "V(G,w)=\\max_{\\sigma\\in\\text{(positional Max strategies)}}\\inf_{\\tau\\in\\text{(Min strategies)}}O(\\pi_{\\sigma\\tau})=\\min_{\\tau\\in\\text{(positional Min strategies)}}\\sup_{\\sigma\\in \\text{(Max strategies)}}O(\\pi_{\\sigma\\tau}) "
},
{
"math_id": 17,
"text": "O(2^{\\sqrt{n \\log(n)}}) "
},
{
"math_id": 18,
"text": "n "
},
{
"math_id": 19,
"text": "m2^{n/2} "
},
{
"math_id": 20,
"text": "m "
}
]
| https://en.wikipedia.org/wiki?curid=77668849 |
7767038 | K-server problem | Computational problem of interest in computer science
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in computer science:
Is there a formula_0-competitive algorithm for solving the formula_0-server problem in an arbitrary metric space?
The k-server problem is a problem of theoretical computer science in the category of online algorithms, one of two abstract problems on metric spaces that are central to the theory of competitive analysis (the other being metrical task systems). In this problem, an online algorithm must control the movement of a set of "k" "servers", represented as points in a metric space, and handle "requests" that are also in the form of points in the space. As each request arrives, the algorithm must determine which server to move to the requested point. The goal of the algorithm is to keep the total distance all servers move small, relative to the total distance the servers could have moved by an optimal adversary who knows in advance the entire sequence of requests.
The problem was first posed by Mark Manasse, Lyle A. McGeoch and Daniel Sleator (1988). The most prominent open question concerning the "k"-server problem is the so-called "k"-server conjecture, also posed by Manasse et al. This conjecture states that there is an algorithm for solving the "k"-server problem in an arbitrary metric space and for any number "k" of servers that has competitive ratio exactly "k". Manasse et al. were able to prove their conjecture when "k" = 2, and for more general values of "k" for some metric spaces restricted to have exactly "k"+1 points. Chrobak and Larmore (1991) proved the conjecture for tree metrics. The special case of metrics in which all distances are equal is called the "paging problem" because it models the problem of page replacement algorithms in memory caches, and was also already known to have a "k"-competitive algorithm (Sleator and Tarjan 1985). Fiat et al. (1990) first proved that there exists an algorithm with finite competitive ratio for any constant "k" and any metric space, and finally Koutsoupias and Papadimitriou (1995) proved that the Work Function Algorithm (WFA) has competitive ratio 2"k" − 1. However, despite the efforts of many other researchers, reducing the competitive ratio to "k" or providing an improved lower bound remains open as of 2014. The most commonly believed scenario is that the Work Function Algorithm is "k"-competitive. In this direction, in 2000 Bartal and Koutsoupias showed that this is true for some special cases (if the metric space is a line, a weighted star or any metric of "k"+2 points).
The "k"-server conjecture also has a version for randomized algorithms, which asks whether there exists a randomized algorithm with competitive ratio O(log "k") in any arbitrary metric space (with at least "k" + 1 points). In 2011, a randomized algorithm with competitive bound Õ(log^2 "k" · log^3 "n") was found. In 2017, a randomized algorithm with competitive bound O(log^6 "k") was announced, but was later retracted. In 2022 it was shown that the randomized version of the conjecture is false.
Example.
To make the problem more concrete, imagine sending customer support technicians to customers when they have trouble with their equipment. In our example problem there are two technicians, Mary and Noah, serving three customers, in San Francisco, California; Washington, DC; and Baltimore, Maryland. As a "k"-server problem, the servers are the technicians, so "k" = 2 and this is a 2-server problem. Washington and Baltimore are about 35 miles apart, while San Francisco is roughly 3,000 miles away from both, and initially Mary and Noah are both in San Francisco.
Consider an algorithm for assigning servers to requests that always assigns the closest server to the request, and suppose that each weekday morning the customer in Washington needs assistance while each weekday afternoon the customer in Baltimore needs assistance, and that the customer in San Francisco never needs assistance. Then, our algorithm will assign one of the servers (say Mary) to the Washington area, after which she will always be the closest server and always be assigned to all customer requests. Thus, every day our algorithm incurs the cost of traveling between Washington and Baltimore and back, roughly 70 miles. After a year of this request pattern, the algorithm will have incurred 20,500 miles of travel: 3,000 to send Mary to the East Coast, and 17,500 for the trips between Washington and Baltimore. On the other hand, an optimal adversary who knows the future request schedule could have sent both Mary and Noah to Washington and Baltimore respectively, paying 6,000 miles of travel once but then avoiding any future travel costs. The competitive ratio of our algorithm on this input is 20,500/6,000 or approximately 3.4, and by adjusting the parameters of this example the competitive ratio of this algorithm can be made arbitrarily large.
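The closest-server strategy of this example is easy to simulate; the following Haskell sketch is illustrative only, with the metric, the point names and the distances being assumptions chosen to match the scenario above:
import Data.List (delete, minimumBy)
import Data.Ord (comparing)

type Point  = String
type Metric = Point -> Point -> Double

-- Total distance travelled by the strategy that always moves the closest server.
greedyCost :: Metric -> [Point] -> [Point] -> Double
greedyCost d servers requests = go servers requests 0
  where
    go _ [] cost = cost
    go ss (r:rs) cost
      | r `elem` ss = go ss rs cost                              -- a server is already there
      | otherwise   = let s   = minimumBy (comparing (`d` r)) ss -- closest server
                          ss' = r : delete s ss                  -- move it to the request
                      in go ss' rs (cost + d s r)

-- Illustrative metric for the example (distances in miles, assumed values).
exampleMetric :: Metric
exampleMetric a b
  | a == b                                          = 0
  | all (`elem` ["Washington", "Baltimore"]) [a, b] = 35
  | otherwise                                       = 3000

-- With both servers starting in San Francisco and requests alternating between
-- Washington and Baltimore, one server is sent east once and then shuttles back
-- and forth, reproducing the behaviour described above.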
Thus we see that always assigning the closest server can be far from optimal. On the other hand, it seems foolish for an algorithm that does not know future requests to send both of its technicians away from San Francisco, as the next request could be in that city and it would have to send someone back immediately. So it seems that it is difficult or impossible for a "k"-server algorithm to perform well relative to its adversary. However, for the 2-server problem, there exists an algorithm that always has a total travel distance of at most twice the adversary's distance.
The "k"-server conjecture states that similar solutions exist for problems with any larger number of technicians. | [
{
"math_id": 0,
"text": "k"
}
]
| https://en.wikipedia.org/wiki?curid=7767038 |
77671044 | Mill's Inequality | Mill's Inequality is a useful tail bound on Normally distributed random variables.
<templatestyles src="Math_theorem/styles.css" />
Mill's Inequality — Let formula_0. Then formula_1
The looser of the two bounds makes the exponential decay explicit. Compare this to the Chernoff bound:
formula_2 | [
{
"math_id": 0,
"text": "Z \\sim N(0,1)"
},
{
"math_id": 1,
"text": "\\operatorname{P} (|Z| > t) \n\\le \\sqrt{\\frac{2}{\\pi}} \\frac{\\exp(-t^2/2)}{t}\n \\le \\frac{\\exp(-t^2/2)}{t}"
},
{
"math_id": 2,
"text": "\\operatorname{P} (|Z| > t) \\le 2 \\exp (-t^2/2)"
}
]
| https://en.wikipedia.org/wiki?curid=77671044 |
77685882 | Santa Barbara Amorphous-15 | Silica-based ordered mesoporous material
SBA-15, an acronym for Santa Barbara Amorphous-15, is a silica-based ordered mesoporous material that was first synthesized by researchers at the University of California, Santa Barbara in 1998. This material has proved important for scientists in various fields, such as materials science, drug delivery, catalysis and fuel cells, due to its desirable properties and ease of production.
Synthesis procedure.
The procedure is a typical liquid-crystal templating route that consists of three steps: dissolution of the structure-directing agent (typically the triblock copolymer Pluronic P123) in an acidic aqueous solution; addition of the silica precursor (typically tetraethyl orthosilicate, TEOS) followed by stirring and hydrothermal aging; and removal of the organic template, usually by calcination.
Structure.
The interest in SBA-15 comes from the fact that it is mostly mesoporous (meaning the pores are in the range of 2 nm to 50 nm, according to the IUPAC definition) and that these pores have a well-defined structure: cylindrical pores in hexagonal ordering with relatively thick pore walls, which provide thermal stability.
The sorption isotherms of these materials demonstrate typical hysteresis behavior, the causes of which are still under discussion.
TEM.
The transmission electron microscopy of the sample shows the cylindrical pores but also highlights the fact that the pores of this material exhibit geometric deformations.
SAXS.
The small-angle X-ray scattering pattern shows Bragg peaks typical of the hexagonal structure of the material. The peak positions are directly related to the lattice parameter:
formula_0
where h and k are the Miller indices.
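As a small illustration (with names and units assumed for concreteness, not taken from the cited literature), the peak positions can be computed directly from this relation:
-- Peak position q_hk for a 2D hexagonal lattice of parameter a, with Miller indices h and k.
qhk :: Double -> Int -> Int -> Double
qhk a h k = 4 * pi / (a * sqrt 3) * sqrt (fromIntegral (h * h + k * k + h * k))

-- The first allowed reflections (10), (11) and (20) then appear at positions
-- in the ratio 1 : sqrt 3 : 2, which is characteristic of the hexagonal ordering.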
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\tq_{hk} = \\frac{4 \\pi}{a \\sqrt{3}} \\sqrt{h^2 + k^2 + hk}"
}
]
| https://en.wikipedia.org/wiki?curid=77685882 |
77689 | Femtosecond | One quadrillionth of a second
<templatestyles src="Template:Infobox/styles-images.css" />
A femtosecond is a unit of time in the International System of Units (SI) equal to 10−15 or <templatestyles src="Fraction/styles.css" />1⁄1 000 000 000 000 000 of a second; that is, one quadrillionth, or one millionth of one billionth, of a second. For context, a femtosecond is to a second as a second is to about 31.71 million years; a ray of light travels approximately 0.3 μm (micrometers) in 1 femtosecond, a distance comparable to the diameter of a virus. The first to make femtosecond measurements was the Egyptian Nobel Laureate Ahmed Zewail, for which he was awarded the Nobel Prize in Chemistry in 1999. Professor Zewail used lasers to measure the movement of particles at the femtosecond scale, thereby allowing chemical reactions to be observed for the first time.
The word "femtosecond" is formed by the SI prefix "femto" and the SI unit "second". Its symbol is fs.
A femtosecond is equal to 1000 attoseconds, or 1/1000 picosecond. Because the next higher SI unit is 1000 times larger, times of 10−14 and 10−13 seconds are typically expressed as tens or hundreds of femtoseconds.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\lambda\\over{c}} = {600 \\times 10^{-9}~{\\rm m} \\over 3 \\times 10^8~{\\rm m}~{\\rm s}^{-1}} = 2.0 \\times 10^{-15}~{\\rm s}"
}
]
| https://en.wikipedia.org/wiki?curid=77689 |
7768943 | Distance-regular graph | Graph property
In the mathematical field of graph theory, a distance-regular graph is a regular graph such that for any two vertices v and w, the number of vertices at distance j from v and at distance k from w depends only upon j, k, and the distance between v and w.
Some authors exclude the complete graphs and disconnected graphs from this definition.
Every distance-transitive graph is distance-regular. Indeed, distance-regular graphs were introduced as a combinatorial generalization of distance-transitive graphs, having the numerical regularity properties of the latter without necessarily having a large automorphism group.
Intersection arrays.
The intersection array of a distance-regular graph is the array formula_0 in which formula_1 is the diameter of the graph and for each formula_2, formula_3 gives the number of neighbours of formula_4 at distance formula_5 from formula_6 and formula_7 gives the number of neighbours of formula_4 at distance formula_8 from formula_6 for any pair of vertices formula_4 and formula_6 at distance formula_9. There is also the number formula_10 that gives the number of neighbours of formula_4 at distance formula_9 from formula_6. The numbers formula_11 are called the intersection numbers of the graph. They satisfy the equation formula_12 where formula_13 is the valency, i.e., the number of neighbours, of any vertex.
It turns out that a graph formula_14 of diameter formula_15 is distance-regular if and only if it has an intersection array in the preceding sense.
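As an illustration (a sketch with an assumed graph encoding, not taken from the references), the intersection numbers of a small graph can be computed directly from breadth-first distances; applied to the Petersen graph below, this recovers its intersection array {3, 2; 1, 1}:
import qualified Data.Map as M

type Graph = M.Map Int [Int]

-- Breadth-first distances from a source vertex of a connected graph.
distancesFrom :: Graph -> Int -> M.Map Int Int
distancesFrom g s = go (M.singleton s 0) [s]
  where
    go dist []        = dist
    go dist (v:queue) =
      let new = [ (u, dist M.! v + 1) | u <- g M.! v, not (M.member u dist) ]
      in go (foldr (uncurry M.insert) dist new) (queue ++ map fst new)

-- For a pair (x, y) at distance j, count the neighbours of y at distance
-- j-1, j and j+1 from x; these are the intersection numbers c_j, a_j and b_j.
intersectionNumbers :: Graph -> Int -> Int -> (Int, Int, Int)
intersectionNumbers g x y = (count (j - 1), count j, count (j + 1))
  where
    dist    = distancesFrom g x
    j       = dist M.! y
    count d = length [ z | z <- g M.! y, dist M.! z == d ]

-- The Petersen graph encoded as the Kneser graph K(5,2): vertices are the ten
-- 2-subsets of {1..5}, adjacent exactly when disjoint.
petersen :: Graph
petersen = M.fromList
  [ (i, [ j | j <- [0 .. 9], disjoint (pairs !! i) (pairs !! j) ]) | i <- [0 .. 9] ]
  where
    pairs = [ (a, b) | a <- [1 .. 5 :: Int], b <- [a + 1 .. 5] ]
    disjoint (a, b) (c, d) = a /= c && a /= d && b /= c && b /= d

-- intersectionNumbers petersen 0 7 == (1, 0, 2) and intersectionNumbers petersen 0 1 == (1, 2, 0),
-- i.e. c_1 = c_2 = 1, b_1 = 2 and, with valency b_0 = 3, the array {3, 2; 1, 1}.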
Cospectral and disconnected distance-regular graphs.
A pair of connected distance-regular graphs are cospectral if their adjacency matrices have the same spectrum. This is equivalent to their having the same intersection array.
A distance-regular graph is disconnected if and only if it is a disjoint union of cospectral distance-regular graphs.
Properties.
Suppose formula_14 is a connected distance-regular graph of valency formula_16 with intersection array formula_0. For each formula_17 let formula_18 denote the number of vertices at distance formula_9 from any given vertex and let formula_19 denote the formula_20-regular graph with adjacency matrix formula_21 formed by relating pairs of vertices of formula_14 at distance formula_9.
Spectral properties.
If formula_14 is strongly regular, then formula_34 and formula_35.
Examples.
Some first examples of distance-regular graphs include: the complete graphs, the cycle graphs, the hypercube graphs, the odd graphs (among them the Petersen graph), and the connected strongly regular graphs.
Classification of distance-regular graphs.
There are only finitely many distinct connected distance-regular graphs of any given valency formula_37.
Similarly, there are only finitely many distinct connected distance-regular graphs with any given eigenvalue multiplicity formula_38 (with the exception of the complete multipartite graphs).
Cubic distance-regular graphs.
The cubic distance-regular graphs have been completely classified.
The 13 distinct cubic distance-regular graphs are K4 (or Tetrahedral graph), K3,3, the Petersen graph, the Cubical graph, the Heawood graph, the Pappus graph, the Coxeter graph, the Tutte–Coxeter graph, the Dodecahedral graph, the Desargues graph, Tutte 12-cage, the Biggs–Smith graph, and the Foster graph.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "( b_0, b_1, \\ldots, b_{d-1}; c_1, \\ldots, c_d ) "
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "1 \\leq j \\leq d "
},
{
"math_id": 3,
"text": "b_j "
},
{
"math_id": 4,
"text": "u "
},
{
"math_id": 5,
"text": "j+1 "
},
{
"math_id": 6,
"text": "v "
},
{
"math_id": 7,
"text": "c_j "
},
{
"math_id": 8,
"text": "j - 1 "
},
{
"math_id": 9,
"text": "j "
},
{
"math_id": 10,
"text": "a_j"
},
{
"math_id": 11,
"text": "a_j, b_j, c_j"
},
{
"math_id": 12,
"text": "a_j + b_j + c_j = k,"
},
{
"math_id": 13,
"text": "k = b_0"
},
{
"math_id": 14,
"text": "G "
},
{
"math_id": 15,
"text": "d "
},
{
"math_id": 16,
"text": "k"
},
{
"math_id": 17,
"text": "0 \\leq j \\leq d, "
},
{
"math_id": 18,
"text": "k_j"
},
{
"math_id": 19,
"text": "G_{j} "
},
{
"math_id": 20,
"text": "k_{j} "
},
{
"math_id": 21,
"text": "A_j "
},
{
"math_id": 22,
"text": "\\frac{k_{j+1}}{k_{j}} = \\frac{b_{j}}{c_{j+1}} "
},
{
"math_id": 23,
"text": "0 \\leq j < d "
},
{
"math_id": 24,
"text": "b_0 > b_1 \\geq \\cdots \\geq b_{d-1} > 0 "
},
{
"math_id": 25,
"text": "1 = c_1 \\leq \\cdots \\leq c_d \\leq b_0 "
},
{
"math_id": 26,
"text": "d + 1 "
},
{
"math_id": 27,
"text": "k,"
},
{
"math_id": 28,
"text": "-k"
},
{
"math_id": 29,
"text": "G"
},
{
"math_id": 30,
"text": "k \\leq \\frac{1}{2} (m - 1)(m + 2)"
},
{
"math_id": 31,
"text": "m > 1"
},
{
"math_id": 32,
"text": "G,"
},
{
"math_id": 33,
"text": "d \\leq 3m - 4"
},
{
"math_id": 34,
"text": "n \\leq 4m - 1"
},
{
"math_id": 35,
"text": "k \\leq 2m - 1"
},
{
"math_id": 36,
"text": "2"
},
{
"math_id": 37,
"text": "k > 2"
},
{
"math_id": 38,
"text": "m > 2"
}
]
| https://en.wikipedia.org/wiki?curid=7768943 |
7769842 | Conference graph | Special case of a strongly regular graph
In the mathematical area of graph theory, a conference graph is a strongly regular graph with parameters "v", "k" = ("v" − 1)/2, λ = ("v" − 5)/4, and μ = ("v" − 1)/4. It is the graph associated with a symmetric conference matrix, and consequently its order "v" must be 1 (modulo 4) and a sum of two squares.
Conference graphs are known to exist for all small values of "v" allowed by the restrictions, e.g., "v" = 5, 9, 13, 17, 25, 29, and (the Paley graphs) for all prime powers congruent to 1 (modulo 4). However, there are many values of "v" that are allowed, for which the existence of a conference graph is unknown.
The eigenvalues of a conference graph need not be integers, unlike those of other strongly regular graphs. If the graph is connected, the eigenvalues are "k" with multiplicity 1, and two other eigenvalues,
formula_0
each with multiplicity ("v" − 1)/2. | [
{
"math_id": 0,
"text": "\\frac{-1 \\pm \\sqrt v}{2} , "
}
]
| https://en.wikipedia.org/wiki?curid=7769842 |
7770047 | Conference matrix | Matrix in math with special properties
In mathematics, a conference matrix (also called a C-matrix) is a square matrix "C" with 0 on the diagonal and +1 and −1 off the diagonal, such that "C"T"C" is a multiple of the identity matrix "I". Thus, if the matrix has order "n", "C"T"C" = ("n"−1)"I".
Some authors use a more general definition, which requires there to be a single 0 in each row and column but not necessarily on the diagonal.
Conference matrices first arose in connection with a problem in telephony. They were first described by Vitold Belevitch, who also gave them their name. Belevitch was interested in constructing ideal telephone conference networks from ideal transformers and discovered that such networks were represented by conference matrices, hence the name. Other applications are in statistics, and another is in elliptic geometry.
For "n" > 1, there are two kinds of conference matrix. Let us normalize "C" by, first (if the more general definition is used), rearranging the rows so that all the zeros are on the diagonal, and then negating any row or column whose first entry is negative. (These operations do not change whether a matrix is a conference matrix.)
Thus, a normalized conference matrix has all 1's in its first row and column, except for a 0 in the top left corner, and is 0 on the diagonal. Let "S" be the matrix that remains when the first row and column of "C" are removed. Then either "n" is evenly even (a multiple of 4) and "S" is skew-symmetric (as is the normalized "C" if its first row is negated), or "n" is oddly even (congruent to 2 modulo 4) and "S" is symmetric (as is the normalized "C").
Symmetric conference matrices.
If "C" is a symmetric conference matrix of order "n" > 1, then not only must "n" be congruent to 2 mod 4 but also "n" − 1 must be a sum of two squares; there is a clever proof by elementary matrix theory in van Lint and Seidel. "n" − 1 will always be the sum of two squares if it is a prime power.
Given a symmetric conference matrix, the matrix "S" can be viewed as the Seidel adjacency matrix of a graph. The graph has "n" − 1 vertices, corresponding to the rows and columns of "S", and two vertices are adjacent if the corresponding entry in "S" is negative. This graph is strongly regular of the type called (after the matrix) a conference graph.
The existence of conference matrices of orders "n" allowed by the above restrictions is known only for some values of "n". For instance, if "n" = "q" + 1 where "q" is a prime power congruent to 1 mod 4, then the Paley graphs provide examples of symmetric conference matrices of order "n", by taking "S" to be the Seidel matrix of the Paley graph.
The first few possible orders of a symmetric conference matrix are "n" = 2, 6, 10, 14, 18, (not 22, since 21 is not a sum of two squares), 26, 30, (not 34 since 33 is not a sum of two squares), 38, 42, 46, 50, 54, (not 58), 62 (sequence in the OEIS); for every one of these, it is known that a symmetric conference matrix of that order exists. Order 66 seems to be an open problem.
Example.
The essentially unique conference matrix of order 6 is given by
formula_0.
All other conference matrices of order 6 are obtained from this one by flipping the signs of some row and/or column (and by taking permutations of rows and/or columns, according to the definition in use).
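As a quick sanity check (illustrative code only, not drawn from the sources), one can verify numerically that this matrix satisfies "C"T"C" = 5"I":
import Data.List (transpose)

-- The order-6 conference matrix displayed above.
c6 :: [[Int]]
c6 = [ [ 0,  1,  1,  1,  1,  1]
     , [ 1,  0,  1, -1, -1,  1]
     , [ 1,  1,  0,  1, -1, -1]
     , [ 1, -1,  1,  0,  1, -1]
     , [ 1, -1, -1,  1,  0,  1]
     , [ 1,  1, -1, -1,  1,  0] ]

-- Naive matrix product on row lists.
matMul :: [[Int]] -> [[Int]] -> [[Int]]
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

-- Evaluates to True: the product C^T C is 5 times the 6-by-6 identity matrix.
isConference :: Bool
isConference = matMul (transpose c6) c6
            == [ [ if i == j then 5 else 0 | j <- [0 .. 5] ] | i <- [0 .. 5 :: Int] ]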
Skew-symmetric conference matrices.
Skew-symmetric matrices can also be produced by the Paley construction. Let "q" be a prime power with residue 3 mod 4. Then there is a Paley digraph of order "q" which leads to a skew-symmetric conference matrix of order "n" = "q" + 1. The matrix is obtained by taking for "S" the "q" × "q" matrix that has a +1 in position ("i", "j") and −1 in position ("j", "i") if there is an arc of the digraph from "i" to "j", and zero diagonal. Then "C" constructed as above from "S", but with the first row all negative, is a skew-symmetric conference matrix.
This construction solves only a small part of the problem of deciding for which evenly even numbers "n" there exist skew-symmetric conference matrices of order "n".
Generalizations.
Sometimes a conference matrix of order "n" is just defined as a weighing matrix of the form "W"("n, n"−1), where
"W"("n,w") is said to be of weight "w" > 0 and order "n" if it is a square matrix of size "n" with entries from {−1, 0, +1} satisfying "WW"T = "wI". Using this definition, the zero element is no longer required to be on the diagonal, but it is easy to see that there must still be exactly one zero element in each row and column. For example, the matrix
formula_1
would satisfy this relaxed definition, but not the more strict one requiring the zero elements to be on the diagonal.
A conference design is a generalization of conference matrices to rectangular (non-square) matrices. A conference design C is an formula_2 matrix, with entries from {−1, 0, +1} satisfying formula_3, where
formula_4 is the formula_5 identity matrix, and there is at most one zero in each row.
The foldover designs of conference designs can be used as definitive screening designs.
Telephone conference circuits.
Belevitch obtained complete solutions for conference matrices for all values of "n" up to 38 and provided circuits for some of the smaller matrices. An "ideal conference network" is one where the loss of signal is entirely due to the signal being split between multiple conference subscriber ports. That is, there are no dissipation losses within the network. The network must contain ideal transformers only and no resistances. An "n"-port ideal conference network exists if and only if there exists a conference matrix of order "n". For instance, a 3-port conference network can be constructed with the well-known hybrid transformer circuit used for 2-wire to 4-wire conversion in telephone handsets and line repeaters. However, there is no order 3 conference matrix and this circuit does not produce an "ideal" conference network. A resistance is needed for matching which dissipates signal, or else signal is lost through mismatch.
As mentioned above, a necessary condition for a conference matrix to exist is that "n"−1 must be the sum of two squares. Where there is more than one possible sum of two squares for "n"−1 there will exist multiple essentially different solutions for the corresponding conference network. This situation occurs at "n" of 26 and 66. The networks are particularly simple when "n"−1 is a perfect square ("n" = 2, 10, 26, ...).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{pmatrix}0 &+1 &+1 &+1 &+1& +1\\\\+1& 0 &+1 &-1 &-1& +1\\\\+1& +1& 0 &+1 &-1& -1\\\\+1& -1& +1& 0 &+1& -1\\\\+1& -1& -1& +1& 0& +1\\\\+1& +1& -1& -1& +1& 0 \\end{pmatrix}"
},
{
"math_id": 1,
"text": "\\begin{pmatrix}\n1& 0& 1& 1\\\\\n0& -1& -1& 1\\\\\n1& -1& 0& -1\\\\\n1& 1& -1& 0\n\\end{pmatrix}"
},
{
"math_id": 2,
"text": "N \\times k"
},
{
"math_id": 3,
"text": "W^{\\mathrm{T}} W = (N-1)I_k"
},
{
"math_id": 4,
"text": "I_k"
},
{
"math_id": 5,
"text": "k \\times k"
}
]
| https://en.wikipedia.org/wiki?curid=7770047 |
7770362 | Retiming | Retiming is the technique of moving the structural location of latches or registers in a digital circuit to improve its performance, area, and/or power characteristics in such a way that preserves its functional behavior at its outputs. Retiming was first described by Charles E. Leiserson and James B. Saxe in 1983.
The technique uses a directed graph where the vertices represent asynchronous combinational blocks and the directed edges represent a series of registers or latches (the number of registers or latches can be zero). Each vertex has a value corresponding to the delay through the combinational circuit it represents. Once this graph is constructed, one can attempt to optimize the circuit by pushing registers from output to input and vice versa - much like bubble pushing. Two operations can be used - deleting a register from each input of a vertex while adding a register to all outputs, and conversely adding a register to each input of a vertex and deleting a register from all outputs. In all cases, if the rules are followed, the circuit will have the same functional behavior as it did before retiming.
Formal description.
The initial formulation of the retiming problem as described by Leiserson and Saxe is as follows. Given a directed graph formula_0 whose vertices represent logic gates or combinational delay elements in a circuit, assume there is a directed edge formula_1 between two elements that are connected directly or through one or more registers. Let the "weight" of each edge formula_2 be the number of registers present along edge formula_3 in the initial circuit. Let formula_4 be the propagation delay through vertex formula_5. The goal in retiming is to compute an integer "lag" value formula_6 for each vertex such that the retimed weight formula_7 of every edge is non-negative. There is a proof that this preserves the output functionality.
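As a small illustration (with assumed data types, not code from the original formulation), a candidate lag function can be checked for legality by computing the retimed weights directly from this formula:
import qualified Data.Map as M

type Vertex = Int
type Edge   = (Vertex, Vertex)   -- a directed edge (u, v)

-- w_r(e) = w(e) + r(v) - r(u) for every edge e = (u, v).
retimedWeights :: M.Map Edge Int -> M.Map Vertex Int -> M.Map Edge Int
retimedWeights w r = M.mapWithKey (\(u, v) we -> we + r M.! v - r M.! u) w

-- A retiming is legal when every retimed edge weight is non-negative.
isLegalRetiming :: M.Map Edge Int -> M.Map Vertex Int -> Bool
isLegalRetiming w r = all (>= 0) (M.elems (retimedWeights w r))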
Minimizing the clock period with network flow.
The most common use of retiming is to minimize the clock period. A simple technique to optimize the clock period is to search for the minimum feasible period (e.g. using binary search).
The feasibility of a clock period formula_8 can be checked in one of several ways. One is a linear program that is feasible if and only if formula_8 is a feasible clock period. Let formula_9 be the minimum number of registers along any path from formula_10 to formula_5 (if such a path exists), and let formula_11 be the maximum delay along any path from formula_10 to formula_5 with W(u,v) registers. The dual of this program is a minimum cost circulation problem, which can be solved efficiently as a network problem. The limitations of this approach arise from the enumeration and size of the formula_12 and formula_13 matrices.
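The feasibility test can be viewed as a system of difference constraints of the form r(u) − r(v) ≤ c; the sketch below (illustrative only, with an assumed encoding of the constraints) checks such a system by Bellman–Ford-style relaxation, which succeeds exactly when the corresponding constraint graph has no negative cycle:
-- Each constraint (u, v, c) encodes r(u) - r(v) <= c; vertices are 0 .. n-1.
-- Returns a satisfying lag assignment if the system is feasible, Nothing otherwise.
solveDifferenceConstraints :: Int -> [(Int, Int, Int)] -> Maybe [Int]
solveDifferenceConstraints n cs = go n (replicate n 0)
  where
    relax r = [ minimum (r !! u : [ r !! v + c | (u', v, c) <- cs, u' == u ])
              | u <- [0 .. n - 1] ]
    go 0 r | relax r == r = Just r   -- stable: r satisfies every constraint
           | otherwise    = Nothing  -- still improving after n rounds: negative cycle
    go k r = go (k - 1) (relax r)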
Minimizing the clock period with MILP.
Alternatively, feasibility of a clock period formula_8 can be expressed as a mixed-integer linear program (MILP). A solution will exist and a valid lag function formula_6 will be returned if and only if the period is feasible.
Other formulations and extensions.
Alternate formulations allow the minimization of the register count and the minimization of the register count under a delay constraint. The initial paper includes extensions that allow the consideration of fan-out sharing and a more general delay model. Subsequent work has addressed the inclusion of register delays, load-dependent delay models, and hold constraints.
Problems.
Retiming has found industrial use, albeit sporadic. Its primary drawback is that the state encoding of the circuit is destroyed, making debugging, testing, and verification substantially more difficult. Some retimings may also require complicated initialization logic to have the circuit start in an identical initial state. Finally, the changes in the circuit's topology have consequences in other logical and physical synthesis steps that make design closure difficult.
Alternatives.
Clock skew scheduling is a related technique for optimizing sequential circuits. Whereas retiming relocates the structural position of the registers, clock skew scheduling moves their temporal position by scheduling the arrival time of the clock signals. The lower bound of the achievable minimum clock period of both techniques is the maximum mean cycle time (i.e. the total combinational delay along any cycle divided by the number of registers along it).
{
"math_id": 0,
"text": "G:=(V,E)"
},
{
"math_id": 1,
"text": "e:=(u,v)"
},
{
"math_id": 2,
"text": "w(e)"
},
{
"math_id": 3,
"text": "e"
},
{
"math_id": 4,
"text": "d(v)"
},
{
"math_id": 5,
"text": "v"
},
{
"math_id": 6,
"text": "r(v)"
},
{
"math_id": 7,
"text": "w_r(e):=w(e)+r(v)-r(u)"
},
{
"math_id": 8,
"text": "T"
},
{
"math_id": 9,
"text": "W(u,v)"
},
{
"math_id": 10,
"text": "u"
},
{
"math_id": 11,
"text": "D(u,v)"
},
{
"math_id": 12,
"text": "W"
},
{
"math_id": 13,
"text": "D"
}
]
| https://en.wikipedia.org/wiki?curid=7770362 |