id | title | text | formulas | url
---|---|---|---|---|
56512
|
Capillary
|
Smallest type of blood vessel
A capillary is a small blood vessel, from 5 to 10 micrometres in diameter, and is part of the microcirculation system. Capillaries are microvessels and the smallest blood vessels in the body. They are composed of only the tunica intima (the innermost layer of an artery or vein), consisting of a thin wall of simple squamous endothelial cells. They are the site of the exchange of many substances from the surrounding interstitial fluid, and they convey blood from the smallest branches of the arteries (arterioles) to those of the veins (venules). Other substances which cross capillaries include water, oxygen, carbon dioxide, urea, glucose, uric acid, lactic acid and creatinine. Lymph capillaries connect with larger lymph vessels to drain lymphatic fluid collected in microcirculation.
Etymology.
"Capillary" comes from the Latin word , meaning "of or resembling hair", with use in English beginning in the mid-17th century. The meaning stems from the tiny, hairlike diameter of a capillary. While capillary is usually used as a noun, the word also is used as an adjective, as in "capillary action", in which a liquid flows without influence of external forces, such as gravity.
Structure.
Blood flows from the heart through arteries, which branch and narrow into arterioles, and then branch further into capillaries where nutrients and wastes are exchanged. The capillaries then join and widen to become venules, which in turn widen and converge to become veins, which then return blood back to the heart through the venae cavae. In the mesentery, metarterioles form an additional stage between arterioles and capillaries.
Individual capillaries are part of the capillary bed, an interweaving network of capillaries supplying tissues and organs. The more metabolically active a tissue is, the more capillaries are required to supply nutrients and carry away products of metabolism. There are two types of capillaries: true capillaries, which branch from arterioles and provide exchange between tissue and the capillary blood, and sinusoids, a type of open-pore capillary found in the liver, bone marrow, anterior pituitary gland, and brain circumventricular organs. Capillaries and sinusoids are short vessels that directly connect the arterioles and venules at opposite ends of the beds. Metarterioles are found primarily in the mesenteric microcirculation.
Lymphatic capillaries are slightly larger in diameter than blood capillaries, and have closed ends (unlike the blood capillaries open at one end to the arterioles and open at the other end to the venules). This structure permits interstitial fluid to flow into them but not out. Lymph capillaries have a greater internal oncotic pressure than blood capillaries, due to the greater concentration of plasma proteins in the lymph.
Types.
Blood capillaries are categorized into three types: continuous, fenestrated, and sinusoidal (also known as discontinuous).
Continuous.
Continuous capillaries are continuous in the sense that the endothelial cells provide an uninterrupted lining, and they only allow smaller molecules, such as water and ions, to pass through their intercellular clefts. Lipid-soluble molecules can passively diffuse through the endothelial cell membranes along concentration gradients. Continuous capillaries can be further divided into two subtypes:
# Those with numerous transport vesicles, which are found primarily in skeletal muscles, fingers, gonads, and skin.
# Those with few vesicles, which are primarily found in the central nervous system. These capillaries are a constituent of the blood–brain barrier.
Fenestrated.
Fenestrated capillaries have pores known as "fenestrae" (Latin for "windows") in the endothelial cells that are 60–80 nanometres (nm) in diameter. They are spanned by a diaphragm of radially oriented fibrils that allows small molecules and limited amounts of protein to diffuse. In the renal glomerulus there are cells with no diaphragms, called podocyte foot processes or pedicels, which have slit pores with a function analogous to the diaphragm of the capillaries. Both of these types of blood vessels have continuous basal laminae and are primarily located in the endocrine glands, intestines, pancreas, and the glomeruli of the kidney.
Sinusoidal.
Sinusoidal capillaries or discontinuous capillaries are a special type of open-pore capillary, also known as a "sinusoid", that have wider fenestrations that are 30–40 micrometres (μm) in diameter, with wider openings in the endothelium. Fenestrated capillaries have diaphragms that cover the pores whereas sinusoids lack a diaphragm and just have an open pore. These types of blood vessels allow red and white blood cells (7.5 μm – 25 μm diameter) and various serum proteins to pass, aided by a discontinuous basal lamina. These capillaries lack pinocytotic vesicles, and therefore use gaps present in cell junctions to permit transfer between endothelial cells, and hence across the membrane. Sinusoids are irregular spaces filled with blood and are mainly found in the liver, bone marrow, spleen, and brain circumventricular organs.
Development.
During early embryonic development, new capillaries are formed through vasculogenesis, the process of blood vessel formation that occurs through a novel production of endothelial cells that then form vascular tubes. The term "angiogenesis" denotes the formation of new capillaries from pre-existing blood vessels and already-present endothelium which divides. The small capillaries lengthen and interconnect to establish a network of vessels, a primitive vascular network that vascularises the entire yolk sac, connecting stalk, and chorionic villi.
Function.
The capillary wall performs an important function by allowing nutrients and waste substances to pass across it. Molecules larger than 3 nm such as albumin and other large proteins pass through transcellular transport carried inside vesicles, a process which requires them to go through the cells that form the wall. Molecules smaller than 3 nm such as water and gases cross the capillary wall through the space between cells in a process known as paracellular transport. These transport mechanisms allow bidirectional exchange of substances depending on osmotic gradients. Capillaries that form part of the blood–brain barrier only allow for transcellular transport as tight junctions between endothelial cells seal the paracellular space.
Capillary beds may control their blood flow via autoregulation. This allows an organ to maintain constant flow despite a change in central blood pressure. This is achieved by myogenic response, and in the kidney by tubuloglomerular feedback. When blood pressure increases, arterioles are stretched and subsequently constrict (a phenomenon known as the Bayliss effect) to counteract the increased tendency for high pressure to increase blood flow.
In the lungs, special mechanisms have been adapted to meet the needs of increased necessity of blood flow during exercise. When the heart rate increases and more blood must flow through the lungs, capillaries are recruited and are also distended to make room for increased blood flow. This allows blood flow to increase while resistance decreases. Extreme exercise can make capillaries vulnerable, with a breaking point similar to that of collagen.
Capillary permeability can be increased by the release of certain cytokines, anaphylatoxins, or other mediators (such as leukotrienes, prostaglandins, histamine, bradykinin, etc.) highly influenced by the immune system.
Starling equation.
The transport mechanisms can be further quantified by the Starling equation. The Starling equation defines the forces across a semipermeable membrane and allows calculation of the net flux:
formula_0
where:
formula_1 is the net driving force,
formula_2 is the proportionality constant, and
formula_3 is the net fluid movement between compartments.
By convention, outward force is defined as positive, and inward force is defined as negative. The solution to the equation is known as the net filtration or net fluid movement ("J"_v). If positive, fluid will tend to "leave" the capillary (filtration). If negative, fluid will tend to "enter" the capillary (absorption). This equation has a number of important physiologic implications, especially when pathologic processes grossly alter one or more of the variables.
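As a worked illustration of how the equation is used, the sketch below evaluates formula_0 for one set of pressures; the numerical values (and the use of Python) are illustrative assumptions, not physiological reference data.

```python
# Illustrative evaluation of the Starling equation
# J_v = K_f * [(P_c - P_i) - sigma * (pi_c - pi_i)]
# All numbers below are hypothetical example values, not reference data.

def starling_flux(K_f, P_c, P_i, sigma, pi_c, pi_i):
    """Net fluid movement J_v; positive = filtration (out of the capillary),
    negative = absorption (into the capillary)."""
    net_driving_force = (P_c - P_i) - sigma * (pi_c - pi_i)
    return K_f * net_driving_force

# Hypothetical arteriolar-end values (mmHg) with an arbitrary K_f:
J_v = starling_flux(K_f=1.0, P_c=35.0, P_i=-2.0, sigma=0.9, pi_c=28.0, pi_i=8.0)
print(f"J_v = {J_v:+.1f}  ->  {'filtration' if J_v > 0 else 'absorption'}")
```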
According to Starling's equation, the movement of fluid depends on six variables:
Clinical significance.
Disorders of capillary formation as a developmental defect or acquired disorder are a feature in many common and serious disorders. Within a wide range of cellular factors and cytokines, issues with normal genetic expression and bioactivity of the vascular growth and permeability factor vascular endothelial growth factor (VEGF) appear to play a major role in many of the disorders. Cellular factors include a reduced number and function of bone-marrow derived endothelial progenitor cells, and a reduced ability of those cells to form blood vessels.
Therapeutics.
Major diseases where altering capillary formation could be helpful include conditions where there is excessive or abnormal capillary formation such as cancer and disorders harming eyesight; and medical conditions in which there is reduced capillary formation either for familial or genetic reasons, or as an acquired problem.
Blood sampling.
Capillary blood sampling can be used to test for blood glucose (such as in blood glucose monitoring), hemoglobin, pH and lactate. It is generally performed by creating a small cut using a blood lancet, followed by sampling by capillary action on the cut with a test strip or small pipette. It is also used to test for sexually transmitted infections that are present in the blood stream, such as HIV, syphilis, and hepatitis B and C, where a finger is lanced and a small amount of blood is sampled into a test tube.
History.
William Harvey did not explicitly predict the existence of capillaries, but he saw the need for some sort of connection between the arterial and venous systems. In 1653, he wrote, "...the blood doth enter into every member through the arteries, and does return by the veins, and that the veins are the vessels and ways by which the blood is returned to the heart itself; and that the blood in the members and extremities does pass from the arteries into the veins (either mediately by an anastomosis, or immediately through the porosities of the flesh, or both ways) as before it did in the heart and thorax out of the veins, into the arteries..."
Marcello Malpighi was the first to observe directly and correctly describe capillaries, discovering them in a frog's lung 8 years later, in 1661.
August Krogh discovered how capillaries provide nutrients to animal tissue. For his work he was awarded the 1920 Nobel Prize in Physiology or Medicine.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "J_v = K_f [(P_c - P_i) - \\sigma(\\pi_c - \\pi_i)],"
},
{
"math_id": 1,
"text": " (P_c - P_i) - \\sigma(\\pi_c - \\pi_i) "
},
{
"math_id": 2,
"text": " K_f "
},
{
"math_id": 3,
"text": " J_v "
}
] |
https://en.wikipedia.org/wiki?curid=56512
|
565214
|
7000 (number)
|
Natural number
7000 (seven thousand) is the natural number following 6999 and preceding 7001.
Selected numbers in the range 7001–7999.
Prime numbers.
There are 107 prime numbers between 7000 and 8000:
7001, 7013, 7019, 7027, 7039, 7043, 7057, 7069, 7079, 7103, 7109, 7121, 7127, 7129, 7151, 7159, 7177, 7187, 7193, 7207, 7211, 7213, 7219, 7229, 7237, 7243, 7247, 7253, 7283, 7297, 7307, 7309, 7321, 7331, 7333, 7349, 7351, 7369, 7393, 7411, 7417, 7433, 7451, 7457, 7459, 7477, 7481, 7487, 7489, 7499, 7507, 7517, 7523, 7529, 7537, 7541, 7547, 7549, 7559, 7561, 7573, 7577, 7583, 7589, 7591, 7603, 7607, 7621, 7639, 7643, 7649, 7669, 7673, 7681, 7687, 7691, 7699, 7703, 7717, 7723, 7727, 7741, 7753, 7757, 7759, 7789, 7793, 7817, 7823, 7829, 7841, 7853, 7867, 7873, 7877, 7879, 7883, 7901, 7907, 7919, 7927, 7933, 7937, 7949, 7951, 7963, 7993
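This count can be reproduced with a short sieve; the snippet below is only an illustrative check.

```python
# Count and list the primes between 7000 and 8000 with a Sieve of Eratosthenes.
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = [p for p in primes_up_to(8000) if 7000 < p < 8000]
print(len(primes))                     # 107
print(primes[:5], "...", primes[-5:])  # 7001 ... 7993
```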
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sum_{k=1}^{n}n^{floor(\\frac{n}{k})-1}"
}
] |
https://en.wikipedia.org/wiki?curid=565214
|
56525552
|
Elementary flow
|
In the larger context of the Navier-Stokes equations (and especially in the context of potential theory), elementary flows are basic flows that can be combined, using various techniques, to construct more complex flows. In this article the term "flow" is used interchangeably with the term "solution" due to historical reasons.
More complex solutions can be created, for example, by superposition, by topological techniques, or by treating elementary flows as local solutions on a certain neighborhood, subdomain or boundary layer that are then patched together. Elementary flows can be considered the basic building blocks (fundamental solutions, local solutions and solitons) of the different types of equations derived from the Navier-Stokes equations. Some of the flows reflect specific constraints such as incompressible or irrotational flows, or both, as in the case of potential flow, and some of the flows may be limited to the case of two dimensions.
Due to the relationship between fluid dynamics and field theory, elementary flows are relevant not only to aerodynamics but to all of field theory in general. To put it in perspective, boundary layers can be interpreted as topological defects on generic manifolds, and by considering fluid dynamics analogies and limit cases in electromagnetism, quantum mechanics and general relativity one can see how all these solutions are at the core of recent developments in theoretical physics such as the AdS/CFT duality, the SYK model, the physics of nematic liquids, strongly correlated systems and even quark-gluon plasmas.
Two-dimensional uniform flow.
For steady-state, spatially uniform flow of a fluid in the xy plane, the velocity vector is
formula_0
where
formula_1 is the absolute magnitude of the velocity (i.e., formula_2);
formula_3 is the angle the velocity vector makes with the positive x axis (formula_3 is positive for angles measured in a counterclockwise sense from the positive x axis); and
formula_4 and formula_5 are the unit basis vectors of the xy coordinate system.
Because this flow is incompressible (i.e., formula_6) and two-dimensional, its velocity can be expressed in terms of a stream function, formula_7:
formula_8
formula_9
where
formula_10
and formula_11 is a constant.
In cylindrical coordinates:
formula_12
formula_13
and
formula_14
This flow is irrotational (i.e., formula_15) so its velocity can be expressed in terms of a potential function, formula_16:
formula_17
formula_18
where
formula_19
and formula_20 is a constant.
In cylindrical coordinates
formula_21
formula_22
formula_23
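As a consistency check of the expressions above, the following sketch differentiates the stream function and the potential symbolically and recovers the same uniform velocity components both ways (a minimal illustration using sympy).

```python
import sympy as sp

x, y, v0, theta0, psi0, phi0 = sp.symbols('x y v_0 theta_0 psi_0 phi_0', real=True)

# Stream function and potential of a two-dimensional uniform flow (Cartesian form).
psi = psi0 - v0 * sp.sin(theta0) * x + v0 * sp.cos(theta0) * y
phi = phi0 - v0 * sp.cos(theta0) * x - v0 * sp.sin(theta0) * y

# v_x = d(psi)/dy = -d(phi)/dx ,  v_y = -d(psi)/dx = -d(phi)/dy
vx_from_psi, vy_from_psi = sp.diff(psi, y), -sp.diff(psi, x)
vx_from_phi, vy_from_phi = -sp.diff(phi, x), -sp.diff(phi, y)

print(vx_from_psi, vy_from_psi)                      # v_0*cos(theta_0)  v_0*sin(theta_0)
print(sp.simplify(vx_from_psi - vx_from_phi) == 0)   # True
print(sp.simplify(vy_from_psi - vy_from_phi) == 0)   # True
```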
Two-dimensional line source.
The case of a vertical line emitting at a fixed rate a constant quantity of fluid Q per unit length is a line source. The problem has a cylindrical symmetry and can be treated in two dimensions on the orthogonal plane.
Line sources and line sinks (below) are important elementary flows because they play the role of the monopole for incompressible fluids (which can also be considered examples of solenoidal fields, i.e. divergence-free fields). Generic flow patterns can also be decomposed in terms of multipole expansions, in the same manner as for electric and magnetic fields where the monopole is essentially the first non-trivial (e.g. constant) term of the expansion.
This flow pattern is also both irrotational and incompressible.
This is characterized by a cylindrical symmetry:
formula_24
Where the total outgoing flux is constant
formula_25
Therefore,
formula_26
This is derived from a stream function
formula_27
or from a potential function
formula_28
Two-dimensional line sink.
The case of a vertical line absorbing at a fixed rate a constant quantity of fluid "Q" per unit length is a line sink. Everything is the same as in the case of a line source apart from the negative sign.
formula_29
This is derived from a stream function
formula_30
or from a potential function
formula_31
Given that the two results are the same apart from a minus sign, line sources and line sinks can be treated transparently with the same stream and potential functions, permitting "Q" to assume both positive and negative values and absorbing the minus sign into the definition of "Q".
Two-dimensional doublet or dipole line source.
If we consider a line source and a line sink at a distance d we can reuse the results above and the stream function will be
formula_32
The last approximation is to the first order in d.
Given
formula_33
It remains
formula_34
The velocity is then
formula_35
formula_36
And the potential instead
formula_37
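A small numerical check (illustrative, using numpy) that superposing a line source and a line sink separated by a small distance d reproduces the doublet stream function above; the strength, separation and field point below are arbitrary example values.

```python
import numpy as np

Q, d, theta0 = 2.0, 1e-4, 0.6     # source strength, small separation, doublet axis angle

def psi_source(x, y, x0=0.0, y0=0.0):
    """Stream function of a 2-D line source of strength Q located at (x0, y0)."""
    return -Q / (2 * np.pi) * np.arctan2(y - y0, x - x0)

# Field point in polar form
r, theta = 1.3, 1.1
x, y = r * np.cos(theta), r * np.sin(theta)

# Source at +d/2 along the doublet axis, sink (negative source) at -d/2
dx, dy = d / 2 * np.cos(theta0), d / 2 * np.sin(theta0)
psi_pair = psi_source(x, y, +dx, +dy) - psi_source(x, y, -dx, -dy)

# Closed-form doublet limit quoted above
psi_doublet = -Q * d / (2 * np.pi) * np.sin(theta - theta0) / r

print(psi_pair, psi_doublet)   # agree to first order in d
```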
Two-dimensional vortex line.
This is the case of a vortex filament rotating at constant speed; there is a cylindrical symmetry and the problem can be solved in the orthogonal plane.
Dual to the case above of line sources, vortex lines play the role of monopoles for irrotational flows.
In this case too, the flow is both irrotational and incompressible and therefore a case of potential flow.
This is characterized by a cylindrical symmetry:
formula_38
Where the total circulation is constant for every closed line around the central vortex
formula_39
and is zero for any line not including the vortex.
Therefore,
formula_40
This is derived from a stream function
formula_41
or from a potential function
formula_42
This is dual to the previous case of a line source.
Generic two-dimensional potential flow.
Given an incompressible two-dimensional flow which is also irrotational we have:
formula_43
which in cylindrical coordinates is
formula_44
We look for a solution with separated variables:
formula_45
which gives
formula_46
Given that the left part depends only on "r" and the right part depends only on formula_47, the two parts must be equal to a constant independent of "r" and formula_47. The constant must be positive.
Therefore,
formula_48
formula_49
The solution to the second equation is a linear combination of formula_50 and formula_51.
In order to have a single-valued velocity (and also a single-valued stream function), "m" must be a positive integer.
Therefore the most generic solution is given by
formula_52
The potential is instead given by
formula_53
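As an illustration, the snippet below verifies symbolically that the building blocks of this general solution, namely ln r, r^m sin(m(θ − θ_m)) and r^(−m) sin(m(θ − θ_m)), satisfy Laplace's equation in polar coordinates (a minimal sympy sketch).

```python
import sympy as sp

r, theta, theta_m = sp.symbols('r theta theta_m', positive=True)
m = sp.symbols('m', positive=True, integer=True)

def polar_laplacian(f):
    """Laplacian in plane polar coordinates: (1/r) d/dr(r df/dr) + (1/r^2) d^2f/dtheta^2."""
    return sp.diff(r * sp.diff(f, r), r) / r + sp.diff(f, theta, 2) / r**2

modes = [sp.log(r),
         r**m * sp.sin(m * (theta - theta_m)),
         r**(-m) * sp.sin(m * (theta - theta_m))]

print([sp.simplify(polar_laplacian(f)) for f in modes])   # [0, 0, 0]
```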
|
[
{
"math_id": 0,
"text": "\\mathbf{v} = v_0 \\cos(\\theta_0)\\, \\mathbf{e}_x +v_0 \\sin(\\theta_0)\\, \\mathbf{e}_y "
},
{
"math_id": 1,
"text": "v_0"
},
{
"math_id": 2,
"text": "v_0 = |\\mathbf{v}|"
},
{
"math_id": 3,
"text": "\\theta_0"
},
{
"math_id": 4,
"text": "\\mathbf{e}_x"
},
{
"math_id": 5,
"text": "\\mathbf{e}_y"
},
{
"math_id": 6,
"text": "\\nabla \\cdot \\mathbf{v} = 0"
},
{
"math_id": 7,
"text": "\\psi"
},
{
"math_id": 8,
"text": "v_x = \\frac {\\partial \\psi} {\\partial y}"
},
{
"math_id": 9,
"text": "v_y = - \\frac {\\partial \\psi} {\\partial x}"
},
{
"math_id": 10,
"text": "\\psi = \\psi_0 - v_0 \\sin (\\theta_0)\\, x + v_0 \\cos (\\theta_0)\\, y"
},
{
"math_id": 11,
"text": "\\psi_0"
},
{
"math_id": 12,
"text": "v_r = - \\frac 1 r \\frac{\\partial \\psi} {\\partial \\theta}"
},
{
"math_id": 13,
"text": "v_\\theta = \\frac{\\partial \\psi} {\\partial r}"
},
{
"math_id": 14,
"text": "\\psi = \\psi_0 + v_0\\, r \\sin (\\theta - \\theta_0)"
},
{
"math_id": 15,
"text": "\\nabla \\times \\mathbf{v} = \\mathbf{0}"
},
{
"math_id": 16,
"text": "\\phi"
},
{
"math_id": 17,
"text": "v_x = - \\frac{\\partial \\phi} {\\partial x}"
},
{
"math_id": 18,
"text": "v_y = - \\frac {\\partial \\phi} {\\partial y}"
},
{
"math_id": 19,
"text": "\\phi = \\phi_0 - v_0 \\cos (\\theta_0)\\, x - v_0 \\sin (\\theta_0)\\, y"
},
{
"math_id": 20,
"text": "\\phi_0"
},
{
"math_id": 21,
"text": "v_r = \\frac {\\partial \\phi} {\\partial r}"
},
{
"math_id": 22,
"text": "v_\\theta = \\frac {1}{r} \\frac {\\partial \\phi} {\\partial \\theta}"
},
{
"math_id": 23,
"text": "\\phi = \\phi_0 - v_0\\, r \\cos(\\theta - \\theta_0) "
},
{
"math_id": 24,
"text": "\\mathbf{v} = v_r(r) \\mathbf{e}_r"
},
{
"math_id": 25,
"text": "\\int_S \\mathbf{v} \\cdot d \\mathbf{S} = \\int_{0}^{2 \\pi} ( v_r(r) \\, \\mathbf{e}_r ) \\cdot ( \\mathbf{e}_r \\, r \\, d \\theta ) = \\! 2 \\pi \\, r \\, v_r(r) = Q"
},
{
"math_id": 26,
"text": "v_r = \\frac {Q}{2 \\pi r}"
},
{
"math_id": 27,
"text": "\\psi(r,\\theta) = -\\frac{Q}{2 \\pi } \\theta"
},
{
"math_id": 28,
"text": "\\phi(r,\\theta) = -\\frac{Q}{2 \\pi } \\ln r"
},
{
"math_id": 29,
"text": "v_r = - \\frac {Q}{2 \\pi r}"
},
{
"math_id": 30,
"text": "\\psi(r,\\theta) = \\frac{Q}{2 \\pi } \\theta"
},
{
"math_id": 31,
"text": "\\phi(r,\\theta) = \\frac{Q}{2 \\pi } \\ln r"
},
{
"math_id": 32,
"text": "\\psi(\\mathbf{r}) = \\psi_Q(\\mathbf{r} - \\mathbf{d}/2) - \\psi_Q(\\mathbf{r} + \\mathbf{d}/2) \\ \\simeq \\mathbf{d} \\cdot \\nabla \\psi_Q(\\mathbf{r})\n"
},
{
"math_id": 33,
"text": "\\mathbf{d} = d [ \\cos (\\theta_0) \\mathbf{e}_x + \\sin (\\theta_0) \\mathbf{e}_y] = d [ \\cos (\\theta-\\theta_0) \\mathbf{e}_r + \\sin (\\theta-\\theta_0) \\mathbf{e}_\\theta]\n"
},
{
"math_id": 34,
"text": "\n\\psi(r,\\theta) = - \\frac{Q d}{2 \\pi} \\frac{\\sin(\\theta-\\theta_0)}{r}\n"
},
{
"math_id": 35,
"text": "\nv_r(r,\\theta) = \\frac{Q d}{2 \\pi} \\frac{\\cos(\\theta-\\theta_0)}{r^2}\n"
},
{
"math_id": 36,
"text": "\nv_\\theta(r,\\theta) = \\frac{Q d}{2 \\pi} \\frac{\\sin(\\theta-\\theta_0)}{r^2}\n"
},
{
"math_id": 37,
"text": "\n\\phi(r,\\theta) = \\frac{Q d}{2 \\pi} \\frac{\\cos(\\theta-\\theta_0)}{r}\n"
},
{
"math_id": 38,
"text": "\\mathbf{v} = v_\\theta(r) \\, \\mathbf{e}_\\theta"
},
{
"math_id": 39,
"text": "\\oint \\mathbf{v} \\cdot d \\mathbf{s} = \\int_{0}^{2 \\pi} (v_\\theta(r) \\, \\mathbf{e}_\\theta) \\cdot (\\mathbf{e}_\\theta \\, r \\, d\\theta) = \\! 2 \\pi \\, r\\, v_\\theta(r) = \\Gamma"
},
{
"math_id": 40,
"text": "v_\\theta = \\frac {\\Gamma}{2 \\pi r}"
},
{
"math_id": 41,
"text": "\\psi(r,\\theta) = \\frac{\\Gamma}{2 \\pi } \\ln r"
},
{
"math_id": 42,
"text": "\\phi(r,\\theta) = - \\frac{\\Gamma}{2 \\pi } \\theta"
},
{
"math_id": 43,
"text": "\\nabla^2 \\psi = 0"
},
{
"math_id": 44,
"text": "\\frac{1}{r} \\frac{\\partial}{\\partial r} \\left(r \\frac{\\partial \\psi}{\\partial r}\\right) + \\frac{1}{r^2} \\frac{\\partial^2 \\psi}{\\partial \\theta^2}= 0"
},
{
"math_id": 45,
"text": "\\psi(r,\\theta) = R(r) \\Theta(\\theta)"
},
{
"math_id": 46,
"text": "\\frac{r}{R(r)} \\frac{d}{dr} \\left(r \\frac{d R(r)}{dr}\\right) = -\\frac{1}{\\Theta(\\theta)} \\frac{d^2 \\Theta(\\theta)}{d\\theta^2}"
},
{
"math_id": 47,
"text": "\\theta"
},
{
"math_id": 48,
"text": "r \\frac{d}{dr} \\left(r \\frac{d}{dr} R(r)\\right) = m^2 R(r) "
},
{
"math_id": 49,
"text": "\\frac{d^2 \\Theta(\\theta)}{d\\theta^2} = - m^2 \\Theta(\\theta)"
},
{
"math_id": 50,
"text": "e^{i m \\theta}"
},
{
"math_id": 51,
"text": "e^{-i m \\theta}"
},
{
"math_id": 52,
"text": "\\psi = \\alpha_0 + \\beta_0 \\ln r + \\sum_{m > 0}{\\left(\\alpha_m r^m + \\beta_m r^{-m}\\right)\\sin {[m(\\theta -\n \\theta_m)]}}"
},
{
"math_id": 53,
"text": "\\phi = \\alpha_0 - \\beta_0 \\theta + \\sum_{m \\mathop > 0}{(\\alpha_m r^m - \\beta_m r^{-m})\\cos {[m(\\theta -\n \\theta_m)]}}"
}
] |
https://en.wikipedia.org/wiki?curid=56525552
|
56525571
|
S and L spaces
|
Two types of topological spaces with different logical foundations
In mathematics, an S-space is a regular topological space that is hereditarily separable but is not a Lindelöf space. An L-space is a regular topological space that is hereditarily Lindelöf but not separable. A space is separable if it has a countable dense set, and hereditarily separable if every subspace is separable.
It had been believed for a long time that the S-space problem and the L-space problem are dual, i.e. if there is an S-space in some model of set theory then there is an L-space in the same model and vice versa – which is not true.
It was shown in the early 1980s that the existence of an S-space is independent of the usual axioms of ZFC. This means that to prove the existence of an S-space, or to prove the non-existence of an S-space, we need to assume axioms beyond those of ZFC. The L-space problem (whether an L-space can exist without assuming additional set-theoretic assumptions beyond those of ZFC) was not resolved until recently.
Todorcevic proved that under PFA there are no S-spaces. This means that every regular formula_0 hereditarily separable space is Lindelöf. For some time, it was believed the L-space problem would have a similar solution (that its existence would be independent of ZFC).
Todorcevic showed that there is a model of set theory with Martin's axiom where there is an L-space but there are no S-spaces. Further, Todorcevic found a compact S-space from a Cohen real.
In 2005, Moore solved the L-space problem by constructing an L-space without assuming additional axioms and by combining Todorcevic's rho functions with number theory.
|
[
{
"math_id": 0,
"text": "T_1"
}
] |
https://en.wikipedia.org/wiki?curid=56525571
|
5652832
|
Stewart–Walker lemma
|
The Stewart–Walker lemma provides necessary and sufficient conditions for the linear perturbation of a tensor field to be gauge-invariant. formula_0 if and only if one of the following holds
1. formula_1
2. formula_2 is a constant scalar field
3. formula_2 is a linear combination of products of delta functions formula_3
Derivation.
A 1-parameter family of manifolds denoted by formula_4 with formula_5 has metric formula_6. These manifolds can be put together to form a 5-manifold formula_7. A smooth curve formula_8 can be constructed through formula_7 with tangent 5-vector formula_9, transverse to formula_4. If formula_9 is defined so that formula_10 is the associated family of 1-parameter maps which map formula_11, and formula_12, then a point formula_13 can be written as formula_14. This also defines a pull back formula_15 that maps a tensor field formula_16 back onto formula_17. Given sufficient smoothness a Taylor expansion can be defined
formula_18
formula_19 is the linear perturbation of formula_20. However, since the choice of formula_9 depends on the choice of gauge, another gauge can be taken. Therefore the difference in gauge becomes formula_21. Picking a chart where formula_22 and formula_23, then formula_24, which is a well-defined vector in any formula_25 and gives the result
formula_26
The only three possible ways this can be satisfied are those of the lemma.
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta \\delta T = 0"
},
{
"math_id": 1,
"text": "T_{0} = 0"
},
{
"math_id": 2,
"text": "T_{0}"
},
{
"math_id": 3,
"text": "\\delta_{a}^{b}"
},
{
"math_id": 4,
"text": "\\mathcal{M}_{\\epsilon}"
},
{
"math_id": 5,
"text": "\\mathcal{M}_{0} = \\mathcal{M}^{4}"
},
{
"math_id": 6,
"text": "g_{ik} = \\eta_{ik} + \\epsilon h_{ik}"
},
{
"math_id": 7,
"text": "\\mathcal{N}"
},
{
"math_id": 8,
"text": "\\gamma"
},
{
"math_id": 9,
"text": "X"
},
{
"math_id": 10,
"text": "h_{t}"
},
{
"math_id": 11,
"text": "\\mathcal{N} \\to \\mathcal{N}"
},
{
"math_id": 12,
"text": "p_{0} \\in \\mathcal{M}_{0}"
},
{
"math_id": 13,
"text": "p_{\\epsilon} \\in \\mathcal{M}_{\\epsilon}"
},
{
"math_id": 14,
"text": "h_{\\epsilon}(p_{0})"
},
{
"math_id": 15,
"text": "h_{\\epsilon}^{*}"
},
{
"math_id": 16,
"text": "T_{\\epsilon} \\in \\mathcal{M}_{\\epsilon} "
},
{
"math_id": 17,
"text": "\\mathcal{M}_{0}"
},
{
"math_id": 18,
"text": "h_{\\epsilon}^{*}(T_{\\epsilon}) = T_{0} + \\epsilon \\, h_{\\epsilon}^{*}(\\mathcal{L}_{X}T_{\\epsilon}) + O(\\epsilon^{2})"
},
{
"math_id": 19,
"text": "\\delta T = \\epsilon h_{\\epsilon}^{*}(\\mathcal{L}_{X}T_{\\epsilon}) \\equiv \\epsilon (\\mathcal{L}_{X}T_{\\epsilon})_{0}"
},
{
"math_id": 20,
"text": "T"
},
{
"math_id": 21,
"text": "\\Delta \\delta T = \\epsilon(\\mathcal{L}_{X}T_{\\epsilon})_0 - \\epsilon(\\mathcal{L}_{Y}T_{\\epsilon})_0 = \\epsilon(\\mathcal{L}_{X-Y}T_\\epsilon)_0"
},
{
"math_id": 22,
"text": "X^{a} = (\\xi^\\mu,1)"
},
{
"math_id": 23,
"text": "Y^a = (0,1)"
},
{
"math_id": 24,
"text": "X^{a}-Y^{a} = (\\xi^{\\mu},0)"
},
{
"math_id": 25,
"text": "\\mathcal{M}_\\epsilon"
},
{
"math_id": 26,
"text": "\\Delta \\delta T = \\epsilon \\mathcal{L}_{\\xi}T_0."
}
] |
https://en.wikipedia.org/wiki?curid=5652832
|
56530120
|
Jean Cerf
|
French mathematician (born 1928)
Jean Cerf (born 1928) is a French mathematician, specializing in topology.
Education and career.
Jean Cerf was born in Strasbourg, France, in 1928. He studied at the École Normale Supérieure, graduating in sciences in 1947. After passing his "agrégation" in mathematics in 1950, he obtained a doctorate with thesis supervised by Henri Cartan. Cerf became a "maître de conférences" at the University of Lille and was later appointed a professor at the University of Paris XI. He was also a director of research at CNRS.
Cerf's research deals with differential topology, cobordism, and symplectic topology. In 1966 he was an Invited Speaker at the ICM in Moscow. In 1968 Cerf proved that every orientation-preserving diffeomorphism of formula_0 is isotopic to the identity. In 1970 Cerf proved the pseudo-isotopy theorem for simply connected manifolds. In 1970 he was awarded the "prix Servant", together with Bernard Malgrange and André Néron (for independent work). In 1971 he was president of the Société Mathématique de France.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S^3"
}
] |
https://en.wikipedia.org/wiki?curid=56530120
|
56532324
|
Quasi-stationary distribution
|
Type of random process
In probability a quasi-stationary distribution is a random process that admits one or several absorbing states that are reached almost surely, but is initially distributed such that it can evolve for a long time without reaching them. The most common example is the evolution of a population: the only equilibrium is when there is no one left, but if we model the number of people, it is likely to remain stable for a long period of time before it eventually collapses.
Formal definition.
We consider a Markov process formula_0 taking values in formula_1. There is a measurable set formula_2 of absorbing states and formula_3. We denote by formula_4 the hitting time of formula_5, also called the killing time. We denote by formula_6 the family of distributions where formula_7 has initial condition formula_8. We assume that formula_5 is almost surely reached, i.e. formula_9.
The general definition is: a probability measure formula_10 on formula_11 is said to be a quasi-stationary distribution (QSD) if for every measurable set formula_12 contained in formula_11, formula_13where formula_14.
In particular formula_15
General results.
Killing time.
From the assumptions above we know that the killing time is finite with probability 1. A stronger result that we can derive is that the killing time is exponentially distributed: if formula_10 is a QSD then there exists formula_16 such that formula_17.
Moreover, for any formula_18 we get formula_19.
Existence of a quasi-stationary distribution.
Most of the time the question asked is whether a QSD exists or not in a given framework. From the previous results we can derive a condition necessary for this existence.
Let formula_20. A necessary condition for the existence of a QSD is formula_21 and we have the equality formula_22
Moreover, from the previous paragraph, if formula_10 is a QSD then formula_23. As a consequence, if formula_24 satisfies formula_25 then there can be no QSD formula_10 such that formula_26 because otherwise this would lead to the contradiction formula_27.
A sufficient condition for a QSD to exist is given considering the transition semigroup formula_28 of the process before killing. Then, under the conditions that formula_11 is a compact Hausdorff space and that formula_29 preserves the set of continuous functions, i.e. formula_30, there exists a QSD.
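For a finite state space this question has a concrete answer going back to Darroch and Seneta: the QSD is the normalized left Perron eigenvector of the transition matrix restricted to the non-absorbing states, and the corresponding eigenvalue gives the geometric decay of the survival probability. A minimal numerical sketch (the chain below is an arbitrary example):

```python
import numpy as np

# Transition matrix of a 3-state chain; state 2 is absorbing.
# (Example values chosen for illustration only.)
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])

Q = P[:2, :2]                       # sub-stochastic matrix on the non-absorbing states

# Left Perron eigenvector of Q: nu Q = gamma nu, normalized to a probability vector.
eigvals, eigvecs = np.linalg.eig(Q.T)
k = np.argmax(eigvals.real)
nu = np.abs(eigvecs[:, k].real)
nu /= nu.sum()
gamma = eigvals[k].real             # P_nu(T > t) decays like gamma**t

print("quasi-stationary distribution:", nu)
print("survival factor per step:     ", gamma)

# Check the defining property: starting from nu, the law conditioned on survival is again nu.
conditioned = (nu @ Q) / (nu @ Q).sum()
print("conditioned one-step law:     ", conditioned)
```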
History.
The works of Wright on gene frequency in 1931 and of Yaglom on branching processes in 1947 already included the idea of such distributions. The term quasi-stationarity applied to biological systems was then used by Bartlett in 1957, who later coined "quasi-stationary distribution".
Quasi-stationary distributions were also part of the classification of killed processes given by Vere-Jones in 1962 and their definition for finite state Markov chains was done in 1965 by Darroch and Seneta.
Examples.
Quasi-stationary distributions can be used to model the following processes:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(Y_t)_{t \\geq 0}"
},
{
"math_id": 1,
"text": "\\mathcal{X}"
},
{
"math_id": 2,
"text": "\\mathcal{X}^{\\mathrm{tr}}"
},
{
"math_id": 3,
"text": "\\mathcal{X}^a = \\mathcal{X} \\setminus \\mathcal{X}^{\\operatorname{tr}}"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "\\mathcal{X}^{\\operatorname{tr}}"
},
{
"math_id": 6,
"text": "\\{ \\operatorname{P}_x \\mid x \\in \\mathcal{X} \\}"
},
{
"math_id": 7,
"text": "\\operatorname{P}_x"
},
{
"math_id": 8,
"text": "Y_0 = x \\in \\mathcal{X}"
},
{
"math_id": 9,
"text": "\\forall x \\in \\mathcal{X}, \\operatorname{P}_x(T < \\infty) = 1"
},
{
"math_id": 10,
"text": "\\nu"
},
{
"math_id": 11,
"text": "\\mathcal{X}^a"
},
{
"math_id": 12,
"text": "B"
},
{
"math_id": 13,
"text": "\\forall t \\geq 0, \\operatorname{P}_\\nu(Y_t \\in B \\mid T > t) = \\nu(B)"
},
{
"math_id": 14,
"text": "\\operatorname{P}_\\nu = \\int_{\\mathcal{X}^a} \\operatorname{P}_x \\, \\mathrm{d} \\nu(x)"
},
{
"math_id": 15,
"text": "\\forall B \\in \\mathcal{B}(\\mathcal{X}^a), \\forall t \\geq 0, \\operatorname{P}_\\nu(Y_t \\in B, T > t) = \\nu(B) \\operatorname{P}_\\nu(T > t)."
},
{
"math_id": 16,
"text": "\\theta(\\nu) > 0"
},
{
"math_id": 17,
"text": "\\forall t \\in \\mathbf{N}, \\operatorname{P}_\\nu(T > t) = \\exp(-\\theta(\\nu) \\times t)"
},
{
"math_id": 18,
"text": "\\vartheta < \\theta(\\nu)"
},
{
"math_id": 19,
"text": "\\operatorname{E}_\\nu(e^{\\vartheta t}) < \\infty"
},
{
"math_id": 20,
"text": "\\theta_x^* := \\sup \\{ \\theta \\mid \\operatorname{E}_x(e^{\\theta T}) < \\infty \\}"
},
{
"math_id": 21,
"text": "\\exists x \\in \\mathcal{X}^a, \\theta_x^* > 0"
},
{
"math_id": 22,
"text": "\\theta_x^* = \\liminf_{t \\to \\infty} -\\frac{1}{t} \\log(\\operatorname{P}_x(T > t))."
},
{
"math_id": 23,
"text": "\\operatorname{E}_\\nu \\left( e^{\\theta(\\nu)T} \\right) = \\infty"
},
{
"math_id": 24,
"text": "\\vartheta > 0"
},
{
"math_id": 25,
"text": "\\sup_{x \\in \\mathcal{X}^a} \\{ \\operatorname{E}_x(e^{\\vartheta T}) \\} < \\infty"
},
{
"math_id": 26,
"text": "\\vartheta = \\theta(\\nu)"
},
{
"math_id": 27,
"text": "\\infty = \\operatorname{E}_\\nu \\left( e^{\\theta(\\nu)T} \\right) \\leq \\sup_{x \\in \\mathcal{X}^a} \\{ \\operatorname{E}_x(e^{\\theta(\\nu) T}) \\} < \\infty "
},
{
"math_id": 28,
"text": "(P_t, t \\geq 0)"
},
{
"math_id": 29,
"text": "P_1"
},
{
"math_id": 30,
"text": "P_1(\\mathcal{C}(\\mathcal{X}^a)) \\subseteq \\mathcal{C}(\\mathcal{X}^a)"
}
] |
https://en.wikipedia.org/wiki?curid=56532324
|
5653710
|
Biracks and biquandles
|
Special ordered sets
In mathematics, biquandles and biracks are sets with binary operations that generalize quandles and racks. Biquandles take, in the theory of virtual knots, the place that quandles occupy in the theory of classical knots. Biracks and racks have the same relation, while a biquandle is a birack which satisfies some additional conditions.
Definitions.
Biquandles and biracks have two binary operations on a set formula_0 written formula_1 and formula_2. These satisfy the following three axioms:
1. formula_3
2. formula_4
3. formula_5
These identities appeared in 1992 in reference [FRS] where the object was called a species.
The superscript and subscript notation is useful here because it dispenses with the need for brackets. For example, if we write formula_6 for formula_2 and formula_7 for formula_1 then the three axioms above become
1. formula_8
2. formula_9
3. formula_10
If in addition the two operations are invertible, that is given formula_11 in the set formula_0 there are unique formula_12 in the set formula_0 such that formula_13 and formula_14 then the set formula_0 together with the two operations define a birack.
For example, if formula_0, with the operation formula_1, is a rack then it is a birack if we define the other operation to be the identity, formula_15.
For a birack the function formula_16 can be defined by
formula_17
Then
1. formula_18 is a bijection
2. formula_19
In the second condition, formula_20 and formula_21 are defined by formula_22 and formula_23. This condition is sometimes known as the set-theoretic Yang-Baxter equation.
To see that 1. is true note that formula_24 defined by
formula_25
is the inverse to
formula_26
To see that 2. is true let us follow the progress of the triple formula_27 under formula_28. So
formula_29
On the other hand, formula_30. Its progress under formula_31 is
formula_32
Any formula_18 satisfying 1. and 2. is said to be a "switch" (a precursor of biquandles and biracks).
Examples of switches are the identity, the "twist" formula_33 and formula_34 where formula_1 is the operation of a rack.
A switch will define a birack if the operations are invertible. Note that the identity switch does not do this.
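To make the switch conditions concrete, the following brute-force check verifies them for the map built from a rack operation as mentioned above; the particular rack (the dihedral quandle on the integers modulo 5) is an arbitrary illustrative choice.

```python
from itertools import product

# A small rack: the dihedral quandle on Z_5, with a^b = 2b - a (mod 5).
N = 5
def op(a, b):
    return (2 * b - a) % N

# Switch built from the rack operation as in the text: S(a, b) = (b, a^b).
def S(a, b):
    return (b, op(a, b))

def S1(a, b, c):      # S acting on the first two factors
    x, y = S(a, b)
    return (x, y, c)

def S2(a, b, c):      # S acting on the last two factors
    y, z = S(b, c)
    return (a, y, z)

# 1. S is a bijection of X x X
assert len({S(a, b) for a, b in product(range(N), repeat=2)}) == N * N

# 2. Set-theoretic Yang-Baxter equation: S1 S2 S1 = S2 S1 S2 on X x X x X
for t in product(range(N), repeat=3):
    assert S1(*S2(*S1(*t))) == S2(*S1(*S2(*t)))

print("S(a,b) = (b, a^b) built from the dihedral quandle is a switch")
```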
Biquandles.
A biquandle is a birack which satisfies some additional structure, as described by Nelson and Rische. The axioms of a biquandle are "minimal" in the sense that they are the weakest restrictions that can be placed on the two binary operations while making the biquandle of a virtual knot invariant under Reidemeister moves.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " X "
},
{
"math_id": 1,
"text": " a^b "
},
{
"math_id": 2,
"text": " a_b "
},
{
"math_id": 3,
"text": " (a^b)^{c_b}= {a^c}^{b^c} "
},
{
"math_id": 4,
"text": " {a_b}_{c_b}= {a_c}_{b^c} "
},
{
"math_id": 5,
"text": " {a_b}^{c_b}= {a^c}_{b^c} "
},
{
"math_id": 6,
"text": " a*b "
},
{
"math_id": 7,
"text": " a\\mathbin{**}b "
},
{
"math_id": 8,
"text": " (a\\mathbin{**}b)\\mathbin{**}(c*b)=(a\\mathbin{**}c)\\mathbin{**}(b\\mathbin{**}c) "
},
{
"math_id": 9,
"text": " (a*b)*(c*b)=(a*c)*(b\\mathbin{**}c) "
},
{
"math_id": 10,
"text": " (a*b)\\mathbin{**}(c*b)=(a\\mathbin{**}c)*(b\\mathbin{**}c) "
},
{
"math_id": 11,
"text": " a, b "
},
{
"math_id": 12,
"text": " x, y "
},
{
"math_id": 13,
"text": " x^b=a "
},
{
"math_id": 14,
"text": " y_b=a "
},
{
"math_id": 15,
"text": " a_b=a "
},
{
"math_id": 16,
"text": " S:X^2 \\rightarrow X^2 "
},
{
"math_id": 17,
"text": " S(a,b_a)=(b,a^b).\\,"
},
{
"math_id": 18,
"text": " S "
},
{
"math_id": 19,
"text": " S_1S_2S_1=S_2S_1S_2 \\, "
},
{
"math_id": 20,
"text": " S_1 "
},
{
"math_id": 21,
"text": " S_2 "
},
{
"math_id": 22,
"text": " S_1(a,b,c)=(S(a,b),c)"
},
{
"math_id": 23,
"text": " S_2(a,b,c)=(a,S(b,c))"
},
{
"math_id": 24,
"text": " S' "
},
{
"math_id": 25,
"text": " S'(b,a^b)=(a,b_a)\\, "
},
{
"math_id": 26,
"text": " S \\,"
},
{
"math_id": 27,
"text": " (c,b_c,a_{bc^b}) "
},
{
"math_id": 28,
"text": " S_1S_2S_1 "
},
{
"math_id": 29,
"text": " (c,b_c,a_{bc^b}) \\to (b,c^b,a_{bc^b}) \\to (b,a_b,c^{ba_b}) \\to (a, b^a, c^{ba_b}). "
},
{
"math_id": 30,
"text": " (c,b_c,a_{bc^b}) = (c, b_c, a_{cb_c}) "
},
{
"math_id": 31,
"text": " S_2S_1S_2 "
},
{
"math_id": 32,
"text": " (c, b_c, a_{cb_c}) \\to (c, a_c, {b_c}^{a_c}) \\to (a, c^a, {b_c}^{a_c}) = (a, c^a, {b^a}_{c^a}) \\to (a, b_a, c_{ab_a}) = (a, b^a, c^{ba_b}). "
},
{
"math_id": 33,
"text": " T(a,b)=(b,a) "
},
{
"math_id": 34,
"text": " S(a,b)=(b,a^b) "
}
] |
https://en.wikipedia.org/wiki?curid=5653710
|
56538308
|
Concavification
|
In mathematics, concavification is the process of converting a non-concave function to a concave function. A related concept is convexification – converting a non-convex function to a convex function. It is especially important in economics and mathematical optimization.
Concavification of a quasiconcave function by monotone transformation.
An important special case of concavification is where the original function is a quasiconcave function. It is known that:
Therefore, a natural question is: "given a quasiconcave function" formula_0, "does there exist a monotonically increasing" formula_1 "such that" formula_2 "is concave?"
Example and Counter Example.
As an example, consider the function formula_3 on the domain formula_4. This function is quasiconcave, but it is not concave (in fact, it is strictly convex). It can be concavified, for example, using the monotone transformation formula_5, since formula_6 is concave.
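A quick symbolic check of this example (an illustrative sympy sketch): the second derivative of the original function is positive, while the second derivative of the transformed function is negative on the domain, confirming the concavification.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

f = x**2                        # quasiconcave (monotone) on x >= 0, but strictly convex
g = lambda t: t**sp.Rational(1, 4)
h = g(f)                        # h(x) = (x^2)^(1/4) = sqrt(x)

print(sp.simplify(h))                  # sqrt(x)
print(sp.diff(f, x, 2))                # 2  > 0            -> f is convex
print(sp.simplify(sp.diff(h, x, 2)))   # -1/(4*x**(3/2)) < 0 for x > 0 -> h is concave
```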
Not every quasiconcave function can be concavified in this way. A counterexample was shown by Fenchel. His example is: formula_7. Fenchel proved that this function is quasiconcave, but there is no monotone transformation formula_8 such that formula_9 is concave.
Based on these examples, we define a function to be concavifiable if there exists a monotone transformation that makes it concave. The question now becomes: "what quasiconcave functions are concavifiable?"
Concavifiability.
Yakar Kannai treats the question in depth in the context of utility functions, giving sufficient conditions under which continuous convex preferences can be represented by concave utility functions.
His results were later generalized by Connell and Rasmussen, who give necessary and sufficient conditions for concavifiability. They show that the function formula_10 is strictly quasiconcave and its gradient is non-vanishing, but that it violates their conditions and is therefore not concavifiable.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f : \\mathbb{R}^n \\to \\mathbb{R}"
},
{
"math_id": 1,
"text": "g : \\mathbb{R} \\to \\mathbb{R}"
},
{
"math_id": 2,
"text": "x \\mapsto g(f(x))"
},
{
"math_id": 3,
"text": "x \\mapsto f(x) = x^2"
},
{
"math_id": 4,
"text": "x\\geq 0"
},
{
"math_id": 5,
"text": "t \\mapsto g(t) = t^{1/4}"
},
{
"math_id": 6,
"text": "x \\mapsto g(f(x))=\\sqrt{x}"
},
{
"math_id": 7,
"text": "(x,y) \\mapsto f(x,y) := y + \\sqrt{x+y^2}"
},
{
"math_id": 8,
"text": "g : \\mathbb{R}\\to\\mathbb{R}"
},
{
"math_id": 9,
"text": "(x, y) \\mapsto g(f(x,y))"
},
{
"math_id": 10,
"text": "(x,y) \\mapsto f(x,y) = e^{e^x}\\cdot y"
}
] |
https://en.wikipedia.org/wiki?curid=56538308
|
56541610
|
Viscosity models for mixtures
|
The shear viscosity (or viscosity, in short) of a fluid is a material property that describes the friction between internal neighboring fluid surfaces (or sheets) flowing with different fluid velocities. This friction is the effect of (linear) momentum exchange caused by molecules with sufficient energy to move (or "to jump") between these fluid sheets due to fluctuations in their motion. The viscosity is not a material constant, but a material property that depends on temperature, pressure, fluid mixture composition, local velocity variations. This functional relationship is described by a mathematical viscosity model called a constitutive equation which is usually far more complex than the defining equation of shear viscosity. One such complicating feature is the relation between the viscosity model for a pure fluid and the model for a fluid mixture which is called mixing rules. When scientists and engineers use new arguments or theories to develop a new viscosity model, instead of improving the reigning model, it may lead to the first model in a new class of models. This article will display one or two representative models for different classes of viscosity models, and these classes are:
Selected contributions from these development directions are displayed in the following sections. This means that some known contributions of research and development directions are not included. For example, the group contribution method applied to a shear viscosity model is not displayed. Even though it is an important method, it is thought of as a method for parameterization of a selected viscosity model, rather than a viscosity model in itself.
The microscopic or molecular origin of fluids means that transport coefficients like viscosity can be calculated by time correlations which are valid for both gases and liquids, but these are computationally intensive calculations. Another approach is the Boltzmann equation, which describes the statistical behaviour of a thermodynamic system not in a state of equilibrium. It can be used to determine how physical quantities change, such as heat energy and momentum, when a fluid is in transport, but such simulations are also computationally intensive.
From Boltzmann's equation one may also analytically derive mathematical models for properties characteristic of fluids such as viscosity, thermal conductivity, and electrical conductivity (by treating the charge carriers in a material as a gas). See also convection–diffusion equation. The mathematics is so complicated for polar and non-spherical molecules that it is very difficult to obtain practical models for viscosity. The purely theoretical approach will therefore be left out for the rest of this article, except for some brief visits related to dilute gas theory and significant structure theory.
Use, definition and dependence.
The classic Navier-Stokes equation is the balance equation for momentum density for an isotropic, compressible and viscous fluid that is used in fluid mechanics in general and fluid dynamics in particular:
formula_1
On the right hand side is (the divergence of) the total stress tensor formula_2 which consists of a pressure tensor formula_3 and a dissipative (or viscous or deviatoric) stress tensor formula_4. The dissipative stress consists of a compression stress tensor formula_5 (term no. 2) and a shear stress tensor formula_6 (term no. 3). The rightmost term formula_7 is the gravitational force which is the body force contribution, and formula_8 is the mass density, and formula_9 is the fluid velocity.
formula_10
For fluids, the spatial or Eulerian form of the governing equations is preferred to the material or Lagrangian form, and the concept of velocity gradient is preferred to the equivalent concept of strain rate tensor. Stokes's assumption for a wide class of fluids therefore says that for an isotropic fluid the compression and shear stresses are proportional to their velocity gradients, formula_11 and formula_12 respectively, and this class of fluids is named Newtonian fluids. The classic defining equations for volume viscosity formula_13 and shear viscosity formula_14 are respectively:
formula_15
formula_16
The classic compression velocity "gradient" is a diagonal tensor that describes a compressing (alt. expanding) flow or attenuating sound waves:
formula_17
The classic Cauchy shear velocity gradient is a symmetric and traceless tensor that describes a pure shear flow (where pure means excluding normal outflow, which in mathematical terms means a traceless matrix) around e.g. a wing, propeller, ship hull or in e.g. a river, pipe or vein with or without bends and boundary skin:
formula_18
where the symmetric gradient matrix with non-zero trace is
formula_19
How much the volume viscosity contributes to the flow characteristics in e.g. a choked flow such as a convergent-divergent nozzle or valve flow is not well known, but the shear viscosity is by far the most utilized viscosity coefficient. The volume viscosity will now be abandoned, and the rest of the article will focus on the shear viscosity.
Another application of shear viscosity models is Darcy's law for multiphase flow.
formula_20 where a = water, oil, gas
and formula_21 and formula_22 are absolute and relative permeability, respectively. These 3 (vector) equations model the flow of water, oil and natural gas in subsurface oil and gas reservoirs in porous rocks. Although the pressure changes are large, the fluid phases will flow slowly through the reservoir due to the flow restriction caused by the porous rock.
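A minimal sketch of how such phase fluxes are evaluated in the one-dimensional case, with gravity neglected; all permeability, relative permeability, viscosity and pressure-gradient values below are arbitrary illustration numbers, not reservoir data.

```python
# One-dimensional Darcy flux per phase: u_a = -(K * k_ra / mu_a) * dP/dx
# (gravity neglected; numbers below are arbitrary illustration values in SI units)

K = 1e-13            # absolute permeability [m^2] (~0.1 darcy)
phases = {
    #        k_r [-],  mu [Pa*s],  dP/dx [Pa/m]
    "water": (0.30,    1.0e-3,     -5.0e4),
    "oil":   (0.55,    2.0e-3,     -5.0e4),
    "gas":   (0.10,    2.0e-5,     -5.0e4),
}

for name, (k_r, mu, dpdx) in phases.items():
    u = -(K * k_r / mu) * dpdx          # Darcy (superficial) velocity [m/s]
    print(f"{name:5s}: u = {u:.3e} m/s")
```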
The above definition is based on a shear-driven fluid motion that in its most general form is modelled by a shear stress tensor and a velocity gradient tensor. The fluid dynamics of a shear flow is, however, very well illustrated by the simple Couette flow. In this experimental layout, the shear stress formula_6 and the shear velocity gradient formula_12 (where now formula_23) take the simple form:
formula_24
Inserting these simplifications gives us a defining equation that can be used to interpret experimental measurements:
formula_25
where formula_26 is the area of the moving plate and the stagnant plate, formula_27 is the spatial coordinate normal to the plates. In this experimental setup, a value for the force formula_28 is first selected. Then a maximum velocity formula_29 is measured, and finally both values are entered in the equation to calculate viscosity. This gives one value for the viscosity of the selected fluid. If another value of the force is selected, another maximum velocity will be measured. This will result in another viscosity value if the fluid is a non-Newtonian fluid such as paint, but it will give the same viscosity value for a Newtonian fluid such as water, petroleum oil or gas. If another parameter like temperature, formula_30, is changed, and the experiment is repeated with the same force, a new value for viscosity will be calculated, for both non-Newtonian and Newtonian fluids. The great majority of material properties vary as a function of temperature, and this goes for viscosity also. The viscosity is also a function of pressure and, of course, the material itself. For a fluid mixture, this means that the shear viscosity will also vary according to the fluid composition. Mapping the viscosity as a function of all these variables requires a large sequence of experiments that generates an even larger set of numbers called measured data, observed data or observations. Prior to, or at the same time as, the experiments, a material property model (or, in short, material model) is proposed to describe or explain the observations. This mathematical model is called the constitutive equation for shear viscosity. It is usually an explicit function that contains some empirical parameters that are adjusted to match the observations as well as the mathematical function is capable of.
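A small numerical sketch of this interpretation step for an idealized planar Couette cell, assuming the linear velocity profile of the ideal experiment; all the measurement values are hypothetical.

```python
# Shear viscosity from an idealized planar Couette measurement:
# shear stress  tau = F / A,  shear rate  du/dy = u_max / h  (linear profile),
# so            mu  = (F / A) / (u_max / h)
# All values below are hypothetical measurements, for illustration only.

F     = 0.20      # force on the moving plate [N]
A     = 0.10      # plate area [m^2]
h     = 1.0e-3    # gap between the plates [m]
u_max = 2.0       # velocity of the moving plate [m/s]

tau        = F / A             # shear stress [Pa]
shear_rate = u_max / h         # velocity gradient [1/s]
mu         = tau / shear_rate  # dynamic (shear) viscosity [Pa*s]

print(f"mu = {mu:.3e} Pa*s  ({mu*1e3:.2f} cP)")
```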
For a Newtonian fluid, the constitutive equation for shear viscosity is generally a function of temperature, pressure, fluid composition:
formula_31
where formula_32 is the liquid phase composition with mole fraction formula_33 for fluid component i, and formula_34 and formula_35 are the gas phase and total fluid compositions, respectively. For a non-Newtonian fluid (in the sense of a generalized Newtonian fluid), the constitutive equation for shear viscosity is also a function of the shear velocity gradient:
formula_36
The existence of the velocity gradient in the functional relationship for non-Newtonian fluids says that viscosity is generally not an equation of state, so the term constitutive equation will in general be used for viscosity equations (or functions). The free variables in the two equations above also indicate that specific constitutive equations for shear viscosity will be quite different from the simple defining equation for shear viscosity that is shown further up. The rest of this article will show that this is certainly true. Non-Newtonian fluids will therefore be abandoned, and the rest of this article will focus on Newtonian fluids.
Dilute gas limit and scaled variables.
Elementary kinetic theory.
In textbooks on elementary kinetic theory one can find results for dilute gas modeling that have widespread use. Derivation of the kinetic model for shear viscosity usually starts by considering a Couette flow where two parallel plates are separated by a gas layer. This non-equilibrium flow is superimposed on a Maxwell–Boltzmann equilibrium distribution of molecular motions.
Let formula_37 be the collision cross section of one molecule colliding with another. The number density formula_38 is defined as the number of molecules per (extensive) volume formula_39. The collision cross section per volume or collision cross section density is formula_40, and it is related to the mean free path formula_41 by
formula_42
Combining the kinetic equations for molecular motion with the defining equation of shear viscosity gives the well known equation for shear viscosity for dilute gases:
formula_43
where
formula_44
where formula_45 is the Boltzmann constant, formula_46 is the Avogadro constant, formula_47 is the gas constant, formula_48 is the molar mass and formula_49 is the molecular mass. The equation above presupposes that the gas density is low (i.e. the pressure is low), hence the subscript zero in the variable formula_50. This implies that the kinetic translational energy dominates over rotational and vibrational molecule energies. The viscosity equation displayed above further presupposes that there is only one type of gas molecule, and that the gas molecules are perfectly elastic hard-core particles of spherical shape. This assumption of particles being like billiard balls with radius formula_51 implies that the collision cross section of one molecule can be estimated by
formula_52
formula_53
But molecules are not hard particles. For a reasonably spherical molecule the interaction potential is more like the Lennard-Jones potential or even more like the Morse potential. Both have a negative part that attracts the other molecule from distances much longer than the hard core radius, and thus model the van der Waals forces. The positive part models the repulsive forces as the electron clouds of the two molecules overlap. The radius for zero interaction potential is therefore appropriate for estimating (or defining) the collision cross section in kinetic gas theory, and the r-parameter (cf. formula_54) is therefore called the kinetic radius. The d-parameter (where formula_55) is called the kinetic diameter.
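The elementary hard-sphere result can be turned into a compact calculation; the sketch below combines the mean-free-path estimate with the Maxwell–Boltzmann mean speed, and the kinetic diameter used for nitrogen is an assumed illustrative value.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant [J/K]
N_A = 6.02214076e23     # Avogadro constant [1/mol]

def dilute_gas_viscosity(T, M, d):
    """Elementary hard-sphere estimate mu0 = (1/3) * rho * v_mean * lambda.
    T in K, molar mass M in kg/mol, kinetic diameter d in m. Returns Pa*s.
    The number density cancels, so the estimate is pressure independent."""
    m      = M / N_A                                   # molecular mass [kg]
    v_mean = math.sqrt(8 * k_B * T / (math.pi * m))    # Maxwell-Boltzmann mean speed
    sigma  = math.pi * d**2                            # collision cross section
    # lambda = 1 / (sqrt(2) * n * sigma) and rho = n * m, so n cancels:
    return m * v_mean / (3 * math.sqrt(2) * sigma)

# Illustrative values for nitrogen (kinetic diameter ~3.7e-10 m is an assumption):
print(dilute_gas_viscosity(T=300.0, M=28.0e-3, d=3.7e-10))   # of the order of 1e-5 Pa*s
```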
The macroscopic collision cross section formula_56 is often associated with the critical molar volume formula_57, and often without further proof or supporting arguments, by
formula_58
where formula_59 is a molecular shape parameter that is taken as an empirical tuning parameter, and the pure numerical part is included in order to make the final viscosity formula more suitable for practical use. Inserting this interpretation of formula_60, and using the reduced temperature formula_61, gives
formula_62
formula_63
which implies that the empirical parameter formula_59 is dimensionless, and that formula_64 and formula_65 have the same units. The parameter formula_64 is a scaling parameter that involves the gas constant formula_47 and the critical molar volume formula_66, and it is used to scale the viscosity. In this article the viscosity scaling parameter will frequently be denoted by formula_67, which involves one or more of the parameters formula_47, formula_66, formula_68 in addition to the critical temperature formula_69 and molar mass formula_48. Incomplete scaling parameters, such as the parameter formula_70 above where the gas constant formula_47 is absorbed into the empirical constant, will often be encountered in practice. In this case the viscosity equation becomes
formula_71
where the empirical parameter formula_72 is not dimensionless, and a proposed viscosity model for dense fluid will not be dimensionless if formula_70 is the common scaling factor. Notice that
formula_73
Inserting the critical temperature in the equation for dilute viscosity gives
formula_74
The default values of the parameters formula_75 and formula_76 should be fairly universal values, although formula_76 depends on the unit system. However, the critical molar volume in the scaling parameters formula_77 and formula_78 is not easily accessible from experimental measurements, and that is a significant disadvantage. The general equation of state for a real gas is usually written as
formula_79
where the critical compressibility factor formula_80, which reflects the volumetric deviation of the real gases from the ideal gas, is also not easily accessible from laboratory experiments. However, critical pressure and critical temperature are more accessible from measurements. It should be added that critical viscosity is also not readily available from experiments.
Uyehara and Watson (1944) proposed to absorb a universal average value of formula_80 (and the gas constant formula_81) into a default value of the tuning parameter formula_82 as a practical solution to the difficulties of getting experimental values for formula_57 and/or formula_80. The viscosity model for a dilute gas is then
formula_83
formula_84
By inserting the critical temperature in the formula above, the critical viscosity is calculated as
formula_85
Based on an average critical compressibility factor of formula_86 and measured critical viscosity values of 60 different molecule types, Uyehara and Watson (1944) determined an average value of formula_82 to be
formula_87
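As an illustration, a minimal numerical sketch of the dilute gas viscosity equation above is shown below, assuming the unit conventions used later in this section (temperature in K, critical pressure in bar, molar mass in g/mol, viscosity in μP) and using the Uyehara and Watson average value of the tuning parameter; the methane critical constants in the example are approximate values included for illustration only.
<syntaxhighlight lang="python">
from math import sqrt

K_P = 7.77  # average tuning parameter of Uyehara and Watson (1944); eta in microPoise, Pc in bar

def dilute_gas_viscosity(T, Tc, Pc, M, K_p=K_P):
    """Dilute gas viscosity eta_0 = sqrt(T_r) * K_p * D_p  [microPoise].

    T and Tc in kelvin, Pc in bar, M in g/mol.
    D_p = Tc**(-1/6) * Pc**(2/3) * M**(1/2) is the pressure based scaling parameter.
    """
    D_p = Tc ** (-1.0 / 6.0) * Pc ** (2.0 / 3.0) * sqrt(M)
    return sqrt(T / Tc) * K_p * D_p

# Example: methane at 300 K (approximate critical constants, for illustration only)
print(dilute_gas_viscosity(T=300.0, Tc=190.6, Pc=46.0, M=16.04))
</syntaxhighlight>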
Cubic equations of state (EOS) are very popular equations that are sufficiently accurate for most industrial computations, both for vapor-liquid equilibrium and for molar volume. Their weakest point is perhaps the molar volume in the liquid region and in the critical region.
Accepting the cubic EOS, the molar hard core volume formula_88 can be calculated from the turning point constraint at the critical point. This gives
formula_89
where the constant formula_90 is a universal constant that is specific to the selected variant of the cubic EOS. This says that using formula_91, and disregarding fluid component variations in formula_80, is in practice equivalent to saying that the macroscopic collision cross section is proportional to the hard core molar volume rather than the critical molar volume.
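For illustration, the hard core co-volume can be computed as sketched below; the formula_90 values quoted in the code (about 0.08664 for the SRK EOS and 0.07780 for the PR EOS) are the standard constants of those EOS variants, stated here as an assumption about which cubic EOS is selected.
<syntaxhighlight lang="python">
R = 83.145  # gas constant in cm^3*bar/(mol*K), consistent with Pc in bar and volumes in cm^3/mol

# Standard Omega_b constants of two common cubic EOS variants (assumed choice of EOS)
OMEGA_B = {"SRK": 0.08664, "PR": 0.07780}

def hard_core_molar_volume(Tc, Pc, eos="SRK"):
    """Molar hard core volume b = Omega_b * R * Tc / Pc  [cm^3/mol]."""
    return OMEGA_B[eos] * R * Tc / Pc

# Example: methane with approximate critical constants, SRK EOS (about 30 cm^3/mol)
print(hard_core_molar_volume(Tc=190.6, Pc=46.0, eos="SRK"))
</syntaxhighlight>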
In a fluid mixture, like a petroleum gas or oil, there are many molecule types, and within this mixture there are families of molecule types (i.e. groups of fluid components). The simplest group is the n-alkanes, which are long chains of CH2 elements. The more CH2 elements, or carbon atoms, the longer the molecule. Critical viscosity and critical thermodynamic properties of n-alkanes therefore show a trend, or functional behaviour, when plotted against molecular mass or the number of carbon atoms in the molecule (i.e. the carbon number). Parameters in equations for properties like viscosity usually also show such trend behaviour. This means that
formula_92
This says that the scaling parameter formula_91 alone is not a true or complete scaling factor unless all fluid components have a fairly similar (and preferably spherical) shape.
The most important result of this kinetic derivation is perhaps not the viscosity formula, but the semi-empirical
parameter formula_91 that is used extensively throughout the industry and applied science communities as a scaling factor for (shear) viscosity. The literature often reports the reciprocal parameter and denotes it as formula_93.
The dilute gas viscosity contribution to the total viscosity of a fluid will only be important when predicting the viscosity of vapors at low pressures or the viscosity of dense fluids at high temperatures. The viscosity model for dilute gas that is shown above is widely used throughout the industry and applied science communities. Therefore, many researchers do not specify a dilute gas viscosity model when they propose a total viscosity model, but leave it to the user to select and include the dilute gas contribution. Some researchers do not include a separate dilute gas model term, but propose an overall gas viscosity model that covers the entire pressure and temperature range they investigated.
In this section our central macroscopic variables and parameters and their units are temperature formula_94 [K], pressure formula_95 [bar], molar mass formula_48 [g/mol], low density (low pressure or dilute) gas viscosity formula_65 [μP]. It is, however, common in the industry to use another unit for liquid and high density gas viscosity formula_0 [cP].
Kinetic theory.
From Boltzmann's equation Chapman and Enskog derived a viscosity model for a dilute gas.
formula_96
where formula_97 is (the absolute value of) the energy depth of the potential well (see e.g. the Lennard-Jones interaction potential). The term formula_98 is called the collision integral, and it occurs as a general function of temperature that the user must specify, which is not a simple task. This illustrates the situation for the molecular or statistical approach: the (analytical) mathematics gets incredibly complex for polar and non-spherical molecules, making it very difficult to achieve practical models for viscosity based on a statistical approach. The purely statistical approach will therefore be left out in the rest of this article.
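A small numerical sketch of the Chapman-Enskog equation is given below; it assumes a Neufeld et al. (1972) type correlation (mentioned later in this article) for the reduced collision integral, with the coefficient values commonly quoted in the literature, and Lennard-Jones parameters that are rough illustrative values rather than recommended data.
<syntaxhighlight lang="python">
from math import sqrt, exp

def collision_integral(T_star):
    """Reduced collision integral; Neufeld et al. (1972) type correlation with the
    coefficient values commonly quoted in the literature (assumed here)."""
    return (1.16145 / T_star ** 0.14874
            + 0.52487 * exp(-0.77320 * T_star)
            + 2.16178 * exp(-2.43787 * T_star))

def chapman_enskog_viscosity(T, M, sigma, eps_over_k):
    """Dilute gas viscosity from the Chapman-Enskog equation above.

    T in K, M in g/mol, sigma in angstrom, eps_over_k in K.
    With the prefactor of the displayed equation the returned value is eta_0 * 1e6,
    i.e. the viscosity expressed in micro Pa*s.
    """
    T_star = T / eps_over_k
    return 2.6693 * sqrt(M * T) / (sigma ** 2 * collision_integral(T_star))

# Example: nitrogen-like Lennard-Jones parameters (rough illustrative values)
print(chapman_enskog_viscosity(T=300.0, M=28.01, sigma=3.798, eps_over_k=71.4))
</syntaxhighlight>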
Empirical correlation.
Zéberg-Mikkelsen (2001) proposed empirical models for the gas viscosity of fairly spherical molecules, which are displayed in the section on friction force theory, under its models for dilute gases and simple light gases. These simple empirical correlations illustrate that empirical methods compete with the statistical approach with respect to gas viscosity models for simple fluids (simple molecules).
Kinetic theory with empirical extension.
The gas viscosity model of Chung et alios (1988) is a combination of the Chapman–Enskog (1964) kinetic theory of viscosity for dilute gases and the empirical expression of Neufeld et alios (1972) for the reduced collision integral, but expanded empirically to handle polyatomic, polar and hydrogen bonding fluids over a wide temperature range. This viscosity model illustrates a successful combination of kinetic theory and empiricism, and it is displayed in the section on significant structure theory and its model for the gas-like contribution to the total fluid viscosity.
Trend functions and scaling.
In the section with models based on elementary kinetic theory, several variants of scaling the viscosity equation were discussed, and they are displayed below for fluid component i, as a service to the reader.
formula_99
formula_100
formula_101
Zéberg-Mikkelsen (2001) proposed an empirical correlation for the formula_102 parameter for n-alkanes, which is
formula_103
formula_104
The critical molar volume of component i formula_102 is related to the critical mole density formula_105 and critical mole concentration formula_106 by the equation formula_107. From the above equation for formula_108 it follows that
formula_109
where formula_110 is the critical compressibility factor for component i, which is often used as an alternative to formula_102. By establishing a trend function for the parameter formula_102 for a homologous series, group or family of molecules, parameter values for unknown fluid components in the homologous group can be found by interpolation and extrapolation, and parameter values can easily be re-generated later when needed. Use of trend functions for parameters of homologous groups of molecules has greatly enhanced the usefulness of viscosity equations (and thermodynamic EOSs) for fluid mixtures such as petroleum gas and oil.
Uyehara and Watson (1944) proposed a correlation for critical viscosity (for fluid component i) for n-alkanes using their average parameter formula_111 and the classical pressure dominated scaling parameter formula_112 :
formula_113
formula_114
Zéberg-Mikkelsen (2001) proposed an empirical correlation for critical viscosity ηci parameter for n-alkanes, which is
formula_115
formula_116
The unit equations for the two constitutive equations above by Zéberg-Mikkelsen (2001) are
formula_117
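The two correlations of Zéberg-Mikkelsen (2001) can be evaluated as sketched below; the n-decane critical constants in the example are approximate values used only to illustrate the unit conventions stated above.
<syntaxhighlight lang="python">
R = 83.145                      # cm^3*bar/(mol*K)
A, B = 0.000235751, 3.42770     # V_c correlation constants for n-alkanes
C, D = 0.597556, 0.601652       # eta_c correlation constants for n-alkanes

def critical_molar_volume(Tc, Pc):
    """V_c = R*Tc / (A*R*Tc + B*Pc)  [cm^3/mol], with Tc in K and Pc in bar."""
    return R * Tc / (A * R * Tc + B * Pc)

def critical_viscosity(Pc, M):
    """eta_c = C * Pc * M**D  [microPoise], with Pc in bar and M in g/mol."""
    return C * Pc * M ** D

# Example: n-decane with approximate critical constants (illustration only)
Tc, Pc, M = 617.7, 21.1, 142.28
print(critical_molar_volume(Tc, Pc), critical_viscosity(Pc, M))
</syntaxhighlight>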
Inserting the critical temperature in the three viscosity equations from elementary kinetic theory gives three parameter equations.
formula_118
formula_119
The three viscosity equations now coalesce to a single viscosity equation
formula_120
because a nondimensional scaling is used for the entire viscosity equation. The standard nondimensionality reasoning goes like this: Creating nondimensional variables (with subscript D) by scaling gives
formula_121
Claiming nondimensionality gives
formula_122
The collision cross section and the critical molar volume, which are both difficult to access experimentally, are avoided or circumvented. On the other hand, the critical viscosity has appeared as a new parameter, and critical viscosity is just as difficult to access experimentally as the other two parameters. Fortunately, the best viscosity equations have become so accurate that they justify calculation in the critical point, especially if the equation is matched to surrounding experimental data points.
Classic mixing rules.
Classic mixing rules for gas.
Wilke (1950) derived a mixing rule based on kinetic gas theory
formula_123
formula_124
The Wilke mixing rule is capable of describing the correct viscosity behavior of gas mixtures that show a nonlinear and non-monotonic behavior, or a characteristic bump shape, when the viscosity is plotted versus mass density at the critical temperature for mixtures containing molecules of very different sizes. Due to its complexity, the Wilke mixing rule has not gained widespread use. Instead, the slightly simpler mixing rule proposed by Herning and Zipperer (1936) is found to be suitable for gases of hydrocarbon mixtures.
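A minimal sketch of the Wilke mixing rule, written directly from the two equations above, is shown below; the pure component viscosities and the binary mixture in the example are rough illustrative numbers, not recommended data.
<syntaxhighlight lang="python">
from math import sqrt

def wilke_phi(eta_i, eta_j, M_i, M_j):
    """Interaction parameter phi_ij of the Wilke (1950) mixing rule."""
    numerator = (1.0 + sqrt(eta_i / eta_j) * (M_j / M_i) ** 0.25) ** 2
    denominator = sqrt(8.0 * (1.0 + M_i / M_j))
    return numerator / denominator

def wilke_mixture_viscosity(y, eta, M):
    """Gas mixture viscosity from pure component (dilute gas) viscosities.

    y : mole fractions, eta : pure component viscosities (any consistent unit),
    M : molar masses in g/mol.
    """
    n = len(y)
    eta_mix = 0.0
    for i in range(n):
        s = sum(y[j] * wilke_phi(eta[i], eta[j], M[i], M[j]) for j in range(n) if j != i)
        eta_mix += eta[i] / (1.0 + s / y[i])
    return eta_mix

# Example: a methane/nitrogen mixture with rough pure gas viscosities in microPoise
print(wilke_mixture_viscosity(y=[0.7, 0.3], eta=[112.0, 179.0], M=[16.04, 28.01]))
</syntaxhighlight>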
Classic mixing rules for liquid.
The classic Arrhenius (1887) mixing rule for liquid mixtures is
formula_125
where formula_126 is the viscosity of the liquid mixture, formula_127 is the viscosity (equation) for fluid component i when flowing as a pure fluid, and formula_33 is the mole fraction of component i in the liquid mixture.
The Grunberg-Nissan (1949) mixing rule extends the Arrhenius rule to
formula_128
where formula_129 are empirical binary interaction coefficients that are specific to the Grunberg-Nissan theory. Binary interaction coefficients are widely used in cubic EOSs, where they often are used as tuning parameters, especially if component j is an uncertain component (i.e. has uncertain parameter values).
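A small sketch of the Arrhenius and Grunberg-Nissan mixing rules is given below; the viscosities, mole fractions and the binary interaction coefficient in the example are illustrative numbers only.
<syntaxhighlight lang="python">
from math import exp, log

def grunberg_nissan_viscosity(x, eta, d=None):
    """Liquid mixture viscosity: ln(eta_mix) = sum_i x_i*ln(eta_i) + sum_ij x_i*x_j*d_ij.

    With d=None (all binary interaction coefficients zero) this reduces to the
    classic Arrhenius (1887) mixing rule.
    """
    n = len(x)
    ln_eta = sum(x[i] * log(eta[i]) for i in range(n))
    if d is not None:
        ln_eta += sum(x[i] * x[j] * d[i][j] for i in range(n) for j in range(n))
    return exp(ln_eta)

# Example: a binary liquid with viscosities in cP (illustrative numbers only)
print(grunberg_nissan_viscosity(x=[0.4, 0.6], eta=[0.35, 2.0]))
print(grunberg_nissan_viscosity(x=[0.4, 0.6], eta=[0.35, 2.0],
                                d=[[0.0, 0.15], [0.15, 0.0]]))
</syntaxhighlight>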
Katti-Chaudhri (1964) mixing rule is
formula_130
where formula_131 is the partial molar volume of component i, and formula_132 is the molar volume of the liquid phase and comes from the vapor-liquid equilibrium (VLE) calculation or the EOS for single phase liquid.
A modification of the Katti-Chaudhri mixing rule is
formula_133
formula_134
where formula_135 is the excess activation energy of the viscous flow, and formula_136 is the energy that is characteristic of intermolecular interactions between component i and component j, and therefore is responsible for the excess energy of activation for viscous flow. This mixing rule is theoretically justified by Eyring's representation of the viscosity of a pure fluid according to Glasstone et alios (1941). The quantity formula_137 has been obtained from the time-correlation expression for shear viscosity by Zwanzig (1965).
Power series.
Very often one simply selects a known correlation for the dilute gas viscosity formula_50, and subtracts this contribution from the total viscosity which is measured in the laboratory. This gives a residual viscosity term, often denoted formula_138, which represents the contribution of the dense fluid, formula_139.
formula_140
The dense fluid viscosity is thus defined as the viscosity in excess of the dilute gas viscosity. This technique is often used in developing mathematical models for both purely empirical correlations and models with a theoretical support. The dilute gas viscosity contribution becomes important when the zero density limit (i.e. zero pressure limit) is approached. It is also very common to scale the dense fluid viscosity by the critical viscosity, or by an estimate of the critical viscosity, which is a characteristic point far into the dense fluid region. The simplest model of the dense fluid viscosity is a (truncated) power series of reduced mole density or pressure.
Jossi et al. (1962) presented such a model based on reduced mole density, but its most widespread form is the version proposed by
Lohrenz et al. (1964) which is displayed below.
formula_141
The LBC-function is then expanded in a (truncated) power series with empirical coefficients as displayed below.
formula_142
The final viscosity equation is thus
formula_143
formula_144
formula_145
Local nomenclature list:
Mixture.
formula_153
formula_154
formula_155
formula_156
The formula for formula_50 that was chosen by LBC is displayed in the section called Dilute gas contribution.
formula_157
formula_158
formula_159
formula_160
Mixing rules.
The subscript C7+ refers to the collection of hydrocarbon molecules in a reservoir fluid with oil and/or gas that have 7 or more carbon atoms in the molecule. The critical volume of C7+ fraction has unit ft3/lb mole, and it is calculated by
formula_161
where formula_162 is the specific gravity of the C7+ fraction.
formula_163
formula_164
formula_165
The molar mass formula_166 (or molecular mass) is normally not included in the EOS formula, but it usually enters the characterization of the EOS parameters.
EOS.
From the equation of state the molar volume of the reservoir fluid (mixture) is calculated.
formula_167
The molar volume formula_168 is converted to mole density formula_169 (also called mole concentration and denoted formula_170), and then scaled to be reduced mole density formula_171.
formula_172
Dilute gas contribution.
The correlation for dilute gas viscosity of a mixture is taken from Herning and Zipperer
(1936) and is
formula_173
The correlation for dilute gas viscosity of the individual components is taken from
Stiel and Thodos (1961) and is
formula_174
where
formula_175
formula_176
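To summarize the LBC model in computational form, a minimal sketch is given below; it assumes that the dilute gas viscosity and the reduced mole density of the mixture are already available from the correlations and the EOS above, and the power series coefficients are the values commonly quoted for the Jossi/LBC correlation, stated here as an assumption to be checked against the original papers.
<syntaxhighlight lang="python">
# Power series coefficients a_1..a_5 as commonly quoted for the Jossi/LBC correlation
# (assumed values; check against the original papers before use)
LBC_COEFF = (0.10230, 0.023364, 0.058533, -0.040758, 0.0093324)

def lbc_viscosity(eta_0, rho_r, Tc, Pc, M, coeff=LBC_COEFF):
    """Total viscosity eta = eta_0 - 1e-4*D_p + D_p*LBC(rho_r)**4 of the LBC model.

    eta_0 : dilute gas viscosity of the mixture
    rho_r : reduced mole density of the mixture (from the EOS)
    Tc, Pc, M : (pseudo)critical temperature [K], pressure [bar] and molar mass [g/mol]
    The viscosity unit follows the unit convention attached to D_p in this article.
    """
    D_p = Tc ** (-1.0 / 6.0) * Pc ** (2.0 / 3.0) * M ** 0.5
    lbc = sum(a * rho_r ** i for i, a in enumerate(coeff))  # a1 + a2*rho_r + ... + a5*rho_r**4
    return eta_0 - 1.0e-4 * D_p + D_p * lbc ** 4

# Example with illustrative numbers for a lean gas mixture
print(lbc_viscosity(eta_0=110.0, rho_r=0.5, Tc=200.0, Pc=46.0, M=18.0))
</syntaxhighlight>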
Corresponding state principle.
The principle of corresponding states (CS principle or CSP) was first formulated by van der Waals, and it says that two fluids (subscript a and z) of a group (e.g. fluids of non-polar molecules) have approximately the same reduced molar volume (or reduced compressibility factor) when compared at the same reduced temperature and reduced pressure. In mathematical terms this is
formula_177
When the common CS principle above is applied to viscosity, it reads
formula_178
Note that the CS principle was originally formulated for equilibrium states, but it is here applied to a transport property, viscosity, which suggests that another CS formula may be needed for viscosity.
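A sketch of how a simple (uncorrected) corresponding state scaling is commonly applied to viscosity is shown below; the scaling group is the classic one from the sections above, the mapping to the reference state uses the ratios of critical temperature and pressure, and the reference fluid correlation in the example is a made-up placeholder, not the Hanley et al. (1975) correlation used by Pedersen et al.
<syntaxhighlight lang="python">
def cs_viscosity(T, P, Tc_a, Pc_a, M_a, Tc_z, Pc_z, M_z, eta_ref):
    """Simple corresponding state estimate of the viscosity of fluid 'a' from a
    reference fluid 'z' (e.g. methane).

    eta_ref(T, P) must return the reference fluid viscosity at the given
    temperature [K] and pressure [bar]; the result carries the same unit.
    """
    # evaluate the reference fluid at the same reduced temperature and pressure
    T_z = T * Tc_z / Tc_a
    P_z = P * Pc_z / Pc_a
    # classic Tc^(-1/6) * Pc^(2/3) * M^(1/2) viscosity scaling group
    scale = (Tc_a / Tc_z) ** (-1.0 / 6.0) * (Pc_a / Pc_z) ** (2.0 / 3.0) * (M_a / M_z) ** 0.5
    return scale * eta_ref(T_z, P_z)

# Placeholder reference correlation (made up, for illustration only)
eta_methane = lambda T, P: 100.0 + 0.3 * (T - 190.6) + 0.5 * P

# Example: ethane-like fluid with approximate critical constants
print(cs_viscosity(T=350.0, P=100.0, Tc_a=305.3, Pc_a=48.7, M_a=30.07,
                   Tc_z=190.6, Pc_z=46.0, M_z=16.04, eta_ref=eta_methane))
</syntaxhighlight>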
In order to increase the calculation speed for viscosity calculations based on CS theory, which is important in e.g. compositional reservoir simulations, while keeping the accuracy of the CS method, Pedersen et al.
(1984, 1987, 1989) proposed a CS method that uses a simple (or conventional) CS formula when calculating the reduced mass density that is used in the rotational coupling constants (displayed in the sections below), and a more complex CS formula, involving the rotational coupling constants, elsewhere.
Mixture.
The simple corresponding state principle is extended by including a rotational coupling coefficient formula_179 as suggested by Tham and Gubbins
(1970). The reference fluid is methane, and it is given the subscript z.
formula_180
formula_181
formula_182
Mixing rules.
The interaction terms for critical temperature and critical volume are
formula_183
formula_184
The parameter formula_102 is usually uncertain or not available. One therefore wants to avoid this parameter. Replacing formula_110 with the generic average parameter formula_185 for all components gives
formula_186
formula_187
formula_188
The above expression for formula_189 is now inserted into the equation for formula_190. This gives the following mixing rule
formula_191
Mixing rule for the critical pressure of the mixture is established in a similar way.
formula_192
formula_193
formula_194
The mixing rule for molecular weight is much simpler, but it is not entirely intuitive. It is an empirical combination of the more intuitive formulas with mass weighting formula_195 and mole weighting formula_196.
formula_197
formula_198
The rotational coupling parameter for the mixture is
formula_199
Reference fluid.
The accuracy of the final viscosity from the CS method requires a very accurate density prediction for the reference fluid. The molar volume of the reference fluid methane is therefore calculated by a special EOS, and the Benedict-Webb-Rubin (1940) equation of state variant suggested by McCarty (1974), abbreviated BWRM, is recommended by Pedersen et al. (1987) for this purpose. This means that the fluid mass density in a grid cell of the reservoir model may be calculated via e.g. a cubic EOS or from an input table of unknown origin. In order to avoid iterative calculations, the reference (mass) density used in the rotational coupling parameters is therefore calculated using a simpler corresponding state principle which says that
formula_200
The molar volume is used to calculate the mass concentration, which is called (mass) density, and then scaled to be the reduced density, which is equal to the reciprocal of the reduced molar volume because there is only one component (molecule type). In mathematical terms this is
formula_201
The formula for the rotational coupling parameter of the mixture is shown further up, and the rotational coupling parameter for the reference fluid (methane) is
formula_202
The methane mass density used in viscosity formulas is based on the extended corresponding state, shown at the beginning of this chapter on CS-methods. Using the BWRM EOS, the molar volume of the reference fluid is calculated as
formula_203
Once again, the molar volume is used to calculate the mass concentration, or mass density, but the reference fluid is a single component fluid, and the reduced density is independent of the relative molar mass. In mathematical terms this is
formula_204
The effect of a changing composition of e.g. the liquid phase is related to the scaling factors for viscosity, temperature and pressure, and that is the corresponding state principle.
The reference viscosity correlation of Pedersen et al. (1987) is
formula_205
The formulas for formula_206, formula_207, formula_208 are taken from Hanley et al.
(1975).
The dilute gas contribution is
formula_209
The temperature dependent factor of the first density contribution is
formula_210
The dense fluid term is
formula_211
where the exponential function is written both as formula_212 and as formula_213. The molar volume of the reference fluid methane, which is used to calculate the mass density in the viscosity formulas above, is calculated at a reduced temperature that is proportional to the reduced temperature of the mixture. Due to the high critical temperatures of heavier hydrocarbon molecules, the reduced temperature of heavier reservoir oils (i.e. mixtures) can give a transferred reduced methane temperature that is in the neighborhood of the freezing temperature of methane. This is illustrated using two fairly heavy hydrocarbon molecules in the table below. The selected temperatures are a typical oil or gas reservoir temperature, the reference temperature of the International Standard Metric Conditions for natural gas (and similar fluids) and the freezing temperature of methane (formula_214).
Pedersen et al. (1987) added a fourth term that corrects the reference viscosity formula at low reduced temperatures. The temperature functions formula_215 and formula_216 are weight factors. Their correction term is
formula_217
formula_218
formula_219
formula_220
formula_221
formula_222
Equation of state analogy.
Phillips
plotted temperature formula_94 versus viscosity formula_0 for different isobars for propane, and observed a similarity between these isobaric curves and the classic isothermal curves of the formula_223 surface. Later, Little and Kennedy
developed the first viscosity model based on an analogy between formula_224 and formula_223 using the van der Waals EOS. The van der Waals EOS was the first cubic EOS, but cubic EOSs have over the years been improved and now make up a widely used class of EOS. Therefore, Guo et al.
(1997)
developed two new analogy models for viscosity based on PR EOS (Peng and Robinson 1976) and PRPT EOS (Patel and Teja 1982)
respectively. The following year T.-M. Guo
modified the PR based viscosity model slightly, and it is this version that will be presented below as a representative of EOS analogy models for viscosity.
PR EOS is displayed on the next line.
formula_225
The viscosity equation of Guo (1998) is displayed on the next line.
formula_226
To prepare for the mixing rules, the viscosity equation is re-written for a single fluid component i.
formula_227
Details of how the composite elements of the equation are related to basic parameters and variables are displayed below.
formula_228
formula_229
formula_230
formula_231
formula_232
formula_233
formula_234
formula_235
formula_236
formula_237
formula_238
formula_239
formula_240
formula_241
formula_242
formula_243
Friction force theory.
Multi-parameter friction force theory.
The multi-parameter version of the friction force theory (short FF theory and FF model), also called friction theory (short F-theory), was developed by Quiñones-Cisneros et al. (2000, 2001a, 2001b and Z 2001, 2004, 2006), and its basic elements, using some well known cubic EOSs, are displayed below.
It is a common modeling technique to accept a viscosity model for dilute gas (formula_65), and then establish a model for the dense fluid viscosity formula_244. The FF theory states that for a fluid under shear motion, the shear stress formula_245 (i.e. the dragging force) acting between two moving layers can be separated into a term formula_246 caused by dilute gas collisions, and a term formula_247 caused by friction in the dense fluid.
formula_248
The dilute gas viscosity (i.e. the limiting viscosity behavior as the pressure, normal stress, goes to zero) and the dense fluid viscosity (the residual viscosity) can be calculated by
formula_249
where formula_250 is the local velocity gradient du/dy orthogonal to the direction of flow. Thus
formula_251
The basic idea of QZS (2000) is that internal surfaces in a Couette flow act like (or are analogous to) mechanical slabs with friction forces acting on each surface as they slide past each other. According to the Amontons-Coulomb friction law in classical mechanics, the ratio between the kinetic friction force formula_252 and the normal force formula_253 is given by
formula_254
where formula_255 is known as the kinetic friction coefficient, A is the area of the internal flow surface, formula_245 is the shear stress and formula_256 is the normal stress (or pressure formula_95) between neighboring layers in the Couette flow.
formula_257
The FF theory of QZS says that when a fluid is brought to have shear motion, the attractive and repulsive intermolecular forces will contribute to amplify or diminish the mechanical properties of the fluid. The friction shear stress term
formula_247 of the dense fluid can therefore be considered to consist of an attractive friction shear contribution formula_258 and a repulsive friction shear contribution formula_259. Inserting this gives us
formula_260
The well known cubic equations of state (SRK, PR and PRSV EOS) can be written in a general form as
formula_261
The parameter pair (u,w)=(1,0) gives the SRK EOS, and (u,w)=(2,-1) gives both the PR EOS and the PRSV EOS, because they differ only in the temperature and composition dependent parameter / function a. Input variables are, in our case, pressure (P), temperature (T) and, for mixtures, also the fluid composition, which can be the single phase (or total) composition formula_262, the vapor (gas) composition formula_263 or the liquid (in our example oil) composition formula_264. Output is the molar volume of the phase (V). Since the cubic EOS is not perfect, the molar volume is more uncertain than the pressure and temperature values.
The EOS consists of two parts that are related to van der Waals forces, or interactions, which originate in the static electric fields of the colliding parts (or spots) of the two (or more) colliding molecules. The repulsive part of the EOS is usually modeled as a hard core behavior of molecules, hence the symbol (Ph), and the attractive part (Pa) is based on the attractive interaction between molecules (conf. van der Waals force). The EOS can therefore be written as
formula_265
Assume that the molar volume (V) is known from EOS calculations, and from prior vapor-liquid equilibrium (VLE) calculations for mixtures. Then the two functions formula_266 and formula_267 can be utilized, and these functions are expected to be more accurate and robust than the molar volume (V) itself. These functions are
formula_268
formula_269
The friction theory therefore assumes that the residual attractive stress formula_270 and the residual repulsive stress formula_271 are functions of the attractive pressure term formula_272 and the repulsive pressure term formula_273, respectively.
formula_274
The first attempt is, of course, to try a linear function in the pressure terms / functions.
formula_275
All formula_276 coefficients are in general functions of temperature and composition, and they are called friction functions. In order to achieve high accuracy over wide pressure and temperature ranges, it turned out that a second order term was needed, even for non-polar molecule types such as hydrocarbon fluids in oil and gas reservoirs, to reach high accuracy at very high pressures. A test with a presumably difficult 3-component mixture of non-polar molecule types needed a third order power to achieve high accuracy at the most extreme super-critical pressures.
formula_277
This article will concentrate on the second order version, but the third order term will be included whenever possible in order to show the total set of formulas. As an introduction to mixture notation, the above equation is repeated for component i in a mixture.
formula_278
The unit equations for the central variables in the multi-parameter FF-model are
formula_279
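A minimal sketch of the second order friction theory viscosity for a single component is shown below, assuming the SRK form of the pressure split into attractive and repulsive terms; the friction coefficients and the state point in the example are illustrative numbers, not fitted friction constants from the cited papers.
<syntaxhighlight lang="python">
R = 83.145  # cm^3*bar/(mol*K)

def srk_pressure_terms(T, V, a, b):
    """Split the SRK pressure into its repulsive (hard core) and attractive terms.

    T [K], V [cm^3/mol], a [bar*cm^6/mol^2], b [cm^3/mol]; both terms in bar.
    """
    p_rep = R * T / (V - b)        # repulsive / hard core term
    p_att = -a / (V * (V + b))     # attractive term (negative)
    return p_att, p_rep

def friction_viscosity(eta_0, p_att, p_rep, kappa_a, kappa_r, kappa_rr):
    """Second order friction theory: eta = eta_0 + k_a*p_a + k_r*p_r + k_rr*p_r**2."""
    return eta_0 + kappa_a * p_att + kappa_r * p_rep + kappa_rr * p_rep ** 2

# Example with illustrative EOS output and friction coefficients (not fitted constants)
p_a, p_r = srk_pressure_terms(T=350.0, V=120.0, a=1.5e7, b=80.0)
print(friction_viscosity(eta_0=0.01, p_att=p_a, p_rep=p_r,
                         kappa_a=-2.0e-4, kappa_r=3.0e-4, kappa_rr=1.0e-8))
</syntaxhighlight>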
Friction functions.
Friction functions for fluid component i in the 5 parameter model for pure n-alkane molecules are presented below.
formula_280
formula_281
formula_282
formula_283
Friction functions for fluid component i in the 7- and 8-parameter models are presented below.
formula_284
formula_285
formula_282
formula_286
formula_283
The empirical constants in the friction functions are called friction constants. Friction constants for some n-alkanes in the 5 parameter model using the SRK and PRSV EOS (and thus the PR EOS) are presented in tables below. Friction constants for some n-alkanes in the 7 parameter model using the PRSV EOS are also presented in a table below. The constant formula_287 for three fluid components is presented below in the last table of this table-series.
formula_288
Mixture.
In the single phase regions, the mole volume of the fluid mixture is determined by the input variables pressure (P), temperature (T) and (total) fluid composition formula_35. In the two-phase gas-liquid region, a vapor-liquid equilibrium (VLE) calculation splits the fluid into a vapor (gas) phase with composition formula_34 and phase mixture mole fraction ng, and a liquid phase (in our example oil) with composition formula_32 and phase mixture mole fraction no. For the liquid phase, the vapor phase and a single phase fluid, the relation to the VLE and EOS variables is
formula_289
formula_290
In a compositional reservoir simulator the pressure is calculated dynamically for each grid cell and each timestep. This gives dynamic pressures for the vapor and liquid (oil) phases, or for the single phase fluid. Assuming zero capillary pressure between hydrocarbon liquid (oil) and gas, the simulator software code will give a single dynamic pressure formula_291, which applies to both the vapor mixture and the liquid (oil) mixture. In this case the reservoir simulator software code may use
formula_292
or
formula_293
The friction model for viscosity of a mixture is
formula_294
formula_295
The cubic power term is only needed when molecules with a fairly rigid 2-D structure are included in the mixture, or when the user requires very high accuracy at extremely high pressures. The standard model includes only linear and quadratic terms in the pressure functions.
formula_296
formula_297
formula_298
Mixing rules.
where the empirical weight fraction is
formula_299
The recommended values for formula_300 are
These values are established from binary mixtures of n-alkanes using a 5-parameter viscosity model, and they seem to be used for 7- and 8-parameter models also. The motivation for the weight parameter formula_303, and thus the formula_300-parameter, is that in asymmetric mixtures like CH4 - C10H22, the lightest component tends to decrease the viscosity of the mixture more than linearly when plotted versus the mole fraction of the light component (or the heavy component).
The friction coefficients of some selected fluid components are presented in the tables below for the 5-, 7- and 8-parameter models. For convenience, critical viscosities are also included in the tables.
One-parameter friction force theory.
The one-parameter version of the friction force theory (FF1 theory and FF1 model) was developed by Quiñones-Cisneros et al.
(2000, 2001a, 2001b and
Z 2001, 2004),
and its basic elements, using some well known cubic EOSs, are displayed below.
The first step is to define the reduced dense fluid (or frictional) viscosity for a pure (i.e. single component) fluid by dividing by the critical viscosity. The same goes for the dilute gas viscosity.
formula_304
The second step is to replace the attractive and repulsive pressure functions by reduced pressure functions. This will, of course, also affect the friction functions. New friction functions are therefore introduced. They are called reduced friction functions, and they are of a more universal nature. The reduced frictional viscosity is
formula_305
Returning to the unreduced frictional viscosity and rephrasing the formula gives
formula_306
Critical viscosity is seldom measured and attempts to predict it by formulas are few. For a pure fluid, or component i in a fluid mixture, a formula from kinetic theory is often used to estimate critical viscosity.
formula_307
where formula_308 is a constant, and critical molar volume Vci is assumed to be proportional to the collision cross section. The critical molar volume Vci is significantly more uncertain than the parameters Pci and Tci. To get rid of Vci, the critical compressibility factor Zci is often replaced by a universal average value. This gives
formula_309
where formula_310 is a constant. Based on an average critical compressibility factor of Zc = 0.275 and measured critical viscosity values of 60 different molecule types, Uyehara and Watson (1944) determined an average value of Kp to be
formula_311
Zéberg-Mikkelsen (2001) proposed an empirical correlation for Vci, with parameters for n-alkanes, which is
formula_312
where formula_107. From the above equation and the definition of the compressibility factor it follows that
formula_109
Zéberg-Mikkelsen (2001) also proposed an empirical correlation for ηci, with parameters for n-alkanes, which is
formula_313
The unit equations for the two constitutive equations above by Zéberg-Mikkelsen (2001) are
formula_314
The next step is to split the formulas into formulas for well defined components (designated by subscript d), with respect to critical viscosity, and formulas for uncertain components (designated by subscript u), where critical viscosity is estimated using formula_315 and the universal constant formula_310, which will be treated as a tuning parameter for the current mixture. The dense fluid viscosity (for fluid component i in a mixture) is then written as
formula_316
The formulas from friction theory are then related to well defined and uncertain fluid components. The result is
formula_317
formula_318
formula_319
However, in order to obtain the characteristic critical viscosity of the heavy pseudocomponents, the following modification of the Uyehara and Watson (1944) expression for the critical viscosity can be used. The frictional (or residual) viscosity is then written as
formula_320
The unit equations are formula_321 and formula_322 and formula_323.
formula_324
formula_325
formula_326
Reduced friction functions.
The unit equation of formula_327 is formula_328.
The 1-parameter model has been developed based on single component fluids in the series from methane to n-octadecane (CH4 to C18H38). The empirical parameters in the reduced friction functions above are treated as universal constants, and they are listed in the following table. For convenience, critical viscosities are included in the tables for the models with 5 and 7 parameters that were presented further up.
Mixture.
The mixture viscosity is given by
formula_329
The mixture viscosity of well defined components is given by
formula_330
The mixture viscosity function of uncertain components is given by
formula_331
where the mixture friction coefficients are obtained by eq(I.7.45) through eq(I.7.47), and formula_333 and formula_334 are the attractive and repulsive pressure terms of the mixture.
The mixture viscosity can be tuned to measured viscosity data by optimizing (regressing) the parameter formula_332.
Mixing rules.
The mixing rules for the well defined components are
formula_335
formula_336
formula_337
QZS recommend dropping the dilute gas term for the uncertain fluid components, which are usually the heavier (hydrocarbon) components. The formula is kept here for consistency. The mixing rules for the uncertain components are
formula_338
formula_339
formula_340
formula_341
Dilute gas limit.
Zéberg-Mikkelsen (2001) proposed an empirical model for dilute gas viscosity of fairly spherical molecules as follows
formula_342
or
formula_343
formula_344
The unit equations for viscosity and temperature are
formula_345
The second term is a correction term for high temperatures. Note that most formula_346 parameters are negative.
Light gases.
Zéberg-Mikkelsen (2001) proposed a FF-model for light gas viscosity as follows
formula_347
The friction functions for light gases are simple
formula_348
formula_349
formula_350
The FF-model for light gases is valid for low, normal, critical and supercritical conditions for these gases. Although the FF-model for the viscosity of dilute gas is recommended, any accurate viscosity model for dilute gas can also be used with good results.
The unit equations for viscosity and temperature are
formula_351
Transition state analogy.
This article started with viscosity for mixtures by displaying equations for dilute gas based on elementary kinetic theory and hard core (kinetic) theory, and proceeded to selected theories (and models) that aim at modeling viscosity for dense gases, dense fluids and supercritical fluids. Many or most of these theories were based on a philosophy of how gases behave, with molecules flying around, colliding with other molecules and exchanging (linear) momentum, and thus creating viscosity. When the fluid became liquid, the models started to deviate from measurements, because a small error in the calculated molar volume from the EOS is related to a large change in pressure, and vice versa, and thus also in viscosity. The article has now come to the other end, where theories (or models) are based on a philosophy of how a liquid behaves and gives rise to viscosity. Since molecules in a liquid are much closer to each other, one may wonder how often a molecule in one sliding fluid surface finds a free volume in the neighboring sliding surface that is big enough for the molecule to jump into. This may be rephrased as: when does a molecule have enough energy in its fluctuating movements to squeeze into a small open volume in the neighboring sliding surface, similar to a molecule that collides with another molecule and locks into it in a chemical reaction, and thus creates a new compound, as modeled in transition state theory (TS theory and TS model).
Free volume theory.
The free volume theory (short FV theory and FV model) originates from
Doolittle (1951)
who proposed that viscosity is related to the free volume fraction formula_352 in a way that is analogous to the Arrhenius equation. The viscosity model of Doolittle (1951) is
formula_353
where formula_354 is the molar volume and formula_355 is the molar hard core volume.
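A small sketch of a Doolittle type model is given below, assuming the form eta = A exp(B V0/(V - V0)) with the free volume fraction defined as (V - V0)/V0; the exact form and parameter values should be checked against formula_353 and Doolittle (1951), and the numbers in the example are illustrative only.
<syntaxhighlight lang="python">
from math import exp

def doolittle_viscosity(A, B, V, V0):
    """Doolittle type free volume model, assuming eta = A * exp(B * V0 / (V - V0)),
    i.e. eta = A * exp(B / f_v) with the free volume fraction f_v = (V - V0) / V0."""
    f_v = (V - V0) / V0
    return A * exp(B / f_v)

# Example with illustrative parameter values only (molar volumes in cm^3/mol)
print(doolittle_viscosity(A=0.05, B=1.0, V=120.0, V0=90.0))
</syntaxhighlight>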
There was, however, little activity on the FV theory until Allal et al. (1996, 2001a) proposed a relation between the free volume fraction and parameters (and/or variables) at the molecular level of the fluid (also called the microstructure of the fluid). The 1996 model became the start of a period of high research activity where different models were put forward. The surviving model was presented by Allal et al. (2001b), and this model will be displayed below.
The viscosity model is composed of a dilute gas contribution formula_50 (or formula_356 ) and a dense-fluid contribution formula_139 (or dense-state contribution formula_357 or formula_358).
formula_359
Allal et al. (2001b) showed that the dense-fluid contribution to viscosity can be related to the friction coefficient formula_13 of the sliding fluid surface, and Dullien (1963) has shown that the self-diffusion coefficient formula_360 is related to the friction coefficient of an internal fluid surface. These two relations are shown here:
formula_361
By eliminating the friction coefficient formula_13, Boned et al. (2004)
expressed the characteristic length formula_362 as
formula_363
The right hand side corresponds to the so-called Dullien invariant which was derived by Dullien (1963, 1972).
A result from this is that the characteristic length formula_364 is interpreted as the average momentum transfer distance to a molecule that will enter a free volume site and collide with a neighboring molecule.
The friction coefficient formula_13 is modeled by Allal et alios (2001b) as
formula_365
The free volume fraction is now related to the energy E by
formula_366
formula_367
where formula_368 is the total energy a molecule must use in order to diffuse into a vacant volume, and formula_369 is connected to the work (or energy) necessary to form or expand a vacant volume available for diffusion of a molecule. The energy formula_370 is the barrier energy that the molecule must overcome in order to diffuse, and it is modeled as proportional to mass density in order to improve the match with measured viscosity data. Note that the sensitive term formula_371 in the denominator of Doolittle's (1951) model has disappeared, making the viscosity model of Allal et alios (2001b) more robust in numerical calculations of liquid molar volume by an imperfect EOS. The pre-exponential factor A is now a function and becomes
formula_372
The viscosity model proposed by Allal et al.(2001b) is thus
formula_373
A digression is that the self-diffusion coefficient of Boned et al. (2004) becomes
formula_374
Local nomenclature list:
Mixture.
The mixture viscosity is
formula_389
The dilute gas viscosity formula_390 is taken from Chung et al.(1988) which is displayed in the section on SS theory. The dense fluid contribution to viscosity in FV theory is
formula_391
where formula_392 are three characteristic parameters of the fluid with respect to viscosity calculations. For fluid mixtures, these three parameters are calculated using mixing rules. If the self-diffusion coefficient is included in the governing equations, probably via the diffusion equation, use of four characteristic parameters (i.e. use of Lp and Ld instead of Lc) will give a consistent flow model, but flow studies that involve the diffusion equation belong to a small class of special studies.
The unit for the viscosity is [Pa·s] when all other units are kept in SI units.
Mixing rules.
At the end of the intensive research period
Allal et al. (2001c)
and Canet (2001)
proposed two different sets of mixing rules, and according to
Almasi (2015)
there has been no agreement in the literature about which are the best mixing rules. Almasi (2015) therefore recommended the classic linear mole weighted mixing rules which are displayed below for a mixture of N fluid components.
formula_393
formula_394
formula_395
formula_396
The three characteristic viscosity parameters formula_397 are usually established by optimizing the viscosity formula against measured viscosity data for pure fluids (i.e. single component fluids).
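A minimal sketch of the linear mole weighted mixing rules is shown below, assuming that the four mixed quantities are the three FV parameters and the molar mass; the pure component parameter values in the example are illustrative numbers, not fitted values.
<syntaxhighlight lang="python">
def linear_mole_mixing(x, alpha, B, L, M):
    """Classic linear mole weighted mixing rules for the FV parameters and molar mass."""
    mix = lambda prop: sum(xi * pi for xi, pi in zip(x, prop))
    return mix(alpha), mix(B), mix(L), mix(M)

# Example: binary mixture with illustrative pure component FV parameters
print(linear_mole_mixing(x=[0.3, 0.7],
                         alpha=[60.0, 90.0],
                         B=[0.008, 0.010],
                         L=[0.4, 0.6],
                         M=[86.18, 142.28]))
</syntaxhighlight>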
Trend functions.
The three characteristic viscosity parameters formula_398 are usually established by optimizing the viscosity formula against measured viscosity data for pure fluids (i.e. single component fluids). Data for these parameters can then be stored in databases together with data for other chemical and physical material properties and information. This happens more often if use of the equation becomes widespread. Hydrocarbon molecules form a huge group of molecules with several subgroups, each of which contains molecules of the same basic structure but with different lengths. The alkanes are the simplest of these groups. A material property of molecules in such a group normally shows up as a function when plotted against another material property. A mathematical function is then selected based on physical/chemical knowledge, experience and intuition, and the empirical parameters (i.e. constants) in the function are determined by curve fitting. Such a function is called a trend or trend function, and the group of molecule types is called a homologous series. Llovell et al. (2013a, 2013b)
proposed trend functions for the three FV parameters formula_399 for alkanes.
Oliveira et al. (2014)
proposed trend functions for the FV parameters for fatty acid methyl esters (FAME) and fatty acid ethyl esters (FAEE), both including compounds with up to three unsaturated bonds, which are displayed below.
formula_400
formula_401
formula_402
The molar mass M [g/mol] (or molecular mass / weight) associated with the parameters used in the curve fitting process (where formula_403, formula_404, and formula_405 are empirical parameters) corresponds to carbon numbers in the range 8-24 for FAME and 8-20 for FAEE.
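As an illustration of how such trend functions are established, the sketch below fits a simple linear trend in molar mass to one FV parameter for a homologous series by least squares; the data values, the linear form and the parameter names are made up for illustration and do not reproduce the correlations of Llovell et al. or Oliveira et al.
<syntaxhighlight lang="python">
# Least squares fit of a linear trend function, parameter = k0 + k1*M, for one FV
# parameter across a homologous series; all data values below are made up.
M_data = [200.0, 228.0, 256.0, 284.0]   # molar masses of four series members [g/mol]
L_data = [0.42, 0.47, 0.52, 0.57]       # fitted pure component values of one FV parameter

n = len(M_data)
mean_M = sum(M_data) / n
mean_L = sum(L_data) / n
k1 = sum((m - mean_M) * (l - mean_L) for m, l in zip(M_data, L_data)) \
     / sum((m - mean_M) ** 2 for m in M_data)
k0 = mean_L - k1 * mean_M

# Interpolate the parameter for a series member that was not part of the fit
M_new = 270.0
print(k0 + k1 * M_new)
</syntaxhighlight>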
Significant structure theory.
Viscosity models based on significant structure theory, a designation originating from Eyring (short SS theory and SS model), have in the first two decades of the 2000s evolved in a development relay. It started with Macías-Salinas et al. (2003), continued with a significant contribution from Cruz-Reyes et al. (2005), and was followed by a third stage of development by Macías-Salinas et al. (2013), whose model is displayed here. The SS theories have three basic assumptions:
The fractions of gas-like molecules formula_406 and solid-like molecules formula_407 are
formula_408
where formula_354 is the molar volume of the phase in question, formula_409 is the molar volume of the solid-like molecules and formula_355 is the molar hard core volume. The viscosity of the fluid is a mixture of the contributions from these two classes of molecules
formula_410
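A sketch of how the two contributions may be combined is shown below, assuming that the solid-like fraction Vs/V and the gas-like fraction 1 - Vs/V weight the two viscosity contributions linearly; the exact combination is the one given by formula_410, and the numbers in the example are illustrative only.
<syntaxhighlight lang="python">
def ss_viscosity(eta_solid, eta_gas, V, Vs):
    """Combine the solid-like and gas-like contributions, assuming the solid-like
    fraction Vs/V and the gas-like fraction 1 - Vs/V weight them linearly."""
    x_solid = Vs / V
    return x_solid * eta_solid + (1.0 - x_solid) * eta_gas

# Example with illustrative values (viscosities in cP, molar volumes in cm^3/mol)
print(ss_viscosity(eta_solid=1.2, eta_gas=0.012, V=120.0, Vs=95.0))
</syntaxhighlight>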
Gas-like contribution.
The gas-like viscosity contribution is taken from the viscosity model of
Chung et al.(1984, 1988),
which is based on the Chapman–Enskog(1964)
kinetic theory of viscosity for dilute gases and the empirical expression of
Neufeld et al.(1972)
for the reduced collision integral, but expanded empirically to handle polyatomic, polar and hydrogen
bonding fluids over a wide temperature range. The viscosity model of Chung et al.(1988) is
formula_411
formula_412
where
formula_413
Local nomenclature list:
Solid-like contribution.
In the 2000s, the development of the solid-like viscosity contribution started with
Macías-Salinas et al. (2003), who used the Eyring equation in TS theory as an analogue to the solid-like viscosity contribution, and as a generalization of the first exponential liquid viscosity model proposed by Reynolds (1886).
The Eyring equation models irreversible chemical reactions at constant pressure, and the equation therefore uses Gibbs activation energy, formula_424, to model the transition state energy that the system uses to move matter (i.e. separate molecules) from the initial state to the final state (i.e. the new compound). In the Couette flow, the system moves matter from one sliding surface to another, due to fluctuating internal energy, and probably also due to pressure and the pressure gradient. Besides, the pressure effect on viscosity is somewhat different for systems in a medium pressure range than it is for systems in a very high pressure range.
Cruz-Reyes et al.(2005) uses Helmholtz energy (F = U-TS = G-PV) as potential in the exponential function. This gives
formula_425
Cruz-Reyes et al. (2005) state that the Gibbs activation energy is proportional to the negative of the internal energy of vaporization (and thus calculated at a point on the freezing curve), but Macías-Salinas et al. (2013) change that to be the residual internal energy, formula_426, at the general pressure and temperature of the system. One could alternatively use the grand potential (formula_427 = U-TS-G = -PV, sometimes called Landau energy or potential) in the exponential function and argue that the Couette flow is not a homogeneous system, such that a term with the residual internal energy must be added. Both arguments give the proposed solid-like contribution, which is
formula_428
The pre-exponential factor formula_429 is taken as
formula_430
The jumping frequency of a molecule that jumps from its initial position to a vacant site, formula_431, is made dependent on the number of vacancies, formula_432, and pressure in order to extend the applicability of formula_433 to much wider ranges of temperature and pressure than a constant jumping frequency would do. The final jumping frequency model is
formula_434
A recurrent problem for viscosity models is the calculation of liquid molar volume for a given pressure using an EOS that is not perfect. This calls for introduction of some empirical parameters. Use of adjustable proportionality factors for both the residual internal energy and the Z-factor is a natural choice. The sensitivity of P versus V-b values for liquids makes it natural to introduce an empirical exponent (power) to the dimensionless Z-factor. The empirical power turns out to be very effective in the high pressure (high Z-factor) region. The solid-like viscosity contribution proposed by Macías-Salinas et al.(2013) is then
formula_435
Local nomenclature list:
formula_449
formula_450
formula_451
Mixture.
In order to clarify the mathematical statements above, the solid-like contribution for a fluid mixture is displayed in more detail below.
formula_452
Mixing rules.
The variables formula_453 and all EOS parameters for a fluid mixture are taken from the EOS (conf. W) and the mixing rules used by the EOS (conf. Q). More details on this are displayed below.
A fluid of n moles in the single phase region, where the total fluid composition is formula_35 [mole fractions]:
formula_454
Gas phase of ng moles in the two-phase region, where the gas composition is formula_34 [mole fractions]:
formula_455
Liquid phase of nl moles in the two-phase region, where the liquid composition is formula_32 [mole fractions]:
formula_456
where
formula_457
formula_458
Since nearly all input to this viscosity model is provided by the EOS and the equilibrium calculations, this SS model (or TS model) for viscosity should be very simple to use for fluid mixtures. The viscosity model also has some empirical parameters that can be used as tuning parameters to compensate for imperfect EOS models and secure high accuracy also for fluid mixtures.
|
[
{
"math_id": 0,
"text": "\\eta"
},
{
"math_id": 1,
"text": " \\rho \\left[\\frac{\\partial \\mathbf{u}}{\\partial t} + \\mathbf{u} \\cdot \\nabla \\mathbf{u}\\right] = \n-\\nabla P + \\nabla[\\zeta(\\nabla\\cdot \\mathbf{u})]\n+ \\nabla\\cdot\\left[\\eta\\left(\\nabla\\mathbf{u} + \\left(\\nabla\\mathbf{u}\\right)^T - \\frac{2}{3} (\\nabla\\cdot\\mathbf{u})\\mathbf{I}\\right) \\right] + \\rho \\mathbf{g}"
},
{
"math_id": 2,
"text": " \\boldsymbol{\\sigma}"
},
{
"math_id": 3,
"text": " \\left(-P\\mathbf{I}\\right) "
},
{
"math_id": 4,
"text": " \\boldsymbol{\\tau}_{d} "
},
{
"math_id": 5,
"text": " \\boldsymbol{\\tau}_{c} "
},
{
"math_id": 6,
"text": " \\boldsymbol{\\tau}_{s} "
},
{
"math_id": 7,
"text": " \\rho \\mathbf{g} "
},
{
"math_id": 8,
"text": " \\rho "
},
{
"math_id": 9,
"text": " \\mathbf{u} "
},
{
"math_id": 10,
"text": " \\boldsymbol{\\sigma} = -P \\mathbf{I} +\\boldsymbol{\\tau}_{d} = -P \\mathbf{I} +\\boldsymbol{\\tau}_{c} +\\boldsymbol{\\tau}_{s} "
},
{
"math_id": 11,
"text": " \\mathbf{C} "
},
{
"math_id": 12,
"text": " \\mathbf{S}_{0} "
},
{
"math_id": 13,
"text": " \\zeta "
},
{
"math_id": 14,
"text": " \\eta "
},
{
"math_id": 15,
"text": " \\boldsymbol{\\tau}_{c} = 3 \\zeta \\mathbf{C} "
},
{
"math_id": 16,
"text": " \\boldsymbol{\\tau}_{s} = 2 \\eta \\mathbf{S}_{0} "
},
{
"math_id": 17,
"text": " \\mathbf{C} = \\frac{1}{3} \\left( \\nabla\\!\\cdot\\!\\mathbf{u} \\right) \\mathbf{I} "
},
{
"math_id": 18,
"text": " \\mathbf{S}_{0} = \\mathbf{S} - \\frac{1}{3} \\left( \\nabla\\!\\cdot\\!\\mathbf{u} \\right) \\mathbf{I} "
},
{
"math_id": 19,
"text": " \\mathbf{S} = \\frac{1}{2} \\left[ \\nabla \\mathbf{u} + \\left( \\nabla \\mathbf{u} \\right)^{\\mathrm{T}} \\right] "
},
{
"math_id": 20,
"text": " \\mathbf{u}_a = -\\eta_a^{-1} \\mathbf{K}_{ra} \\cdot \\mathbf{K} \\cdot \\left( \\nabla P_a - \\rho_a \\mathbf{g} \\right) "
},
{
"math_id": 21,
"text": " \\mathbf{K} "
},
{
"math_id": 22,
"text": " \\mathbf{K}_{ra} "
},
{
"math_id": 23,
"text": " \\mathbf{S}_{0} = \\mathbf{S} "
},
{
"math_id": 24,
"text": " \\tau = \\eta S \\quad \\text{where} \\quad \\tau = \\frac {F} {A} \\quad \\text{and} \\quad S = {d u_{} \\over dy} = {u_{max} \\over y_{max} }"
},
{
"math_id": 25,
"text": " \\frac {F} {A} = \\eta {d u_{} \\over dy} = \\eta { u_{max} \\over y_{max}} "
},
{
"math_id": 26,
"text": " A "
},
{
"math_id": 27,
"text": " y "
},
{
"math_id": 28,
"text": " F "
},
{
"math_id": 29,
"text": " u_{max} "
},
{
"math_id": 30,
"text": " T "
},
{
"math_id": 31,
"text": " \\eta = f(T,P,\\mathbf{w}) \\quad \\text{where} \\quad \\mathbf{w} =\\mathbf{x}, \\mathbf{y},\\mathbf{z},1_{purefluid} "
},
{
"math_id": 32,
"text": " \\mathbf{x} "
},
{
"math_id": 33,
"text": " x_{i} "
},
{
"math_id": 34,
"text": " \\mathbf{y} "
},
{
"math_id": 35,
"text": " \\mathbf{z} "
},
{
"math_id": 36,
"text": " \\eta = f(T,P,\\mathbf{w},\\mathbf{S}_{0}) \\quad \\text{where} \\quad \\mathbf{w} =\\mathbf{x}, \\mathbf{y},\\mathbf{z},1_{purefluid} "
},
{
"math_id": 37,
"text": " \\sigma "
},
{
"math_id": 38,
"text": " C "
},
{
"math_id": 39,
"text": " C = N/V "
},
{
"math_id": 40,
"text": " C \\sigma "
},
{
"math_id": 41,
"text": "l"
},
{
"math_id": 42,
"text": " l = \\frac {1} {\\sqrt{2} C \\sigma} "
},
{
"math_id": 43,
"text": " \\eta_{0} = \\frac {2} {3 \\sqrt{\\pi} } \\cdot \\frac {\\sqrt{m k_{B}T}} {\\sigma}\n= \\frac {2} {3 \\sqrt{\\pi} } \\cdot \\frac {\\sqrt{MRT}} {\\sigma N_{A}} "
},
{
"math_id": 44,
"text": " k_{B} \\cdot N_{A} = R \\quad \\text{and} \\quad M = m \\cdot N_{A} "
},
{
"math_id": 45,
"text": "k_{B}"
},
{
"math_id": 46,
"text": "N_{A}"
},
{
"math_id": 47,
"text": "R"
},
{
"math_id": 48,
"text": "M"
},
{
"math_id": 49,
"text": "m"
},
{
"math_id": 50,
"text": " \\eta_{0} "
},
{
"math_id": 51,
"text": "r"
},
{
"math_id": 52,
"text": " \\sigma = \\pi \\left( 2 r_{} \\right)^2 = \\pi d^2 \\qquad \\qquad \\qquad \\,\n\\quad \\text{for monomolecular gases and monoparticle beam experiments } "
},
{
"math_id": 53,
"text": " \\sigma_{ij} = \\pi \\left( r_{i} + r_{j} \\right)^2 = \\frac{\\pi}{4} \\left( d_{i} + d_{j} \\right)^2 \n\\quad \\text{for binary collision in gas mixtures and dissimilar bullet / target particles} "
},
{
"math_id": 54,
"text": "r, r_{i}"
},
{
"math_id": 55,
"text": "d=2r, d_{i}=2r_{i}"
},
{
"math_id": 56,
"text": " \\sigma \\cdot N_{A}"
},
{
"math_id": 57,
"text": " V_{c} "
},
{
"math_id": 58,
"text": " \\sigma N_{A} \\propto V_{c}^{2/3} \\quad \\text{or} \\quad \\sigma N_{A} = \n\\frac {2} {3 \\sqrt{\\pi} } \\cdot K_{rv}^{-1} V_{c}^{2/3} "
},
{
"math_id": 59,
"text": "K_{rv}"
},
{
"math_id": 60,
"text": " \\sigma N_{A} "
},
{
"math_id": 61,
"text": " T_{r}"
},
{
"math_id": 62,
"text": " \\eta_{0} = \\sqrt{T_{r}} K_{rv} D_{rv} \\quad \\text{where} \\quad T_{r} = T / T_{c} \\quad \\text{and} "
},
{
"math_id": 63,
"text": " D_{rv} = \\left( MRT_{c} \\right)^{1/2} V_{c}^{-2/3} = R^{1/2} D_{v} "
},
{
"math_id": 64,
"text": "D_{rv}"
},
{
"math_id": 65,
"text": "\\eta_{0}"
},
{
"math_id": 66,
"text": "V_{c}"
},
{
"math_id": 67,
"text": "D_{xyz}"
},
{
"math_id": 68,
"text": "P_{c}"
},
{
"math_id": 69,
"text": "T_{c}"
},
{
"math_id": 70,
"text": "D_{v}"
},
{
"math_id": 71,
"text": " \\eta_{0} = \\sqrt{T_{r}} K_{v} D_{v} "
},
{
"math_id": 72,
"text": "K_{v}"
},
{
"math_id": 73,
"text": " \\eta_{0} = \\sqrt{T_{r}} K_{rv} D_{rv} = \\sqrt{T_{r}} K_{v} D_{v} \\implies K_{v} = R^{1/2}K_{rv} "
},
{
"math_id": 74,
"text": " \\eta_{0c} = K_{rv} D_{rv} = K_{v} D_{v} "
},
{
"math_id": 75,
"text": " K_{rv} "
},
{
"math_id": 76,
"text": " K_{v} "
},
{
"math_id": 77,
"text": " D_{rv} "
},
{
"math_id": 78,
"text": " D_{v} "
},
{
"math_id": 79,
"text": " PV = ZRT \\implies P_{c}V_{c} = Z_{c}RT_{c}"
},
{
"math_id": 80,
"text": " Z_{c} "
},
{
"math_id": 81,
"text": " R "
},
{
"math_id": 82,
"text": " K_{p} "
},
{
"math_id": 83,
"text": " \\eta_{0} = \\sqrt{T_{r}} K_{p} D_{p} \\quad \\text{where} \\quad T_{r} = T / T_{c} \\quad \\text{and} "
},
{
"math_id": 84,
"text": " D_{p} = T_{c}^{-1/6} P_{c}^{2/3} M^{1/2} "
},
{
"math_id": 85,
"text": " \\eta_{0c} = K_{p} D_{p} "
},
{
"math_id": 86,
"text": " \\bar Z_{c} = 0.275 "
},
{
"math_id": 87,
"text": " \\bar K_{p} = 7.7 \\cdot 1.01325^{2/3} \\approx 7.77 \n\\quad \\text{for} \\quad \\left[ \\eta_{0} \\right] = \\mu P\n\\quad \\text{and} \\quad \\left[ P_{c} \\right] = bar "
},
{
"math_id": 88,
"text": " b "
},
{
"math_id": 89,
"text": " b = \\Omega_{b} \\frac{RT_{c}}{P_{c}} \\quad \\text{which is similar to} \\quad \nV_{c} = \\bar Z_{c} \\frac{RT_{c}}{P_{c}} "
},
{
"math_id": 90,
"text": " \\Omega_{b} "
},
{
"math_id": 91,
"text": " D_{p} "
},
{
"math_id": 92,
"text": " \\eta_{0cj} = K_{pj} D_{pj} \\neq \\bar K_{p} D_{pj} \\quad \\text{for many or most fluid components j }"
},
{
"math_id": 93,
"text": " \\xi "
},
{
"math_id": 94,
"text": "T"
},
{
"math_id": 95,
"text": "P"
},
{
"math_id": 96,
"text": " \\eta_{0} \\times 10^{6} = 2.6693 \\frac {\\sqrt{MT}} {\\sigma^{2} \\Omega \\left( T^{*} \\right)} \\quad \\text{where} \\quad T^{*} = k_{B}T / \\varepsilon "
},
{
"math_id": 97,
"text": " \\varepsilon "
},
{
"math_id": 98,
"text": " \\Omega ( T^{*} ) "
},
{
"math_id": 99,
"text": " \\eta_{0i} = \\sqrt{T_{ri}} K_{rvi} D_{rvi} \\quad \\text{where} \\quad\nD_{rvi} = \\sqrt{ M_{i} R T_{ci}} \\cdot V_{ci}^{-2/3} "
},
{
"math_id": 100,
"text": " \\eta_{0i} = \\sqrt{T_{ri}} K_{vi} D_{vi} \\ \\ \\, \\quad \\text{where} \\quad\nD_{vi} \\ \\ = \\sqrt{ M_{i} T_{ci}} \\cdot V_{ci}^{-2/3} "
},
{
"math_id": 101,
"text": " \\eta_{0i} = \\sqrt{T_{ri}} K_{pi} D_{pi} \\ \\ \\, \\quad \\text{where} \\quad\nD_{pi} \\ = M_{i}^{1/2}P_{ci}^{2/3} \\cdot T_{ci}^{-1/6} "
},
{
"math_id": 102,
"text": " V_{ci} "
},
{
"math_id": 103,
"text": " V _{ci}^{-1} = A + B \\cdot \\frac {P_{ci}}{RT_{ci}} \\iff\n V _{ci} = \\frac {RT_{ci}}{ART_{ci} + BP_{ci}} "
},
{
"math_id": 104,
"text": " A = 0.000235751 \\ mol/cm^{3} \\quad \\text{and} \\quad B = 3.42770 "
},
{
"math_id": 105,
"text": " \\rho_{nci} "
},
{
"math_id": 106,
"text": "c_{ci} "
},
{
"math_id": 107,
"text": " V _{ci}^{-1} = \\rho_{nci} = c_{ci} "
},
{
"math_id": 108,
"text": " V _{ci}^{-1} "
},
{
"math_id": 109,
"text": " Z _{ci} = \\frac {P_{ci}}{ART_{ci} + BP_{ci}} \\iff \\frac{Z_{ci}RT_{ci}}{P_{ci}V_{ci}} = 1 "
},
{
"math_id": 110,
"text": " Z_{ci} "
},
{
"math_id": 111,
"text": " \\bar K_{p} "
},
{
"math_id": 112,
"text": " D_{pi} "
},
{
"math_id": 113,
"text": " \\eta_{ci} = \\bar K_{p} D_{pi} "
},
{
"math_id": 114,
"text": " \\ \\ \\bar K_{p} \\, = 7.7 \\cdot 1.01325^{2/3} \\approx 7.77\n\\quad \\text{for} \\quad \\left[ \\eta_{0} \\right] = \\mu P\n\\quad \\text{and} \\quad \\left[ P_{c} \\right] = bar "
},
{
"math_id": 115,
"text": " \\eta_{ci} = C \\cdot P_{ci} M_{i}^{D} "
},
{
"math_id": 116,
"text": " \\ C = 0.597556 \\ \\mu P /bar \\cdot (g/mol)^{-D} \\quad \\text{and} \\quad D = 0.601652 "
},
{
"math_id": 117,
"text": " [P_{c}] = bar \\quad \\text{and} \\quad [V_{c}] = [RT_{c}/P_{c}] = cm^{3}/mol \\quad \\text{and} \\quad [T] = K \n\\quad \\text{and} \\quad [Z_{c}] = 1 \\quad \\text{and} \\quad [\\eta_{c}] = \\mu P "
},
{
"math_id": 118,
"text": " \\eta_{ci} = K_{rvi} D_{rvi} = K_{vi} D_{vi} = K_{pi} D_{pi} \\quad \\text{or} \\quad"
},
{
"math_id": 119,
"text": " K_{rvi} = \\frac {\\eta_{ci}} {D_{rvi}} \\quad \\text{and} \\quad\nK_{vi} = \\frac {\\eta_{ci}} {D_{vi}} \\quad \\text{and} \\quad\nK_{pi} = \\frac {\\eta_{ci}} {D_{pi}} "
},
{
"math_id": 120,
"text": " \\eta_{0i} = \\sqrt{T_{ri}} \\eta_{ci} = \\sqrt{T} \\frac{\\eta_{ci}}{\\sqrt{T_{ci}}} "
},
{
"math_id": 121,
"text": " \\eta_{Di} = \\frac{\\eta_{0i}}{\\eta_{ci}} \\quad \\text{and} \\quad T_{Di} = \\frac{T}{T_{ci}} = T_{ri} \\implies \n\\eta_{Di} \\eta_{ci} = \\sqrt{T_{Di}} K_{pi}D_{pi}"
},
{
"math_id": 122,
"text": " \\frac{K_{pi}D_{pi}}{\\eta_{ci}} = 1 \\iff K_{pi} =\\frac{\\eta_{ci}}{D_{pi}} \\implies \\eta_{Di}= \\sqrt{T_{Di}} "
},
{
"math_id": 123,
"text": " \\eta_{gmix} = \\sum_{i=1}^{N} \\frac{\\eta_{gi}} {1 + \\frac{1}{y_i} \\sum_{j=1,j \\ne i}^{N} y_{j} \\varphi_{ij} } "
},
{
"math_id": 124,
"text": " \\varphi_{ij} = \\frac{\\left[ 1+ \\sqrt[2]{\\frac{\\eta_{0i}}{\\eta_{0j}}}\\cdot\\sqrt[4]{\\frac{M_{j}}{M_{i}}} \\right]^2}\n{ \\frac{4}{\\sqrt[2]{2}} \\sqrt[2]{ 1+ \\frac{M_{i}}{M_{j}} } } "
},
{
"math_id": 125,
"text": " \\ln \\eta_{lmix} = \\sum_{i=1}^{N} x_{i} \\ln \\eta_{li} "
},
{
"math_id": 126,
"text": " \\eta_{lmix} "
},
{
"math_id": 127,
"text": " \\eta_{li} "
},
{
"math_id": 128,
"text": " \\ln \\eta_{lmix} = \\sum_{i=1}^{N} x_{i} \\ln \\eta_{li} + \\sum_{i=1}^{N}\\sum_{j=1}^{N} x_{i}x_{j}d_{ij}"
},
{
"math_id": 129,
"text": " d_{ij}"
},
{
"math_id": 130,
"text": " \\ln \\left( \\eta_{lmix} V_{lmix} \\right) = \\sum_{i=1}^{N} x_{i} \\ln \\left( \\eta_{li} V_{li} \\right)"
},
{
"math_id": 131,
"text": " V_{li} "
},
{
"math_id": 132,
"text": " V_{lmix} "
},
{
"math_id": 133,
"text": " \\ln \\left( \\eta_{lmix} V \\right) = \\sum_{i=1}^{N} z_{i} \\ln \\left( \\eta_{li} V_{li} \\right) + \n\\frac {\\Delta G^{E}} {RT} "
},
{
"math_id": 134,
"text": " \\Delta G^{E} = \\sum_{i=1}^{N}\\sum_{j=1}^{N} z_{i}z_{j}E_{ij} "
},
{
"math_id": 135,
"text": " G^{E} "
},
{
"math_id": 136,
"text": " E_{ij} "
},
{
"math_id": 137,
"text": " \\eta_{li} V_{li} "
},
{
"math_id": 138,
"text": " \\Delta\\eta "
},
{
"math_id": 139,
"text": " \\eta_{df} "
},
{
"math_id": 140,
"text": " \\eta_{df} = \\eta - \\eta_{0} \\quad \\iff \\quad \\eta = \\eta_{0} + \\eta_{df} "
},
{
"math_id": 141,
"text": " \\left[ \\frac{\\eta_{df}} {D_{p}} + 10^{-4} \\right]^{1/4} = LBC_{} "
},
{
"math_id": 142,
"text": " LBC_{} = LBC_{} \\left( \\rho_{nr} \\right) = \\sum_{i=1}^{5} a_{i} \\rho_{nr}^{i-1} "
},
{
"math_id": 143,
"text": " \\eta = \\eta_{0} -10^{-4} D_{p} + D_{p} L_{}^{4} "
},
{
"math_id": 144,
"text": " \\eta_{0} = \\eta_{0} \\left( T \\right) "
},
{
"math_id": 145,
"text": " D_{p} = T_{c}^{-1/6} P_{c}^{2/3} M_{n}^{1/2} "
},
{
"math_id": 146,
"text": "\\rho_{n} \\,"
},
{
"math_id": 147,
"text": "\\rho_{nr} "
},
{
"math_id": 148,
"text": "P_c"
},
{
"math_id": 149,
"text": "T \\ "
},
{
"math_id": 150,
"text": "T_c "
},
{
"math_id": 151,
"text": "V_c"
},
{
"math_id": 152,
"text": "\\eta \\ \\ "
},
{
"math_id": 153,
"text": " \\eta_{mix} = \\eta_{0mix} -10^{-4} D_{pmix} + D_{pmix} L_{mix}^{4} "
},
{
"math_id": 154,
"text": " LBC_{mix} = LBC_{mix} \\left( c_{rmix} \\right) = \\sum_{i=1}^{5} a_{i} c_{rmix}^{i-1} "
},
{
"math_id": 155,
"text": " D_{pmix} = T_{cmix}^{-1/6} P_{cmix}^{2/3} M_{mix}^{1/2} "
},
{
"math_id": 156,
"text": " \\eta_{0mix} = \\eta_{0mix} \\left( T \\right) "
},
{
"math_id": 157,
"text": " T_{cmix} = \\sum_{i} z_{i} T_{ci} "
},
{
"math_id": 158,
"text": " M_{mix} = M_{n} = \\sum_{i} z_{i} M_{i} "
},
{
"math_id": 159,
"text": " P_{cmix} = \\sum_{i} z_{i} P_{ci} "
},
{
"math_id": 160,
"text": " \\rho_{ncmix}^{-1} = V_{cmix} = \\sum_{i} z_{i} V_{ci} + z_{C7+} \\cdot V_{cC7+} \\quad i < C7+ "
},
{
"math_id": 161,
"text": " V_{cC7+} = 21.573 + 0.015122 \\cdot M_{C7+} - 27.656 \\cdot SG_{C7+} + 0.070615 \\cdot M_{C7+} SG_{C7+} "
},
{
"math_id": 162,
"text": " SG_{C7+} "
},
{
"math_id": 163,
"text": " T_{ci} \\quad \\text{for} \\quad i \\geq C7+ \\quad \\text{or} \\quad T_{cC7+} \\quad \\text{is taken from EOS characterization}"
},
{
"math_id": 164,
"text": " M_{i} \\quad \\text{for} \\quad i \\geq C7+ \\quad \\text{or} \\quad M_{C7+} \\quad \\text{is taken from EOS characterization}"
},
{
"math_id": 165,
"text": " P_{ci} \\quad \\text{for} \\quad i \\geq C7+ \\quad \\text{or} \\quad P_{cC7+} \\quad \\text{is taken from EOS characterization}"
},
{
"math_id": 166,
"text": "M_{i}"
},
{
"math_id": 167,
"text": " V_{mix} = V_{mix}(T,P) \\quad \\text{for 1 mole fluid} "
},
{
"math_id": 168,
"text": " V "
},
{
"math_id": 169,
"text": " \\rho_{n} "
},
{
"math_id": 170,
"text": " c "
},
{
"math_id": 171,
"text": " \\rho_{nr} "
},
{
"math_id": 172,
"text": " \\rho_{nmix} = 1/V_{mix} \\quad and \\quad \\rho_{ncmix} = 1/V_{cmix} \\quad and \\quad \\rho_{nrmix} = V_{cmix}/V_{mix} = \\rho_{nmix}/\\rho_{ncmix} "
},
{
"math_id": 173,
"text": " \\eta_{0mix} \\left( T \\right) = \\frac {\\sum_{i} z_{i} \\eta_{0i} \\left( T_{ri} \\right) M_{i}^{1/2}} {\\sum_{j} z_{j} M_{j}^{1/2}} \\quad i,j < C7+ "
},
{
"math_id": 174,
"text": " \\eta_{0i} \\left( T_{ri} \\right) =\n\\begin{cases} \n34 \\times 10^{-5} \\cdot D_{pi} T_{ri}^{0.94} & \\text{if} \\quad T_{ri} \\leqslant 1.5 \\\\\n17.78 \\times 10^{-5} \\cdot D_{pi} \\left( 4.58 \\cdot T_{ri} - 1.67 \\right)^{5/8} & \\text{if} \\quad T_{ri} > 1.5\n\\end{cases}\n"
},
{
"math_id": 175,
"text": " D_{pi} = T_{ci}^{-1/6} P_{ci}^{2/3} M_{i}^{1/2} \\quad i < C7+ "
},
{
"math_id": 176,
"text": " T_{ri} = \\frac {T}{T_{ci}} \\quad i < C7+ "
},
{
"math_id": 177,
"text": " \\frac { V_{a} \\left( P_{ra}, T_{ra} \\right)} {V_{ca}} = \\frac { V_{z} \\left( P_{rz}, T_{rz} \\right)} {V_{cz}} \\iff V_{a} \\left( P_{a}, T_{a} \\right) = \\frac {V_{ca}} {V_{cz}}\n\\cdot V_{z} \\left( P_{z} = \\frac {P_{a}P_{cz}}{P_{ca}},T_{z}=\\frac{T_{a}T_{cz}}{T_{ca}} \\right) "
},
{
"math_id": 178,
"text": " \\eta \\left( P, T \\right) = \\frac {\\eta_{c}} {\\eta_{cz}}\n\\cdot \\eta_{z} \\left( P_{z},T_{z}\\right) \\approx \\frac {K_{p}D_{p}} {K_{pz}D_{pz}} \\cdot \n\\eta_{z} \\left( P_{z},T_{z}\\right) "
},
{
"math_id": 179,
"text": " \\alpha "
},
{
"math_id": 180,
"text": " \\eta_{mix} \\left( P,T \\right) = \\left( \\frac {T_{cmix}} {T_{cz}} \\right)^{-1/6} \\cdot \\left( \\frac {P_{cmix}} {P_{cz}} \\right)^{2/3} \\cdot \\left( \\frac {M_{mix}} {M_{z}} \\right)^{1/2} \\cdot \\frac {\\alpha_{cmix}} {\\alpha_{cz}} \\cdot \\eta_{z} \\left( P_{z},T_{z} \\right) "
},
{
"math_id": 181,
"text": " P_{z} = \\frac {P \\cdot P_{cz} \\alpha_{z}} {P_{cmix} \\alpha_{mix}} "
},
{
"math_id": 182,
"text": " T_{z} = \\frac {T \\cdot T_{cz} \\alpha_{z}} {T_{cmix} \\alpha_{mix}} "
},
{
"math_id": 183,
"text": " T_{cij} = \\left( T_{ci} T_{cj} \\right)^{1/2} "
},
{
"math_id": 184,
"text": " V_{cij} = \\frac {1}{8} \\left( V_{ci}^{1/3} + V_{cj}^{1/3} \\right)^{3} "
},
{
"math_id": 185,
"text": " \\bar Z_{c} "
},
{
"math_id": 186,
"text": " V_{ci} = R Z_{ci} T_{ci} / P_{ci} = \\bar R_{zc} T_{ci} / P_{ci} \\quad \\text{where} \\quad \\bar R_{zc} = R \\bar Z_{c} "
},
{
"math_id": 187,
"text": " V_{cij} = \\frac {1}{8} R_{zc} \\left( \\left( \\frac {T_{ci}} {P_{ci}} \\right)^{1/3} + \\left( \\frac {T_{cj}} {P_{cj}} \\right)^{1/3} \\right)^{3} "
},
{
"math_id": 188,
"text": " T_{cmix} = \\frac {\\sum_{i} \\sum_{j} z_{i} z_{j} V_{cij} T_{cij}} {\\sum_{i} \\sum_{j} z_{i} z_{j} V_{cij}} "
},
{
"math_id": 189,
"text": " V_{cij} "
},
{
"math_id": 190,
"text": " T_{cmix} "
},
{
"math_id": 191,
"text": " T_{cmix} = \\frac {\\sum_{i} \\sum_{j} z_{i} z_{j} \\left( \\left( \\frac {T_{ci}} {P_{ci}} \\right)^{1/3} + \\left( \\frac {T_{cj}} {P_{cj}} \\right)^{1/3} \\right)^{3} \\left( T_{ci} T_{cj} \\right)^{1/2}}\n\n{\\sum_{i} \\sum_{j} z_{i} z_{j} \\left( \\left( \\frac {T_{ci}} {P_{ci}} \\right)^{1/3} + \\left( \\frac {T_{cj}} {P_{cj}} \\right)^{1/3} \\right)^{3}} "
},
{
"math_id": 192,
"text": " P_{cmix} = R_{zc} T_{cmix} / V_{cmix} "
},
{
"math_id": 193,
"text": " V_{cmix} = \\sum_{i} \\sum_{j} z_{i} z_{j} V_{cij} "
},
{
"math_id": 194,
"text": " P_{cmix} = \\frac {8 \\sum_{i} \\sum_{j} z_{i} z_{j} \\left( \\left( \\frac {T_{ci}} {P_{ci}} \\right)^{1/3} + \\left( \\frac {T_{cj}} {P_{cj}} \\right)^{1/3} \\right)^{3} \\left( T_{ci} T_{cj} \\right)^{1/2}}\n\n{\\left( \\sum_{i} \\sum_{j} z_{i} z_{j} \\left( \\left( \\frac {T_{ci}} {P_{ci}} \\right)^{1/3} + \\left( \\frac {T_{cj}} {P_{cj}} \\right)^{1/3} \\right)^{3} \\right)^{2}} "
},
{
"math_id": 195,
"text": " \\overline{M}_{w} "
},
{
"math_id": 196,
"text": " \\overline{M}_{n} "
},
{
"math_id": 197,
"text": " M_{mix} = 1.304 \\times 10^{-4} \\left( \\overline{M}_{w}^{2.303} - \\overline{M}_{n}^{2.303} \\right) + \\overline{M}_{n} "
},
{
"math_id": 198,
"text": " \\overline{M}_{w} = \\frac {\\sum_{i} z_{i} M_{i}^{2}} {\\sum_{j} z_{j} M_{j}}\n\\quad and \\quad \\overline{M}_{n} = \\sum_{i} z_{i} M_{i} "
},
{
"math_id": 199,
"text": " \\alpha_{mix} = 1+7.378 \\times 10^{-3} \\rho_{rz \\alpha}^{1.847} M_{mix}^{0.5173} "
},
{
"math_id": 200,
"text": " P_{z \\alpha} = \\frac {P \\cdot P_{cz}} {P_{cmix}} \\quad \\text{and}\n\\quad T_{z \\alpha} = \\frac {T \\cdot T_{cz}} {T_{cmix}} \\quad \\Rightarrow\n\\quad V_{z \\alpha} = V(T_{z \\alpha},P_{z \\alpha}) \\quad \\text{for 1 mole methane} "
},
{
"math_id": 201,
"text": " \\rho_{z \\alpha} = M_{z}/V_{z \\alpha} \\quad and \\quad \\rho_{cz} = M_{z}/V_{cz} \\quad \\Rightarrow \\quad \\rho_{rz \\alpha} = \\rho_{z \\alpha}/\\rho_{cz} = V_{cz}/V_{z \\alpha} "
},
{
"math_id": 202,
"text": " \\alpha_{z} = 1+0.031 \\rho_{rz \\alpha}^{1.847} "
},
{
"math_id": 203,
"text": " V_{z} = V(T_{z},P_{z}) \\quad \\text{for 1 mole methane} "
},
{
"math_id": 204,
"text": " \\rho_{z} = M_{z}/V_{z} \\quad and \\quad \\rho_{cz} = M_{z}/V_{cz} \\quad \\Rightarrow \\quad \\rho_{rz} = \\rho_{z}/\\rho_{cz} = V_{cz}/V_{z} "
},
{
"math_id": 205,
"text": " \\eta_{z} \\left( \\rho_{z} , T_{z} \\right) = \\eta_{0}(T_{z}) + \\hat{\\eta}_{1}(T_{z}) \\rho_{z} + F_{1} \\Delta \\eta' (\\rho_{z} , T_{z}) + F_{2} \\Delta \\eta'' (\\rho_{z} , T_{z}) "
},
{
"math_id": 206,
"text": " \\eta_{0}(T_{z}) "
},
{
"math_id": 207,
"text": " \\hat{\\eta}_{1}(T_{z}) "
},
{
"math_id": 208,
"text": " \\Delta \\eta' (\\rho_{z} , T_{z}) "
},
{
"math_id": 209,
"text": " \\eta_{0} \\left( T_{z} \\right) = \\textstyle \\sum_{i=1}^{9} g_i T_{z}^{\\frac{i-3}{4}}"
},
{
"math_id": 210,
"text": " \\hat{\\eta}_{1}\\left( T_{z} \\right) = h_1 - h_2 \\left\\lbrack h_3 - ln \\left( \\frac{T_{z}}{h_4}\\right)\\right\\rbrack^2 "
},
{
"math_id": 211,
"text": " \\Delta \\eta' \\left( \\rho_{z} , T_{z} \\right) = e^{j_1 + j_4 / T_{z}} \\times \\lbrack exp{\\lbrack \\rho_{z}^{0.1} ( j_2+j_3/T_{z}^{3/2} ) + \\theta_{rz} \\rho_{z}^{0.5} \\left( j_5+j_6/T_{z}+j_7/T_{z}^2 \\right) \\rbrack } - 1 \\rbrack "
},
{
"math_id": 212,
"text": " e^{x} "
},
{
"math_id": 213,
"text": " exp{ \\lbrack x \\rbrack } "
},
{
"math_id": 214,
"text": "T_{fz} "
},
{
"math_id": 215,
"text": " F_1 "
},
{
"math_id": 216,
"text": " F_2 "
},
{
"math_id": 217,
"text": " \\Delta \\eta'' \\left( \\rho_{z} , T_{z} \\right) = e^{k_1 + k_4 / T_{z}} \\times \\lbrack exp{\\lbrack \\rho_{z}^{0.1} ( k_2+k_3/T_{z}^{3/2} ) + \\theta_{rz} \\rho_{z}^{0.5} \\left( k_5+k_6/T_{z}+k_7/T_{z}^2 \\right) \\rbrack } - 1 \\rbrack "
},
{
"math_id": 218,
"text": " \\theta_{rz} = \\left( \\rho_{z} - \\rho_ {cz} \\right) / \\rho_ {cz} = \\rho_{rz} - 1"
},
{
"math_id": 219,
"text": " F_{1} = \\frac {HTAN+1}{2} "
},
{
"math_id": 220,
"text": " F_{2} = \\frac {1-HTAN}{2} "
},
{
"math_id": 221,
"text": " HTAN = tanh \\left( \\Delta T_{z} \\right) = \\frac {e^{\\left( \\Delta T_{z} \\right)} - e^{\\left( -\\Delta T_{z} \\right)}} {e^{\\left( \\Delta T_{z} \\right)} + e^{\\left( -\\Delta T_{z} \\right)}} "
},
{
"math_id": 222,
"text": " \\Delta T_{z} = T_{z} - T_{fz} "
},
{
"math_id": 223,
"text": "PVT"
},
{
"math_id": 224,
"text": "T \\eta P"
},
{
"math_id": 225,
"text": " P = \\frac {RT}{V-b_{eos}} - \\frac {a_{eos}} {V(V+b_{eos})+b_{eos}(V-b_{eos})} "
},
{
"math_id": 226,
"text": " T = \\frac{rP} {\\eta-d} - \\frac {a} {\\eta \\left( \\eta+b \\right)+b \\left( \\eta-b \\right) } "
},
{
"math_id": 227,
"text": " T = \\frac{r_{i} P} {\\eta_{i}-d_{i}} - \\frac {a_{i}} {\\eta_{i} \\left( \\eta_{i}+b_{i} \\right)+b_{i} \\left( \\eta_{i}-b_{i} \\right) } "
},
{
"math_id": 228,
"text": " a_{i} = 0.45724 \\frac {r_{ci}^{2} P_{ci}^{2} } {T_{ci}} "
},
{
"math_id": 229,
"text": " b_{i} = 0.07780 \\frac {r_{ci} P_{ci} } {T_{ci}} "
},
{
"math_id": 230,
"text": " r_{i} = r_{ci} \\tau_{i} \\left( T_{ri}, P_{ri} \\right)"
},
{
"math_id": 231,
"text": " d_{i} = b_{i} \\phi_{i} \\left( T_{ri}, P_{ri} \\right)"
},
{
"math_id": 232,
"text": " r_{ci} = \\frac {\\eta_{ci} T_{ci}} {P_{ci} Z_{ci}} "
},
{
"math_id": 233,
"text": " \\eta_{ci} = K_{p} D_{pi} \\quad \\text{where} \\quad \nK_{p} = 7.7\\cdot 10^{4} \\quad \\text{and} \\quad D_{pi} = T_{ci}^{-1/6} M_{i}^{1/2} P_{ci}^{2/3} "
},
{
"math_id": 234,
"text": " \\tau_{i} = \\tau_{i} \\left( T_{ri}, P_{ri} \\right) = \n\\left( 1 + Q_{1i} \\left( \\sqrt{T_{ri}P_{ri}} -1 \\right) \\right)^{-2} "
},
{
"math_id": 235,
"text": " \\phi_{i} = \\phi_{i} \\left( T_{ri}, P_{ri} \\right) = \n\\exp \\left[ Q_{2i} \\left( \\sqrt{T_{ri}} -1 \\right) \\right] + Q_{3i} \\left( \\sqrt{P_{ri}} -1 \\right)^{2}"
},
{
"math_id": 236,
"text": " Q_{1i} =\n\\begin{cases}\n0.829599 + 0.350857 \\, \\omega_{i} - 0.747680 \\, \\omega_{i}^{2}, & \\text{if } & \\omega_{i} < 0.3 \\\\\n0.956763 + 0.192829 \\, \\omega_{i} - 0.303189 \\, \\omega_{i}^{2}, & \\text{if } & \\omega_{i} \\ge 0.3\n\\end{cases}\n"
},
{
"math_id": 237,
"text": " Q_{2i} =\n\\begin{cases}\n\\;\\;\\; 1.94546 \\;\\, - 3.19777 \\, \\omega_{i} + 2.80193 \\, \\omega_{i}^{2}, & \\; \\text{if } & \\omega_{i} < 0.3 \\\\\n- 0.258789 - 37.1071 \\, \\omega_{i} + 20.5510 \\, \\omega_{i}^{2}, & \\; \\text{if } & \\omega_{i} \\ge 0.3\n\\end{cases}\n"
},
{
"math_id": 238,
"text": " Q_{3i} =\n\\begin{cases}\n0.299757 + 2.20855 \\, \\omega_{i} - 6.64959 \\, \\omega_{i}^{2}, & & \\text{if } & \\omega_{i} < 0.3 \\\\\n5.16307 \\;\\; - 12.8207 \\, \\omega_{i} + 11.0109 \\, \\omega_{i}^{2}, & & \\text{if } & \\omega_{i} \\ge 0.3\n\\end{cases}\n"
},
{
"math_id": 239,
"text": " T = \\frac{r_{mix}P} {\\eta_{mix}-d_{mix}} -\n\\frac{a_{mix}} {\\eta_{mix}\\left(\\eta_{mix}+b_{mix} \\right)+b_{mix} \\left( \\eta_{mix}-b_{mix} \\right) } "
},
{
"math_id": 240,
"text": " a_{mix} = \\sum_{i=1} z_{i}a_{i} "
},
{
"math_id": 241,
"text": " b_{mix} = \\sum_{i=1} z_{i}b_{i} "
},
{
"math_id": 242,
"text": " d_{mix} = \\sum_{i=1} \\sum_{i=1} z_{i}z_{i}\\sqrt{ d_{i}d_{i} } \\left( 1- k_{ij} \\right) "
},
{
"math_id": 243,
"text": " r_{mix} = \\sum_{i=1} z_{i}r_{i} "
},
{
"math_id": 244,
"text": "\\eta_{df}"
},
{
"math_id": 245,
"text": "\\tau"
},
{
"math_id": 246,
"text": "\\tau_{0}"
},
{
"math_id": 247,
"text": "\\tau_{df}"
},
{
"math_id": 248,
"text": " \\eta_{} = \\eta_{0} + \\eta_{df}\n\\quad \\text{and} \\quad \\tau_{} = \\tau_{0} + \\tau_{df} "
},
{
"math_id": 249,
"text": " \\tau_{0} = \\eta_{0} \\frac{du}{dy} \\quad \\text{and} \\quad \\tau_{df} = \\eta_{df} \\frac{du}{dy}"
},
{
"math_id": 250,
"text": "du/dy"
},
{
"math_id": 251,
"text": " \\eta_{0} = \\frac{\\tau_{0}}{du/dy} \\quad \\text{and} \\quad \\eta_{df} = \\frac{\\tau_{df}}{du/dy} "
},
{
"math_id": 252,
"text": "F"
},
{
"math_id": 253,
"text": "N"
},
{
"math_id": 254,
"text": " \\zeta = \\frac{F}{N} = \\frac{A\\tau_{df}}{A\\sigma} = \\frac{\\tau_{df}}{\\sigma} "
},
{
"math_id": 255,
"text": "\\zeta"
},
{
"math_id": 256,
"text": "\\sigma"
},
{
"math_id": 257,
"text": " \\eta_{df} = \\frac{\\tau_{df}}{du/dy} = \\frac{\\zeta\\sigma}{du/dy} "
},
{
"math_id": 258,
"text": "\\tau_{dfatt}"
},
{
"math_id": 259,
"text": "\\tau_{dfrep}"
},
{
"math_id": 260,
"text": " \\eta_{df} = \\frac{\\tau_{dfrep}+\\tau_{dfatt}}{du/dy} = \\frac{\\zeta P}{du/dy} "
},
{
"math_id": 261,
"text": " P = \\frac {RT}{V-b} - \\frac{a}{V^2+ubV+wb^2} "
},
{
"math_id": 262,
"text": " \\mathbf{z} = \\left[z_{1}, \\cdots , z_{N}\\right]"
},
{
"math_id": 263,
"text": " \\mathbf{y} = \\left[y_{1}, \\cdots , y_{N}\\right]"
},
{
"math_id": 264,
"text": " \\mathbf{x} = \\left[x_{1}, \\cdots , x_{N}\\right]"
},
{
"math_id": 265,
"text": " P = P_{h} - P_{a} "
},
{
"math_id": 266,
"text": " P_{h} "
},
{
"math_id": 267,
"text": " P_{a} "
},
{
"math_id": 268,
"text": " P_{h} = P_{h} (V,T,\\mathbf{w} ) = \\frac {RT}{V-b} \\quad \\text{where} \\quad \\mathbf{w} = \\mathbf{x},\\mathbf{y}, \\mathbf{z}, 1_{pure fluid} "
},
{
"math_id": 269,
"text": " P_{a} = P_{a} (V,T,\\mathbf{w} ) = \\frac{a}{V^2+ubV+wb^2} \\quad \\text{where} \\quad \\mathbf{w} = \\mathbf{x},\\mathbf{y}, \\mathbf{z}, 1_{pure fluid} "
},
{
"math_id": 270,
"text": " \\tau_{fatt}"
},
{
"math_id": 271,
"text": " \\tau_{frep}"
},
{
"math_id": 272,
"text": " P_{a}"
},
{
"math_id": 273,
"text": " P_{h}"
},
{
"math_id": 274,
"text": " \\tau_{dfatt} = F(T,P_{a},\\mathbf{w}) \\quad \\text{and} \\quad \\tau_{dfrep} = F(T,P_{h},\\mathbf{w}) \\quad \\text{and} \\quad \\mathbf{w} = \\mathbf{x},\\mathbf{y}, \\mathbf{z}, 1_{pure fluid} "
},
{
"math_id": 275,
"text": " \\eta_{df} = K_{a} P_{a} + K_{h} P_{h} "
},
{
"math_id": 276,
"text": " K "
},
{
"math_id": 277,
"text": " \\eta = \\eta_{0} + K_{a} P_{a} + K_{h} P_{h} + K_{h2} P_{h}^{2} + K_{h3} P_{h}^{3} "
},
{
"math_id": 278,
"text": " \\eta_{i} = \\eta_{0i} + K_{ai} P_{ai} + K_{hi} P_{hi} + K_{h2i} P_{hi}^{2} + K_{h3i} P_{hi}^{3} "
},
{
"math_id": 279,
"text": " [P_{c}] = bar \\quad \\text{and} \\quad [T] = K \\quad \\text{and} \\quad [\\eta] = \\mu P "
},
{
"math_id": 280,
"text": " K_{ai} = B_{a1i} \\exp\\left( \\Gamma_{i} -1 \\right) +\nB_{a2i} \\left[ \\exp\\left( 2 \\Gamma_{i} - 2 \\right) - 1 \\right] "
},
{
"math_id": 281,
"text": " K_{hi} = B_{h1i} \\exp\\left( \\Gamma_{i} -1 \\right) +\nB_{h2i} \\left[ \\exp\\left( 2 \\Gamma_{i} - 2 \\right) - 1 \\right] "
},
{
"math_id": 282,
"text": " K_{h2i} = B_{h22i} \\left[ \\exp\\left( 2 \\Gamma_{i} \\right) - 1 \\right] "
},
{
"math_id": 283,
"text": " \\Gamma_{i} = T_{ci} / T "
},
{
"math_id": 284,
"text": " K_{ai} = B_{a0i} + B_{a1i} \\left[ \\exp\\left( \\Gamma_{i} -1 \\right) - 1 \\right] +\nB_{a2i} \\left[ \\exp\\left( 2 \\Gamma_{i} - 2 \\right) - 1 \\right] "
},
{
"math_id": 285,
"text": " K_{hi} = B_{h0i} + B_{h1i} \\left[ \\exp\\left( \\Gamma_{i} -1 \\right) - 1 \\right] +\nB_{h2i} \\left[ \\exp\\left( 2 \\Gamma_{i} - 2 \\right) - 1 \\right] "
},
{
"math_id": 286,
"text": " K_{h3i} = B_{h32i} \\left[ \\exp \\left( 2 \\Gamma_{i} \\right) - 1 \\right] \\left( \\Gamma_{i} - 1 \\right)^{3}"
},
{
"math_id": 287,
"text": "d_{2}"
},
{
"math_id": 288,
"text": " P_{dyn} = P = \\frac {RT}{V_{eos}-b_{eos}} - \\frac{a_{eos}}{V_{eos}^2+ub_{eos}V_{eos}+wb_{eos}^2} "
},
{
"math_id": 289,
"text": " P_{hmix} = P_{heos} \\left( V_{eos},T, \\mathbf{w} \\right) = \\frac {RT}{V_{eos}-b_{eos}} \n\\quad \\text{where} \\quad \\mathbf{w} =\\mathbf{x}, \\mathbf{y},\\mathbf{z} "
},
{
"math_id": 290,
"text": " P_{amix} = P_{aeos} \\left( V_{eos},T, \\mathbf{w} \\right) = \\frac{a_{eos}}{V_{eos}^2+ub_{eos}V_{eos}+wb_{eos}^2} \n\\quad \\text{where} \\quad \\mathbf{w} =\\mathbf{x}, \\mathbf{y},\\mathbf{z} "
},
{
"math_id": 291,
"text": " P_{dyn} "
},
{
"math_id": 292,
"text": " P_{amix} = P_{hmix} - P_{dyn} \\quad \\text{and} \\quad P_{hmix} = P_{heos} (V_{eos},T,\\mathbf{w}) = \\frac {RT}{V_{eos}-b_{eos}} \\quad \\text{where} \\quad \\mathbf{w} =\\mathbf{x}, \\mathbf{y},\\mathbf{z} "
},
{
"math_id": 293,
"text": " P_{hmix} = P_{dyn} + P_{amix} \\quad \\text{and} \\quad P_{amix} = P_{aeos} (V_{eos},T,\\mathbf{w}) = \\frac{a_{eos}}{V_{eos}^2+ub_{eos}V_{eos}+wb_{eos}^2} \\quad \\text{where} \\quad \\mathbf{w} =\\mathbf{x}, \\mathbf{y},\\mathbf{z} "
},
{
"math_id": 294,
"text": " \\eta_{mix} = \\eta_{0mix} + \\eta_{dfmix} "
},
{
"math_id": 295,
"text": " \\eta_{mix} = \\eta_{0mix} + K_{amix} P_{amix} +K_{hmix} P_{hmix} + \nK_{h2mix} P_{hmix}^{2} + K_{h3mix} P_{hmix}^{3} "
},
{
"math_id": 296,
"text": " \\ln \\left( \\eta_{0mix} \\right) = \\sum_{i=1}^{N} z_{i} \\ln ( \\eta_{0i} ) \\quad \\text{or} \\quad \\eta_{0mix} = \\prod_{i=1}^{N} \\eta_{0i}^{z_{i}} "
},
{
"math_id": 297,
"text": " K_{qmix} = \\sum_{i=1}^{N} W_{i} K_{qi} \\quad \\text{where} \\quad q = a,h,h2 "
},
{
"math_id": 298,
"text": " \\ln \\left( K_{h3mix} \\right) = \\sum_{i=1}^{N} z_{i} \\ln \\left( K_{h3i} \\right) \\quad \\text{or} \\quad K_{h3mix} = \\prod_{i=1}^{N} K_{h3i}^{z_{i}} "
},
{
"math_id": 299,
"text": " W_{i} = \\frac {z_{i}} {M_{i}^{\\varepsilon} \\cdot MM} \\quad \\text{where} \\quad\nMM = \\sum_{j=1}^{N} \\frac {z_{j}} {M_{j}^{\\varepsilon} } "
},
{
"math_id": 300,
"text": "\\varepsilon"
},
{
"math_id": 301,
"text": " \\quad \\varepsilon = 0.15 \\quad \\;\\; "
},
{
"math_id": 302,
"text": " \\quad \\varepsilon = 0.075 \\quad "
},
{
"math_id": 303,
"text": "W_{i}"
},
{
"math_id": 304,
"text": " \\eta_{dfr} = \\frac {\\eta_{df}}{\\eta_{c}} \\quad \\text{and} \\quad \\eta_{0r}= \\frac {\\eta_{0}}{\\eta_{c}} "
},
{
"math_id": 305,
"text": " \\eta_{dfr} = K_{ar} \\left( \\frac {P_{a}}{P_{c}} \\right) +\nK_{hr} \\left( \\frac {P_{h}}{P_{c}} \\right) +\nK_{h2r} \\left( \\frac {P_{h}}{P_{c}} \\right)^{2} "
},
{
"math_id": 306,
"text": " \\eta_{df} = \\frac{\\eta_{c} K_{ar}}{P_{c}} P_{a} + \n\\frac{\\eta_{c} K_{hr}}{P_{c}} P_{h} + \n\\frac{\\eta_{c} K_{h2r}}{P_{c}^{2}} P_{h}^{2} "
},
{
"math_id": 307,
"text": " \\eta_{ci} =K_{vi} D_{vi} \\quad \\text{where} \\quad D_{vi} = M_{i}^{1/2}T_{ci}^{1/2}V_{ci}^{-2/3} "
},
{
"math_id": 308,
"text": "K_{vi}"
},
{
"math_id": 309,
"text": " \\eta_{ci} = K_{p} D_{pi} \\quad \\text{where} \\quad D_{pi} = M_{i}^{1/2}P_{ci}^{2/3}T_{ci}^{-1/6} "
},
{
"math_id": 310,
"text": "K_{p}"
},
{
"math_id": 311,
"text": " K_{p} = 7.7 \\cdot 1.01325^{2/3} \\approx 7.77 "
},
{
"math_id": 312,
"text": " V _{ci}^{-1} = A + B \\cdot \\frac {P_{ci}}{RT_{ci}} \\iff\n V _{ci} = \\frac {RT_{ci}}{ART_{ci} + BP_{ci}} "
},
{
"math_id": 313,
"text": " \\eta_{ci} = C \\cdot P_{ci} M_{i}^{D} "
},
{
"math_id": 314,
"text": " [P_{c}] = bar \\quad \\text{and} \\quad [V_{c}] = [RT_{c}/P_{c}] = cm^{3}/mol \\quad \\text{and} \\quad [T] = K \n\\quad \\text{and} \\quad [\\eta_{c}] = \\mu P "
},
{
"math_id": 315,
"text": "D_{pi}"
},
{
"math_id": 316,
"text": " \\eta_{dfi} = \\eta_{dfdi} + \\eta_{dfui} = \\eta_{dfdi} + K_{pu}F_{ui} "
},
{
"math_id": 317,
"text": " \\eta_{dfdi} = \\frac{\\eta_{ci} K_{ari}}{P_{ci}} P_{ai} + \n\\frac{\\eta_{ci} K_{hri}}{P_{ci}} P_{hi} + \n\\frac{\\eta_{ci} K_{h2ri}}{P_{ci}^{2}} P_{hi}^{2} \\quad \\text{for} \\quad i=1,\\ldots,m "
},
{
"math_id": 318,
"text": " F_{ui} = \\frac{D_{pi} K_{ari}}{P_{ci}} P_{ai} + \n\\frac{D_{pi} K_{hri}}{P_{ci}} P_{hi} + \n\\frac{D_{pi} K_{h2ri}}{P_{ci}^{2}} P_{hi}^{2} \\quad \\text{for} \\quad i=m+1,\\ldots,N "
},
{
"math_id": 319,
"text": " D_{pi} = M_{i}^{1/2} P_{ci}^{2/3}T_{ci}^{-1/6} "
},
{
"math_id": 320,
"text": " \\eta_{ci} = K_{p} D_{pi} \\quad \\text{where} \\quad K_{p} = 7.9483 "
},
{
"math_id": 321,
"text": " \\left[ \\eta \\right] = \\left[ \\eta_{c} \\right] = \\mu P "
},
{
"math_id": 322,
"text": " \\left[ P \\right] = \\left[ P_{c} \\right] = bar "
},
{
"math_id": 323,
"text": " \\left[ T \\right] = \\left[ T_{c} \\right] = K "
},
{
"math_id": 324,
"text": " K_{qri} = B_{qrc} +B_{qr00} \\left( \\Gamma_{i} -1 \\right) +\n\\sum_{m=1}^{2} \\sum_{n=0}^{m} B_{qrmn} \\psi_{i}^{n} \\left[ \\exp ( m \\Gamma_{i} - m ) -1 \\right] \n\\quad \\text{where} \\quad q=a,h"
},
{
"math_id": 325,
"text": " K_{h2ri} = B_{h2rc} + B_{h2r21} \\psi_{i} \\left[ \\exp ( 2 \\Gamma_{i} ) -1 \\right] \n\\left( \\Gamma_{i} - 1 \\right)^{2}"
},
{
"math_id": 326,
"text": " \\psi_{i} = \\frac {R T_{ci}}{P_{ci}}\n\\quad \\text{and} \\quad \\Gamma_{i} = \\frac {T_{ci}}{T} "
},
{
"math_id": 327,
"text": " \\psi_{i} "
},
{
"math_id": 328,
"text": " \\left[ \\psi_{i} \\right] = cm^{3}/mol "
},
{
"math_id": 329,
"text": " \\eta_{mix} = \\eta_{dmix} + \\eta_{umix} = \\eta_{dmix} + K_{pu}F_{umix}"
},
{
"math_id": 330,
"text": " \\eta_{dmix} = \\eta_{0dmix} + K_{admix}P_{amix} + K_{hdmix}P_{hmix} + K_{h2dmix}P_{hmix}^{2} + K_{h3dmix}P_{hmix}^{3} "
},
{
"math_id": 331,
"text": " F_{umix} = \\eta_{0umix} + K_{aumix}P_{amix} + K_{humix}P_{hmix} + K_{h2umix}P_{hmix}^{2} + K_{h3umix}P_{hmix}^{3} "
},
{
"math_id": 332,
"text": " K_{pu} "
},
{
"math_id": 333,
"text": " P_{a} "
},
{
"math_id": 334,
"text": " P_{h} "
},
{
"math_id": 335,
"text": " \\ln \\left( \\eta_{0dmix} \\right) = \\sum_{i=1}^{m} z_{i} \\ln ( \\eta_{0i} ) \\quad \\text{or} \\quad \\eta_{0mix} = \\prod_{i=1}^{m} \\eta_{0i}^{z_{i}} "
},
{
"math_id": 336,
"text": " K_{qrdmix} =\\sum_{i=1}^{m}W_{i} \\frac{\\eta_{ci}K_{qri}}{P_{ci}} \\quad \\text{where} \\quad q=a,h "
},
{
"math_id": 337,
"text": " K_{qprdmix} =\\sum_{i=1}^{m}W_{i} \\frac{\\eta_{ci}K_{qrpi}}{P_{ci}^{p}} \\quad \\text{where} \\quad q=a,h \\quad \\text{and} \\quad p=2,3 "
},
{
"math_id": 338,
"text": " \\ln \\left( \\eta_{0umix} \\right) = \\sum_{i=m+1}^{N} z_{i} \\ln ( \\eta_{0i} ) \\quad \\text{or} \\quad \\eta_{0mix} = \\prod_{i=m+1}^{N} \\eta_{0i}^{z_{i}} "
},
{
"math_id": 339,
"text": " K_{qrumix} =\\sum_{i=m+1}^{N}W_{i} \\frac{D_{pi}K_{qri}}{P_{ci}} \\quad \\text{where} \\quad q=a,h "
},
{
"math_id": 340,
"text": " K_{qprumix} =\\sum_{i=m+1}^{N}W_{i} \\frac{D_{pi}K_{qpri}}{P_{ci}^{p}} \\quad \\text{where} \\quad q=a,h \\quad \\text{and} \\quad p=2,3 "
},
{
"math_id": 341,
"text": " \\varepsilon = 0.30 \\quad \\text{when SRK, PR or PRSV EOS is used} "
},
{
"math_id": 342,
"text": " \\eta_{0} = d_{g1} \\sqrt{T} + d_{g2} T^{d_{g3}}"
},
{
"math_id": 343,
"text": " \\eta_{0} = D_{g1} \\sqrt{T_{r}} + D_{g2} T_{r}^{D_{g3}}"
},
{
"math_id": 344,
"text": " D_{g1} = d_{g1} \\cdot \\sqrt{T_{c}} \\quad \\text{and} \\quad D_{g2} = d_{g2} \\cdot T_{c}^{d_{g3}}\n\\quad \\text{and} \\quad D_{g3} = d_{g3} "
},
{
"math_id": 345,
"text": " \\left[ \\eta_{0} \\right] = \\mu P \\quad \\text{and} \\quad \\left[ T \\right] = K "
},
{
"math_id": 346,
"text": " d_{g2} "
},
{
"math_id": 347,
"text": " \\eta_{lg} = \\eta_{0} + K_{a} P_{a} + K_{h} P_{h} + K_{h2} P_{h}^{2}"
},
{
"math_id": 348,
"text": " K_{a} = B_{a0} "
},
{
"math_id": 349,
"text": " K_{h} = B_{h0} "
},
{
"math_id": 350,
"text": " K_{h2} = \\frac { B_{h20} } {T_{r}^{2}}"
},
{
"math_id": 351,
"text": " \\left[ \\eta_{lg} \\right] = \\mu P \\quad \\text{and} \\quad \\left[ T \\right] = K "
},
{
"math_id": 352,
"text": " f_{\\nu} "
},
{
"math_id": 353,
"text": " \\eta = A \\exp \\left[ \\frac{B}{f_{\\nu}} \\right] \\quad \\text{where} \\quad f_{\\nu} = \\frac{V - b} {b} "
},
{
"math_id": 354,
"text": "V"
},
{
"math_id": 355,
"text": "b"
},
{
"math_id": 356,
"text": " \\eta_{dg} "
},
{
"math_id": 357,
"text": " \\eta_{ds} "
},
{
"math_id": 358,
"text": " \\Delta \\eta"
},
{
"math_id": 359,
"text": " \\eta = \\eta_{0} + \\eta_{df} "
},
{
"math_id": 360,
"text": " D "
},
{
"math_id": 361,
"text": " \\eta_{df} = \\frac{\\rho N_{A}L_{p}^{2}\\zeta}{M} \\quad \\text{and} \\quad D = \\frac{k_{B}T}{\\zeta}"
},
{
"math_id": 362,
"text": " L_{p} "
},
{
"math_id": 363,
"text": " L_{p}^{2} = \\frac{DM \\eta_{df}}{\\rho N_{A} k_{B}T} = \\frac{DM \\eta_{df}}{\\rho RT} "
},
{
"math_id": 364,
"text": " L_{p}"
},
{
"math_id": 365,
"text": " \\zeta = \\zeta_{0} \\exp \\left[ \\frac{B}{f_{\\nu}} \\right] \\quad \\text{and} \\quad \\zeta_{0} = \\frac{E}{N_{A}L_{d}} \\left( \\frac{M}{3RT}\\right)^{1/2} "
},
{
"math_id": 366,
"text": " f_{\\nu} = \\left( \\frac {RT}{E} \\right)^{3/2} \\quad \\text{and} \\quad E = E_{0} + PV \\quad \\text{and} \\quad E_{0} = \\alpha \\rho"
},
{
"math_id": 367,
"text": " \\text{where} \\quad \\rho = \\frac{M} {V} "
},
{
"math_id": 368,
"text": "E "
},
{
"math_id": 369,
"text": "PV "
},
{
"math_id": 370,
"text": "E_{0} "
},
{
"math_id": 371,
"text": "V-b "
},
{
"math_id": 372,
"text": " A = \\frac{L_{c} \\rho (\\alpha\\rho+PV)}{\\sqrt{3MRT}} \\quad \\text{where} \\quad L_{c} = \\frac{L_{p}^{2}}{L_{d}}"
},
{
"math_id": 373,
"text": " \\eta =\\eta_{0} + A \\exp \\left[ B \\left( \\frac{\\alpha\\rho+PV}{RT} \\right)^{3/2} \\right] "
},
{
"math_id": 374,
"text": " D =\\frac{RT L_{d}}{\\alpha\\rho+PV} \\sqrt{\\frac{3RT}{M}} \\exp \\left[ - B \\left( \\frac{\\alpha\\rho+PV}{RT} \\right)^{3/2} \\right] "
},
{
"math_id": 375,
"text": "B "
},
{
"math_id": 376,
"text": "b "
},
{
"math_id": 377,
"text": "L_{p} "
},
{
"math_id": 378,
"text": "L_{d} "
},
{
"math_id": 379,
"text": "L_{c} "
},
{
"math_id": 380,
"text": "M "
},
{
"math_id": 381,
"text": "N_{A} "
},
{
"math_id": 382,
"text": "P "
},
{
"math_id": 383,
"text": "R "
},
{
"math_id": 384,
"text": "V "
},
{
"math_id": 385,
"text": "\\alpha "
},
{
"math_id": 386,
"text": "\\eta "
},
{
"math_id": 387,
"text": "\\rho "
},
{
"math_id": 388,
"text": "\\zeta_{0}"
},
{
"math_id": 389,
"text": " \\eta_{mix} = \\eta_{0mix} + \\eta_{dfmix} "
},
{
"math_id": 390,
"text": "\\eta_{0} "
},
{
"math_id": 391,
"text": " \\eta_{dfmix} = \\frac {L_{cmix} \\rho_{eos}(\\alpha_{mix} \\rho_{eos} + PV_{eos})}{\\sqrt{3RTM_{mix}}}\n\n\\exp {\\left[ B_{mix} \\left( \\frac{ \\alpha_{mix} \\rho_{eos} + PV_{eos}}{RT} \\right)^{3/2} \\right] }"
},
{
"math_id": 392,
"text": " \\alpha \\text{,} \\, B \\text{,} \\, L_{c} \\,"
},
{
"math_id": 393,
"text": " M_{mix} = M_{n} = \\sum_{i=1}^{N} z_{i} M_{i} "
},
{
"math_id": 394,
"text": " \\alpha_{mix} = \\sum_{i=1}^{N} z_{i} \\alpha_{i} "
},
{
"math_id": 395,
"text": " B_{mix} = \\sum_{i=1}^{N} z_{i} B_{i} "
},
{
"math_id": 396,
"text": " L_{cmix} = \\sum_{i=1}^{N} z_{i} L_{ci} "
},
{
"math_id": 397,
"text": " \\, \\alpha_{i}, \\, B_{i}, L_{ci} \\, "
},
{
"math_id": 398,
"text": " \\, \\alpha_{}, \\, B_{}, L_{c} \\, "
},
{
"math_id": 399,
"text": " \\alpha, \\, B, \\, L_{c} \\, "
},
{
"math_id": 400,
"text": " \\alpha = a_{0} + a_{1}M "
},
{
"math_id": 401,
"text": " B = b_{0} + b_{1}M + b_{2}M^{2} "
},
{
"math_id": 402,
"text": " L_{c} = c_{0} + c_{1}M "
},
{
"math_id": 403,
"text": " a_{i} "
},
{
"math_id": 404,
"text": " b_{i} "
},
{
"math_id": 405,
"text": " c_{i} "
},
{
"math_id": 406,
"text": " X_{gl}"
},
{
"math_id": 407,
"text": " X_{sl}"
},
{
"math_id": 408,
"text": " X_{gl} = (V - V_{s})/V \\quad \\text{and} \\quad X_{sl} = V_{s}/V \\quad \\text{and} \\quad V_{s} \\approx b"
},
{
"math_id": 409,
"text": "V_{s}"
},
{
"math_id": 410,
"text": " \\eta = X_{gl}\\eta_{gl} + X_{sl} \\eta_{sl}"
},
{
"math_id": 411,
"text": " \\eta_{gl} = 40.785 \\frac{\\sqrt{MT^{*}}}{V_{c}^{2/3} \\Omega^{*}} * F_{c}\n\\quad \\text{with unit equation} \\quad [ \\eta_{gl} ] = \\mu P "
},
{
"math_id": 412,
"text": " \\Omega^{*} = \\frac{1.16145}{(T^{*})^{0.14874}} + \n\\frac{0.52487}{exp(0.7732T^{*})} + \n\\frac{2.16178}{exp(2.43787T^{*})} -\n6.435 \\times 10^{-4} (T^{*})^{0.14874} * \nsin \\left[ 18.0323 (T^{*})^{0.7683} - 7.27371 \\right]\n"
},
{
"math_id": 413,
"text": " T^{*} = 1.2593 * T / T_{c} \n\\quad \\text{and} \\quad F_{c} = 1 - 0.2756 \\omega +\n0.059035 \\mu_{r}^{4} + \\kappa "
},
{
"math_id": 414,
"text": "F_{c} \\, "
},
{
"math_id": 415,
"text": "M \\ "
},
{
"math_id": 416,
"text": "T \\ \\ "
},
{
"math_id": 417,
"text": "T_{c} \\ "
},
{
"math_id": 418,
"text": "V_{c} \\ "
},
{
"math_id": 419,
"text": "\\eta_{gl} \\, "
},
{
"math_id": 420,
"text": "\\kappa \\ \\ \\, "
},
{
"math_id": 421,
"text": "\\mu_{r} \\, "
},
{
"math_id": 422,
"text": "\\Omega^{*}"
},
{
"math_id": 423,
"text": "\\omega \\ \\ "
},
{
"math_id": 424,
"text": "\\Delta G^{\\ddagger} "
},
{
"math_id": 425,
"text": " \\eta_{sl} = A * exp{\\left[ -\\frac{\\Delta G^{\\ddagger} - PV}{RT} \\right]}"
},
{
"math_id": 426,
"text": "\\Delta U^{r} "
},
{
"math_id": 427,
"text": "\\Omega"
},
{
"math_id": 428,
"text": " \\eta_{sl} = A*exp \\left[ -\\frac{\\alpha \\Delta U^{r}-PV}{RT} \\right]\n= A*exp \\left[ -\\frac{ \\alpha \\Delta U^{r}}{RT} + Z \\right]"
},
{
"math_id": 429,
"text": "A "
},
{
"math_id": 430,
"text": " A = \\frac{RT}{V-b} * \\frac{1}{\\nu}"
},
{
"math_id": 431,
"text": "\\nu "
},
{
"math_id": 432,
"text": "X_{gl} "
},
{
"math_id": 433,
"text": "\\eta_{sl} "
},
{
"math_id": 434,
"text": " \\nu = X_{gl}^{-1} * 10^{12} \\left( \\nu_{0}+\\nu_{1}P \\right) = \n\\frac {V}{V - b} * 10^{12} \\left( \\nu_{0}+\\nu_{1}P \\right)"
},
{
"math_id": 435,
"text": " \\eta_{sl} = \\frac{RT}{V}*\n\\frac{1}{10^{12} \\left( \\nu_{0}+\\nu_{1}P \\right)}*\nexp \\left[ - \\alpha \\frac{\\Delta U^{r}}{RT} \\right]*\nexp \\left[ \\beta_{0}Z^{\\beta_{1}} \\right]"
},
{
"math_id": 436,
"text": "b \\ \\ \\ \\ \\ "
},
{
"math_id": 437,
"text": "P \\ \\ \\ \\,"
},
{
"math_id": 438,
"text": "T \\ \\ \\ \\,"
},
{
"math_id": 439,
"text": "V \\ \\ \\ \\,"
},
{
"math_id": 440,
"text": "X_{jl} \\ "
},
{
"math_id": 441,
"text": "Z \\ \\ \\ \\,"
},
{
"math_id": 442,
"text": "\\alpha \\ \\ \\ \\ "
},
{
"math_id": 443,
"text": "\\beta_{i} \\ \\ \\ "
},
{
"math_id": 444,
"text": "\\eta \\ \\ \\ \\ \\, "
},
{
"math_id": 445,
"text": "\\eta_{sl} \\ \\ "
},
{
"math_id": 446,
"text": "\\nu_{i} \\ \\ \\ "
},
{
"math_id": 447,
"text": "\\Delta G^{\\ne}"
},
{
"math_id": 448,
"text": "\\Delta U^{r} \\,"
},
{
"math_id": 449,
"text": " \\eta_{mix} = \\frac{V_{mix}-b_{mix}}{V_{mix}} * \\eta_{gl}^{mix} + \\frac{b_{mix}}{V_{mix}} * \\eta_{sl}^{mix}"
},
{
"math_id": 450,
"text": " \\eta_{gl}^{mix} = F ( T_{cmix},M_{cmix},V_{cmix},\\omega_{mix},\\mu_{rmix};T)"
},
{
"math_id": 451,
"text": " \\eta_{sl}^{mix} = F ( V_{mix},\\Delta U_{mix}^{r},Z_{mix};P,T)"
},
{
"math_id": 452,
"text": " \\eta_{sl}^{mix} = \\frac{RT}{V_{mix}}*\n\\frac{1}{10^{12} \\left( \\nu_{0}+\\nu_{1}P \\right)}*\nexp \\left[ - \\alpha \\frac{\\Delta U_{mix}^{r}}{RT} \\right]*\nexp \\left[ \\beta_{0}Z_{mix}^{\\beta_{1}} \\right]"
},
{
"math_id": 453,
"text": "V_{mix},\\Delta U_{mix}^{r},Z_{mix}"
},
{
"math_id": 454,
"text": " Q_{mix} = Q_{eos}(\\mathbf{z}) \\quad \\text{and} \\quad W_{mix} = W_{eos}(P,T,\\mathbf{z})"
},
{
"math_id": 455,
"text": " Q_{mix} = Q_{eos}(\\mathbf{y}) \\quad \\text{and} \\quad W_{mix} = W_{eos}(P,T,\\mathbf{y})"
},
{
"math_id": 456,
"text": " Q_{mix} = Q_{eos}(\\mathbf{x}) \\quad \\text{and} \\quad W_{mix} = W_{eos}(P,T,\\mathbf{x})"
},
{
"math_id": 457,
"text": " n = n_{l} + n_{g} \\quad \\text{and} \\quad n z_{i} = n_{l}x_{i} + n_{g}y_{i} \\quad \\text{and} \\quad i = 1,\\ldots,N"
},
{
"math_id": 458,
"text": " Q = T_{c}, M, V_{c}, \\omega, b \\quad \\text{and} \\quad W = V, \\Delta U^{r}, Z "
}
] |
https://en.wikipedia.org/wiki?curid=56541610
|
5655132
|
COST Hata model
|
Radio propagation model
The COST Hata model is a radio propagation model (i.e. path loss) that extends the urban Hata model (which in turn is based on the Okumura model) to cover a broader range of frequencies (up to 2 GHz). It is the most often cited of the COST 231 models (an EU-funded research project, ca. April 1986 – April 1996), and is also called the "Hata Model PCS Extension". The model combines empirical and deterministic approaches for estimating path loss in an urban area over the frequency range of 800 MHz to 2000 MHz.
COST (COopération européenne dans le domaine de la recherche Scientifique et Technique) is a European Union forum for cooperative scientific research, which developed this model on the basis of experimental measurements in multiple cities across Europe.
Applicable to / under conditions.
This model is applicable to macro cells in urban areas. To evaluate path loss in suburban or rural (quasi-)open areas, the urban path loss has to be adjusted using the "urban to rural" or "urban to suburban" conversions.
Mathematical formulation.
The COST Hata model is formulated as,
formula_0
where L_b is the median path loss in decibels, f is the carrier frequency in MHz (the extension is intended for roughly 1500–2000 MHz), h_B is the base station effective antenna height in metres (30–200 m), h_R is the mobile station antenna height in metres (1–10 m), a(h_R, f) is the mobile antenna height correction factor defined in the Hata model, d is the link distance in kilometres (1–20 km), and C_m is 0 dB for medium-sized cities and suburban areas and 3 dB for metropolitan centres.
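For illustration, the following minimal Python sketch evaluates the formula above; the mobile antenna height correction for small and medium cities from the original Hata model is assumed, and the function and parameter names are illustrative rather than taken from any library.

import math

def cost_hata_path_loss(f_mhz, h_b, h_r, d_km, metropolitan=False):
    # Median path loss L_b in dB for the COST 231 extension of the Hata model.
    # f_mhz: frequency in MHz; h_b: base station height in m; h_r: mobile height in m;
    # d_km: distance in km; metropolitan selects C_m = 3 dB instead of 0 dB.
    a_hr = (1.1 * math.log10(f_mhz) - 0.7) * h_r - (1.56 * math.log10(f_mhz) - 0.8)
    c_m = 3.0 if metropolitan else 0.0
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_b) - a_hr
            + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km) + c_m)

# Example with assumed values: 1800 MHz, 30 m base station, 1.5 m mobile, 5 km, medium city.
print(round(cost_hata_path_loss(1800, 30, 1.5, 5), 1))  # roughly 161 dB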
Limitations.
This model requires that the base station antenna is higher than all adjacent rooftops.
|
[
{
"math_id": 0,
"text": "L_b = 46.3 + 33.9 \\log_{10} \\frac{f}\\text{MHz} - 13.82 \\log_{10} \\frac{h_B}\\text{m} - a(h_R, f) + \\left( 44.9 - 6.55 \\log_{10} \\frac{h_B}\\text{m} \\right) \\log_{10} \\frac{d}\\text{km} + C_m "
}
] |
https://en.wikipedia.org/wiki?curid=5655132
|
56553748
|
Volatility tax
|
Mathematical finance term
The volatility tax is a mathematical finance term, first published by Rick Ashburn, CFA, in a 2003 column and formalized by hedge fund manager Mark Spitznagel, describing the effect of large investment losses (or volatility) on compound returns. It has also been called volatility drag, volatility decay or variance drain. It is not literally a tax in the sense of a levy imposed by a government, but the mathematical difference between geometric and arithmetic average returns. This difference resembles a tax because the mathematics imposes a lower compound return when returns vary over time than a simple average of the returns would suggest. The reduction in returns grows in proportion to volatility, so volatility itself appears to act as the base of a progressive tax. Conversely, fixed-return investments (which have no return volatility) appear to be "volatility tax free".
Overview.
Quantitatively, the volatility tax is the difference between the arithmetic and geometric average (or “ensemble average” and “time average”) returns of an asset or portfolio. It thus represents the degree of “non-ergodicity” of the geometric average.
Standard quantitative finance assumes that a portfolio’s net asset value changes follow a geometric Brownian motion (and thus are log-normally distributed) with arithmetic average return (or “drift”) formula_0, standard deviation (or “volatility”) formula_1, and geometric average return
formula_2
So the geometric average return is the difference between the arithmetic average return and a function of volatility. This function of volatility
formula_3
represents the volatility tax. (Though this formula assumes log-normality, the volatility tax provides an accurate approximation for most return distributions; the precise formula is a function of the central moments of the return distribution.)
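As a concrete illustration with made-up numbers, the short Python sketch below compares the arithmetic and geometric average returns of a calm and a volatile return series with the same arithmetic average; the gap between the two averages is the volatility tax, and for the volatile series it is close to the formula_3 approximation.

def arithmetic_average(returns):
    return sum(returns) / len(returns)

def geometric_average(returns):
    growth = 1.0
    for r in returns:
        growth *= 1.0 + r
    return growth ** (1.0 / len(returns)) - 1.0

# Hypothetical yearly returns, both series with a 5% arithmetic average.
calm = [0.05, 0.05, 0.05, 0.05]
volatile = [0.45, -0.35, 0.45, -0.35]

for name, series in (("calm", calm), ("volatile", volatile)):
    a = arithmetic_average(series)
    g = geometric_average(series)
    # The volatility tax a - g is about 0% for the calm series and about 8%
    # for the volatile one, close to sigma^2 / 2 = 0.4**2 / 2 = 0.08.
    print(name, round(a, 4), round(g, 4), round(a - g, 4))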
The mathematics behind the volatility tax is such that a very large portfolio loss has a disproportionate impact on the volatility tax that the portfolio pays and, as Spitznagel wrote, this is why the most effective risk mitigation focuses on large losses.
<templatestyles src="Template:Blockquote/styles.css" />
According to Spitznagel, the goal of risk mitigation strategies is to solve this “vexing non-ergodicity, volatility tax problem” and thus raise a portfolio’s geometric average return, or CAGR, by lowering its volatility tax (and “narrow the gap between our ensemble and time averages”). This is “the very name of the game in successful investing. It is the key to the kingdom, and explains in a nutshell Warren Buffett’s cardinal rule, ‘Don’t lose money.’” Moreover, “the good news is the entire hedge fund industry basically exists to help with this—to help save on volatility taxes paid by portfolios. The bad news is they haven't done that, not at all.”
As Nassim Nicholas Taleb wrote in his 2018 book "Skin in the Game", “more than two decades ago, practitioners such as Mark Spitznagel and myself built our entire business careers around the effect of the difference between ensemble and time.”
|
[
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\sigma"
},
{
"math_id": 2,
"text": "\\mu-\\sigma^2/2"
},
{
"math_id": 3,
"text": "\\sigma^2/2"
}
] |
https://en.wikipedia.org/wiki?curid=56553748
|
56558
|
Blood pressure
|
Pressure exerted by circulating blood upon the walls of arteries
Blood pressure (BP) is the pressure of circulating blood against the walls of blood vessels. Most of this pressure results from the heart pumping blood through the circulatory system. When used without qualification, the term "blood pressure" refers to the pressure in a brachial artery, where it is most commonly measured. Blood pressure is usually expressed in terms of the systolic pressure (maximum pressure during one heartbeat) over diastolic pressure (minimum pressure between two heartbeats) in the cardiac cycle. It is measured in millimeters of mercury (mmHg) above the surrounding atmospheric pressure, or in kilopascals (kPa). The difference between the systolic and diastolic pressures is known as pulse pressure, while the average pressure during a cardiac cycle is known as mean arterial pressure.
Blood pressure is one of the vital signs—together with respiratory rate, heart rate, oxygen saturation, and body temperature—that healthcare professionals use in evaluating a patient's health. Normal resting blood pressure in an adult is approximately 120 mmHg systolic over 80 mmHg diastolic, denoted as "120/80 mmHg". Globally, the average blood pressure, age standardized, has remained about the same from 1975 to the present, at approximately 127/79 mmHg in men and 122/77 mmHg in women, although these average data mask significantly diverging regional trends.
Traditionally, a health-care worker measured blood pressure non-invasively by auscultation (listening) through a stethoscope for sounds in one arm's artery as the artery is squeezed, closer to the heart, by an aneroid gauge or a mercury-tube sphygmomanometer. Auscultation is still generally considered to be the gold standard of accuracy for non-invasive blood pressure readings in clinic. However, semi-automated methods have become common, largely due to concerns about potential mercury toxicity, although cost, ease of use and applicability to ambulatory blood pressure or home blood pressure measurements have also influenced this trend. Early automated alternatives to mercury-tube sphygmomanometers were often seriously inaccurate, but modern devices validated to international standards achieve an average difference between two standardized reading methods of 5 mm Hg or less, and a standard deviation of less than 8 mm Hg. Most of these semi-automated methods measure blood pressure using oscillometry (measurement by a pressure transducer in the cuff of the device of small oscillations of intra-cuff pressure accompanying heartbeat-induced changes in the volume of each pulse).
Blood pressure is influenced by cardiac output, systemic vascular resistance, blood volume and arterial stiffness, and varies depending on person's situation, emotional state, activity and relative health or disease state. In the short term, blood pressure is regulated by baroreceptors, which act via the brain to influence the nervous and the endocrine systems.
Blood pressure that is too low is called hypotension, pressure that is consistently too high is called hypertension, and normal pressure is called normotension. Both hypertension and hypotension have many causes and may be of sudden onset or of long duration. Long-term hypertension is a risk factor for many diseases, including stroke, heart disease, and kidney failure. Long-term hypertension is more common than long-term hypotension.
<templatestyles src="Template:TOC limit/styles.css" />
Classification, normal and abnormal values.
Systemic arterial pressure.
Blood pressure measurements can be influenced by circumstances of measurement. Guidelines use different thresholds for office (also known as clinic), home (when the person measures their own blood pressure at home), and ambulatory blood pressure (using an automated device over a 24-hour period).
The risk of cardiovascular disease increases progressively above 90 mmHg, especially among women.
Observational studies demonstrate that people who maintain arterial pressures at the low end of these pressure ranges have much better long-term cardiovascular health. There is an ongoing medical debate over what is the optimal level of blood pressure to target when using drugs to lower blood pressure with hypertension, particularly in older people.
Blood pressure fluctuates from minute to minute and normally shows a circadian rhythm over a 24-hour period, with highest readings in the early morning and evenings and lowest readings at night. Loss of the normal fall in blood pressure at night is associated with a greater future risk of cardiovascular disease and there is evidence that night-time blood pressure is a stronger predictor of cardiovascular events than day-time blood pressure. Blood pressure varies over longer time periods (months to years) and this variability predicts adverse outcomes. Blood pressure also changes in response to temperature, noise, emotional stress, consumption of food or liquid, dietary factors, physical activity, changes in posture (such as standing-up), drugs, and disease. The variability in blood pressure and the better predictive value of ambulatory blood pressure measurements has led some authorities, such as the National Institute for Health and Care Excellence (NICE) in the UK, to advocate for the use of ambulatory blood pressure as the preferred method for diagnosis of hypertension.
Various other factors, such as age and sex, also influence a person's blood pressure. Differences between left-arm and right-arm blood pressure measurements tend to be small. However, occasionally there is a consistent difference greater than 10 mmHg which may need further investigation, e.g. for peripheral arterial disease, obstructive arterial disease or aortic dissection.
There is no accepted diagnostic standard for hypotension, although pressures less than 90/60 are commonly regarded as hypotensive. In practice blood pressure is considered too low only if symptoms are present.
Systemic arterial pressure and age.
Fetal blood pressure.
In pregnancy, it is the fetal heart and not the mother's heart that builds up the fetal blood pressure to drive blood through the fetal circulation. The blood pressure in the fetal aorta is approximately 30 mmHg at 20 weeks of gestation, and increases to approximately 45 mmHg at 40 weeks of gestation.
The average blood pressure for full-term infants:
Childhood.
In children the normal ranges for blood pressure are lower than for adults and depend on height. Reference blood pressure values have been developed for children in different countries, based on the distribution of blood pressure in children of these countries.
Aging adults.
In adults in most societies, systolic blood pressure tends to rise from early adulthood onward, up to at least age 70; diastolic pressure tends to begin to rise at the same time but starts to fall earlier in mid-life, at approximately age 55. Mean blood pressure rises from early adulthood, plateauing in mid-life, while pulse pressure rises quite markedly after the age of 40. Consequently, in many older people, systolic blood pressure often exceeds the normal adult range; if the diastolic pressure is in the normal range, this is termed isolated systolic hypertension. The rise in pulse pressure with age is attributed to increased stiffness of the arteries. An age-related rise in blood pressure is not considered healthy and is not observed in some isolated unacculturated communities.
Systemic venous pressure.
Blood pressure generally refers to the arterial pressure in the systemic circulation. However, measurement of pressures in the venous system and the pulmonary vessels plays an important role in intensive care medicine but requires invasive measurement of pressure using a catheter.
Venous pressure is the vascular pressure in a vein or in the atria of the heart. It is much lower than arterial pressure, with common values of 5 mmHg in the right atrium and 8 mmHg in the left atrium.
Variants of venous pressure include:
Pulmonary pressure.
Normally, the pressure in the pulmonary artery is about 15 mmHg at rest.
Increased blood pressure in the capillaries of the lung causes pulmonary hypertension, leading to interstitial edema if the pressure increases to above 20 mmHg, and to pulmonary edema at pressures above 25 mmHg.
Aortic pressure.
Aortic pressure, also called central aortic blood pressure, or central blood pressure, is the blood pressure at the root of the aorta. Elevated aortic pressure has been found to be a more accurate predictor of both cardiovascular events and mortality, as well as structural changes in the heart, than has peripheral blood pressure (such as measured through the brachial artery). Traditionally it involved an invasive procedure to measure aortic pressure, but now there are non-invasive methods of measuring it indirectly without a significant margin of error.
Certain researchers have argued for physicians to begin using aortic pressure, as opposed to peripheral blood pressure, as a guide for clinical decisions. The way antihypertensive drugs impact peripheral blood pressure can often be very different from the way they impact central aortic pressure.
Mean systemic pressure.
If the heart is stopped, blood pressure falls, but it does not fall to zero. The remaining pressure measured after cessation of the heartbeat and redistribution of blood throughout the circulation is termed the mean systemic pressure or mean circulatory filling pressure; typically this is approximately 7 mmHg.
Disorders of blood pressure.
Disorders of blood pressure control include high blood pressure, low blood pressure, and blood pressure that shows excessive or maladaptive fluctuation.
High blood pressure.
Arterial hypertension can be an indicator of other problems and may have long-term adverse effects. Sometimes it can be an acute problem, such as in a hypertensive emergency when blood pressure is more than 180/120 mmHg.
Levels of arterial pressure put mechanical stress on the arterial walls. Higher pressures increase heart workload and progression of unhealthy tissue growth (atheroma) that develops within the walls of arteries. The higher the pressure, the more stress that is present and the more atheroma tend to progress and the heart muscle tends to thicken, enlarge and become weaker over time.
Persistent hypertension is one of the risk factors for strokes, heart attacks, heart failure, and arterial aneurysms, and is the leading cause of chronic kidney failure. Even moderate elevation of arterial pressure leads to shortened life expectancy. At severely high pressures, mean arterial pressures 50% or more above average, a person can expect to live no more than a few years unless appropriately treated. For people with high blood pressure, higher heart rate variability (HRV) is a risk factor for atrial fibrillation.
Both high systolic pressure and high pulse pressure (the numerical difference between systolic and diastolic pressures) are risk factors. Elevated pulse pressure has been found to be a stronger independent predictor of cardiovascular events, especially in older populations, than has systolic, diastolic, or mean arterial pressure. In some cases, it appears that a decrease in excessive diastolic pressure can actually increase risk, probably due to the increased difference between systolic and diastolic pressures (i.e. a widened pulse pressure). If systolic blood pressure is elevated (>140 mmHg) with a normal diastolic blood pressure (<90 mmHg), it is called isolated systolic hypertension and may present a health concern. The 2017 American Heart Association blood pressure guidelines state that a systolic blood pressure of 130–139 mmHg with a diastolic pressure of 80–89 mmHg is "stage one hypertension".
For those with heart valve regurgitation, a change in its severity may be associated with a change in diastolic pressure. In a study of people with heart valve regurgitation that compared measurements two weeks apart for each person, there was an increased severity of aortic and mitral regurgitation when diastolic blood pressure increased, whereas when diastolic blood pressure decreased, there was a decreased severity.
Low blood pressure.
Blood pressure that is too low is known as hypotension. This is a medical concern if it causes signs or symptoms, such as dizziness, fainting, or, in extreme cases, circulatory shock, which is a medical emergency. Causes of low arterial pressure include sepsis, hypovolemia, bleeding, cardiogenic shock, reflex syncope, hormonal abnormalities such as Addison's disease, and eating disorders, particularly anorexia nervosa and bulimia.
Orthostatic hypotension.
A large fall in blood pressure upon standing (typically a systolic/diastolic blood pressure decrease of >20/10 mmHg) is termed orthostatic hypotension (postural hypotension) and represents a failure of the body to compensate for the effect of gravity on the circulation. Standing results in an increased hydrostatic pressure in the blood vessels of the lower limbs. The consequent distension of the veins below the diaphragm (venous pooling) causes ~500 ml of blood to be relocated from the chest and upper body. This results in a rapid decrease in central blood volume and a reduction of ventricular preload which in turn reduces stroke volume, and mean arterial pressure. Normally this is compensated for by multiple mechanisms, including activation of the autonomic nervous system which increases heart rate, myocardial contractility and systemic arterial vasoconstriction to preserve blood pressure and elicits venous vasoconstriction to decrease venous compliance. Decreased venous compliance also results from an intrinsic myogenic increase in venous smooth muscle tone in response to the elevated pressure in the veins of the lower body.
Other compensatory mechanisms include the veno-arteriolar axon reflex, the 'skeletal muscle pump' and 'respiratory pump'. Together these mechanisms normally stabilize blood pressure within a minute or less. If these compensatory mechanisms fail and arterial pressure and blood flow decrease beyond a certain point, the perfusion of the brain becomes critically compromised (i.e., the blood supply is not sufficient), causing lightheadedness, dizziness, weakness or fainting. Usually this failure of compensation is due to disease, or drugs that affect the sympathetic nervous system. A similar effect is observed following the experience of excessive gravitational forces (G-loading), such as routinely experienced by aerobatic or combat pilots 'pulling Gs' where the extreme hydrostatic pressures exceed the ability of the body's compensatory mechanisms.
Variable or fluctuating blood pressure.
Some fluctuation or variation in blood pressure is normal. Variation in blood pressure that is significantly greater than the norm is known as labile hypertension and is associated with increased risk of cardiovascular disease, brain small vessel disease, and dementia, independent of the average blood pressure level. Recent evidence from clinical trials has also linked variation in blood pressure to mortality, stroke, heart failure, and cardiac changes that may give rise to heart failure. These data have prompted discussion of whether excessive variation in blood pressure should be treated, even among normotensive older adults.
Older individuals and those who had received blood pressure medications are more likely to exhibit larger fluctuations in pressure, and there is some evidence that different antihypertensive agents have different effects on blood pressure variability; whether these differences translate to benefits in outcome is uncertain.
Physiology.
During each heartbeat, blood pressure varies between a maximum (systolic) and a minimum (diastolic) pressure. The blood pressure in the circulation is principally due to the pumping action of the heart. However, blood pressure is also regulated by neural regulation from the brain (see Hypertension and the brain), as well as osmotic regulation from the kidney. Differences in mean blood pressure drive the flow of blood around the circulation. The rate of mean blood flow depends on both blood pressure and the resistance to flow presented by the blood vessels. In the absence of hydrostatic effects (e.g. standing), mean blood pressure decreases as the circulating blood moves away from the heart through arteries and capillaries due to viscous losses of energy. Mean blood pressure drops over the whole circulation, although most of the fall occurs along the small arteries and arterioles. Pulsatility also diminishes in the smaller elements of the arterial circulation, although some transmitted pulsatility is observed in capillaries. Gravity affects blood pressure via hydrostatic forces (e.g., during standing), and valves in veins, breathing, and pumping from contraction of skeletal muscles also influence blood pressure, particularly in veins.
Hemodynamics.
A simple view of the hemodynamics of systemic arterial pressure is based around mean arterial pressure (MAP) and pulse pressure. Most influences on blood pressure can be understood in terms of their effect on cardiac output, systemic vascular resistance, or arterial stiffness (the inverse of arterial compliance). Cardiac output is the product of stroke volume and heart rate. Stroke volume is influenced by 1) the end-diastolic volume or filling pressure of the ventricle acting via the Frank–Starling mechanism—this is influenced by blood volume; 2) cardiac contractility; and 3) afterload, the impedance to blood flow presented by the circulation. In the short term, the greater the blood volume, the higher the cardiac output. This has been proposed as an explanation of the relationship between high dietary salt intake and increased blood pressure; however, responses to increased dietary sodium intake vary between individuals and are highly dependent on autonomic nervous system responses and the renin–angiotensin system; changes in plasma osmolarity may also be important. In the longer term the relationship between volume and blood pressure is more complex. In simple terms, systemic vascular resistance is mainly determined by the caliber of small arteries and arterioles. The resistance attributable to a blood vessel depends on its radius as described by the Hagen–Poiseuille equation (resistance ∝ 1/radius⁴). Hence, the smaller the radius, the higher the resistance. Other physical factors that affect resistance include vessel length (the longer the vessel, the higher the resistance), blood viscosity (the higher the viscosity, the higher the resistance), and the number of vessels, particularly the numerous smaller arterioles and capillaries. The presence of a severe arterial stenosis increases resistance to flow; however, this increase in resistance rarely increases systemic blood pressure because its contribution to total systemic resistance is small, although it may profoundly decrease downstream flow. Substances called vasoconstrictors reduce the caliber of blood vessels, thereby increasing blood pressure. Vasodilators (such as nitroglycerin) increase the caliber of blood vessels, thereby decreasing arterial pressure. In the longer term a process termed remodeling also contributes to changing the caliber of small blood vessels and influencing resistance and reactivity to vasoactive agents. Reductions in capillary density, termed capillary rarefaction, may also contribute to increased resistance in some circumstances.
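As a rough numerical sketch of the radius dependence (relative units only, not a physiological calculation), the following Python lines show how strongly resistance grows as the radius shrinks:

def relative_resistance(radius, reference_radius=1.0):
    # Resistance relative to a reference vessel, using the 1/radius**4 scaling
    # of the Hagen-Poiseuille equation with length and viscosity held fixed.
    return (reference_radius / radius) ** 4

for r in (1.0, 0.8, 0.5, 0.25):
    print(r, relative_resistance(r))  # halving the radius gives a 16-fold increase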
In practice, each individual's autonomic nervous system and other systems regulating blood pressure, notably the kidney, respond to and regulate all these factors so that, although the above issues are important, they rarely act in isolation and the actual arterial pressure response of a given individual can vary widely in the short and long term.
Pulse pressure.
The pulse pressure is the difference between the measured systolic and diastolic pressures,
formula_0
The pulse pressure is a consequence of the pulsatile nature of the cardiac output, i.e. the heartbeat. The magnitude of the pulse pressure is usually attributed to the interaction of the stroke volume of the heart, the compliance (ability to expand) of the arterial system—largely attributable to the aorta and large elastic arteries—and the resistance to flow in the arterial tree.
Clinical significance of pulse pressure.
A healthy pulse pressure is around 40 mmHg. A pulse pressure that is consistently 60 mmHg or greater is likely to be associated with disease, and a pulse pressure of 50 mmHg or more increases the risk of cardiovascular disease as well as other complications such as eye and kidney disease. Pulse pressure is considered low if it is less than 25% of the systolic. (For example, if the systolic pressure is 120 mmHg, then the pulse pressure would be considered low if it is less than 30 mmHg, since 30 is 25% of 120.) A very low pulse pressure can be a symptom of disorders such as congestive heart failure.
Elevated pulse pressure has been found to be a stronger independent predictor of cardiovascular events, especially in older populations, than has systolic, diastolic, or mean arterial pressure. This increased risk exists for both men and women and even when no other cardiovascular risk factors are present. The increased risk also exists even in cases in which diastolic pressure decreases over time while systolic remains steady.
A meta-analysis in 2000 showed that a 10 mmHg increase in pulse pressure was associated with a 20% increased risk of cardiovascular mortality, and a 13% increase in risk for all coronary end points. The study authors also noted that, while risks of cardiovascular end points do increase with higher systolic pressures, at any given systolic blood pressure the risk of major cardiovascular end points increases, rather than decreases, with lower diastolic levels. This suggests that interventions that lower diastolic pressure without also lowering systolic pressure (and thus lowering pulse pressure) could actually be counterproductive. There are no drugs currently approved to lower pulse pressure, although some antihypertensive drugs may modestly lower pulse pressure, while in some cases a drug that lowers overall blood pressure may actually have the counterproductive side effect of raising pulse pressure.
Pulse pressure can either widen or narrow in people with sepsis, depending on the degree of hemodynamic compromise. A pulse pressure of over 70 mmHg in sepsis is correlated with an increased chance of survival and a more positive response to IV fluids.
Mean arterial pressure.
Mean arterial pressure (MAP) is the average of blood pressure over a cardiac cycle and is determined by the cardiac output (CO), systemic vascular resistance (SVR), and central venous pressure (CVP):
formula_1
In practice, the contribution of CVP (which is small) is generally ignored and so
formula_2
MAP is often estimated from measurements of the systolic pressure, formula_3 and the diastolic pressure, formula_4 using the equation:
formula_5
where "k" = 0.333 although other values for "k" have been advocated.
Regulation of blood pressure.
The endogenous, homeostatic regulation of arterial pressure is not completely understood, but the following mechanisms of regulating arterial pressure have been well-characterized:
These different mechanisms are not necessarily independent of each other, as indicated by the link between the RAS and aldosterone release. When blood pressure falls many physiological cascades commence in order to return the blood pressure to a more appropriate level.
The RAS is targeted pharmacologically by ACE inhibitors and angiotensin II receptor antagonists (also known as angiotensin receptor blockers; ARBs). The aldosterone system is directly targeted by aldosterone antagonists. Fluid retention may be targeted by diuretics; the antihypertensive effect of diuretics is due to their effect on blood volume. Generally, the baroreceptor reflex is not targeted in hypertension because, if blocked, individuals may experience orthostatic hypotension and fainting.
Measurement.
Arterial pressure is most commonly measured via a sphygmomanometer, which uses the height of a column of mercury, or an aneroid gauge, to reflect the blood pressure by auscultation. The most common automated blood pressure measurement technique is based on the oscillometric method. Fully automated oscillometric measurement has been available since 1981. This principle has recently been used to measure blood pressure with a smartphone. Measuring pressure invasively, by penetrating the arterial wall to take the measurement, is much less common and usually restricted to a hospital setting. Novel methods to measure blood pressure without penetrating the arterial wall, and without applying any pressure to the patient's body, are being explored; for example, cuffless measurements that use only optical sensors.
In office blood pressure measurement, terminal digit preference is common. According to one study, approximately 40% of recorded measurements ended with the digit zero, whereas "without bias, 10%–20% of measurements are expected to end in zero".
In animals.
Blood pressure levels in non-human mammals may vary depending on the species. Heart rate differs markedly, largely depending on the size of the animal (larger animals have slower heart rates). The giraffe has a distinctly high arterial pressure of about 190 mm Hg, enabling blood perfusion through its long neck to the head. In other species subject to orthostatic changes in blood pressure, such as arboreal snakes, blood pressure is higher than in non-arboreal snakes. A heart near to the head (short heart-to-head distance) and a long tail with tight integument favor blood perfusion to the head.
As in humans, blood pressure in animals differs by age, sex, time of day, and environmental circumstances: measurements made in laboratories or under anesthesia may not be representative of values under free-living conditions. Rats, mice, dogs and rabbits have been used extensively to study the regulation of blood pressure.
Hypertension in cats and dogs.
Hypertension in cats and dogs is generally diagnosed if the blood pressure is greater than 150 mmHg (systolic), although sighthounds have higher blood pressures than most other dog breeds; a systolic pressure greater than 180 mmHg is considered abnormal in these dogs.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\! P_{\\text{pulse}} = P_{\\text{sys}} - P_{\\text{dias}}."
},
{
"math_id": 1,
"text": "\\! \\text{MAP} = (\\text{CO} \\cdot \\text{SVR}) + \\text{CVP} "
},
{
"math_id": 2,
"text": "\\! \\text{MAP} = \\text{CO} \\cdot \\text{SVR} "
},
{
"math_id": 3,
"text": " \\! P_{\\text{sys}}"
},
{
"math_id": 4,
"text": " \\! P_{\\text{dias}}"
},
{
"math_id": 5,
"text": "\\! \\text{MAP} \\approxeq P_{\\text{dias}} + k (P_{\\text{sys}} - P_{\\text{dias}})"
}
] |
https://en.wikipedia.org/wiki?curid=56558
|
56564310
|
MRI artifact
|
An MRI artifact is a visual artifact (an anomaly seen during visual representation) in magnetic resonance imaging (MRI). It is a feature appearing in an image that is not present in the original object. Many different artifacts can occur
during MRI, some affecting the diagnostic quality, while others may be confused with pathology. Artifacts can be classified as patient-related, signal processing-dependent and hardware (machine)-related.
Patient-related MR artifacts.
Motion artifacts.
A motion artifact is one of the most common artifacts in MR imaging. Motion can cause either ghost images or diffuse image noise in the phase-encoding direction. The reason for mainly affecting data sampling in the phase-encoding direction is the significant difference in the time of acquisition in the frequency- and phase-encoding directions. Frequency-encoding sampling in all the rows of the matrix (128, 256 or 512) takes place during a single echo (milliseconds). Phase-encoded sampling takes several seconds, or even minutes, owing to the collection of all the k-space lines to enable Fourier analysis. Major physiological movements are of millisecond to seconds duration and thus too slow to affect frequency-encoded sampling, but they have a pronounced effect in the phase-encoding direction. Periodic movements such as cardiac movement and blood vessel or CSF pulsation cause ghost images, while non-periodic movement causes diffuse image noise (Fig. 1). Ghost image intensity increases with amplitude of movement and the signal intensity from the moving tissue. Several methods can be used to reduce motion artifacts, including patient immobilisation, cardiac and respiratory gating, signal suppression of the tissue causing the artifact, choosing the shorter dimension of the matrix as the phase-encoding direction, view-ordering or phase-reordering methods and swapping phase and frequency-encoding directions to move the artifact out of the field of interest.
Flow.
Flow can manifest as either an altered intravascular signal (flow enhancement or flow-related signal loss), or as flow-related artifacts (ghost images or spatial misregistration). Flow enhancement, also known as inflow effect, is caused by fully magnetised protons entering the imaged slice while the stationary protons have not fully regained their magnetization. The fully magnetized protons yield a high signal in comparison with the rest of the surroundings. High velocity flow causes the protons entering the image to be removed from it by the time the 180-degree pulse is administered. The effect is that these protons do not contribute to the echo and are registered as a signal void or flow-related signal loss (Fig. 2). Spatial misregistration manifests as displacement of an intravascular signal owing to position encoding of a voxel in the phase direction preceding frequency encoding by time TE/2. The intensity of the artifact is dependent on the signal intensity from the vessel, and is less apparent with increased TE.
Metal artifacts.
Metal artifacts occur at interfaces of tissues with different magnetic susceptibilities, which cause local magnetic fields to distort the external magnetic field. This distortion changes the precession frequency in the tissue leading to spatial mismapping of information. The degree of distortion depends on the type of metal (stainless steel having a greater distorting effect than titanium alloy), the type of interface (most striking effect at soft tissue-metal interfaces), pulse sequence and imaging parameters. Metal artifacts are caused by external ferromagnetics such as cobalt containing make-up, internal ferromagnetics such as surgical clips, spinal hardware and other orthopaedic devices, and in some cases, metallic objects swallowed by people with pica. Manifestation of these artifacts is variable, including total signal loss, peripheral high signal and image distortion (Figs 3 and 4). Reduction of these artifacts can be attempted by orientating the long axis of an implant or device parallel to the long axis of the external magnetic field, possible with mobile extremity imaging and an open magnet. Further methods used are choosing the appropriate frequency encoding direction, since metal artifacts are most pronounced in this direction, using smaller voxel sizes, fast imaging sequences, increased readout bandwidth and avoiding gradient-echo imaging when metal is present. A technique called MARS (metal artifact reduction sequence) applies an additional gradient, along the slice select gradient at the time the frequency encoding gradient is applied.
Signal processing dependent artifacts.
The ways in which the data are sampled, processed and mapped out on the image matrix manifest these artifacts.
Chemical shift artifact.
Chemical shift artifact occurs at the fat/water interface in the frequency encoding direction (Fig. 5). These artifacts arise due to the difference in resonance of protons as a result of their micromagnetic environment. The protons of fat resonate at a slightly lower frequency than those of water. High field strength magnets are particularly susceptible to this artifact. Determination of the artifact can be made by swapping the phase- and frequency-encoding gradients and examining the resultant shift of fat tissue.
Partial volume.
Partial volume artifacts arise from the size of the voxel over which the signal is averaged. Objects smaller than the voxel dimensions lose their identity, and loss of detail and spatial resolution occurs. Reduction of these artifacts is accomplished by using a smaller pixel size and/or a smaller slice thickness.
Wrap-around.
A wrap-around artifact, also known as an aliasing artifact, is a result of mismapping of anatomy that lies outside the field of view but within the slice volume. The selected field of view is smaller than the size of the imaged object. The anatomy is usually displaced to the opposite side of the image (Figs 6 and 7). It can be caused by non-linear gradients or by undersampling of the frequencies contained within the return signal. The sampling rate must be twice the maximal frequency that occurs in the object (Nyquist sampling limit). If not, the Fourier transform will assign very low values to the frequency signals greater than the Nyquist limit. These frequencies will then ‘wrap around’ to the opposite side of the image, masquerading as low-frequency signals. In the frequency encode direction a filter can be applied to the acquired signal to eliminate frequencies greater than the Nyquist frequency. In the phase encode direction, artifacts can be reduced by increasing the number of phase encode steps (increased image time). For correction, a larger field of view may be chosen.
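The Nyquist argument can be demonstrated with a small, self-contained simulation; the NumPy sketch below uses synthetic 1D data (not scanner software), undersamples k-space by a factor of two, and shows that the two halves of the object fold onto each other.
<syntaxhighlight lang="python">
import numpy as np

# Synthetic 1D demonstration of wrap-around: undersampling k-space by a factor of two
# halves the field of view, so anatomy outside it folds onto the opposite side.
N = 256
obj = np.zeros(N)
obj[40:60] = 1.0                                  # a feature near one edge of the FOV

kspace = np.fft.fft(obj)                          # fully sampled "phase-encode" data
kspace_under = kspace[::2]                        # keep every second line (below Nyquist)

full = np.fft.ifft(kspace).real
aliased = np.fft.ifft(kspace_under).real          # length N/2 reconstruction

# The aliased image is the sum of the two halves of the true object (wrap-around).
print(np.allclose(aliased, full[:N // 2] + full[N // 2:]))   # True
</syntaxhighlight>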
Gibbs artifacts.
Gibbs artifacts or Gibbs ringing artifacts, also known as truncation artifacts are caused by the under-sampling of high spatial frequencies at sharp boundaries in the image. Lack of appropriate high-frequency components leads to an oscillation at a sharp transition known as a ringing artifact. It appears as multiple, regularly spaced parallel bands of alternating bright and dark signal that slowly fade with distance (Fig. 8). Ringing artifacts are more prominent in smaller digital matrix sizes. Methods employed to correct Gibbs artifact include filtering the k-space data prior to Fourier transform, increasing the matrix size for a given field of view, the Gegenbauer reconstruction and Bayesian approach.
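A minimal simulation of truncation ringing on synthetic 1D data: keeping only the central k-space samples of a sharp edge produces the characteristic overshoot and undershoot described above.
<syntaxhighlight lang="python">
import numpy as np

# Synthetic 1D demonstration of Gibbs ringing: truncating high spatial frequencies of a
# sharp edge produces regularly spaced over- and undershoots near the boundary.
N = 512
edge = np.zeros(N)
edge[N // 4: 3 * N // 4] = 1.0                     # sharp boundary in the object

kspace = np.fft.fftshift(np.fft.fft(edge))
keep = 64                                          # simulate a small acquisition matrix
trunc = np.zeros_like(kspace)
c = N // 2
trunc[c - keep // 2: c + keep // 2] = kspace[c - keep // 2: c + keep // 2]

recon = np.fft.ifft(np.fft.ifftshift(trunc)).real
print(recon.max() > 1.0, recon.min() < 0.0)        # overshoot above 1 and undershoot below 0
</syntaxhighlight>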
Machine/hardware-related artifacts.
This is a wide and still expanding subject. Only a few common artifacts are recognised.
Radiofrequency (RF) quadrature.
RF detection circuit failure arises from improper detector channel operation. Fourier-transformed data display a bright spot in the centre of the image. If one channel of the detector has a higher gain than the other it will result in object ghosting in the image. This is the result of a hardware failure and must be addressed by a service representative.
External magnetic field (B0) inhomogeneity.
B0 inhomogeneity leads to mismapping of tissues. An inhomogeneous external magnetic field causes spatial distortion, intensity distortion, or both. Intensity distortion occurs when the field in a location is greater or less than in the rest of the imaged object (Fig. 9). Spatial distortion results from long-range field gradients, which remain constant in the inhomogeneous field.
Gradient field artifacts.
Magnetic field gradients are used to spatially encode the location of signals from excited protons within the volume being imaged. The slice select gradient defines the volume (slice). Phase- and frequency-encoding gradients provide the information in the other two dimensions. Any deviation in the gradient would be represented as a distortion. As the distance increases from the centre of the applied gradient, loss of field strength occurs at the periphery. Anatomical compression occurs and is especially pronounced on coronal and sagittal imaging. When the phase-encoding gradient is different, the width or height of the voxel is different, resulting in distortion. Anatomical proportions are compressed along one or the other axis. Square pixels (and voxels) should be obtained. Ideally the phase gradient should be assigned to the smaller dimension of the object and the frequency gradient to the larger dimension. In practice this is not always possible because of the necessity of displacing motion artifacts. This may be corrected by reducing the field of view, by lowering the gradient field strength or by decreasing the frequency bandwidth of radio signal. If correction is not achieved, the cause might be either a damaged gradient coil or an abnormal current passing through the gradient coil.
RF (B1) inhomogeneity.
Variation in intensity across the image may be due to the failure of the RF coil, non-uniform B1 field, non-uniform sensitivity of the receive only coil (spaces between wire in the coil, uneven distribution of wire), or presence of non-ferromagnetic material in the imaged object.
When using a FLASH sequence, tip angle variations due to B1 inhomogeneity can affect the contrast of the image. Similarly, inversion recovery pulses and other T1-dependent methods will suffer from signal intensity errors and generally lower T1 weighting. This is due to imperfect flip angles throughout the slice, but particularly around the edges of the body, resulting in imperfect magnetization recovery.
RF tip angle theory vs reality.
The human body is full of protons, and during imaging the B0 field aligns these individual protons into a net magnetization in the direction of the magnetic field. An RF pulse that is applied perpendicular to the main magnetic field flips the spins to a desired angle. This flip angle scales with the B1 field amplitude. An accurate flip angle is crucial because the measured MR signals depend on the flip angle of the protons. However, this theory assumes that the B1 field is homogeneous and, therefore, that all spins in a slice are flipped an equal amount.
In reality, different areas of a slice see different radio frequency fields, leading to different flip angles. One reason this occurs is because the RF wavelength is inversely proportional to B0. So RF wavelength decreases when B0 increases. At B0 fields of 1.5T, RF wavelengths are long compared to the size of the body. But as the main magnetic field is increased, these wavelengths become the same or smaller than the regions of the body being imaged, resulting in flip angle inhomogeneity. In the images of a healthy patient's brain, it can be visually seen how inhomogeneous the fields are at 3T and 7T.
Wavelength effects are not the only cause of B1 inhomogeneity; it can also be due to the RF pulse design, B0 field inhomogeneity, or even patient movement.
Asymmetrical brightness.
There is a uniform decrease in signal intensity along the frequency encoding axis. Signal drop-off is due to filters that are too tight about the signal band. Some of the signal generated by the imaged section is, thereby, inappropriately rejected. A similar artifact may be caused by non-uniformity in slice thickness.
RF noise.
RF pulses and precessional frequencies of MRI instruments occupy the same frequency bandwidth as common sources such as TV, radio, fluorescent lights and computers. Stray RF signals can cause various artifacts. Narrow-band noise is projected perpendicular to the frequency-encoding direction. Broadband noise disrupts the image over a much larger area. Appropriate site planning, proper installation and RF shielding (Faraday cage) eliminate stray RF interference.
Zero line and star artifacts.
Zero line and star artifacts appear as a bright linear signal in a dashed pattern that decreases in intensity across the screen; they can occur as a line or star pattern, depending on the position of the patient in the ‘phase-frequency space’. They are due to system noise or any cause of RF pollution within the room (Faraday cage). If this pattern persists, check for sources of system noise such as bad electronics or alternating current line noise, loose connections to surface coils, or any source of RF pollution. If a star pattern is encountered, the manufacturer needs to readjust the system software so that the image is moved off the zero point.
Zipper artifacts.
Although less common, zippers are bands through the image centre due to an imperfect Faraday cage, with RF pollution in, but originating from outside, the cage. Residual free induction decay stimulated echo also causes zippers.
Bounce point artifact.
Absence of signal from tissues of a particular T1 value is a consequence of magnitude-sensitive reconstruction in inversion recovery imaging. When the chosen inversion time (TI) equals 69% of the T1 value of a particular tissue, a bounce point artifact occurs. This can be avoided by using phase-sensitive inversion recovery reconstruction techniques.
Surface coil artifacts.
Close to the surface coil the signals are very strong, resulting in a very intense image signal (Fig. 10). Further from the coil, the signal strength drops rapidly due to attenuation, with a loss of image brightness and significant shading that degrades uniformity. Surface coil sensitivity intensifies problems related to RF attenuation and RF mismatching.
Slice-to-slice interference.
Non-uniform RF energy received by adjacent slices during a multi-slice acquisition is due to cross-excitation of adjacent slices, with contrast loss in reconstructed images (Fig. 11). To overcome these interference artifacts, two independent sets of gapped multi-slice images need to be acquired and subsequently reordered during display of the full image set.
Artifact correction.
Gating.
Gating, also known as triggering, is a technique that acquires MRI data at a low motion state. An example of this could be acquiring an MRI slice only when the lung capacity is low (i.e. between large breaths). Gating is a very simple solution that can have a very large result.
Gating is best suited for mitigating breathing and cardiac artifacts because these types of motion are repetitive, so acquisitions can be triggered during a ‘low motion state’. Gating is used for cine imaging, MRA, free-breathing chest scans, CSF flow imaging, and more.
In order to gate correctly, the system needs to have knowledge of the patient's cardiac motion and breathing pattern. This is commonly done by using a pulse oximeter or EKG sensor to read a cardiac signal and/or a bellows to read the breathing signal.
A big disadvantage to gating is ‘dead time’, defined as time wasted due to waiting for a high motion state to pass. For example, we do not want to acquire an MRI image while someone is in the process of inhaling, since this would be a high motion state. So, we have many time periods where we are waiting for a high motion state to pass. This is even more prominent when we consider respiratory and cardiac gating together. The windows of time where the respiratory and cardiac motions are low are very infrequent, leading to high dead times. However, the advantage is that images acquired with both cardiac and respiratory gating have a significant improvement in image quality.
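The dead-time trade-off can be illustrated with a toy respiratory-gating simulation; the bellows waveform and acceptance window below are made up for illustration and are not from any vendor implementation.
<syntaxhighlight lang="python">
import numpy as np

# Toy respiratory-gating simulation: "acquire" only while a synthetic bellows signal sits
# inside a quiet acceptance window, and report the resulting dead time.
t = np.arange(0.0, 60.0, 0.05)                    # one minute sampled every 50 ms
resp = np.sin(2 * np.pi * t / 5.0)                # ~12 breaths per minute
acquire = np.abs(resp) < 0.2                      # assumed low-motion acceptance window
print(f"dead time: {1.0 - acquire.mean():.0%}")   # most of the minute is spent waiting
</syntaxhighlight>
Tightening the acceptance window improves motion suppression but increases the dead time, which is the trade-off described above.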
Pilot tone.
The Pilot Tone method involves turning on a constant RF frequency to detect patient motion. More specifically, the MRI machine will detect the pilot tone signal when acquiring an image. The strength of the pilot tone signal at every TR will be proportional to the breathing/motion patterns of the patient. That is, the patient's movements will cause the received constant RF tone to be amplitude modulated. A very large advantage to the pilot tone is that it requires no contact with the patient.
Extracting a breathing signal using a pilot tone is simple in theory: one must place a constant frequency signal near the MRI bore, acquire an image, and take an FFT along the readout direction to extract the pilot tone. Technical considerations include choosing the RF frequency: the pilot tone must be detectable by the MRI machine, but it must be carefully chosen not to interfere with the MRI image. The pilot tone shows up as a zipper (for a Cartesian acquisition).
The location of that line is determined by the frequency of the RF tone. For this reason, pilot tone acquisitions usually use slightly larger FOVs, to make room for the pilot tone.
Once an image has been acquired, the pilot tone signal can be extracted by taking the FFT along the readout direction and plotting the amplitude of the resulting signal. The pilot tone will show up as a line (of varying amplitude) when taking an FFT along the readout direction. The pilot tone method can also be used prospectively to acquire cardiac images.
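A minimal sketch of this extraction on synthetic data is shown below; the raw "k-space" lines, tone frequency bin, and breathing waveform are all invented for illustration.
<syntaxhighlight lang="python">
import numpy as np

# Synthetic example: a constant RF tone added to every readout line, amplitude-modulated
# by a made-up breathing waveform, then recovered with an FFT along the readout direction.
n_pe, n_ro = 256, 512
rng = np.random.default_rng(0)
breathing = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(n_pe) / 60)   # slow modulation per TR
tone_bin = 400                                                      # assumed tone frequency bin
lines = rng.normal(size=(n_pe, n_ro)) + 0j                          # stand-in for raw readout lines
lines += breathing[:, None] * np.exp(2j * np.pi * tone_bin * np.arange(n_ro) / n_ro)

profiles = np.fft.fft(lines, axis=1)               # FFT along readout: the tone becomes a "zipper"
col = np.argmax(np.abs(profiles).mean(axis=0))     # column where the pilot tone lands
respiratory_signal = np.abs(profiles[:, col])      # per-TR amplitude tracks the breathing pattern
print(col, respiratory_signal.shape)
</syntaxhighlight>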
The Pilot Tone method is great for detecting respiratory motion artifacts. This is because there is a very large and distinct modulation due to human breathing patterns. Heart signals are much more subtle and difficult to detect using a pilot tone. Retrospective techniques using the pilot tone are able to increase the level of detail and reduce blurring in free-breathing radial images.
TAMER.
Targeted Motion Estimation and Reduction (TAMER) is a retrospective motion correction method developed by Melissa Haskell, Stephen Cauley, and Lawrence Wald. The method was first introduced in their paper "Targeted Motion Estimation and Reduction (TAMER): Consistency Based Motion Mitigation for MRI using a Reduced Model Joint Optimization", published in "IEEE Transactions on Medical Imaging". The method corrects motion-related artifacts by acquiring a joint estimation of the desired motion-free image and the associated motion trajectory by minimizing the data consistency error of a SENSE forward model that includes rigid-body subject motion.
Preliminaries.
The TAMER Method utilizes the SENSE forward model (described below) that has been modified to include the effects of motion in a 2D multi-shot imaging sequence. Note: the following modified SENSE model is described in detail in Melissa Haskell's doctoral dissertation, Retrospective Motion Correction for Magnetic Resonance Imaging.
Suppose that we have formula_0 coils. Let formula_1 be a formula_2 column vector of image voxel values where formula_3 is the number of k space samples acquired per shot and let formula_4 be the formula_5 signal data from formula_6 coils. Let formula_7 encoding matrix for a given formula_8 patient motion trajectory vector, formula_9. formula_10 is composed of formula_11 many formula_12 sub-matrices formula_13 (encoding matrices for each shot formula_14).
For each shot formula_15, we have the sub-matrix formula_13 which is the encoding matrix for that particular shot formula_14 where:
formula_16 is the formula_17 under-sampling operator
formula_18 is the Fourier Encoding Operator
formula_19 is the in-place translation operator
formula_20 is the through-plane translation operator
formula_21 is the rotation operator
SENSE Motion Forward Model: formula_22
SENSE model Extended to describe a 2D multi-shot imaging sequence: formula_23
The rigid-body motion forward model is nonlinear and the process of solving for estimations of both the motion trajectory and the image volume is computationally challenging and time-consuming. In the effort to speed up and simplify computations, the TAMER method separates the vector of image voxel values, formula_24, into a vector of target voxel values, formula_25 , and a vector of fixed voxels, formula_26. Given any choice of target voxels and fixed voxels, we have the following:
formula_27
formula_28
formula_29
Note: The length of formula_30 only makes up about 5% of the total length of formula_31.
Now the optimization can be reduced to fitting the signal contribution of the target voxels to the correct target voxel values formula_25 and the correct motion, formula_32.
TAMER Algorithm.
The TAMER algorithm has 3 main stages: Initialization, Jumpstart of Motion Parameter Search, and the Joint Optimization Reduced Model Search.
"Initialization:"
The first stage of the TAMER algorithm acquires the initial reconstruction of the full image volume, formula_33, by assuming that all motion parameters are zero. One can solve for formula_33 by minimizing the least squared error of the SENSE forward model without motion, i.e. solve the system formula_34 where formula_35 and formula_36 is the conjugate transpose of formula_37. We have discussed the notion of separating the SENSE model into formula_28; however, we have not yet discussed how the target voxels are chosen. Voxels that are strongly coupled together indicate motion. In a motion-free Cartesian acquisition, each voxel would only be coupled to itself, so the goal is essentially to un-couple these voxels. As described in the TAMER paper, the algorithm converges fastest when choosing target voxels that are highly coupled. The target voxels can be entirely determined by the sequence parameters and coil sensitivities.
Target Voxel Selection Process:
Note: For each iteration of the TAMER process, the target voxels are selected by shifting the target voxels from the previous iteration perpendicularly to the phase encode direction by a preset amount.
Jumpstart of Motion Parameter Search:
An initial guess of the patient's motion is then determined by evaluating the data consistency metric over a range of values for each of the formula_39 motion parameters, and the best value for each parameter is selected to construct this initial guess.
Joint Optimization Reduced Model Search:
We now have the initial target voxels, motion estimate, and coil groupings. The following procedure is now executed.
Let formula_40 be the motion trajectory estimate for the formula_41search step. Let formula_42 be the max number of iterations.
While formula_43, repeat the following:
1. Update the signal attributed to the target voxels: formula_44
2. Solve the reduced model for the target voxels: formula_45
3. Evaluate the data consistency error: formula_46
4. Update the motion estimate: formula_47
5. Increment the search step: formula_48
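A highly simplified sketch of one such alternating update is given below. The encoding operator is assumed to be supplied as a callable returning a dense matrix, and the crude finite-difference motion update stands in for the more sophisticated search used in practice; this is an illustration of the idea, not the authors' implementation.
<syntaxhighlight lang="python">
import numpy as np

def tamer_step(s, E, x, theta, t_idx, step=1e-2):
    """One simplified TAMER-style alternating update.

    s      : acquired multi-coil signal vector
    E      : callable, E(theta) -> motion-inclusive encoding matrix
    x      : current image estimate; t_idx indexes the target voxels, the rest are fixed
    theta  : current motion parameter vector
    """
    f_idx = np.setdiff1d(np.arange(x.size), t_idx)
    E_full = E(theta)
    s_t = s - E_full[:, f_idx] @ x[f_idx]                 # remove the fixed-voxel contribution
    x = x.copy()
    x[t_idx] = np.linalg.lstsq(E_full[:, t_idx], s_t, rcond=None)[0]   # reduced least-squares solve

    def consistency(th):                                  # data consistency error ||s - E_theta x||
        return np.linalg.norm(s - E(th) @ x)

    grad = np.array([(consistency(theta + step * e) - consistency(theta)) / step
                     for e in np.eye(theta.size)])        # crude finite-difference gradient
    theta = theta - step * grad                           # descend on the motion parameters
    return x, theta, consistency(theta)
</syntaxhighlight>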
TAMER: Advantages and Disadvantages.
Advantages:
Disadvantages:
Neural network approaches.
In recent years, neural networks have generated a great deal of interest by outperforming traditional methods on longstanding problems across many fields. Machine learning, and by extension neural networks, have been used in many facets of MRI — for instance, speeding up image reconstruction, or improving reconstruction quality when working with a lack of data. Neural networks have also been used in motion artifact correction thanks to their ability to learn visual information from data, as well as infer underlying, latent representations in data.
NAMER.
Network Accelerated Motion Estimation and Reduction (NAMER) is a retrospective motion correction technique that utilizes convolutional neural networks (CNNs), a class of neural networks designed to process and learn from visual information such as images. This is a follow-up from the authors of the TAMER paper titled "Network Accelerated Motion Estimation and Reduction (NAMER): Convolutional neural network guided retrospective motion correction using a separable motion model." Similar to TAMER, the paper aims to correct for motion-related artifacts by way of estimating a desired motion-free image and optimizing parameters for a SENSE forward model describing the relationship between raw k-space data and image space while factoring in rigid motion.
Setup.
A SENSE forward model is used to induce synthetic motion artifacts in raw k-space data, allowing access both to data with motion artifacts and to the ground-truth image without motion artifacts. This is important to the NAMER technique, because it utilizes a convolutional neural network (CNN) to frontload image estimation and guide model parameter estimation. Convolutional neural networks leverage convolution kernels to analyze visual imagery. Here, a 27-layer network is used with multiple convolution layers, batch normalization, and ReLU activations. It uses a standard Adam optimizer.
Image Estimation.
The CNN attempts to learn the image artifacts from the motion-corrupted input data formula_31. The estimate for these artifacts, denoted as formula_49, is then subtracted from the motion-corrupted input data formula_31 in order to produce a best estimate of the motion-free image: formula_50. This serves two purposes. First, it allows the CNN to perform backpropagation and update its model weights by using a mean square error loss function comparing the difference between formula_51 and the known ground-truth motion-free image. Second, it gives a good estimate of the motion-free image that provides a starting point for model parameter optimization.
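A toy PyTorch sketch of this artifact-subtraction step is shown below; random tensors stand in for the corrupted and ground-truth images, and a three-layer network stands in for the published 27-layer CNN.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

# Toy stand-in for the NAMER image-estimation step: a small CNN predicts the artifact,
# which is subtracted from the corrupted image (x_CNN = x - CNN(x)).
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)

x_corrupted = torch.randn(1, 1, 128, 128)       # stand-in for a motion-corrupted image
x_clean = torch.randn(1, 1, 128, 128)           # stand-in for the motion-free ground truth

artifact = cnn(x_corrupted)                     # CNN(x): estimated artifact
x_cnn = x_corrupted - artifact                  # best current estimate of the motion-free image
loss = nn.functional.mse_loss(x_cnn, x_clean)   # compare against the known ground truth
loss.backward()
opt.step()
print(loss.item())
</syntaxhighlight>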
SENSE model parameter optimization.
Using a CNN effectively allows us to bypass the second stage of TAMER by skipping the joint parameter search. This means that we can focus on solely estimating motion parameters formula_52. Because formula_52 is really a vector of multiple, independent parameters, we can parallelize our optimization by estimating each parameter separately.
Optimizing the Optimization Procedure.
Before, we used the following to optimize both the image formula_31 and parameters formula_52 at once. Now, we can optimize solely the formula_52 values:
formula_53
On top of this, if a multi-shot acquisition was performed, we can estimate the parameters formula_54 for each of formula_55 shots separately, and go even further by estimating the parameters formula_56 for each line formula_57 in each shot formula_55:
formula_58
This allows us to massively reduce computation time, from around 50 minutes with TAMER to just 7 minutes with NAMER.
Reconstruction.
The new model parameters are then used in a standard least squares optimization problem to reconstruct an image that minimizes the distance between the k-space data and the result of applying the SENSE forward model formula_59, under the new parameter estimate formula_60, to the best estimate of the motion-free image: formula_61. This process is repeated for a desired number of steps, or until the change in the reconstructed image is sufficiently small. The NAMER technique has shown itself to be very effective in correcting for rigid motion artifacts, and converges much faster than other methods, including TAMER. This illustrates the power of deep learning in improving results across myriad fields.
Generative adversarial networks.
Other more advanced techniques take advantage of generative adversarial networks (GANs) which aim to learn the underlying latent representation of data in order to synthesize new examples that are indistinguishable from real data. Here, two neural networks, a Generator Network and a Discriminator Network, are modelled as agents competing in a game. The Generator Network's goal is to produce synthetic images that are as close as possible to images from the true distribution, while the Discriminator Network's goal is to distinguish generated synthetic images from the true data distribution. Specific to motion artifact correction in MRI, the Generator Network takes in an image with motion artifacts, and outputs an image without motion artifacts. The Discriminator Network then differentiates between the synthesized image and ground truth data. Various studies have shown that GANs perform very well in correcting for motion artifacts.
RF (B1) Inhomogeneity Correction.
External Objects.
B1 inhomogeneity due to constructive or destructive interference from the permittivity of body tissue can be mitigated using external objects with high dielectric constants and low conductivity. These objects, called radiofrequency/dielectric cushions, can be placed over or near the imaging slice to improve B1 homogeneity. The combination of a high dielectric constant and low conductivity allows the cushion to alter the phase of the RF standing waves and has been shown to reduce signal loss due to B1 inhomogeneity. This correction method was shown to have the greatest effect on sequences that suffer from B1 inhomogeneity artifacts but has no effect on those with B0 inhomogeneity. In one study, the dielectric cushion improved image quality for turbo spin echo‐based T2‐weighted sequences but not for gradient echo‐based T2‐weighted sequences.
Coil Mitigated Corrections.
B1 inhomogeneity has been successfully mitigated by adjusting coil type and configurations.
Reducing the number of coils.
One method is as simple as using the same transmit and receive coil to improve homogeneity. This method exploits the tradeoff between B1 dependence and coil sensitivity dependence in FLASH sequences and allows the user to select an optimized flip angle that will reduce B1 dependence. By using the same coil for transmitting and receiving, the receiver coil sensitivity can offset some of the nonuniformities in the transmitter coil, reducing the overall RF inhomogeneity. For anatomical studies using the FLASH sequence that can be performed with one transmit and receive coil, this method can be used to reduce B1 inhomogeneity artifacts. However, the method would not be suitable for exams under strict time constraints, since the user first needs to perform flip angle optimization.
Coil excitation.
Modifying the field distribution within the RF coils will create a more homogenous field. This can be done by changing the way that the RF coil is driven and excited. One method uses a four-port RF excitation that applies different phase shifts at each port.
By implementing a four-port drive, the power requirement is decreased by a factor of 2, SNR is increased by a factor of √2, and the overall B1 homogeneity is improved.
Spiral coil.
Changing the shape of the coils can be used to reduce B1 inhomogeneity artifacts. The use of spiral coil instead of standard coils at higher fields has been shown to eliminate the effects of standing waves in larger samples. This method can be effective when imaging large samples at 4T or higher; however, the proper equipment is required to implement this correction method. Unlike post-processing or sequence modulations, changing the coil shape is not feasible in all scanners.
Parallel excitation with coils.
Another method to correct for B1 inhomogeneity is to employ the infrastructure in place from a parallel system to generate multiple RF pulses of lower flip angles that, together, can result in the same flip angle as that created using a single transmit coil. This method uses the multiple transmit coils from parallel imaging systems to reduce and better mitigate the RF power deposition by relying on shorter RF pulses. One advantage of using parallel excitation with coils is the potential to reduce scan time by combining the multiple short RF pulses and the parallel imaging capabilities to cut scan time. Overall, when this method is used with the correct selection of RF pulses and optimized for a low power deposition, the artifacts from B1 inhomogeneity can be greatly reduced.
Active Power modulation.
Actively modulating the RF transmit power for each slice position compensates for B1 inhomogeneity. This method focuses on inhomogeneity along the axial, or z axis, direction since it is the most dominant in terms of poor homogeneity and least sample dependent.
Prior to inhomogeneity correction, measurement of the B1 profile along the z-axis of the coil is necessary for calibration. Once calibrated, the B1 data can be used for active transmit power modulation. For a specific pulse sequence, the values of each slice position are pre-determined and the appropriate RF transmitter power scale values are read from a look-up table. Then, while the sequence runs, a real time slice counter varies the attenuation of the RF transmit power.
This method is advantageous for reducing artifacts at the source, particularly when accurate flip angle is critical and for increasing signal to noise ratio. Even though this technique can only be used to compensate for the B1 variation along the z-axis in axially acquired images, it's still significant since B1 inhomogeneity is most dominant along this axis.
B1 insensitive adiabatic pulses.
One way to achieve perfect spin inversion despite B1 inhomogeneity is to use adiabatic pulses. This correction method works by removing the source of the problem and applying pulses that will not generate flip angle errors. Specific sequences that employ adiabatic pulses for increased flip angle uniformity include a slice selective spin-echo pulse, adiabatic 180 degrees inversion RF pulses, and 180 degrees refocusing pulses.
Image post-processing.
Post-processing techniques correct for intensity inhomogeneity (IIH) of the same tissue over an image domain. This method applies a filter to the data, typically based on a pre-acquired IIH map of the B1 field. If a map of the IIH in the image domain is known, then the IIH can be corrected by division into the pre-acquired image. A popular model describing the IIH effect is:
formula_62
where formula_24 is the measured intensity, formula_63 is the true intensity, formula_64 is the IIH effect and ξ is the noise.
This method is advantageous because it can be conducted offline, i.e., the patient is not required to be in the scanner. Therefore, correction time is not an issue. However, this technique does not improve SNR and contrast of the image because it only utilizes information that was already acquired. Since the B1 field was not homogeneous when the images were acquired, the flip angles and subsequent acquired signals are imprecise.
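A minimal NumPy sketch of correction by division is given below, using a simulated smooth multiplicative bias field in place of a real pre-acquired IIH map.
<syntaxhighlight lang="python">
import numpy as np

# Simulated correction by division: a smooth multiplicative bias field (the IIH map)
# is divided out of the measured image.
rng = np.random.default_rng(1)
true_image = rng.uniform(0.5, 1.0, size=(128, 128))                      # x'
yy, xx = np.mgrid[0:128, 0:128]
bias = 0.6 + 0.4 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 4000.0)   # alpha: smooth IIH field
measured = bias * true_image + 0.01 * rng.normal(size=(128, 128))        # x = alpha * x' + noise

corrected = measured / np.clip(bias, 1e-3, None)                         # division by the mapped field
print(np.abs(corrected - true_image).mean())                             # small residual error
</syntaxhighlight>
As noted above, the division restores uniform intensity but cannot recover SNR lost during the inhomogeneous acquisition.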
AI-based post-scan denoising systems have been shown to improve image quality and morphometric analysis in brain scans. Post-scan image processing enables noise reduction while retaining contrast, and the resulting image enhancement allows shorter scan times for higher throughput and potentially earlier detection.
B1 mapping techniques for image post-processing corrections.
To correct RF inhomogeneity artifacts using post-processing corrections, there are a few methods to map the B1 field. Here is a short description of some common techniques.
Double angle method.
A common and robust method that uses the results from two images acquired at flip angles of formula_65 and formula_66. The B1 map is then constructed using a ratio of the signal intensities of these two images. This method, although robust and accurate, requires a long TR and long scan time; therefore, the method is not optimal for imaging regions susceptible to motion.
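Assuming ideal spoiling and a TR long enough for full relaxation, the ratio reduces to S(2α)/(2·S(α)) = cos α, which the following noiseless sketch uses to recover a simulated B1 map.
<syntaxhighlight lang="python">
import numpy as np

# Noiseless simulation of the double angle method: S(alpha) is proportional to sin(alpha)
# and S(2*alpha) to sin(2*alpha), so the actual flip angle per voxel is arccos(S2 / (2 * S1)).
nominal_alpha = np.deg2rad(60.0)
b1_scale = np.linspace(0.7, 1.2, 256)            # simulated relative B1 across the FOV
actual_alpha = b1_scale * nominal_alpha

S1 = np.sin(actual_alpha)                        # image acquired at flip angle alpha
S2 = np.sin(2 * actual_alpha)                    # image acquired at flip angle 2*alpha
alpha_map = np.arccos(S2 / (2 * S1))             # recovered flip angle
b1_map = alpha_map / nominal_alpha               # relative B1 map

print(np.allclose(b1_map, b1_scale))             # True for this noiseless simulation
</syntaxhighlight>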
Phase map method.
Similar to the double angle method, the phase map method uses two images; however, this method relies on the accrual of phase to determine the real flip angle of each spin. After applying a 180 degree rotation about the x-axis followed by a 90 degree rotation about the y axis, the resulting phase is then used to map the B1 field. By obtaining two images and subtracting one from the other, any phase from B0 inhomogeneity can be removed and only phase accumulated by the inhomogeneous RF field will be mapped. This method can be used to map 3D volumes but requires a long scan time, making it unsuitable for some scanning requirements.
Dual Refocusing Echo Acquisition Mode (DREAM).
This method is a multislice B1 mapping technique. DREAM can be used to acquire a 2D B1 map in 130 ms, making it insensitive to motion and feasible for scans that require breath holds, such as cardiac imaging. The short acquisition also reduces effects of chemical shifts and susceptibility. Additionally, this method requires low SAR rates. Although not as accurate as the double angle method, DREAM achieves reliable B1 mapping during short acquisitions.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C"
},
{
"math_id": 1,
"text": "x "
},
{
"math_id": 2,
"text": "nN_{sh} \\times 1"
},
{
"math_id": 3,
"text": "n "
},
{
"math_id": 4,
"text": "s "
},
{
"math_id": 5,
"text": "nN_{sh}C \\times 1"
},
{
"math_id": 6,
"text": "C "
},
{
"math_id": 7,
"text": "E_{\\theta} \\text{ be an } nN_{sh}C \\times nN_{sh} "
},
{
"math_id": 8,
"text": "M \\times 1"
},
{
"math_id": 9,
"text": "\\theta"
},
{
"math_id": 10,
"text": "E_{\\theta}"
},
{
"math_id": 11,
"text": "N_{sh} "
},
{
"math_id": 12,
"text": "nC \\times nN_{sh}"
},
{
"math_id": 13,
"text": "E_{\\theta_i} = U_iFCT_{xy,i}T_{z,i}R_i"
},
{
"math_id": 14,
"text": "i"
},
{
"math_id": 15,
"text": "i "
},
{
"math_id": 16,
"text": "U_i"
},
{
"math_id": 17,
"text": "nC \\times nN_{sh}C"
},
{
"math_id": 18,
"text": "F"
},
{
"math_id": 19,
"text": "T_{xy,i}"
},
{
"math_id": 20,
"text": "Tz,i"
},
{
"math_id": 21,
"text": "R_i"
},
{
"math_id": 22,
"text": "s = Ex"
},
{
"math_id": 23,
"text": "s = E_{\\theta}x \\text{ where } E_{\\theta} = \\begin{pmatrix} E_{\\theta_1}\\\\ E_{\\theta_2} \\\\ . \\\\ . \\\\ . \\\\ E_{\\theta_{Nsh}} \\end{pmatrix}\n\\text{ and } E_{\\theta_i} = U_iFCT_{xy,i}T_{z,i}R_i \\text{ for shot } i = 1, 2, ..., N_{sh}"
},
{
"math_id": 24,
"text": " x "
},
{
"math_id": 25,
"text": " x_t "
},
{
"math_id": 26,
"text": " x_f "
},
{
"math_id": 27,
"text": "x = \\binom{x_t}{x_f}"
},
{
"math_id": 28,
"text": " s = s_t + s_f = E_{\\theta}(t)x_t + E_{\\theta}(f) x_f "
},
{
"math_id": 29,
"text": " s_t = s - E_{\\theta}(f) x_f = E_{\\theta}(t)x_t "
},
{
"math_id": 30,
"text": "x_t"
},
{
"math_id": 31,
"text": "x"
},
{
"math_id": 32,
"text": " \\theta "
},
{
"math_id": 33,
"text": " x_0 "
},
{
"math_id": 34,
"text": " (E_0^HE_0)x_0 = E_0^Hs "
},
{
"math_id": 35,
"text": " E_0 = UFC "
},
{
"math_id": 36,
"text": " E_0^H "
},
{
"math_id": 37,
"text": " E_0 "
},
{
"math_id": 38,
"text": "E^HE"
},
{
"math_id": 39,
"text": "M"
},
{
"math_id": 40,
"text": "\\theta^{(i)}"
},
{
"math_id": 41,
"text": "i^{th}"
},
{
"math_id": 42,
"text": "i_{max}"
},
{
"math_id": 43,
"text": "i < i_{max} \\text{ and } \\nabla^{(i)} > \\Delta \\epsilon_{max}\n"
},
{
"math_id": 44,
"text": "s_{t}^{\\left(i\\right)}\\,\\to s_{t}^{\\left(i\\right)}\\,=\\,s\\,-\\,E_{\\theta^{\\left(i\\right)}}\\left(f\\right)x_{f}=\n\\,E_{\\theta^{\\left(i\\right)}}\\left(t\\right)x_{t}"
},
{
"math_id": 45,
"text": "x_{t}^{\\left(i\\right)}\\,\\to E_{\\theta^{\\left(i\\right)}}^{H}\\left(t\\right)E_{\\theta^{\\left(i\\right)}}\\left(t\\right)\\,x_{t}^{\\left(i\\right)}=\\,E_{\\theta^{\\left(i\\right)}}^{H}\\left(t\\right)x_{f}"
},
{
"math_id": 46,
"text": "\\epsilon^{(i)} = || s - E_{\\theta^{(i)}}x^{(i)}|| \n"
},
{
"math_id": 47,
"text": "\\,\\theta^{\\left(i+1\\right)}\\,=\\,\\theta^{\\left(i\\right)}\\,+\\,\\nabla^{\\left(i\\right)}\\,\\to\\,\\,\\nabla^{\\left(i\\right)}\\,=\n\n\\epsilon^{(i)} - || s - E_{\\left(\\theta ^{\\left(i\\right)}\\,+\\,d\\theta^{\\left(i\\right)}\\right)\\,}x^{\\left(i\\right)}||_2\n"
},
{
"math_id": 48,
"text": "i = i + 1\n"
},
{
"math_id": 49,
"text": "CNN(x)"
},
{
"math_id": 50,
"text": "x_{CNN} = x - CNN(x)"
},
{
"math_id": 51,
"text": "x_{CNN}"
},
{
"math_id": 52,
"text": "\\theta"
},
{
"math_id": 53,
"text": "\\hat{\\theta} = argmin_{\\theta} \\Vert s - E_\\theta x_{CNN} \\Vert_2"
},
{
"math_id": 54,
"text": "\\theta_n"
},
{
"math_id": 55,
"text": "n"
},
{
"math_id": 56,
"text": "\\theta_{n, l}"
},
{
"math_id": 57,
"text": "l"
},
{
"math_id": 58,
"text": "\\hat{\\theta}_{n, l} = argmin_{\\theta_{n, l}} \\Vert s_{n, l} - E_{\\theta_{n, l}} x_{CNN} \\Vert_2"
},
{
"math_id": 59,
"text": "E_{\\hat{\\theta}}"
},
{
"math_id": 60,
"text": "\\hat{\\theta}"
},
{
"math_id": 61,
"text": "[\\hat{x}] = arg\\min_x \\Vert s - E_{\\hat{\\theta}} x \\Vert_2"
},
{
"math_id": 62,
"text": " x = \\alpha x'+ \\xi "
},
{
"math_id": 63,
"text": " x' "
},
{
"math_id": 64,
"text": " \\alpha "
},
{
"math_id": 65,
"text": "\\alpha"
},
{
"math_id": 66,
"text": "2\\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=56564310
|
56567
|
Hyperbolic functions
|
Collective name of 6 mathematical functions
In mathematics, hyperbolic functions are analogues of the ordinary trigonometric functions, but defined using the hyperbola rather than the circle. Just as the points (cos "t", sin "t") form a circle with a unit radius, the points (cosh "t", sinh "t") form the right half of the unit hyperbola. Also, similarly to how the derivatives of sin("t") and cos("t") are cos("t") and –sin("t") respectively, the derivatives of sinh("t") and cosh("t") are cosh("t") and +sinh("t") respectively.
Hyperbolic functions occur in the calculations of angles and distances in hyperbolic geometry. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equations are important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
The basic hyperbolic functions are:
from which are derived:
corresponding to the derived trigonometric functions.
The inverse hyperbolic functions are:
The hyperbolic functions take a real argument called a hyperbolic angle. The size of a hyperbolic angle is twice the area of its hyperbolic sector. The hyperbolic functions may be defined in terms of the legs of a right triangle covering this sector.
In complex analysis, the hyperbolic functions arise when applying the ordinary sine and cosine functions to an imaginary angle. The hyperbolic sine and the hyperbolic cosine are entire functions. As a result, the other hyperbolic functions are meromorphic in the whole complex plane.
By the Lindemann–Weierstrass theorem, the hyperbolic functions have a transcendental value for every non-zero algebraic value of the argument.
Hyperbolic functions were introduced in the 1760s independently by Vincenzo Riccati and Johann Heinrich Lambert. Riccati used "Sc." and "Cc." to refer to circular functions and "Sh." and "Ch." to refer to hyperbolic functions. Lambert adopted the names, but altered the abbreviations to those used today. The abbreviations sh, ch, th, cth are also currently used, depending on personal preference.
Definitions.
There are various equivalent ways to define the hyperbolic functions.
Exponential definitions.
In terms of the exponential function:
Hyperbolic sine: formula_0
Hyperbolic cosine: formula_1
Hyperbolic tangent: formula_2
Hyperbolic cotangent: formula_3
Hyperbolic secant: formula_4
Hyperbolic cosecant: formula_5
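For a quick numerical check of these exponential definitions, the short snippet below compares them with the standard library.
<syntaxhighlight lang="python">
import math

# Quick numerical check of the exponential definitions against the standard library.
def sinh(x): return (math.exp(x) - math.exp(-x)) / 2
def cosh(x): return (math.exp(x) + math.exp(-x)) / 2
def tanh(x): return sinh(x) / cosh(x)

x = 1.2345
print(math.isclose(sinh(x), math.sinh(x)))                  # True
print(math.isclose(cosh(x), math.cosh(x)))                  # True
print(math.isclose(cosh(x) ** 2 - sinh(x) ** 2, 1.0))       # cosh^2 - sinh^2 = 1
</syntaxhighlight>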
Differential equation definitions.
The hyperbolic functions may be defined as solutions of differential equations: The hyperbolic sine and cosine are the solution ("s", "c") of the system
formula_6
with the initial conditions formula_7 The initial conditions make the solution unique; without them any pair of functions formula_8 would be a solution.
sinh("x") and cosh("x") are also the unique solution of the equation "f" ″("x") = "f" ("x"),
such that "f" (0) = 1, "f" ′(0) = 0 for the hyperbolic cosine, and "f" (0) = 0, "f" ′(0) = 1 for the hyperbolic sine.
Complex trigonometric definitions.
Hyperbolic functions may also be deduced from trigonometric functions with complex arguments:
formula_9
formula_10
formula_11
formula_12
formula_13
formula_14
where i is the imaginary unit with "i"2 = −1.
The above definitions are related to the exponential definitions via Euler's formula (See below).
Characterizing properties.
Hyperbolic cosine.
It can be shown that the area under the curve of the hyperbolic cosine (over a finite interval) is always equal to the arc length corresponding to that interval:
formula_15
Hyperbolic tangent.
The hyperbolic tangent is the (unique) solution to the differential equation "f" ′ = 1 − "f" 2, with "f"(0) = 0.
Useful relations.
The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity (up to but not including sinhs or implied sinhs of 4th degree) for formula_16, formula_17, formula_18 or formula_16 and formula_19 into a hyperbolic identity, by expanding it completely in terms of integral powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term containing a product of two sinhs.
Odd and even functions:
formula_20
Hence:
formula_21
Thus, cosh "x" and sech "x" are even functions; the others are odd functions.
formula_22
Hyperbolic sine and cosine satisfy:
formula_23
the last of which is similar to the Pythagorean trigonometric identity.
One also has
formula_24
for the other functions.
Sums of arguments.
formula_25
particularly
formula_26
Also:
formula_27
Subtraction formulas.
formula_28
Also:
formula_29
Half argument formulas.
formula_30
where sgn is the sign function.
If "x" ≠ 0, then
formula_31
Square formulas.
formula_32
Inequalities.
The following inequality is useful in statistics:
formula_33
It can be proved by comparing term by term the Taylor series of the two functions.
Inverse functions as logarithms.
formula_34
Derivatives.
formula_35
formula_36
Second derivatives.
Each of the functions sinh and cosh is equal to its second derivative, that is:
formula_37
formula_38
All functions with this property are linear combinations of sinh and cosh, in particular the exponential functions formula_39 and formula_40.
Standard integrals.
formula_41
The following integrals can be proved using hyperbolic substitution:
formula_42
where "C" is the constant of integration.
Taylor series expressions.
It is possible to express explicitly the Taylor series at zero (or the Laurent series, if the function is not defined at zero) of the above functions.
formula_43
This series is convergent for every complex value of x. Since the function sinh "x" is odd, only odd exponents for "x" occur in its Taylor series.
formula_44
This series is convergent for every complex value of x. Since the function cosh "x" is even, only even exponents for x occur in its Taylor series.
The sum of the sinh and cosh series is the infinite series expression of the exponential function.
The following series are followed by a description of a subset of their domain of convergence, where the series is convergent and its sum equals the function.
formula_45
where:
Infinite products and continued fractions.
The following expansions are valid in the whole complex plane:
formula_48
formula_49
formula_50
Comparison with circular functions.
The hyperbolic functions represent an expansion of trigonometry beyond the circular functions. Both types depend on an argument, either circular angle or hyperbolic angle.
Since the area of a circular sector with radius r and angle u (in radians) is "r"2"u"/2, it will be equal to u when "r" = √2. In the diagram, such a circle is tangent to the hyperbola "xy" = 1 at (1,1). The yellow sector depicts an area and angle magnitude. Similarly, the yellow and red regions together depict a hyperbolic sector with area corresponding to hyperbolic angle magnitude.
The legs of the two right triangles with hypotenuse on the ray defining the angles are of length √2 times the circular and hyperbolic functions.
The hyperbolic angle is an invariant measure with respect to the squeeze mapping, just as the circular angle is invariant under rotation.
The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic functions that does not involve complex numbers.
The graph of the function "a" cosh("x"/"a") is the catenary, the curve formed by a uniform flexible chain, hanging freely between two fixed points under uniform gravity.
Relationship to the exponential function.
The decomposition of the exponential function in its even and odd parts gives the identities
formula_51
and
formula_52
Combined with Euler's formula
formula_53
this gives
formula_54
for the general complex exponential function.
Additionally,
formula_55
Hyperbolic functions for complex numbers.
Since the exponential function can be defined for any complex argument, we can also extend the definitions of the hyperbolic functions to complex arguments. The functions sinh "z" and cosh "z" are then holomorphic.
Relationships to ordinary trigonometric functions are given by Euler's formula for complex numbers:
formula_56
so:
formula_57
Thus, hyperbolic functions are periodic with respect to the imaginary component, with period formula_58 (formula_59 for hyperbolic tangent and cotangent).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\sinh x = \\frac {e^x - e^{-x}} {2} = \\frac {e^{2x} - 1} {2e^x} = \\frac {1 - e^{-2x}} {2e^{-x}}."
},
{
"math_id": 1,
"text": " \\cosh x = \\frac {e^x + e^{-x}} {2} = \\frac {e^{2x} + 1} {2e^x} = \\frac {1 + e^{-2x}} {2e^{-x}}."
},
{
"math_id": 2,
"text": "\\tanh x = \\frac{\\sinh x}{\\cosh x} = \\frac {e^x - e^{-x}} {e^x + e^{-x}}\n= \\frac{e^{2x} - 1} {e^{2x} + 1}."
},
{
"math_id": 3,
"text": "\\coth x = \\frac{\\cosh x}{\\sinh x} = \\frac {e^x + e^{-x}} {e^x - e^{-x}}\n= \\frac{e^{2x} + 1} {e^{2x} - 1}."
},
{
"math_id": 4,
"text": " \\operatorname{sech} x = \\frac{1}{\\cosh x} = \\frac {2} {e^x + e^{-x}}\n= \\frac{2e^x} {e^{2x} + 1}."
},
{
"math_id": 5,
"text": " \\operatorname{csch} x = \\frac{1}{\\sinh x} = \\frac {2} {e^x - e^{-x}}\n= \\frac{2e^x} {e^{2x} - 1}."
},
{
"math_id": 6,
"text": "\\begin{align}\nc'(x)&=s(x),\\\\\ns'(x)&=c(x),\\\\\n\\end{align}\n"
},
{
"math_id": 7,
"text": "s(0) = 0, c(0) = 1."
},
{
"math_id": 8,
"text": "(a e^x + b e^{-x}, a e^x - b e^{-x})"
},
{
"math_id": 9,
"text": "\\sinh x = -i \\sin (i x)."
},
{
"math_id": 10,
"text": "\\cosh x = \\cos (i x)."
},
{
"math_id": 11,
"text": "\\tanh x = -i \\tan (i x)."
},
{
"math_id": 12,
"text": "\\coth x = i \\cot (i x)."
},
{
"math_id": 13,
"text": " \\operatorname{sech} x = \\sec (i x)."
},
{
"math_id": 14,
"text": "\\operatorname{csch} x = i \\csc (i x)."
},
{
"math_id": 15,
"text": "\\text{area} = \\int_a^b \\cosh x \\,dx = \\int_a^b \\sqrt{1 + \\left(\\frac{d}{dx} \\cosh x \\right)^2} \\,dx = \\text{arc length.}"
},
{
"math_id": 16,
"text": "\\theta"
},
{
"math_id": 17,
"text": "2\\theta"
},
{
"math_id": 18,
"text": "3\\theta"
},
{
"math_id": 19,
"text": "\\varphi"
},
{
"math_id": 20,
"text": "\\begin{align}\n \\sinh (-x) &= -\\sinh x \\\\\n \\cosh (-x) &= \\cosh x\n\\end{align}"
},
{
"math_id": 21,
"text": "\\begin{align}\n \\tanh (-x) &= -\\tanh x \\\\\n \\coth (-x) &= -\\coth x \\\\\n \\operatorname{sech} (-x) &= \\operatorname{sech} x \\\\\n \\operatorname{csch} (-x) &= -\\operatorname{csch} x\n\\end{align}"
},
{
"math_id": 22,
"text": "\\begin{align}\n \\operatorname{arsech} x &= \\operatorname{arcosh} \\left(\\frac{1}{x}\\right) \\\\\n \\operatorname{arcsch} x &= \\operatorname{arsinh} \\left(\\frac{1}{x}\\right) \\\\\n \\operatorname{arcoth} x &= \\operatorname{artanh} \\left(\\frac{1}{x}\\right)\n\\end{align}"
},
{
"math_id": 23,
"text": "\\begin{align}\n \\cosh x + \\sinh x &= e^x \\\\\n \\cosh x - \\sinh x &= e^{-x} \\\\\n \\cosh^2 x - \\sinh^2 x &= 1\n\\end{align}"
},
{
"math_id": 24,
"text": "\\begin{align}\n \\operatorname{sech} ^{2} x &= 1 - \\tanh^{2} x \\\\\n \\operatorname{csch} ^{2} x &= \\coth^{2} x - 1\n\\end{align}"
},
{
"math_id": 25,
"text": "\\begin{align}\n \\sinh(x + y) &= \\sinh x \\cosh y + \\cosh x \\sinh y \\\\\n \\cosh(x + y) &= \\cosh x \\cosh y + \\sinh x \\sinh y \\\\ \n \\tanh(x + y) &= \\frac{\\tanh x +\\tanh y}{1+ \\tanh x \\tanh y } \\\\\n\\end{align}"
},
{
"math_id": 26,
"text": "\\begin{align}\n\\cosh (2x) &= \\sinh^2{x} + \\cosh^2{x} = 2\\sinh^2 x + 1 = 2\\cosh^2 x - 1 \\\\\n\\sinh (2x) &= 2\\sinh x \\cosh x \\\\\n\\tanh (2x) &= \\frac{2\\tanh x}{1+ \\tanh^2 x } \\\\\n\\end{align}"
},
{
"math_id": 27,
"text": "\\begin{align}\n \\sinh x + \\sinh y &= 2 \\sinh \\left(\\frac{x+y}{2}\\right) \\cosh \\left(\\frac{x-y}{2}\\right)\\\\\n \\cosh x + \\cosh y &= 2 \\cosh \\left(\\frac{x+y}{2}\\right) \\cosh \\left(\\frac{x-y}{2}\\right)\\\\\n\\end{align}"
},
{
"math_id": 28,
"text": "\\begin{align}\n \\sinh(x - y) &= \\sinh x \\cosh y - \\cosh x \\sinh y \\\\\n \\cosh(x - y) &= \\cosh x \\cosh y - \\sinh x \\sinh y \\\\\n \\tanh(x - y) &= \\frac{\\tanh x -\\tanh y}{1- \\tanh x \\tanh y } \\\\\n\\end{align}"
},
{
"math_id": 29,
"text": "\\begin{align}\n \\sinh x - \\sinh y &= 2 \\cosh \\left(\\frac{x+y}{2}\\right) \\sinh \\left(\\frac{x-y}{2}\\right)\\\\\n \\cosh x - \\cosh y &= 2 \\sinh \\left(\\frac{x+y}{2}\\right) \\sinh \\left(\\frac{x-y}{2}\\right)\\\\\n\\end{align}"
},
{
"math_id": 30,
"text": "\\begin{align}\n \\sinh\\left(\\frac{x}{2}\\right) &= \\frac{\\sinh x}{\\sqrt{2 (\\cosh x + 1)} } &&= \\sgn x \\, \\sqrt \\frac{\\cosh x - 1}{2} \\\\[6px]\n \\cosh\\left(\\frac{x}{2}\\right) &= \\sqrt \\frac{\\cosh x + 1}{2}\\\\[6px]\n \\tanh\\left(\\frac{x}{2}\\right) &= \\frac{\\sinh x}{\\cosh x + 1} &&= \\sgn x \\, \\sqrt \\frac{\\cosh x-1}{\\cosh x+1} = \\frac{e^x - 1}{e^x + 1}\n\\end{align}"
},
{
"math_id": 31,
"text": " \\tanh\\left(\\frac{x}{2}\\right) = \\frac{\\cosh x - 1}{\\sinh x} = \\coth x - \\operatorname{csch} x "
},
{
"math_id": 32,
"text": "\\begin{align}\n\\sinh^2 x &= \\tfrac{1}{2}(\\cosh 2x - 1) \\\\\n\\cosh^2 x &= \\tfrac{1}{2}(\\cosh 2x + 1)\n\\end{align}"
},
{
"math_id": 33,
"text": "\\operatorname{cosh}(t) \\leq e^{t^2 /2}"
},
{
"math_id": 34,
"text": "\\begin{align}\n \\operatorname {arsinh} (x) &= \\ln \\left(x + \\sqrt{x^{2} + 1} \\right) \\\\\n \\operatorname {arcosh} (x) &= \\ln \\left(x + \\sqrt{x^{2} - 1} \\right) && x \\geq 1 \\\\\n \\operatorname {artanh} (x) &= \\frac{1}{2}\\ln \\left( \\frac{1 + x}{1 - x} \\right) && | x | < 1 \\\\\n \\operatorname {arcoth} (x) &= \\frac{1}{2}\\ln \\left( \\frac{x + 1}{x - 1} \\right) && |x| > 1 \\\\\n \\operatorname {arsech} (x) &= \\ln \\left( \\frac{1}{x} + \\sqrt{\\frac{1}{x^2} - 1}\\right) = \\ln \\left( \\frac{1+ \\sqrt{1 - x^2}}{x} \\right) && 0 < x \\leq 1 \\\\\n \\operatorname {arcsch} (x) &= \\ln \\left( \\frac{1}{x} + \\sqrt{\\frac{1}{x^2} +1}\\right) && x \\ne 0\n\\end{align}"
},
{
"math_id": 35,
"text": "\\begin{align}\n \\frac{d}{dx}\\sinh x &= \\cosh x \\\\\n \\frac{d}{dx}\\cosh x &= \\sinh x \\\\\n \\frac{d}{dx}\\tanh x &= 1 - \\tanh^2 x = \\operatorname{sech}^2 x = \\frac{1}{\\cosh^2 x} \\\\\n \\frac{d}{dx}\\coth x &= 1 - \\coth^2 x = -\\operatorname{csch}^2 x = -\\frac{1}{\\sinh^2 x} && x \\neq 0 \\\\\n \\frac{d}{dx}\\operatorname{sech} x &= - \\tanh x \\operatorname{sech} x \\\\\n \\frac{d}{dx}\\operatorname{csch} x &= - \\coth x \\operatorname{csch} x && x \\neq 0\n\\end{align}"
},
{
"math_id": 36,
"text": "\\begin{align}\n \\frac{d}{dx}\\operatorname{arsinh} x &= \\frac{1}{\\sqrt{x^2+1}} \\\\\n \\frac{d}{dx}\\operatorname{arcosh} x &= \\frac{1}{\\sqrt{x^2 - 1}} && 1 < x \\\\\n \\frac{d}{dx}\\operatorname{artanh} x &= \\frac{1}{1-x^2} && |x| < 1 \\\\\n \\frac{d}{dx}\\operatorname{arcoth} x &= \\frac{1}{1-x^2} && 1 < |x| \\\\\n \\frac{d}{dx}\\operatorname{arsech} x &= -\\frac{1}{x\\sqrt{1-x^2}} && 0 < x < 1 \\\\\n \\frac{d}{dx}\\operatorname{arcsch} x &= -\\frac{1}{|x|\\sqrt{1+x^2}} && x \\neq 0\n \\end{align}"
},
{
"math_id": 37,
"text": " \\frac{d^2}{dx^2}\\sinh x = \\sinh x "
},
{
"math_id": 38,
"text": " \\frac{d^2}{dx^2}\\cosh x = \\cosh x \\, ."
},
{
"math_id": 39,
"text": " e^x "
},
{
"math_id": 40,
"text": " e^{-x} "
},
{
"math_id": 41,
"text": "\\begin{align}\n \\int \\sinh (ax)\\,dx &= a^{-1} \\cosh (ax) + C \\\\\n \\int \\cosh (ax)\\,dx &= a^{-1} \\sinh (ax) + C \\\\\n \\int \\tanh (ax)\\,dx &= a^{-1} \\ln (\\cosh (ax)) + C \\\\\n \\int \\coth (ax)\\,dx &= a^{-1} \\ln \\left|\\sinh (ax)\\right| + C \\\\\n \\int \\operatorname{sech} (ax)\\,dx &= a^{-1} \\arctan (\\sinh (ax)) + C \\\\\n \\int \\operatorname{csch} (ax)\\,dx &= a^{-1} \\ln \\left| \\tanh \\left( \\frac{ax}{2} \\right) \\right| + C = a^{-1} \\ln\\left|\\coth \\left(ax\\right) - \\operatorname{csch} \\left(ax\\right)\\right| + C = -a^{-1}\\operatorname{arcoth} \\left(\\cosh\\left(ax\\right)\\right) +C\n\\end{align}"
},
{
"math_id": 42,
"text": "\\begin{align}\n \\int {\\frac{1}{\\sqrt{a^2 + u^2}}\\,du} & = \\operatorname{arsinh} \\left( \\frac{u}{a} \\right) + C \\\\\n \\int {\\frac{1}{\\sqrt{u^2 - a^2}}\\,du} &= \\sgn{u} \\operatorname{arcosh} \\left| \\frac{u}{a} \\right| + C \\\\\n \\int {\\frac{1}{a^2 - u^2}}\\,du & = a^{-1}\\operatorname{artanh} \\left( \\frac{u}{a} \\right) + C && u^2 < a^2 \\\\\n \\int {\\frac{1}{a^2 - u^2}}\\,du & = a^{-1}\\operatorname{arcoth} \\left( \\frac{u}{a} \\right) + C && u^2 > a^2 \\\\\n \\int {\\frac{1}{u\\sqrt{a^2 - u^2}}\\,du} & = -a^{-1}\\operatorname{arsech}\\left| \\frac{u}{a} \\right| + C \\\\\n \\int {\\frac{1}{u\\sqrt{a^2 + u^2}}\\,du} & = -a^{-1}\\operatorname{arcsch}\\left| \\frac{u}{a} \\right| + C\n\\end{align}"
},
{
"math_id": 43,
"text": "\\sinh x = x + \\frac {x^3} {3!} + \\frac {x^5} {5!} + \\frac {x^7} {7!} + \\cdots = \\sum_{n=0}^\\infty \\frac{x^{2n+1}}{(2n+1)!}"
},
{
"math_id": 44,
"text": "\\cosh x = 1 + \\frac {x^2} {2!} + \\frac {x^4} {4!} + \\frac {x^6} {6!} + \\cdots = \\sum_{n=0}^\\infty \\frac{x^{2n}}{(2n)!}"
},
{
"math_id": 45,
"text": "\\begin{align}\n\n \\tanh x &= x - \\frac {x^3} {3} + \\frac {2x^5} {15} - \\frac {17x^7} {315} + \\cdots = \\sum_{n=1}^\\infty \\frac{2^{2n}(2^{2n}-1)B_{2n} x^{2n-1}}{(2n)!}, \\qquad \\left |x \\right | < \\frac {\\pi} {2} \\\\\n\n \\coth x &= x^{-1} + \\frac {x} {3} - \\frac {x^3} {45} + \\frac {2x^5} {945} + \\cdots = \\sum_{n=0}^\\infty \\frac{2^{2n} B_{2n} x^{2n-1}} {(2n)!}, \\qquad 0 < \\left |x \\right | < \\pi \\\\\n\n \\operatorname{sech} x &= 1 - \\frac {x^2} {2} + \\frac {5x^4} {24} - \\frac {61x^6} {720} + \\cdots = \\sum_{n=0}^\\infty \\frac{E_{2 n} x^{2n}}{(2n)!} , \\qquad \\left |x \\right | < \\frac {\\pi} {2} \\\\\n\n \\operatorname{csch} x &= x^{-1} - \\frac {x} {6} +\\frac {7x^3} {360} -\\frac {31x^5} {15120} + \\cdots = \\sum_{n=0}^\\infty \\frac{ 2 (1-2^{2n-1}) B_{2n} x^{2n-1}}{(2n)!} , \\qquad 0 < \\left |x \\right | < \\pi\n\n\\end{align}"
},
{
"math_id": 46,
"text": "B_n "
},
{
"math_id": 47,
"text": "E_n "
},
{
"math_id": 48,
"text": "\\sinh x = x\\prod_{n=1}^\\infty\\left(1+\\frac{x^2}{n^2\\pi^2}\\right) =\n\\cfrac{x}{1 - \\cfrac{x^2}{2\\cdot3+x^2 -\n\\cfrac{2\\cdot3 x^2}{4\\cdot5+x^2 -\n\\cfrac{4\\cdot5 x^2}{6\\cdot7+x^2 - \\ddots}}}}\n"
},
{
"math_id": 49,
"text": "\\cosh x = \\prod_{n=1}^\\infty\\left(1+\\frac{x^2}{(n-1/2)^2\\pi^2}\\right) = \\cfrac{1}{1 - \\cfrac{x^2}{1 \\cdot 2 + x^2 - \\cfrac{1 \\cdot 2x^2}{3 \\cdot 4 + x^2 - \\cfrac{3 \\cdot 4x^2}{5 \\cdot 6 + x^2 - \\ddots}}}}"
},
{
"math_id": 50,
"text": "\\tanh x = \\cfrac{1}{\\cfrac{1}{x} + \\cfrac{1}{\\cfrac{3}{x} + \\cfrac{1}{\\cfrac{5}{x} + \\cfrac{1}{\\cfrac{7}{x} + \\ddots}}}}"
},
{
"math_id": 51,
"text": "e^x = \\cosh x + \\sinh x,"
},
{
"math_id": 52,
"text": "e^{-x} = \\cosh x - \\sinh x."
},
{
"math_id": 53,
"text": "e^{ix} = \\cos x + i\\sin x,"
},
{
"math_id": 54,
"text": "e^{x+iy}=(\\cosh x+\\sinh x)(\\cos y+i\\sin y)"
},
{
"math_id": 55,
"text": "e^x = \\sqrt{\\frac{1 + \\tanh x}{1 - \\tanh x}} = \\frac{1 + \\tanh \\frac{x}{2}}{1 - \\tanh \\frac{x}{2}}"
},
{
"math_id": 56,
"text": "\\begin{align}\n e^{i x} &= \\cos x + i \\sin x \\\\\n e^{-i x} &= \\cos x - i \\sin x\n\\end{align}"
},
{
"math_id": 57,
"text": "\\begin{align}\n \\cosh(ix) &= \\frac{1}{2} \\left(e^{i x} + e^{-i x}\\right) = \\cos x \\\\\n \\sinh(ix) &= \\frac{1}{2} \\left(e^{i x} - e^{-i x}\\right) = i \\sin x \\\\\n \\cosh(x+iy) &= \\cosh(x) \\cos(y) + i \\sinh(x) \\sin(y) \\\\\n \\sinh(x+iy) &= \\sinh(x) \\cos(y) + i \\cosh(x) \\sin(y) \\\\\n \\tanh(ix) &= i \\tan x \\\\\n \\cosh x &= \\cos(ix) \\\\\n \\sinh x &= - i \\sin(ix) \\\\\n \\tanh x &= - i \\tan(ix)\n\\end{align}"
},
{
"math_id": 58,
"text": "2 \\pi i"
},
{
"math_id": 59,
"text": "\\pi i"
}
] |
https://en.wikipedia.org/wiki?curid=56567
|
565742
|
Symbolic method
|
In mathematics, the symbolic method in invariant theory is an algorithm developed by Arthur Cayley, Siegfried Heinrich Aronhold, Alfred Clebsch, and Paul Gordan in the 19th century for computing invariants of algebraic forms. It is based on treating the form as if it were a power of a degree one form, which corresponds to embedding a symmetric power of a vector space into the symmetric elements of a tensor product of copies of it.
Symbolic notation.
The symbolic method uses a compact, but rather confusing and mysterious notation for invariants, depending on the introduction of new symbols "a", "b", "c", ... (from which the symbolic method gets its name) with apparently contradictory properties.
Example: the discriminant of a binary quadratic form.
These symbols can be explained by the following example from Gordan. Suppose that
formula_0
is a binary quadratic form with an invariant given by the discriminant
formula_1
The symbolic representation of the discriminant is
formula_2
where "a" and "b" are the symbols. The meaning of the expression ("ab")2 is as follows. First of all, ("ab") is a shorthand form for the determinant of a matrix whose rows are "a"1, "a"2 and "b"1, "b"2, so
formula_3
Squaring this we get
formula_4
Next we pretend that
formula_5
so that
formula_6
and we ignore the fact that this does not seem to make sense if "f" is not a power of a linear form.
Substituting these values gives
formula_7
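The formal manipulation above can be mimicked in a computer algebra system. The following SymPy sketch (an illustration, not part of the classical treatment) expands ("ab")2 and then replaces the monomials in "a"1, "a"2 and in "b"1, "b"2 by the corresponding coefficients "A""i", recovering 2Δ.

```python
import sympy as sp

a1, a2, b1, b2, A0, A1, A2 = sp.symbols('a1 a2 b1 b2 A0 A1 A2')

bracket_squared = sp.expand((a1*b2 - a2*b1)**2)   # a1^2 b2^2 - 2 a1 a2 b1 b2 + a2^2 b1^2

# The "pretend" substitutions of the symbolic method: a1^(2-i) a2^i -> A_i, same for b.
rules = {a1**2: A0, a1*a2: A1, a2**2: A2,
         b1**2: A0, b1*b2: A1, b2**2: A2}
result = bracket_squared.subs(rules)

print(sp.simplify(result))   # 2*A0*A2 - 2*A1**2, i.e. twice the discriminant
```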
Higher degrees.
More generally if
formula_8
is a binary form of higher degree, then one introduces new variables "a"1, "a"2, "b"1, "b"2, "c"1, "c"2, with the properties
formula_9
What this means is that the following two vector spaces are naturally isomorphic:
The isomorphism is given by mapping "a"1"n"−"j""a"2"j", "b"1"n"−"j""b"2"j", ... to "A""j". This mapping does not preserve products of polynomials.
More variables.
The extension to a form "f" in more than two variables "x"1, "x"2, "x"3... is similar: one introduces symbols "a"1, "a"2, "a"3 and so on with the properties
formula_10
Symmetric products.
The rather mysterious formalism of the symbolic method corresponds to embedding a symmetric product S"n"("V") of a vector space "V" into a tensor product of "n" copies of "V", as the elements preserved by the action of the symmetric group. In fact this is done twice, because the invariants of degree "n" of a quantic of degree "m" are the invariant elements of S"n"S"m"("V"), which gets embedded into a tensor product of "mn" copies of "V", as the elements invariant under a wreath product of the two symmetric groups. The brackets of the symbolic method are really invariant linear forms on this tensor product, which give invariants of S"n"S"m"("V") by restriction.
References.
Footnotes
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\displaystyle f(x) = A_0x_1^2+2A_1x_1x_2+A_2x_2^2"
},
{
"math_id": 1,
"text": "\\displaystyle \\Delta=A_0A_2-A_1^2."
},
{
"math_id": 2,
"text": "\\displaystyle 2\\Delta=(ab)^2"
},
{
"math_id": 3,
"text": "\\displaystyle (ab)=a_1b_2-a_2b_1."
},
{
"math_id": 4,
"text": "\\displaystyle (ab)^2=a_1^2b_2^2-2a_1a_2b_1b_2+a_2^2b_1^2."
},
{
"math_id": 5,
"text": "\\displaystyle f(x)=(a_1x_1+a_2x_2)^2=(b_1x_1+b_2x_2)^2"
},
{
"math_id": 6,
"text": "\\displaystyle A_i=a_1^{2-i}a_2^{i}= b_1^{2-i}b_2^{i}"
},
{
"math_id": 7,
"text": "\\displaystyle (ab)^2= A_2A_0-2A_1A_1+A_0A_2 = 2\\Delta."
},
{
"math_id": 8,
"text": "\\displaystyle f(x) = A_0x_1^n+\\binom{n}{1}A_1x_1^{n-1}x_2+\\cdots+A_nx_2^n"
},
{
"math_id": 9,
"text": "f(x)=(a_1x_1+a_2x_2)^n=(b_1x_1+b_2x_2)^n=(c_1x_1+c_2x_2)^n=\\cdots."
},
{
"math_id": 10,
"text": "f(x)=(a_1x_1+a_2x_2+a_3x_3+\\cdots)^n=(b_1x_1+b_2x_2+b_3x_3+\\cdots)^n=(c_1x_1+c_2x_2+c_3x_3+\\cdots)^n=\\cdots."
}
] |
https://en.wikipedia.org/wiki?curid=565742
|
56577590
|
Liñán's diffusion flame theory
|
Liñán diffusion flame theory is a theory developed by Amable Liñán in 1974 to explain the diffusion flame structure using activation energy asymptotics and Damköhler number asymptotics. Liñán used counterflowing jets of fuel and oxidizer to study the diffusion flame structure, analyzing for the entire range of Damköhler number. His theory predicted four different types of flame structure as follows,
Mathematical description.
The theory is well explained in the simplest possible model. Thus, assuming a one-step irreversible Arrhenius law for the combustion chemistry with constant density and transport properties and with unity Lewis number reactants, the governing equation for the non-dimensional temperature field formula_0 in the stagnation point flow reduces to
formula_1
where formula_2 is the mixture fraction, formula_3 is the Damköhler number, formula_4 is the activation temperature and the fuel mass fraction and oxidizer mass fraction are scaled with their respective feed stream values, given by
formula_5
with boundary conditions formula_6. Here, formula_7 is the unburnt temperature profile (frozen solution) and formula_8 is the stoichiometric parameter (mass of the oxidizer stream required to burn a unit mass of the fuel stream). The four regimes are analyzed by solving the above equations using activation energy asymptotics and Damköhler number asymptotics. The solution to the above problem is multi-valued. Treating the mixture fraction formula_2 as the independent variable reduces the equation to
formula_9
with boundary conditions formula_10 and formula_11.
Extinction Damköhler number.
The reduced Damköhler number is defined as follows
formula_12
where formula_13 and formula_14. The theory predicted an expression for the reduced Damköhler number at which the flame will extinguish, given by
formula_15
where formula_16.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T(y)"
},
{
"math_id": 1,
"text": "\\frac{d^2 T}{dy^2} + y\\frac{dT}{dy} = -\\mathrm{Da}\\ y_F y_O e^{-T_a/T}, \\quad Z= \\frac{1}{2}\\mathrm{erfc}\\left(\\frac{y}{\\sqrt 2}\\right) "
},
{
"math_id": 2,
"text": "Z"
},
{
"math_id": 3,
"text": "\\mathrm{Da}"
},
{
"math_id": 4,
"text": "T_a = E/R"
},
{
"math_id": 5,
"text": "\\begin{align}\ny_F &= Z + T_o - T \\\\\ny_O &= (1-Z)/S + T_o - T\n\\end{align}"
},
{
"math_id": 6,
"text": "T(-\\infty)=T(\\infty)=T_o"
},
{
"math_id": 7,
"text": "T_o "
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "\\frac{d^2 T}{dZ^2} = - 2\\pi e^{y^2} \\mathrm{Da}\\ y_F y_O e^{-T_a/T} "
},
{
"math_id": 10,
"text": "T(0)=T(1)=T_o"
},
{
"math_id": 11,
"text": "y = \\sqrt{2} \\mathrm{erfc}^{-1}(2Z)"
},
{
"math_id": 12,
"text": "\\delta = 8\\pi Z_s^2 e^{y_s^2} \\left(\\frac{T_s^2}{T_a}\\right)^3 \\mathrm{Da}\\ e^{-T_a/T}"
},
{
"math_id": 13,
"text": "y_s =\\sqrt{2} \\mathrm{erfc}^{-1}(2Z_s),\\ Z_s = 1/(S+1)"
},
{
"math_id": 14,
"text": "T_s = T_o + Z_s"
},
{
"math_id": 15,
"text": "\\delta_E = e\\left[(1-\\gamma)-(1-\\gamma)^2+0.26(1-\\gamma)^3 +0.055(1-\\gamma)^4\\right]"
},
{
"math_id": 16,
"text": "\\gamma=1-2(1-\\alpha)(1-Z_s)"
}
] |
https://en.wikipedia.org/wiki?curid=56577590
|
5657877
|
Type I and type II errors
|
Concepts from statistical hypothesis testing
In statistical hypothesis testing, a type I error, or a false positive, is the rejection of the null hypothesis when it is actually true. For example, an innocent person may be convicted.
A type II error, or a false negative, is the failure to reject a null hypothesis that is actually false. For example, a guilty person may not be convicted.
Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is an impossibility if the outcome is not determined by a known, observable causal process. By selecting a low threshold (cut-off) value and modifying the alpha (α) level, the quality of the hypothesis test can be increased. The knowledge of type I errors and type II errors is widely used in medical science, biometrics and computer science.
Intuitively, type I errors can be thought of as errors of "commission" (i.e., the researcher concludes that something is the case when it is not). For instance, consider a study where researchers compare a drug with a placebo. If the patients who are given the drug happen, by chance, to improve more than the patients given the placebo, it may appear that the drug is effective, when in fact it has no effect.
By contrast, type II errors are errors of "omission". In the example above, if the patients who got the drug did not get better at a higher rate than the ones who got the placebo and this was a random fluke, that would be a type II error.
Definition.
Statistical background.
In statistical test theory, the notion of a statistical error is an integral part of hypothesis testing. The test involves choosing between two competing propositions: the null hypothesis, denoted by formula_0, and the alternative hypothesis, denoted by formula_1. This is conceptually similar to the judgement in a court trial. The null hypothesis corresponds to the position of the defendant: just as the defendant is presumed innocent until proven guilty, so is the null hypothesis presumed to be true until the data provide convincing evidence against it. The alternative hypothesis corresponds to the position against the defendant. Specifically, the null hypothesis involves the absence of a difference or the absence of an association; thus, the null hypothesis can never be that there is a difference or an association.
If the result of the test corresponds with reality, then a correct decision has been made. However, if the result of the test does not correspond with reality, then an error has occurred. There are two situations in which the decision is wrong. The null hypothesis may be true, whereas we reject formula_0. On the other hand, the alternative hypothesis formula_1 may be true, whereas formula_0 is rejected. Two types of error are distinguished: type I error and type II error.
Type I error.
The first kind of error is the mistaken rejection of a null hypothesis as the result of a test procedure. This kind of error is called a type I error (false positive) and is sometimes called an error of the first kind. In terms of the courtroom example, a type I error corresponds to convicting an innocent defendant.
Type II error.
The second kind of error is the mistaken failure to reject the null hypothesis as the result of a test procedure. This sort of error is called a type II error (false negative) and is also referred to as an error of the second kind. In terms of the courtroom example, a type II error corresponds to acquitting a criminal.
Crossover error rate.
The crossover error rate (CER) is the point at which type I errors and type II errors are equal. A system with a lower CER value provides more accuracy than a system with a higher CER value.
False positive and false negative.
In terms of false positives and false negatives, a positive result corresponds to rejecting the null hypothesis, while a negative result corresponds to failing to reject the null hypothesis; "false" means the conclusion drawn is incorrect. Thus, a type I error is equivalent to a false positive, and a type II error is equivalent to a false negative.
Table of error types.
Tabulated relations between truth/falseness of the null hypothesis and outcomes of the test:
Error rate.
A perfect test would have zero false positives and zero false negatives. However, statistical methods are probabilistic, and it cannot be known for certain whether statistical conclusions are correct. Whenever there is uncertainty, there is the possibility of making an error. Considering this, all statistical hypothesis tests have a probability of making type I and type II errors.
These two types of error rates are traded off against each other: for any given sample set, the effort to reduce one type of error generally results in increasing the other type of error.
The quality of hypothesis test.
The same idea can be expressed in terms of the rate of correct results and therefore used to minimize error rates and improve the quality of a hypothesis test. To reduce the probability of committing a type I error, making the alpha value more stringent is both simple and efficient. To decrease the probability of committing a type II error, which is closely tied to the power of the test, either increasing the sample size or relaxing the alpha level increases the power of the analysis. A test statistic is robust if the type I error rate is controlled.
Varying the threshold (cut-off) value can also be used to make the test either more specific or more sensitive, which in turn elevates the test quality. For example, imagine a medical test in which an experimenter measures the concentration of a certain protein in a blood sample. The experimenter can adjust the threshold, and people are diagnosed as having the disease whenever the measured value exceeds it. Changing the threshold changes the numbers of false positives and false negatives, trading one off against the other.
Example.
Since in a real experiment it is impossible to avoid all type I and type II errors, it is important to consider the amount of risk one is willing to take to falsely reject H0 or accept H0. The solution to this question would be to report the p-value or significance level α of the statistic. For example, if the p-value of a test statistic result is estimated at 0.0596, then there is a probability of 5.96% that we falsely reject H0. Or, if the test is performed at significance level α, say 0.05, then we allow ourselves to falsely reject H0 up to 5% of the time. A significance level α of 0.05 is relatively common, but there is no general rule that fits all scenarios.
Vehicle speed measuring.
The speed limit of a freeway in the United States is 120 kilometers per hour (75 mph). A device is set to measure the speed of passing vehicles. Suppose that the device will conduct three measurements of the speed of a passing vehicle, recorded as a random sample X1, X2, X3. The traffic police will or will not fine the drivers depending on the average speed formula_2. That is to say, the test statistic
formula_3
In addition, we suppose that the measurements X1, X2, X3 are modeled as normal distribution N(μ,2). Then, T should follow N(μ,2/formula_4) and the parameter μ represents the true speed of passing vehicle. In this experiment, the null hypothesis H0 and the alternative hypothesis H1 should be
H0: μ=120 against H1: μ>120.
If we perform the statistic level at α=0.05, then a critical value c should be calculated to solve
formula_5
According to the change-of-units rule for the normal distribution and referring to the Z-table, we get
formula_6
Here, the critical region is "T" > 121.9. That is to say, if the recorded average speed of a vehicle is greater than the critical value 121.9, the driver will be fined. However, 5% of drivers whose true speed is 120 will still be falsely fined, since their recorded average speed exceeds 121.9 even though their true speed does not exceed 120; this is a type I error.
The type II error corresponds to the case that the true speed of a vehicle is over 120 kilometers per hour but the driver is not fined. For example, if the true speed of a vehicle μ=125, the probability that the driver is not fined can be calculated as
formula_7
which means that, if the true speed of a vehicle is 125, the driver has a 0.36% probability of avoiding the fine when the test is performed at level α=0.05, since this is the chance that the recorded average speed falls below 121.9. If the true speed is closer to 121.9 than to 125, then the probability of avoiding the fine is higher.
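The two numbers in this example can be reproduced with the standard library's statistics.NormalDist; the sketch below (illustrative only) reads N(μ, 2) as a normal distribution with mean μ and standard deviation 2, as the text does.

```python
from math import sqrt
from statistics import NormalDist

sigma_T = 2 / sqrt(3)          # standard deviation of the average of three N(mu, 2) measurements
alpha = 0.05

# Critical value c solving P(T >= c | mu = 120) = alpha.
c = 120 + NormalDist().inv_cdf(1 - alpha) * sigma_T
print(round(c, 1))             # 121.9

# Type II error probability when the true speed is mu = 125.
beta = NormalDist(mu=125, sigma=sigma_T).cdf(c)
print(round(beta, 4))          # about 0.0036
```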
The tradeoffs between type I error and type II error should also be considered. That is, in this case, if the traffic police do not want to falsely fine innocent drivers, the level α can be set to a smaller value, like 0.01. However, if that is the case, more drivers whose true speed is over 120 kilometers per hour, like 125, would be more likely to avoid the fine.
Etymology.
In 1928, Jerzy Neyman (1894–1981) and Egon Pearson (1895–1980), both eminent statisticians, discussed the problems associated with "deciding whether or not a particular sample may be judged as likely to have been randomly drawn from a certain population": and, as Florence Nightingale David remarked, "it is necessary to remember the adjective 'random' [in the term 'random sample'] should apply to the method of drawing the sample and not to the sample itself".
They identified "two sources of error", namely:
In 1930, they elaborated on these two sources of error, remarking that
<templatestyles src="Template:Blockquote/styles.css" />in testing hypotheses two considerations must be kept in view, we must be able to reduce the chance of rejecting a true hypothesis to as low a value as desired; the test must be so devised that it will reject the hypothesis tested when it is likely to be false.
In 1933, they observed that these "problems are rarely presented in such a form that we can discriminate with certainty between the true and false hypothesis". They also noted that, in deciding whether to fail to reject, or reject a particular hypothesis amongst a "set of alternative hypotheses", H1, H2..., it was easy to make an error,
<templatestyles src="Template:Blockquote/styles.css" />[and] these errors will be of two kinds:
In all of the papers co-written by Neyman and Pearson the expression H0 always signifies "the hypothesis to be tested".
In the same paper they call these two sources of error, errors of type I and errors of type II respectively.
Related terms.
Null hypothesis.
It is standard practice for statisticians to conduct tests in order to determine whether or not a "speculative hypothesis" concerning the observed phenomena of the world (or its inhabitants) can be supported. The results of such testing determine whether a particular set of results agrees reasonably (or does not agree) with the speculated hypothesis.
On the basis that it is always assumed, by statistical convention, that the speculated hypothesis is wrong, and the so-called "null hypothesis" that the observed phenomena simply occur by chance (and that, as a consequence, the speculated agent has no effect) – the test will determine whether this hypothesis is right or wrong. This is why the hypothesis under test is often called the null hypothesis (most likely, coined by Fisher (1935, p. 19)), because it is "this" hypothesis that is to be either nullified or not nullified by the test. When the null hypothesis is nullified, it is possible to conclude that data support the "alternative hypothesis" (which is the original speculated one).
The consistent application by statisticians of Neyman and Pearson's convention of representing "the hypothesis to be tested" (or "the hypothesis to be nullified") with the expression "H"0 has led to circumstances where many understand the term "the null hypothesis" as meaning "the nil hypothesis" – a statement that the results in question have arisen through chance. This is not necessarily the case – the key restriction, as per Fisher (1966), is that "the null hypothesis must be exact, that is free from vagueness and ambiguity, because it must supply the basis of the 'problem of distribution', of which the test of significance is the solution." As a consequence of this, in experimental science the null hypothesis is generally a statement that a particular treatment has no effect; in observational science, it is that there is "no difference" between the value of a particular measured variable, and that of an experimental prediction.
Statistical significance.
If the probability of obtaining a result as extreme as the one obtained, supposing that the null hypothesis were true, is lower than a pre-specified cut-off probability (for example, 5%), then the result is said to be statistically significant and the null hypothesis is rejected.
British statistician Sir Ronald Aylmer Fisher (1890–1962) stressed that the null hypothesis
<templatestyles src="Template:Blockquote/styles.css" />is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis.
Application domains.
Medicine.
In the practice of medicine, the differences between the applications of screening and testing are considerable.
Medical screening.
Screening involves relatively cheap tests that are given to large populations, none of whom manifest any clinical indication of disease (e.g., Pap smears).
Testing involves far more expensive, often invasive, procedures that are given only to those who manifest some clinical indication of disease, and are most often applied to confirm a suspected diagnosis.
For example, most states in the US require newborns to be screened for phenylketonuria and hypothyroidism, among other congenital disorders.
Although they display a high rate of false positives, the screening tests are considered valuable because they greatly increase the likelihood of detecting these disorders at a far earlier stage.
The simple blood tests used to screen possible blood donors for HIV and hepatitis have a significant rate of false positives; however, physicians use much more expensive and far more precise tests to determine whether a person is actually infected with either of these viruses.
Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. The US rate of false positive mammograms is up to 15%, the highest in the world. One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram. False positive mammograms are costly, with over $100 million spent annually in the U.S. on follow-up testing and treatment. They also cause women unneeded anxiety. As a result of the high false positive rate in the US, as many as 90–95% of women who get a positive mammogram do not have the condition. The lowest rate in the world is in the Netherlands, 1%. The lowest rates are generally in Northern Europe where mammography films are read twice and a high threshold for additional testing is set (the high threshold decreases the power of the test).
The ideal population screening test would be cheap, easy to administer, and produce zero false negatives, if possible. Such tests usually produce more false positives, which can subsequently be sorted out by more sophisticated (and expensive) testing.
Medical testing.
False negatives and false positives are significant issues in medical testing.
False positives can also produce serious and counter-intuitive problems when the condition being searched for is rare, as in screening. If a test has a false positive rate of one in ten thousand, but only one in a million samples (or people) is a true positive, most of the positives detected by that test will be false. The probability that an observed positive result is a false positive may be calculated using Bayes' theorem.
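For the rates quoted above, the Bayes' theorem calculation looks as follows (a sketch; the sensitivity of the test is assumed to be essentially 1 for simplicity):

```python
# Rare condition: prevalence 1 in a million; false positive rate 1 in ten thousand.
prevalence = 1e-6
false_positive_rate = 1e-4
sensitivity = 1.0   # assumed: the test essentially never misses a true case

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_true_given_positive = sensitivity * prevalence / p_positive
print(p_true_given_positive)   # roughly 0.01, so about 99% of positives are false
```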
False negatives produce serious and counter-intuitive problems, especially when the condition being searched for is common. If a test with a false negative rate of only 10% is used to test a population with a true occurrence rate of 70%, many of the negatives detected by the test will be false.
This sometimes leads to inappropriate or inadequate treatment of both the patient and their disease. A common example is relying on cardiac stress tests to detect coronary atherosclerosis, even though cardiac stress tests are known to only detect limitations of coronary artery blood flow due to advanced stenosis.
Biometrics.
Biometric matching, such as for fingerprint recognition, facial recognition or iris recognition, is susceptible to type I and type II errors.
The probability of type I errors is called the "false reject rate" (FRR) or false non-match rate (FNMR), while the probability of type II errors is called the "false accept rate" (FAR) or false match rate (FMR).
If the system is designed to rarely match suspects then the probability of type II errors can be called the "false alarm rate". On the other hand, if the system is used for validation (and acceptance is the norm) then the FAR is a measure of system security, while the FRR measures user inconvenience level.
Security screening.
False positives are routinely found every day in airport security screening, which are ultimately visual inspection systems. The installed security alarms are intended to prevent weapons being brought onto aircraft; yet they are often set to such high sensitivity that they alarm many times a day for minor items, such as keys, belt buckles, loose change, mobile phones, and tacks in shoes.
The ratio of false positives (identifying an innocent traveler as a terrorist) to true positives (detecting a would-be terrorist) is, therefore, very high; and because almost every alarm is a false positive, the positive predictive value of these screening tests is very low.
The relative cost of false results determines the likelihood that test creators allow these events to occur. As the cost of a false negative in this scenario is extremely high (not detecting a bomb being brought onto a plane could result in hundreds of deaths) whilst the cost of a false positive is relatively low (a reasonably simple further inspection) the most appropriate test is one with a low statistical specificity but high statistical sensitivity (one that allows a high rate of false positives in return for minimal false negatives).
Computers.
The notions of false positives and false negatives have a wide currency in the realm of computers and computer applications, including computer security, spam filtering, malware, optical character recognition, and many others.
For example, in the case of spam filtering:
While most anti-spam tactics can block or filter a high percentage of unwanted emails, doing so without creating significant false-positive results is a much more demanding task. A low number of false negatives is an indicator of the efficiency of spam filtering.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Div col/styles.css"/>
|
[
{
"math_id": 0,
"text": "H_0"
},
{
"math_id": 1,
"text": "H_1"
},
{
"math_id": 2,
"text": "\\bar X"
},
{
"math_id": 3,
"text": "T=\\frac{X_1+X_2+X_3}{3}=\\bar X"
},
{
"math_id": 4,
"text": "\\sqrt{3}"
},
{
"math_id": 5,
"text": "P\\left(Z\\geqslant\\frac{c-120}{\\frac{2}{\\sqrt{3}}}\\right)=0.05"
},
{
"math_id": 6,
"text": "\\frac{c-120}{\\frac{2}{\\sqrt{3}}}=1.645\\Rightarrow c=121.9"
},
{
"math_id": 7,
"text": "P=(T<121.9|\\mu=125)=P\\left(\\frac{T-125}{\\frac{2}{\\sqrt{3}}}<\\frac{121.9-125}{\\frac{2}{\\sqrt{3}}}\\right)=\\phi(-2.68)=0.0036"
}
] |
https://en.wikipedia.org/wiki?curid=5657877
|
5658261
|
Apeirogon
|
Polygon with an infinite number of sides
In geometry, an apeirogon (from Ancient Greek ἄπειρος "infinite, boundless" and γωνία "angle") or infinite polygon is a polygon with an infinite number of sides. Apeirogons are the rank 2 case of infinite polytopes. In some literature, the term "apeirogon" may refer only to the regular apeirogon, with an infinite dihedral group of symmetries.
Definitions.
Geometric apeirogon.
Given a point "A"0 in a Euclidean space and a translation "S", define the point "Ai" to be the point obtained from "i" applications of the translation "S" to "A"0, so "Ai" = "Si"("A"0). The set of vertices "Ai" with "i" any integer, together with edges connecting adjacent vertices, is a sequence of equal-length segments of a line, and is called the regular apeirogon as defined by H. S. M. Coxeter.
A regular apeirogon can be defined as a partition of the Euclidean line "E"1 into infinitely many equal-length segments. It generalizes the regular "n"-gon, which may be defined as a partition of the circle "S"1 into "finitely" many equal-length segments.
Hyperbolic pseudogon.
The regular pseudogon is a partition of the hyperbolic line "H"1 (instead of the Euclidean line) into segments of length 2λ, as an analogue of the regular apeirogon.
Abstract apeirogon.
An abstract polytope is a partially ordered set "P" (whose elements are called "faces") with properties modeling those of the inclusions of faces of convex polytopes. The "rank" (or dimension) of an abstract polytope is determined by the length of the maximal ordered chains of its faces, and an abstract polytope of rank "n" is called an abstract "n"-polytope.
For abstract polytopes of rank 2, this means that: A) the elements of the partially ordered set are sets of vertices with either zero vertices (the empty set), one vertex, two vertices (an edge), or the entire vertex set (a two-dimensional face), ordered by inclusion of sets; B) each vertex belongs to exactly two edges; C) the undirected graph formed by the vertices and edges is connected.
An abstract polytope is called an abstract apeirotope if it has infinitely many elements; an abstract 2-apeirotope is called an abstract apeirogon.
A realization of an abstract polytope is a mapping of its vertices to points in a geometric space (typically a Euclidean space). A faithful realization is a realization such that the vertex mapping is injective. Every geometric apeirogon is a realization of the abstract apeirogon.
Symmetries.
The infinite dihedral group "G" of symmetries of a regular geometric apeirogon is generated by two reflections, the product of which translates each vertex of "P" to the next. The product of the two reflections can be decomposed as a product of a non-zero translation, finitely many rotations, and a possibly trivial reflection.
In an abstract polytope, a "flag" is a collection of one face of each dimension, all incident to each other (that is, comparable in the partial order); an abstract polytope is called "regular" if it has symmetries (structure-preserving permutations of its elements) that take any flag to any other flag. In the case of a two-dimensional abstract polytope, this is automatically true; the symmetries of the apeirogon form the infinite dihedral group.
A symmetric realization of an abstract apeirogon is defined as a mapping from its vertices to a finite-dimensional geometric space (typically a Euclidean space) such that every symmetry of the abstract apeirogon corresponds to an isometry of the images of the mapping.
Moduli space.
Generally, the moduli space of a faithful realization of an abstract polytope is a convex cone of infinite dimension. The realization cone of the abstract apeirogon has uncountably infinite algebraic dimension and cannot be closed in the Euclidean topology.
Classification of Euclidean apeirogons.
The symmetric realization of any regular polygon in Euclidean space of dimension greater than 2 is reducible, meaning it can be made as a blend of two lower-dimensional polygons. This characterization of the regular polygons naturally characterizes the regular apeirogons as well. The discrete apeirogons are the results of blending the 1-dimensional apeirogon with other polygons. Since every polygon is a quotient of the apeirogon, the blend of any polygon with an apeirogon produces another apeirogon.
In two dimensions the discrete regular apeirogons are the infinite zigzag polygons, resulting from the blend of the 1-dimensional apeirogon with the digon, represented with the Schläfli symbol {∞}#{2}, {∞}#{}, or formula_0.
In three dimensions the discrete regular apeirogons are the infinite helical polygons, with vertices spaced evenly along a helix. These are the result of blending the 1-dimensional apeirogon with a 2-dimensional polygon, {∞}#{"p"/"q"} or formula_1.
|
[
{
"math_id": 0,
"text": "\\left\\{\\dfrac{2}{0,1}\\right\\}"
},
{
"math_id": 1,
"text": "\\left\\{\\dfrac{p}{0,q}\\right\\}"
}
] |
https://en.wikipedia.org/wiki?curid=5658261
|
56591592
|
Engelbert–Schmidt zero–one law
|
The Engelbert–Schmidt zero–one law is a theorem that gives a mathematical criterion for an event associated with a continuous, non-decreasing additive functional of Brownian motion to have probability either 0 or 1, without the possibility of an intermediate value. This zero-one law is used in the study of questions of finiteness and asymptotic behavior for stochastic differential equations. (A Wiener process is a mathematical formalization of Brownian motion used in the statement of the theorem.) This 0-1 law, published in 1981, is named after Hans-Jürgen Engelbert and the probabilist Wolfgang Schmidt (not to be confused with the number theorist Wolfgang M. Schmidt).
Engelbert–Schmidt 0–1 law.
Let formula_0 be a σ-algebra and let formula_1 be an increasing family of sub-"σ"-algebras of formula_0. Let formula_2 be a Wiener process on the probability space formula_3.
Suppose that formula_4 is a Borel measurable function of the real line into [0,∞].
Then the following three assertions are equivalent:
(i) formula_5.
(ii) formula_6.
(iii) formula_7 for all compact subsets formula_8 of the real line.
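Condition (iii) is an ordinary local-integrability requirement and can be checked directly, as in the following SymPy sketch (an illustration only, with example functions chosen here): |"x"|−1/2 satisfies (iii), so by the theorem the corresponding Brownian functional is almost surely finite, whereas |"x"|−2 fails it.

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Condition (iii): the integral of f over every compact set must be finite.
# By symmetry it suffices to integrate over [0, 1] for compacts around the origin.
print(2 * sp.integrate(x**sp.Rational(-1, 2), (x, 0, 1)))   # 4  -> locally integrable
print(2 * sp.integrate(x**(-2), (x, 0, 1)))                 # oo -> not locally integrable
```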
Extension to stable processes.
In 1997 Pio Andrea Zanzotto proved the following extension of the Engelbert–Schmidt zero-one law. It contains Engelbert and Schmidt's result as a special case, since the Wiener process is a real-valued stable process of index formula_9.
Let formula_10 be a formula_11-valued stable process of index formula_12 on the filtered probability space formula_13.
Suppose that formula_14 is a Borel measurable function.
Then the following three assertions are equivalent:
(i) formula_15.
(ii) formula_16.
(iii) formula_7 for all compact subsets formula_8 of the real line.
The proof of Zanzotto's result is almost identical to that of the Engelbert–Schmidt zero-one law. The key object in the proof is the local time process associated with stable processes of index formula_12, which is known to be jointly continuous.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{F}"
},
{
"math_id": 1,
"text": "F = (\\mathcal{F}_t)_{t \\ge 0}"
},
{
"math_id": 2,
"text": "(W, F)"
},
{
"math_id": 3,
"text": "(\\Omega, \\mathcal{F}, P)"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": " P \\Big( \\int_0^t f (W_s)\\,\\mathrm ds < \\infty \\text{ for all } t \\ge 0 \\Big) > 0 "
},
{
"math_id": 6,
"text": " P \\Big( \\int_0^t f (W_s)\\,\\mathrm ds < \\infty \\text{ for all } t \\ge 0 \\Big) = 1 "
},
{
"math_id": 7,
"text": " \\int_K f (y)\\,\\mathrm dy < \\infty \\, "
},
{
"math_id": 8,
"text": "K"
},
{
"math_id": 9,
"text": "\\alpha = 2"
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "\\mathbb R"
},
{
"math_id": 12,
"text": "\\alpha\\in(1,2]"
},
{
"math_id": 13,
"text": "(\\Omega, \\mathcal{F}, (\\mathcal{F}_t), P)"
},
{
"math_id": 14,
"text": "f:\\mathbb R \\to [0,\\infty]"
},
{
"math_id": 15,
"text": " P \\Big( \\int_0^t f (X_s)\\,\\mathrm ds < \\infty \\text{ for all } t \\ge 0 \\Big) > 0 "
},
{
"math_id": 16,
"text": " P \\Big( \\int_0^t f (X_s)\\,\\mathrm ds < \\infty \\text{ for all } t \\ge 0 \\Big) = 1 "
}
] |
https://en.wikipedia.org/wiki?curid=56591592
|
56600253
|
Group-scheme action
|
In algebraic geometry, an action of a group scheme is a generalization of a group action to a group scheme. Precisely, given a group "S"-scheme "G", a left action of "G" on an "S"-scheme "X" is an "S"-morphism
formula_0
such that
(associativity) formula_1, where formula_2 is the multiplication map, and
(unitality) formula_3, where formula_4 is the identity section.
A right action of "G" on "X is defined analogously. A scheme equipped with a left or right action of a group scheme "G" is called a G"-scheme. An equivariant morphism between "G"-schemes is a morphism of schemes that intertwines the respective "G"-actions.
More generally, one can also consider (at least some special case of) an action of a group functor: viewing "G" as a functor, an action is given as a natural transformation satisfying the conditions analogous to the above. Alternatively, some authors study group action in the language of a groupoid; a group-scheme action is then an example of a groupoid scheme.
Constructs.
The usual constructs for a group action such as orbits generalize to a group-scheme action. Let formula_5 be a given group-scheme action as above.
Problem of constructing a quotient.
Unlike a set-theoretic group action, there is no straightforward way to construct a quotient for a group-scheme action. One exception is the case when the action is free, the case of a principal fiber bundle.
There are several approaches to overcome this difficulty:
Depending on the application, another approach is to shift the focus away from the space itself and onto the structures living on the space (e.g., a topos). The problem then shifts from the classification of orbits to the classification of equivariant objects.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sigma: G \\times_S X \\to X"
},
{
"math_id": 1,
"text": "\\sigma \\circ (1_G \\times \\sigma) = \\sigma \\circ (m \\times 1_X)"
},
{
"math_id": 2,
"text": "m: G \\times_S G \\to G"
},
{
"math_id": 3,
"text": "\\sigma \\circ (e \\times 1_X) = 1_X"
},
{
"math_id": 4,
"text": "e: S \\to G"
},
{
"math_id": 5,
"text": "\\sigma"
},
{
"math_id": 6,
"text": "x: T \\to X"
},
{
"math_id": 7,
"text": "\\sigma_x: G \\times_S T \\to X \\times_S T"
},
{
"math_id": 8,
"text": "(\\sigma \\circ (1_G \\times x), p_2)"
},
{
"math_id": 9,
"text": "\\sigma_x"
},
{
"math_id": 10,
"text": "(x, 1_T): T \\to X \\times_S T."
}
] |
https://en.wikipedia.org/wiki?curid=56600253
|
56601
|
Fuzzy set
|
Sets whose elements have degrees of membership
In mathematics, fuzzy sets (also known as uncertain sets) are sets whose elements have degrees of membership. Fuzzy sets were introduced independently by Lotfi A. Zadeh and Dieter Klaua in 1965 as an extension of the classical notion of set.
At the same time, a more general kind of structure called an ""L"-relation" was defined and studied in an abstract algebraic context;
fuzzy relations are special cases of "L"-relations when "L" is the unit interval [0, 1].
They are now used throughout fuzzy mathematics, having applications in areas such as linguistics, decision-making, and clustering.
In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition—an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only takes values 0 or 1. In fuzzy set theory, classical bivalent sets are usually called "crisp sets". The fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.
Definition.
A fuzzy set is a pair formula_0 where formula_1 is a set (often required to be non-empty) and formula_2 a membership function.
The reference set formula_1 (sometimes denoted by formula_3 or formula_4) is called universe of discourse, and for each formula_5 the value formula_6 is called the grade of membership of formula_7 in formula_8.
The function formula_9 is called the membership function of the fuzzy set formula_10.
For a finite set formula_11 the fuzzy set formula_0 is often denoted by formula_12
Let formula_13. Then formula_7 is called
The (crisp) set of all fuzzy sets on a universe formula_1 is denoted with formula_17 (or sometimes just formula_18).
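A minimal computational sketch (not from the original article): on a small finite universe, a fuzzy set can be stored directly as its membership function, here a Python dictionary with values in [0, 1].

```python
# Fuzzy set "comfortable temperature" on a four-element universe of discourse.
U = ["freezing", "cool", "warm", "hot"]
mu_A = {"freezing": 0.0, "cool": 0.3, "warm": 0.9, "hot": 1.0}

x = "warm"
print(f"grade of membership of {x!r}: {mu_A[x]}")   # 0.9
# grade 0 -> not included, grade 1 -> fully included,
# grades strictly between 0 and 1 -> fuzzy members.
```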
Crisp sets related to a fuzzy set.
For any fuzzy set formula_19 and formula_20 the following crisp sets are defined:
Note that some authors understand "kernel" in a different way; see below.
formula_27formula_28
formula_32
formula_34
formula_35
is called a crossover point.
formula_39formula_40formula_41
formula_42
where formula_43 denotes the supremum, which exists because formula_44 is non-empty and bounded above by 1. If "U" is finite, we can simply replace the supremum by the maximum.
formula_45
In the finite case, where the supremum is a maximum, this means that at least one element of the fuzzy set has full membership. A non-empty fuzzy set formula_29 may be normalized with result formula_46 by dividing the membership function of the fuzzy set by its height:
formula_47
Besides similarities this differs from the usual normalization in that the normalizing constant is not a sum.
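Continuing the dictionary-based sketch from above (illustrative only), the height of a finite fuzzy set reduces to a maximum, and normalization divides every grade by it:

```python
mu_A = {"freezing": 0.0, "cool": 0.3, "warm": 0.6, "hot": 0.8}

height = max(mu_A.values())          # the supremum is a maximum on a finite universe
is_normal = (height == 1.0)          # a fuzzy set is "normal" if its height is 1
mu_normalized = {x: g / height for x, g in mu_A.items()}

print(height, is_normal)             # 0.8 False
print(mu_normalized["hot"])          # 1.0 -- after normalization the height is 1
```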
formula_49
In the case when formula_50 is a finite set, or more generally a closed set, the width is just
formula_51
In the "n"-dimensional case formula_52 the above can be replaced by the "n"-dimensional volume of formula_50.
In general, this can be defined given any measure on "U", for instance by integration (e.g. Lebesgue integration) of formula_50.
formula_54.
Without loss of generality, we may take "x" ≤ "y", which gives the equivalent formulation
formula_55.
This definition can be extended to one for a general topological space "U": we say the fuzzy set formula_29 is convex when, for any subset "Z" of "U", the condition
formula_56
holds, where formula_57 denotes the boundary of "Z" and formula_58 denotes the image of a set "X" (here formula_57) under a function "f" (here formula_38).
Fuzzy set operations.
Although the complement of a fuzzy set has a single most common definition, the other main operations, union and intersection, do have some ambiguity.
formula_62.
formula_65,
and their union formula_66 is defined by:
formula_67.
By the definition of the t-norm, we see that the union and intersection are commutative, monotonic, associative, and have both a null and an identity element. For the intersection, these are ∅ and "U", respectively, while for the union, these are reversed. However, the union of a fuzzy set and its complement may not result in the full universe "U", and the intersection of them may not give the empty set ∅. Since the intersection and union are associative, it is natural to define the intersection and union of a finite family of fuzzy sets recursively. It is noteworthy that the generally accepted standard operators for the union and intersection of fuzzy sets are the max and min operators:
formula_71
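A small Python sketch of the standard operators (illustrative only): the standard negator 1 − μ for the complement, min as the t-norm for the intersection, and max as the t-conorm for the union.

```python
mu_A = {"a": 0.2, "b": 0.7, "c": 1.0}
mu_B = {"a": 0.5, "b": 0.4, "c": 0.0}

complement_A = {x: 1 - mu_A[x] for x in mu_A}              # standard negator
intersection = {x: min(mu_A[x], mu_B[x]) for x in mu_A}    # standard t-norm
union        = {x: max(mu_A[x], mu_B[x]) for x in mu_A}    # standard t-conorm

print(intersection)   # {'a': 0.2, 'b': 0.4, 'c': 0.0}
print(union)          # {'a': 0.5, 'b': 0.7, 'c': 1.0}
# As noted above, the union of A with its complement need not be the whole universe:
print({x: max(mu_A[x], complement_A[x]) for x in mu_A})    # {'a': 0.8, 'b': 0.7, 'c': 1.0}
```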
Examples for fuzzy intersection/union pairs with standard negator can be derived from samples provided in the article about t-norms.
The fuzzy intersection is not idempotent in general, because the standard t-norm min is the only one which has this property. Indeed, if the arithmetic multiplication is used as the t-norm, the resulting fuzzy intersection operation is not idempotent. That is, iteratively taking the intersection of a fuzzy set with itself is not trivial. It instead defines the "m"-th power of a fuzzy set, which can be canonically generalized for non-integer exponents in the following way:
formula_73
The case of exponent two is special enough to be given a name.
formula_75
Taking formula_76, we have formula_77 and formula_78
formula_81
which means formula_82, e. g.:
formula_83
Another proposal for a set difference could be:
formula_84
formula_85
or by using a combination of just max, min, and standard negation, giving
formula_86
Axioms for definition of generalized symmetric differences analogous to those for t-norms, t-conorms, and negators have been proposed by Vemur et al. (2014) with predecessors by Alsina et al. (2005) and Bedregal et al. (2009).
Disjoint fuzzy sets.
In contrast to the general ambiguity of the intersection and union operations, the situation is unambiguous for disjoint fuzzy sets:
Two fuzzy sets formula_63 are disjoint iff
formula_87
which is equivalent to
formula_88 formula_89
and also equivalent to
formula_90
We keep in mind that min/max is a t/s-norm pair, and any other will work here as well.
Fuzzy sets are disjoint if and only if their supports are disjoint according to the standard definition for crisp sets.
For disjoint fuzzy sets formula_63 any intersection will give ∅, and any union will give the same result, which is denoted as
formula_91
with its membership function given by
formula_92
Note that only one of both summands is greater than zero.
For disjoint fuzzy sets formula_63 the following holds true:
formula_93
This can be generalized to finite families of fuzzy sets as follows:
Given a family formula_94 of fuzzy sets with index set "I" (e.g. "I" = {1,2,3...,"n"}). This family is (pairwise) disjoint iff
formula_95
A family of fuzzy sets formula_94 is disjoint, iff the family of underlying supports formula_96 is disjoint in the standard sense for families of crisp sets.
Independent of the t/s-norm pair, intersection of a disjoint family of fuzzy sets will give ∅ again, while the union has no ambiguity:
formula_97
with its membership function given by
formula_98
Again only one of the summands is greater than zero.
For disjoint families of fuzzy sets formula_94 the following holds true:
formula_99
Scalar cardinality.
For a fuzzy set formula_29 with finite support formula_50 (i.e. a "finite fuzzy set"), its cardinality (aka scalar cardinality or sigma-count) is given by
formula_100.
In the case that "U" itself is a finite set, the relative cardinality is given by
formula_101.
This can be generalized for the divisor to be a non-empty fuzzy set: For fuzzy sets formula_102 with "G" ≠ ∅, we can define the relative cardinality by:
formula_103,
which looks very similar to the expression for conditional probability.
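A sketch of these counts on a finite universe (illustrative only; the relative cardinality with respect to "G" is computed here with the min t-norm for the intersection, which is one common choice and may differ from the exact convention intended above):

```python
mu_A = {"a": 0.2, "b": 0.7, "c": 1.0, "d": 0.0}
mu_G = {"a": 1.0, "b": 0.5, "c": 0.5, "d": 1.0}

card_A = sum(mu_A.values())                              # sigma-count |A|
rel_card_U = card_A / len(mu_A)                          # relative to the crisp universe
card_AG = sum(min(mu_A[x], mu_G[x]) for x in mu_A)       # |A "and" G| with the min t-norm
rel_card_G = card_AG / sum(mu_G.values())                # analogous to a conditional probability

print(round(card_A, 3), round(rel_card_U, 3), round(rel_card_G, 3))   # 1.9 0.475 0.4
```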
Note:
Distance and similarity.
For any fuzzy set formula_29 the membership function formula_106 can be regarded as a family formula_107. The latter is a metric space with several metrics formula_108 known. A metric can be derived from a norm (vector norm) formula_109 via
formula_110.
For instance, if formula_1 is finite, i.e. formula_111, such a metric may be defined by:
formula_112 where formula_113 and formula_114 are sequences of real numbers between 0 and 1.
For infinite formula_1, the maximum can be replaced by a supremum.
Because fuzzy sets are unambiguously defined by their membership function, this metric can be used to measure distances between fuzzy sets on the same universe:
formula_115,
which becomes in the above sample:
formula_116.
Again, for infinite formula_1 the maximum must be replaced by a supremum. Other distances (like the canonical 2-norm) may diverge if infinite fuzzy sets are too different, e.g. formula_117 and formula_1.
Similarity measures (here denoted by formula_118) may then be derived from the distance, e.g. after a proposal by Koczy:
formula_119 if formula_120 is finite, and formula_121 otherwise,
or after Williams and Steele:
formula_122 if formula_120 is finite, and formula_121 otherwise,
where formula_123 is a steepness parameter and formula_124.
Another definition for interval valued (rather 'fuzzy') similarity measures formula_125 is provided by Beg and Ashraf as well.
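The following Python sketch computes the maximum-based distance and the two similarity measures quoted above for finite fuzzy sets; the steepness parameter alpha is set to 1 purely for illustration:

```python
import math

# Minimal sketch: Chebyshev-style distance between two finite fuzzy sets on the
# same universe, plus the two similarity measures quoted above (Koczy's and the
# exponential one attributed to Williams and Steele).

def distance(A, B, universe):
    return max(abs(A.get(x, 0.0) - B.get(x, 0.0)) for x in universe)

def similarity_koczy(A, B, universe):
    return 1.0 / (1.0 + distance(A, B, universe))

def similarity_exponential(A, B, universe, alpha=1.0):
    # alpha is a steepness parameter, chosen here only for illustration
    return math.exp(-alpha * distance(A, B, universe))

U = ["x1", "x2", "x3"]
A = {"x1": 1.0, "x2": 0.5}
B = {"x1": 0.75, "x2": 0.5, "x3": 0.25}
print(distance(A, B, U))                # 0.25
print(similarity_koczy(A, B, U))        # 0.8
print(similarity_exponential(A, B, U))  # exp(-0.25) ~ 0.7788
```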
"L"-fuzzy sets.
Sometimes, more general variants of the notion of fuzzy set are used, with membership functions taking values in a (fixed or variable) algebra or structure formula_126 of a given kind; usually it is required that formula_126 be at least a poset or lattice. These are usually called "L"-fuzzy sets, to distinguish them from those valued over the unit interval. The usual membership functions with values in [0, 1] are then called [0, 1]-valued membership functions. These kinds of generalizations were first considered in 1967 by Joseph Goguen, who was a student of Zadeh. A classical example is indicating truth and membership values by {f, t} instead of {0, 1}.
An extension of fuzzy sets has been provided by Atanassov. An intuitionistic fuzzy set (IFS) formula_29 is characterized by two functions:
1. formula_127 – degree of membership of "x"
2. formula_128 – degree of non-membership of "x"
with functions formula_129 with formula_130.
This resembles a situation like some person denoted by formula_7 voting on a proposal formula_29: voting for it (formula_131), voting against it (formula_132), or abstaining from voting (formula_133).
After all, we have a percentage of approvals, a percentage of denials, and a percentage of abstentions.
For this situation, special "intuitive fuzzy" negators, t- and s-norms can be defined. With formula_134 and by combining both functions to formula_135 this situation resembles a special kind of "L"-fuzzy sets.
Once more, this has been expanded by defining picture fuzzy sets (PFS) as follows: A PFS A is characterized by three functions mapping "U" to [0, 1]: formula_136, called the "degree of positive membership", "degree of neutral membership", and "degree of negative membership" respectively, with the additional condition formula_137
This expands the voting sample above by an additional possibility of "refusal of voting".
With formula_138 and special "picture fuzzy" negators, t- and s-norms this resembles just another type of "L"-fuzzy sets.
Neutrosophic fuzzy sets.
The concept of IFS has been extended into two major models. The two extensions of IFS are neutrosophic fuzzy sets and Pythagorean fuzzy sets.
Neutrosophic fuzzy sets were introduced by Smarandache in 1998. Like IFS, neutrosophic fuzzy sets have the previous two functions: one for membership formula_127 and another for non-membership formula_128. The major difference is that neutrosophic fuzzy sets have one more function, for the indeterminate value formula_139. This value indicates the degree of undecidedness with which the entity x belongs to the set. Having an indeterminate value formula_139 can be particularly useful when one cannot be very confident in the membership or non-membership values for item "x". In summary, neutrosophic fuzzy sets are associated with the following functions:
1. formula_127—degree of membership of "x"
2. formula_128—degree of non-membership of "x"
3. formula_139—degree of indeterminate value of "x"
Pythagorean fuzzy sets.
The other extension of IFS is what is known as Pythagorean fuzzy sets. Pythagorean fuzzy sets are more flexible than IFSs. IFSs are based on the constraint formula_140, which can be considered too restrictive in some situations. This is why Yager proposed the concept of Pythagorean fuzzy sets. Such sets satisfy the constraint formula_141, which is reminiscent of the Pythagorean theorem. Pythagorean fuzzy sets can therefore be applied to real-life problems in which the condition formula_140 is not valid, since the less restrictive condition formula_141 may still be satisfied.
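A minimal numerical illustration of the difference between the two constraints, with membership and non-membership values chosen purely for illustration:

```python
# Minimal sketch: a (membership, non-membership) pair can violate the
# intuitionistic constraint mu + nu <= 1 while still satisfying the Pythagorean
# constraint mu**2 + nu**2 <= 1. The numbers below are purely illustrative.

def is_valid_ifs(mu, nu):
    return mu + nu <= 1.0

def is_valid_pythagorean(mu, nu):
    return mu**2 + nu**2 <= 1.0

mu, nu = 0.8, 0.5
print(is_valid_ifs(mu, nu))          # False: 0.8 + 0.5 = 1.3 > 1
print(is_valid_pythagorean(mu, nu))  # True: 0.64 + 0.25 = 0.89 <= 1
```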
Fuzzy logic.
As an extension of the case of multi-valued logic, valuations (formula_142) of propositional variables (formula_143) into a set of membership degrees (formula_144) can be thought of as membership functions mapping predicates into fuzzy sets (or more formally, into an ordered set of fuzzy pairs, called a fuzzy relation). With these valuations, many-valued logic can be extended to allow for fuzzy premises from which graded conclusions may be drawn.
This extension is sometimes called "fuzzy logic in the narrow sense" as opposed to "fuzzy logic in the wider sense," which originated in the engineering fields of automated control and knowledge engineering, and which encompasses many topics involving fuzzy sets and "approximated reasoning."
Industrial applications of fuzzy sets in the context of "fuzzy logic in the wider sense" can be found at fuzzy logic.
Fuzzy number.
A fuzzy number is a fuzzy set that satisfies all the following conditions:
If these conditions are not satisfied, then A is not a fuzzy number. The core of this fuzzy number is a singleton; its location is:
formula_146
Fuzzy numbers can be likened to the funfair game "guess your weight," where someone guesses the contestant's weight, with closer guesses being more correct, and where the guesser "wins" if he or she guesses near enough to the contestant's weight, with the actual weight being completely correct (mapping to 1 by the membership function).
The kernel formula_147 of a fuzzy interval formula_29 is defined as the 'inner' part, without the 'outbound' parts where the membership value is constant ad infinitum. In other words, the kernel is the smallest subset of formula_148 outside of which formula_127 is constant.
However, there are other concepts of fuzzy numbers and intervals as some authors do not insist on convexity.
Fuzzy categories.
The use of set membership as a key component of category theory can be generalized to fuzzy sets. This approach, which began in 1968 shortly after the introduction of fuzzy set theory, led to the development of Goguen categories in the 21st century. In these categories, rather than two-valued set membership, more general intervals are used, which may be lattices as in "L"-fuzzy sets.
Fuzzy relation equation.
The fuzzy relation equation is an equation of the form "A" · "R" = "B", where "A" and "B" are fuzzy sets, "R" is a fuzzy relation, and "A" · "R" stands for the composition of "A" with "R".
Entropy.
A measure "d" of fuzziness for fuzzy sets of universe formula_1 should fulfill the following conditions for all formula_13:
formula_154
formula_155
which means that "B" is "crisper" than "A".
In this case formula_151 is called the entropy of the fuzzy set "A".
For finite formula_111 the entropy of a fuzzy set formula_29 is given by
formula_157,
formula_158
or just
formula_159
where formula_160 is Shannon's function (natural entropy function)
formula_161
and formula_162 is a constant depending on the measure unit and the logarithm base used (here we have used the natural base e).
The physical interpretation of "k" is the Boltzmann constant "k""B".
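A short Python sketch of the finite case follows, computing formula_157 directly as H(A) + H(¬A), with the constant k set to 1 purely for illustration:

```python
import math

# Minimal sketch of the fuzziness d(A) = H(A) + H(not A) of a finite fuzzy set,
# using Shannon's function S(m) = -m*ln(m) - (1-m)*ln(1-m). The constant k is
# set to 1 here purely for illustration.

def shannon(m):
    """S(m), with the limit values S(0) = S(1) = 0."""
    if m <= 0.0 or m >= 1.0:
        return 0.0
    return -m * math.log(m) - (1.0 - m) * math.log(1.0 - m)

def fuzziness(memberships, k=1.0):
    """d(A) = H(A) + H(not A) = k * sum of S(mu) over all points."""
    return k * sum(shannon(m) for m in memberships)

print(fuzziness([0.0, 1.0, 1.0]))   # 0.0 -- a crisp set has no fuzziness
print(fuzziness([0.5, 0.5]))        # 2*ln(2) ~ 1.386 -- maximal per element
print(fuzziness([0.1, 0.9, 0.5]))   # ~ 1.343
```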
Let formula_29 be a fuzzy set with a continuous membership function (fuzzy variable). Then
formula_163
and its entropy is
formula_164
Extensions.
There are many mathematical constructions similar to or more general than fuzzy sets. Since fuzzy sets were introduced in 1965, many new mathematical constructions and theories treating imprecision, inexactness, ambiguity, and uncertainty have been developed. Some of these constructions and theories are extensions of fuzzy set theory, while others try to mathematically model imprecision and uncertainty in a different way.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "(U, m)"
},
{
"math_id": 1,
"text": "U"
},
{
"math_id": 2,
"text": "m\\colon U \\rightarrow [0,1]"
},
{
"math_id": 3,
"text": "\\Omega"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "x\\in U,"
},
{
"math_id": 6,
"text": "m(x)"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "(U,m)"
},
{
"math_id": 9,
"text": "m = \\mu_A"
},
{
"math_id": 10,
"text": "A = (U, m)"
},
{
"math_id": 11,
"text": "U=\\{x_1,\\dots,x_n\\},"
},
{
"math_id": 12,
"text": "\\{m(x_1)/x_1,\\dots,m(x_n)/x_n\\}."
},
{
"math_id": 13,
"text": "x \\in U"
},
{
"math_id": 14,
"text": "m(x) = 0"
},
{
"math_id": 15,
"text": "m(x) = 1"
},
{
"math_id": 16,
"text": "0 < m(x) < 1"
},
{
"math_id": 17,
"text": "SF(U)"
},
{
"math_id": 18,
"text": "F(U)"
},
{
"math_id": 19,
"text": "A = (U,m)"
},
{
"math_id": 20,
"text": "\\alpha \\in [0,1]"
},
{
"math_id": 21,
"text": "A^{\\ge\\alpha} = A_\\alpha = \\{x \\in U \\mid m(x)\\ge\\alpha\\}"
},
{
"math_id": 22,
"text": "A^{>\\alpha} = A'_\\alpha = \\{x \\in U \\mid m(x)>\\alpha\\}"
},
{
"math_id": 23,
"text": "S(A) = \\operatorname{Supp}(A) = A^{>0} = \\{x \\in U \\mid m(x)>0\\}"
},
{
"math_id": 24,
"text": "C(A) = \\operatorname{Core}(A) = A^{=1} = \\{x \\in U \\mid m(x)=1\\}"
},
{
"math_id": 25,
"text": "\\operatorname{Kern}(A)"
},
{
"math_id": 26,
"text": "A = \\varnothing"
},
{
"math_id": 27,
"text": "\\forall"
},
{
"math_id": 28,
"text": " x \\in U: \\mu_A(x) = m(x) = 0"
},
{
"math_id": 29,
"text": "A"
},
{
"math_id": 30,
"text": "B"
},
{
"math_id": 31,
"text": "A = B"
},
{
"math_id": 32,
"text": "\\forall x \\in U: \\mu_A(x) = \\mu_B(x)"
},
{
"math_id": 33,
"text": "A \\subseteq B"
},
{
"math_id": 34,
"text": "\\forall x \\in U: \\mu_A(x) \\le \\mu_B(x)"
},
{
"math_id": 35,
"text": "\\mu_A(x) = 0.5"
},
{
"math_id": 36,
"text": "A^{=\\alpha} = \\{x \\in U \\mid \\mu_A(x) = \\alpha\\}"
},
{
"math_id": 37,
"text": "\\alpha\\in[0,1]"
},
{
"math_id": 38,
"text": "\\mu_A"
},
{
"math_id": 39,
"text": "\\Lambda_A = \\{\\alpha \\in [0,1] : A^{=\\alpha} \\ne \\varnothing\\} = \\{\\alpha \\in [0, 1] : {}"
},
{
"math_id": 40,
"text": "\\exist"
},
{
"math_id": 41,
"text": "x \\in U(\\mu_A(x) = \\alpha)\\} = \\mu_A(U)"
},
{
"math_id": 42,
"text": "\\operatorname{Hgt}(A) = \\sup \\{\\mu_A(x) \\mid x \\in U\\} = \\sup(\\mu_A(U))"
},
{
"math_id": 43,
"text": "\\sup"
},
{
"math_id": 44,
"text": "\\mu_A(U)"
},
{
"math_id": 45,
"text": "\\operatorname{Hgt}(A) = 1"
},
{
"math_id": 46,
"text": "\\tilde{A}"
},
{
"math_id": 47,
"text": "\\forall x \\in U: \\mu_{\\tilde{A}}(x) = \\mu_A(x)/\\operatorname{Hgt}(A)"
},
{
"math_id": 48,
"text": "(U \\subseteq \\mathbb{R})"
},
{
"math_id": 49,
"text": "\\operatorname{Width}(A) = \\sup(\\operatorname{Supp}(A)) - \\inf(\\operatorname{Supp}(A))"
},
{
"math_id": 50,
"text": "\\operatorname{Supp}(A)"
},
{
"math_id": 51,
"text": "\\operatorname{Width}(A) = \\max(\\operatorname{Supp}(A)) - \\min(\\operatorname{Supp}(A))"
},
{
"math_id": 52,
"text": "(U \\subseteq \\mathbb{R}^n)"
},
{
"math_id": 53,
"text": "A (U \\subseteq \\mathbb{R})"
},
{
"math_id": 54,
"text": "\\forall x,y \\in U, \\forall\\lambda\\in[0,1]: \\mu_A(\\lambda{x} + (1-\\lambda)y) \\ge \\min(\\mu_A(x),\\mu_A(y))"
},
{
"math_id": 55,
"text": "\\forall z \\in [x,y]: \\mu_A(z) \\ge \\min(\\mu_A(x),\\mu_A(y))"
},
{
"math_id": 56,
"text": "\\forall z \\in Z: \\mu_A(z) \\ge \\inf(\\mu_A(\\partial Z))"
},
{
"math_id": 57,
"text": "\\partial Z"
},
{
"math_id": 58,
"text": "f(X) = \\{f(x) \\mid x \\in X\\}"
},
{
"math_id": 59,
"text": "\\neg{A}"
},
{
"math_id": 60,
"text": "A^c"
},
{
"math_id": 61,
"text": "cA"
},
{
"math_id": 62,
"text": "\\forall x \\in U: \\mu_{\\neg{A}}(x) = 1 - \\mu_A(x)"
},
{
"math_id": 63,
"text": "A, B"
},
{
"math_id": 64,
"text": "A\\cap{B}"
},
{
"math_id": 65,
"text": "\\forall x \\in U: \\mu_{A\\cap{B}}(x) = t(\\mu_A(x),\\mu_B(x))"
},
{
"math_id": 66,
"text": "A\\cup{B}"
},
{
"math_id": 67,
"text": "\\forall x \\in U: \\mu_{A\\cup{B}}(x) = s(\\mu_A(x),\\mu_B(x))"
},
{
"math_id": 68,
"text": "\\forall x \\in U: \\mu_{A\\cup{B}}(x) = \\max(\\mu_A(x),\\mu_B(x))"
},
{
"math_id": 69,
"text": "\\mu_{A\\cap{B}}(x) = \\min(\\mu_A(x),\\mu_B(x))"
},
{
"math_id": 70,
"text": "n(\\alpha) = 1 - \\alpha, \\alpha \\in [0, 1]"
},
{
"math_id": 71,
"text": "\\forall x \\in U: \\mu_{\\neg{A}}(x) = n(\\mu_A(x))."
},
{
"math_id": 72,
"text": "\\nu \\in \\R^+"
},
{
"math_id": 73,
"text": "\\forall x \\in U: \\mu_{A^{\\nu}}(x) = \\mu_{A}(x)^{\\nu}."
},
{
"math_id": 74,
"text": "CON(A) = A^2"
},
{
"math_id": 75,
"text": "\\forall x \\in U: \\mu_{CON(A)}(x) = \\mu_{A^2}(x) = \\mu_{A}(x)^2."
},
{
"math_id": 76,
"text": "0^0 = 1"
},
{
"math_id": 77,
"text": "A^0 = U"
},
{
"math_id": 78,
"text": "A^1 = A."
},
{
"math_id": 79,
"text": "A \\setminus B"
},
{
"math_id": 80,
"text": " A - B"
},
{
"math_id": 81,
"text": "\\forall x \\in U: \\mu_{A\\setminus{B}}(x) = t(\\mu_A(x),n(\\mu_B(x))),"
},
{
"math_id": 82,
"text": "A \\setminus B = A \\cap \\neg{B}"
},
{
"math_id": 83,
"text": "\\forall x \\in U: \\mu_{A\\setminus{B}}(x) = \\min(\\mu_A(x),1 - \\mu_B(x))."
},
{
"math_id": 84,
"text": "\\forall x \\in U: \\mu_{A-{B}}(x) = \\mu_A(x) - t(\\mu_A(x),\\mu_B(x))."
},
{
"math_id": 85,
"text": "\\forall x \\in U: \\mu_{A \\triangle B}(x) = |\\mu_A(x) - \\mu_B(x)|,"
},
{
"math_id": 86,
"text": "\\forall x \\in U: \\mu_{A \\triangle B}(x) = \\max(\\min(\\mu_A(x), 1 - \\mu_B(x)), \\min(\\mu_B(x), 1 - \\mu_A(x)))."
},
{
"math_id": 87,
"text": "\\forall x \\in U: \\mu_A(x) = 0 \\lor \\mu_B(x) = 0"
},
{
"math_id": 88,
"text": "\\nexists"
},
{
"math_id": 89,
"text": "x \\in U: \\mu_A(x) > 0 \\land \\mu_B(x) > 0"
},
{
"math_id": 90,
"text": "\\forall x \\in U: \\min(\\mu_A(x),\\mu_B(x)) = 0"
},
{
"math_id": 91,
"text": "A \\,\\dot{\\cup}\\, B = A \\cup B"
},
{
"math_id": 92,
"text": "\\forall x \\in U: \\mu_{A \\dot{\\cup} B}(x) = \\mu_A(x) + \\mu_B(x)"
},
{
"math_id": 93,
"text": "\\operatorname{Supp}(A \\,\\dot{\\cup}\\, B) = \\operatorname{Supp}(A) \\cup \\operatorname{Supp}(B)"
},
{
"math_id": 94,
"text": "A = (A_i)_{i \\in I}"
},
{
"math_id": 95,
"text": "\\text{for all } x \\in U \\text{ there exists at most one } i \\in I \\text{ such that } \\mu_{A_i}(x) > 0."
},
{
"math_id": 96,
"text": "\\operatorname{Supp} \\circ A = (\\operatorname{Supp}(A_i))_{i \\in I}"
},
{
"math_id": 97,
"text": "\\dot{\\bigcup\\limits_{i \\in I}}\\, A_i = \\bigcup_{i \\in I} A_i"
},
{
"math_id": 98,
"text": "\\forall x \\in U: \\mu_{\\dot{\\bigcup\\limits_{i \\in I}} A_i}(x) = \\sum_{i \\in I} \\mu_{A_i}(x)"
},
{
"math_id": 99,
"text": "\\operatorname{Supp}\\left(\\dot{\\bigcup\\limits_{i \\in I}}\\, A_i\\right) = \\bigcup\\limits_{i \\in I} \\operatorname{Supp}(A_i)"
},
{
"math_id": 100,
"text": "\\operatorname{Card}(A) = \\operatorname{sc}(A) = |A| = \\sum_{x \\in U} \\mu_A(x)"
},
{
"math_id": 101,
"text": "\\operatorname{RelCard}(A) = \\|A\\| = \\operatorname{sc}(A)/|U| = |A|/|U|"
},
{
"math_id": 102,
"text": "A,G"
},
{
"math_id": 103,
"text": "\\operatorname{RelCard}(A,G) = \\operatorname{sc}(A|G) = \\operatorname{sc}(A\\cap{G})/\\operatorname{sc}(G)"
},
{
"math_id": 104,
"text": "\\operatorname{sc}(G) > 0"
},
{
"math_id": 105,
"text": "G = U"
},
{
"math_id": 106,
"text": "\\mu_A: U \\to [0,1]"
},
{
"math_id": 107,
"text": "\\mu_A = (\\mu_A(x))_{x \\in U} \\in [0,1]^U"
},
{
"math_id": 108,
"text": "d"
},
{
"math_id": 109,
"text": "\\|\\,\\|"
},
{
"math_id": 110,
"text": "d(\\alpha,\\beta) = \\| \\alpha - \\beta \\|"
},
{
"math_id": 111,
"text": "U = \\{x_1, x_2, ... x_n\\}"
},
{
"math_id": 112,
"text": "d(\\alpha,\\beta) := \\max \\{ |\\alpha(x_i) - \\beta(x_i)| : i=1, ..., n \\}"
},
{
"math_id": 113,
"text": "\\alpha"
},
{
"math_id": 114,
"text": "\\beta"
},
{
"math_id": 115,
"text": "d(A,B) := d(\\mu_A,\\mu_B)"
},
{
"math_id": 116,
"text": "d(A,B) = \\max \\{ |\\mu_A(x_i) - \\mu_B(x_i)| : i=1,...,n \\}"
},
{
"math_id": 117,
"text": "\\varnothing"
},
{
"math_id": 118,
"text": "S"
},
{
"math_id": 119,
"text": "S = 1 / (1 + d(A,B))"
},
{
"math_id": 120,
"text": "d(A,B)"
},
{
"math_id": 121,
"text": "0"
},
{
"math_id": 122,
"text": "S = \\exp(-\\alpha{d(A,B)})"
},
{
"math_id": 123,
"text": "\\alpha > 0"
},
{
"math_id": 124,
"text": "\\exp(x) = e^x"
},
{
"math_id": 125,
"text": "\\zeta"
},
{
"math_id": 126,
"text": "L"
},
{
"math_id": 127,
"text": "\\mu_A(x)"
},
{
"math_id": 128,
"text": "\\nu_A(x)"
},
{
"math_id": 129,
"text": "\\mu_A, \\nu_A: U \\to [0,1]"
},
{
"math_id": 130,
"text": "\\forall x \\in U: \\mu_A(x) + \\nu_A(x) \\le 1"
},
{
"math_id": 131,
"text": "\\mu_A(x)=1, \\nu_A(x)=0"
},
{
"math_id": 132,
"text": "\\mu_A(x)=0, \\nu_A(x)=1"
},
{
"math_id": 133,
"text": "\\mu_A(x)=\\nu_A(x)=0"
},
{
"math_id": 134,
"text": "D^* = \\{(\\alpha,\\beta) \\in [0, 1]^2 : \\alpha + \\beta = 1 \\}"
},
{
"math_id": 135,
"text": "(\\mu_A,\\nu_A): U \\to D^*"
},
{
"math_id": 136,
"text": "\\mu_A, \\eta_A, \\nu_A"
},
{
"math_id": 137,
"text": "\\forall x \\in U: \\mu_A(x) + \\eta_A(x) + \\nu_A(x) \\le 1"
},
{
"math_id": 138,
"text": "D^* = \\{(\\alpha,\\beta,\\gamma) \\in [0, 1]^3 : \\alpha + \\beta + \\gamma = 1 \\}"
},
{
"math_id": 139,
"text": "i_A(x)"
},
{
"math_id": 140,
"text": "\\mu_A(x) + \\nu_A(x) \\le 1"
},
{
"math_id": 141,
"text": "\\mu_A(x)^2 + \\nu_A(x)^2 \\le 1"
},
{
"math_id": 142,
"text": "\\mu : \\mathit{V}_o \\to \\mathit{W}"
},
{
"math_id": 143,
"text": "\\mathit{V}_o"
},
{
"math_id": 144,
"text": "\\mathit{W}"
},
{
"math_id": 145,
"text": "\\mu_{A}(x)"
},
{
"math_id": 146,
"text": " \\, C(A) = x^* : \\mu_A(x^*)=1"
},
{
"math_id": 147,
"text": "K(A) = \\operatorname{Kern}(A)"
},
{
"math_id": 148,
"text": "\\R"
},
{
"math_id": 149,
"text": "d(A) = 0"
},
{
"math_id": 150,
"text": "\\mu_A(x) \\in \\{0,\\,1\\}"
},
{
"math_id": 151,
"text": "d(A)"
},
{
"math_id": 152,
"text": "\\forall x \\in U: \\mu_A(x) = 0.5"
},
{
"math_id": 153,
"text": "\\mu_A \\leq \\mu_B \\iff"
},
{
"math_id": 154,
"text": "\\mu_A \\leq \\mu_B \\leq 0.5"
},
{
"math_id": 155,
"text": "\\mu_A \\geq \\mu_B \\geq 0.5"
},
{
"math_id": 156,
"text": "d(\\neg{A}) = d(A)"
},
{
"math_id": 157,
"text": "d(A) = H(A) + H(\\neg{A})"
},
{
"math_id": 158,
"text": "H(A) = -k \\sum_{i=1}^n \\mu_A(x_i) \\ln \\mu_A(x_i)"
},
{
"math_id": 159,
"text": "d(A) = -k \\sum_{i=1}^n S(\\mu_A(x_i))"
},
{
"math_id": 160,
"text": "S(x) = H_e(x)"
},
{
"math_id": 161,
"text": "S(\\alpha) = -\\alpha \\ln \\alpha - (1-\\alpha) \\ln (1-\\alpha),\\ \\alpha \\in [0,1]"
},
{
"math_id": 162,
"text": "k"
},
{
"math_id": 163,
"text": "H(A) = -k \\int_{- \\infty}^\\infty \\operatorname{Cr} \\lbrace A \\geq t \\rbrace \\ln \\operatorname{Cr} \\lbrace A \\geq t \\rbrace \\,dt"
},
{
"math_id": 164,
"text": "d(A) = -k \\int_{- \\infty}^\\infty S(\\operatorname{Cr} \\lbrace A \\geq t \\rbrace )\\,dt."
}
] |
https://en.wikipedia.org/wiki?curid=56601
|
5660340
|
Bathythermograph
|
Device to detect water temperature and pressure
The bathythermograph, or BT, also known as the Mechanical Bathythermograph, or MBT, is a device that holds a temperature sensor and a transducer to detect changes in water temperature versus depth down to a depth of approximately 285 meters (935 feet). Lowered by a small winch on the ship into the water, the BT records pressure and temperature changes on a coated glass slide as it is dropped nearly freely through the water. While the instrument is being dropped, the wire is payed out until it reaches a predetermined depth, then a brake is applied and the BT is drawn back to the surface. Because the pressure is a function of depth (see Pascal's law), temperature measurements can be correlated with the depth at which they are recorded.
History.
The true origins of the BT began in 1935 when Carl-Gustaf Rossby started experimenting. He then forwarded the development of the BT to his graduate student Athelstan Spilhaus, who then fully developed the BT in 1938 as a collaboration between MIT, Woods Hole Oceanographic Institution (WHOI), and the U.S. Navy. The device was modified during World War II to gather information on the varying temperature of the ocean for the U.S. Navy. Originally the slides were prepared "by rubbing a bit of skunk oil on with a finger and then wiping off with the soft side of one's hand," followed by smoking the slide over the flame of a Bunsen burner. Later on the skunk oil was replaced with an evaporated metal film.
Since water temperature may vary by layer and may affect sonar by producing inaccurate location results, bathothermographs (U.S. World War II spelling) were installed on the outer hulls of U.S. submarines during World War II.
By monitoring variances, or lack of variances, in underwater temperature or pressure layers, while submerged, the submarine commander could adjust and compensate for temperature layers that could affect sonar accuracy. This was especially important when firing torpedoes at a target based strictly on a sonar fix.
More importantly, when the submarine was under attack by a surface vessel using sonar, the information from the bathothermograph allowed the submarine commander to seek thermoclines, which are colder layers of water, that would distort the pinging from the surface vessel's sonar, allowing the submarine under attack to "disguise" its actual position and to escape depth charge damage and eventually to escape from the surface vessel.
Throughout the use of the bathythermograph various technicians, watchstanders, and oceanographers noted how dangerous the deployment and retrieval of the BT was. According to watchstander Edward S. Barr: "… In any kind of rough weather, this BT position was frequently subject to waves making a clean sweep of the deck. In spite of breaking waves over the side, the operator had to hold his station, because the equipment was already over the side. One couldn't run for shelter as the brake and hoisting power were combined in a single hand lever. To let go of this lever would cause all the wire on the winch to unwind, sending the recording device and all its cable to the ocean bottom forever. It was not at all uncommon, from the protective position of the laboratory door, to look back and see your watchmate at the BT winch completely disappear from sight as a wave would come crashing over the side. … We also took turns taking BT readings. It wasn't fair for only one person to get wet consistently."
Expendable bathythermograph.
After witnessing firsthand the dangers of deploying and retrieving BTs, James M. Snodgrass began developing the expendable bathythermograph (XBT). Snodgrass' description of the XBT:Briefly, the unit would break down in two components, as follows: the ship to surface unit, and surface to expendable unit. I have in mind a package which could be jettisoned, either by the "Armstrong" method, or some simple mechanical device, which would at all times be connected to the surface vessel. The wire would be paid out from the surface ship and not from the surface float unit. The surface float would require a minimum of flotation and a small, very simple sea anchor. From this simple platform the expendable BT unit would sink as outlined for the acoustic unit. However, it would unwind as it goes a very fine thread of probably neutrally buoyant conductor terminating at the float unit, thence connected to the wire leading to the ship. In the early 1960s the U.S. Navy contracted Sippican Corporation of Marion, Massachusetts to develop the XBT, who became the sole supplier.
The unit is composed of a probe, a wire link, and a shipboard canister. Inside the probe is a thermistor which is connected electronically to a chart recorder. The probe falls freely at 20 feet per second; this known fall rate determines its depth and provides a temperature-depth trace on the recorder. A pair of fine copper wires, which pay out from both a spool retained on the ship and one dropped with the instrument, provides a data transfer line to the ship for shipboard recording. Eventually, the wire runs out and breaks, and the XBT sinks to the ocean floor. Since the deployment of an XBT does not require the ship to slow down or otherwise interfere with normal operations, XBTs are often deployed from "vessels of opportunity", such as cargo ships or ferries, and also by dedicated research ships conducting underway operations when a CTD cast would require stopping the ship for several hours. Airborne versions (AXBT) are also used; these use radio frequencies to transmit the data to the aircraft during deployment. To date, Lockheed Martin Sippican has manufactured over 5 million XBTs.
Types of XBTs.
Source:
Participation by Month of Country and Institutions deploying XBTs.
Below is the list of XBT deployments for 2013:
XBT Fall Rate Bias.
Since XBTs do not measure depth (e.g. via pressure), fall-rate equations are used to derive depth profiles from what is essentially a time series. The fall rate equation takes the form:
formula_0
where z(t) is the depth of the XBT in meters, t is time, and a and b are coefficients determined using theoretical and empirical methods. The coefficient b can be thought of as the initial speed as the probe hits the water. The coefficient a can be thought of as the reduction in mass with time as the wire spools off.
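A minimal sketch of the depth conversion follows; the coefficient values are placeholders of roughly the right magnitude (a few metres per second for b and a small negative value for a), not values taken from any particular probe's published fall-rate equation:

```python
# Minimal sketch converting elapsed time to depth with the quadratic fall-rate
# equation z(t) = a*t**2 + b*t given above. The coefficients below are purely
# illustrative placeholders; real probes use coefficients published by the
# manufacturer or by later recalibration studies.

def xbt_depth(t_seconds, a=-0.00216, b=6.472):
    """Depth in metres after t seconds of free fall (illustrative coefficients)."""
    return a * t_seconds**2 + b * t_seconds

for t in (10, 60, 120):
    print(t, round(xbt_depth(t), 1))
# 10 s  -> ~64.5 m
# 60 s  -> ~380.5 m
# 120 s -> ~745.5 m
```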
For a considerable time, these equations were relatively well established; however, in 2007 Gouretski and Koltermann showed a bias between XBT temperature measurements and CTD temperature measurements. They also showed that this bias varies over time and could be due to errors both in the calculation of depth and in the measurement of temperature. Following that, the 2008 NOAA XBT Fall Rate Workshop began to address the problem, though it reached no viable conclusion as to how to proceed with adjusting the measurements. In 2010 the second XBT Fall Rate Workshop was held in Hamburg, Germany to continue discussing the problem and forge a way forward.
A major implication of this is that a depth-temperature profile can be integrated to estimate upper ocean heat content; the bias in these equations leads to a warm bias in the heat content estimations. The introduction of Argo floats has provided a much more reliable source of temperature profiles than XBTs; however, the XBT record remains important for estimating decadal trends and variability, and hence much effort has been put into resolving these systematic biases.
XBT correction needs to include both a drop-rate correction and a temperature correction.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "z(t)=at^2+bt"
}
] |
https://en.wikipedia.org/wiki?curid=5660340
|
5660713
|
Signed distance function
|
Distance from a point to the boundary of a set
In mathematics and its applications, the signed distance function or signed distance field (SDF) is the orthogonal distance of a given point "x" to the boundary of a set Ω in a metric space (such as the surface of a geometric shape), with the sign determined by whether or not "x" is in the interior of Ω. The function has positive values at points "x" inside Ω, it decreases in value as "x" approaches the boundary of Ω where the signed distance function is zero, and it takes negative values outside of Ω. However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside). The concept also sometimes goes by the name oriented distance function/field.
Definition.
Let Ω be a subset of a metric space "X" with metric "d", and formula_0 be its boundary. The distance between a point x of "X" and the subset formula_0 of X is defined as usual as
formula_1
where formula_2 denotes the infimum.
The "signed distance function" from a point x of X to formula_3 is defined by
formula_4
Properties in Euclidean space.
If Ω is a subset of the Euclidean space R"n" with piecewise smooth boundary, then the signed distance function is differentiable almost everywhere, and its gradient satisfies the eikonal equation
formula_5
If the boundary of Ω is "C""k" for "k" ≥ 2 (see Differentiability classes) then "f" is "C""k" on points sufficiently close to the boundary of Ω. In particular, on the boundary "f" satisfies
formula_6
where "N" is the inward normal vector field. The signed distance function is thus a differentiable extension of the normal vector field. In particular, the Hessian of the signed distance function on the boundary of Ω gives the Weingarten map.
If, further, Γ is a region sufficiently close to the boundary of Ω that "f" is twice continuously differentiable on it, then there is an explicit formula involving the Weingarten map "W""x" for the Jacobian of changing variables in terms of the signed distance function and nearest boundary point. Specifically, if "T"("∂"Ω, "μ") is the set of points within distance "μ" of the boundary of Ω (i.e. the tubular neighbourhood of radius "μ"), and "g" is an absolutely integrable function on Γ, then
formula_7
where det denotes the determinant and "dS""u" indicates that we are taking the surface integral.
Algorithms.
Algorithms for calculating the signed distance function include the efficient fast marching method, fast sweeping method and the more general level-set method.
For voxel rendering, a fast algorithm for calculating the SDF in taxicab geometry uses summed-area tables.
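As a small illustration of the sign convention and the eikonal property, the following Python/NumPy sketch evaluates the exact signed distance field of a disc on a regular grid by brute force; the marching, sweeping, and level-set methods above are what would be used for general shapes:

```python
import numpy as np

# Minimal sketch: the signed distance field of a disc of radius r sampled on a
# regular grid, using the "positive inside" convention from the definition above.
# This closed-form case is only meant to show what such a field looks like.

def signed_distance_disc(nx, ny, center, radius):
    ys, xs = np.mgrid[0:ny, 0:nx]
    dist_to_center = np.hypot(xs - center[0], ys - center[1])
    # distance to the boundary circle, signed positive inside the disc
    return radius - dist_to_center

sdf = signed_distance_disc(64, 64, center=(32, 32), radius=20)
print(sdf[32, 32])   # 20.0  (deep inside: +radius)
print(sdf[32, 12])   # 0.0   (exactly on the boundary)
print(sdf[0, 0])     # negative (outside the disc)

# The eikonal property |grad f| = 1 holds away from the center:
gy, gx = np.gradient(sdf)
print(np.allclose(np.hypot(gx, gy)[40:50, 40:50], 1.0, atol=1e-2))  # True
```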
Applications.
Signed distance functions are applied, for example, in real-time rendering, for instance the method of SDF ray marching, and computer vision.
SDF has been used to describe object geometry in real-time rendering, usually in a raymarching context, starting in the mid 2000s. By 2007, Valve was using SDFs to render large pixel-size (or high DPI) smooth fonts with GPU acceleration in its games. Valve's method is not perfect as it runs in raster space in order to avoid the computational complexity of solving the problem in the (continuous) vector space. The rendered text often loses sharp corners. In 2014, an improved method was presented by Behdad Esfahbod. Behdad's GLyphy approximates the font's Bézier curves with arc splines, accelerated by grid-based discretization techniques (which cull too-far-away points) to run in real time.
A modified version of SDF was introduced as a loss function to minimise the error in interpenetration of pixels while rendering multiple objects. In particular, for any pixel that does not belong to an object, if it lies outside the object in rendition, no penalty is imposed; if it does, a positive value proportional to its distance inside the object is imposed.
formula_8
In 2020, the FOSS game engine Godot 4.0 received SDF-based real-time global illumination (SDFGI), which became a compromise between more realistic voxel-based GI and baked GI. Its core advantage is that it can be applied to infinite space, which allows developers to use it for open-world games.
In 2023, a "GPUI" UI framework was released to draw all UI elements using the GPU, many parts using SDF. The author claims to have produced a "Zed" code editor that renders at 120 fps. The work makes use of Inigo Quilez's list of geometric primitives in SDF, Evan Wallace (co-founder of Figma)'s approximated gaussian blur in SDF, and a new rounded rectangle SDF.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\partial\\Omega"
},
{
"math_id": 1,
"text": " d(x, \\partial \\Omega) = \\inf_{y \\in \\partial \\Omega}d(x, y),"
},
{
"math_id": 2,
"text": "\\inf"
},
{
"math_id": 3,
"text": "\\Omega"
},
{
"math_id": 4,
"text": "f(x) = \\begin{cases}\n d(x, \\partial \\Omega) & \\text{if } x \\in \\Omega \\\\\n -d(x, \\partial \\Omega) & \\text{if }\\, x \\notin \\Omega.\n\\end{cases}"
},
{
"math_id": 5,
"text": "|\\nabla f|=1."
},
{
"math_id": 6,
"text": "\\nabla f(x) = N(x),"
},
{
"math_id": 7,
"text": "\\int_{T(\\partial\\Omega,\\mu)} g(x)\\,dx = \\int_{\\partial\\Omega}\\int_{-\\mu}^\\mu g(u+\\lambda N(u))\\, \\det(I-\\lambda W_u) \\,d\\lambda \\,dS_u,"
},
{
"math_id": 8,
"text": "f(x) = \\begin{cases}\n0 & \\text{if }\\, x \\in \\Omega^c\\\\\nd(x, \\partial\\Omega) & \\text{if }\\, x \\in \\Omega\n\\end{cases}"
}
] |
https://en.wikipedia.org/wiki?curid=5660713
|
56607693
|
Induced seismicity in Canada
|
With the development of both conventional and unconventional resources in Canada, induced seismicity caused by anthropological activities has been observed, documented, and studied.
Induced events are generally smaller in magnitude than the most ‘important’ earthquakes documented by Natural Resources Canada. The largest natural earthquakes are generally located in the coastal regions of the country. The majority of large, natural seismic events in Western Canada are located near the Cascadia and Juan de Fuca Subduction Zones. The majority of large, natural seismic events in Eastern Canada are localized to distinct seismic zones like the Charlevoix-Kamouraska region. The only ‘important’ interior earthquakes, as classified by Natural Resources Canada, are the two strike-slip fault failures of local magnitude (ML) 6.6 and 6.9 observed in the Nahanni region of the Northwest Territories. Induced earthquakes, however, tend to occur in a 150-km-wide band east of the Canadian Rocky Mountains where the tectonic strain rate is relatively high.
Induced seismicity in Canada is mainly related to hydraulic fracturing and wastewater disposal. Within the central Western Canadian Sedimentary Basin (WCSB), evidence shows that geological factors likely influence the nature of induced seismicity related to hydraulic fracturing operations, as they share two characterizations in terms of spatial distributions:
1. Pre-existing basement-controlled faults are more likely to be triggered based on the focal depth analysis of earthquake clusters;
2. The lateral distributions of earthquake clusters are significantly correlated with the margins of the fossil reef structures in the Swan Hills Formation.
These phenomena can be explained by the regional- and local-scale geological evolution of this area. First, basement tectonics play a role in reef growth which were nucleated on elevated structures during the Devonian period. Second, the generation of dolomitized strata requires deep-seated faults to transport Mg-enriched fluid, which can later provide transport conduits for injected fluids, thereby creating hydraulic connections with the reservoir. Dolomitization of Devonian strata increased the formation permeability and generated greater fluid diffusivity, thus more induced seismicity occurred.
A total of 216 induced earthquakes occurred between 2009 and 2011 at the Etsho and Kiwigana fields in Horn River, Canada. Of those, 19 were between magnitudes (ML) 2 and 3, and the largest felt event reached ML 3.8. Seismicity was temporally correlated to pumping fluids during hydraulic fracture treatments, with earthquakes starting several hours after the onset of pumping. Since at least 2009, the Horn River Basin has been the site of induced seismicity associated with oil and gas activities. The BC Oil and Gas Commission states that over 8,000 hydraulic fracturing completions have had no associated anomalous seismicity in this region between 2009 and 2011.
Studies on induced seismicity have been ongoing since the 1970s. Focus on the cause of induced seismicity has shifted from activities related to conventional resources like mining to unconventional resource exploration and production. Barriers to understanding induced seismicity processes include lack of access to subsurface hydrogeological and geomechanical data, insufficient stress state data, and limited records of seismicity at the nucleation process scale.
Mechanism.
The mechanism of induced seismicity can be categorized based on different causes. It is widely accepted that impoundment of reservoirs, mining, and oil and gas exploration and production, including injecting fluids into the subsurface and extracting oil and gas from underground, are related to induced seismic events.
Industrial operations that create tremor movement of the ground, such as mining and seismic data collection, are often a cause of induced seismicity. In the vicinity of deep mining activities, seismicity is often related to rockbursts - the violent failure of rock due to excavation. The magnitudes of these seismic events, however, depend on the local geological setting, such as the rock properties, faulting system, and regional stresses. For geophysical exploration activities, seismic waves are usually generated by man-made explosions to help geophysicists understand the underground formations and structures. These explosions are generally located far from densely populated areas.
Induced seismicity related to fluid injection is generally triggered by two basic mechanisms: pore pressure perturbation via direct hydrologic connections and/or a change in the total stress on pre-existing faults through poroelastic transmission. A fault will be activated once the shear stress on the fault plane formula_0 reaches a critical value formula_1:
formula_2
where formula_3 is the cohesive strength, formula_4 is the normal stress on the fault plane, formula_5 is the pore pressure and formula_6 is the coefficient of friction. This can also be explained by the Mohr-Coulomb failure criterion.
If the fault is hydraulically connected to the permeable reservoir, the fluid pressure formula_7 will increase due to fluid injection, leading to a decrease in the effective normal stress: formula_8; thus, the critical shear stress formula_1 will be reduced, resulting in less frictional resistance to shear slip.
For remote faults without a direct hydrologic connection to the permeable reservoir, reactivation can be triggered by total stress changes resulting from poro-thermoelastic effects. Based on the mass balance principle, the volume of the reservoir should be enlarged after injection, which can alter the stress field outside of the reservoir. By changing the total stress (e.g., increasing formula_9 and/or reducing formula_10), faults, especially near-critically stressed or optimally oriented faults, can be reactivated.
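A minimal numerical sketch of the Coulomb criterion above, showing how raising the pore pressure lowers the critical shear stress past the resolved shear stress on a fault; all stress values and the friction angle are illustrative placeholders, not data from any Canadian field:

```python
import math

# Minimal sketch of the Coulomb failure criterion quoted above: a fault slips
# when the resolved shear stress reaches tau_c = c + (sigma_n - P_p) * tan(phi).
# All numbers below (stresses in MPa, friction angle in degrees) are illustrative.

def critical_shear_stress(cohesion, sigma_n, pore_pressure, friction_angle_deg):
    return cohesion + (sigma_n - pore_pressure) * math.tan(math.radians(friction_angle_deg))

sigma_n = 60.0   # total normal stress on the fault plane, MPa
tau = 24.0       # resolved shear stress on the fault plane, MPa
cohesion = 2.0   # MPa
phi = 30.0       # friction angle, degrees (tan(phi) ~ 0.58)

for p_pore in (20.0, 35.0):   # before and after injection raises pore pressure
    tau_c = critical_shear_stress(cohesion, sigma_n, p_pore, phi)
    print(f"P_p = {p_pore} MPa -> tau_c = {tau_c:.1f} MPa, slip: {tau >= tau_c}")
# P_p = 20.0 MPa -> tau_c = 25.1 MPa, slip: False
# P_p = 35.0 MPa -> tau_c = 16.4 MPa, slip: True
```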
History of induced seismicity in Canada.
Induced seismicity has been documented in Canada since 1970. Many publications differentiate these events from natural earthquakes and attribute them to specific industrial activities using the Davis and Frohlich criteria. In Canada, induced seismic events have been attributed to fluid extraction, fluid injection and wastewater disposal, mining, hydraulic fracturing, and reservoir impoundment. Carbon capture and sequestration (CCS) is considered to be a risk factor for induced seismicity, but no seismic events of minor magnitude or larger (defined as a moment magnitude [MW] of 2.0 or above) have been observed to date.
Fluid extraction.
Several cases of induced seismicity attributed to fluid extraction have been documented in Western Canada. An earthquake swarm with two events exceeding Mw 4 was attributed to gas extraction in the Strachan field near Rocky Mountain House, Alberta in the 1970s. A series of earthquakes was also observed between 1984 and 1994 near Fort St. John, British Columbia and attributed to gas extraction and secondary recovery (the injection of fluid to maintain pressure during hydrocarbon recovery). Events were observed within Permian strata up to ML 4.3.
Fluid injection.
The first documented case of induced seismicity occurred on March 8, 1970 within the Snipe Lake oil field, northwest of Edmonton, Alberta, resulting in a shallow ML 5.1 event. The induced events (ML 2.0) documented in Cold Lake, Alberta by Nicolson and Wessen were also attributed to secondary recovery or waste disposal. A series of earthquakes that occurred between 1994 and 2012 in the Cordel Field in Alberta was attributed to wastewater injection. The series resulted in two ML 4.0 events on March 31, 1997 and July 2, 2001. The British Columbia Oil and Gas Commission (BCOGC) documented several incidents of wastewater-disposal-induced seismicity in the Pintail and Graham areas of British Columbia. The Pintail events occurred between 2013 and 2015 and resulted in events up to ML 3.1. The Graham events occurred between 2003 and 2015 and resulted in events up to ML 4.0. Secondary oil recovery also triggered seismicity in Cambrian strata near Gobles, Ontario in the 1980s, with several events exceeding ML 3.0.
Mining.
Separating natural earthquakes from mining related seismicity is difficult since most mines are also located in seismically active regions of Canada and the routine blasting at the mines is registered in Canadian seismic catalogues. For example, the Western Canadian Composite Seismicity Catalogue documented 3,898 earthquakes above moment magnitude 2 that were attributed to blasting as of July 2017.
Induced seismicity up to body wave magnitude (Mb) 5.0 has been documented at Devonian Potash mines in Saskatchewan and at Cretaceous coal mines in northern Ontario. One such sequence of earthquakes occurred in a potash mine near Saskatoon between 1979 and 1980 and another sequence occurred between 2005 and 2015, culminating in a Mb 4.0 event near Yorkton, Saskatchewan in January 2015.
Seven incidents of mining induced seismicity were documented in Ontario Canada between 2006 and 2009, with magnitudes ranging from ML 2.4 to ML 4.1. Documents show that a large series of mining related events occurred at the Strathcona nickel mine near Sudbury, Ontario, reaching ML 2.7 in 1988. Numerous seismic events with magnitudes up to Mb 4.1 from 2004 through 2009 in Ontario were also studied and believed to be related to nickel, gold, and copper mines near Sudbury.
Hydraulic fracturing.
Induced seismicity has recently been attributed to hydraulic fracturing in Western Canada. A local magnitude 4.3 was attributed to hydraulic fracturing operations near Fox Creek, Alberta on June 13, 2015. The event represented the first 'red light' event under the Alberta Energy Regulator Subsurface Order 2 and resulted in a 16-day suspension of operations. This event corresponded to sequences of seismicity that were also linked to hydraulic fracturing near the 'red light' event before and after its occurrence.
Numerous induced events, up to Mw 3.8, were recorded near Fort Nelson, BC between April 2009 and December 2011. The events were attributed to hydraulic fracturing in Devonian strata. Another series of large seismic events up to Mw 4.2 occurred in Triassic strata near Fort St. John between 2013 and 2014. Another large event (Mw 4.6) occurred near Fort St. John on August 15, 2015.
A related injection activity is the Weyburn carbon capture and storage (CCS) project, where microseismic monitoring and analysis indicated that the likelihood of a felt event was low, with low rates of seismicity and observed events with Mw under 0.
Reservoir impoundment.
Induced seismicity has also been identified as a result of reservoir impoundment. The first documented event swarm, culminating in an Mb 4.1 event on October 23, 1975, was documented at the Manicouagan 3 reservoir in Quebec. Another induced event, ML 4.1, occurred near Mica, British Columbia on January 5, 1974 due to reservoir impoundment. An extensive swarm of earthquakes was observed during the filling of the LG3 dam in Quebec between 1981 and 1984, culminating in an Mb 3.0 event.
Distinguishing induced seismicity from natural seismicity.
The approach to discriminating induced seismicity from natural earthquakes is often based on source parameters, physics-based probabilistic models and statistics-based models. In terms of the source-parameter approach, natural earthquakes are expected to show more double-couple characteristics, where the volumetric change can be negligible. Region-specific attributes can also be an important factor in identifying induced seismicity. In the Canadian Shield, for instance, induced events have significantly shallower depths than natural events. Induced seismic events often cluster near operations and occur over a short period of time, while natural earthquakes are more sporadic both in time and location. Statistical analyses have also shown that clusters of seismicity related to injection move away from the injection wells over time.
Even though industrial activities can trigger seismic events, it is still intrinsically difficult to distinguish induced seismicity from natural earthquakes. Difficulties include the observed time lag and displacement between the injection and the detected seismic events, as well as the complexity of the geological setting. For instance, it took approximately 80 minutes between the onset of pumping and evidence of fault reactivation in gas wells in Western Canada, whereas some of the felt seismicity documented in the Horn River Basin occurred several hours after pumping started.
Regulations.
Protocols regarding induced seismicity have been developed with respect to different causes, from hydraulic fracturing to geothermal operations. The main purpose of these regulations is to prevent and minimize potential damage from induced seismicity. In Alberta, different subsurface orders are issued under Section 11.104 of the Oil and Gas Conservation Rules. For example, the Alberta Energy Regulator (AER) issued Subsurface Order No. 2 for the Duvernay Formation. In Fox Creek, the two thresholds are ML 2 and ML 4; if a seismic event has a magnitude larger than ML 4, an immediate shut-down is required. In Red Deer, because the operations are closer to residents, the two thresholds are ML 1 and ML 3. In consideration of dam integrity, Subsurface Order No. 6 applies around the Brazeau Reservoir, with thresholds of ML 1 and ML 2.5. The BC Oil and Gas Commission has a similar protocol, whereby operations have to stop if the monitored seismicity is larger than ML 4 within 3 km of the operation site.
Besides the regulations enforced by the government, different operators have their own 'traffic light systems' and specific mitigations to prevent inducing large earthquakes. Two surveys conducted by the Canadian Society of Exploration Geophysicists (CSEG) show that some companies are being proactive in evaluating health, safety, environment, and public relations, and in implementing on-site monitoring systems for induced seismicity. The Canadian Association of Petroleum Producers (CAPP) also publishes guidebooks based on industry practices to help operators better manage the risk of induced seismicity.
Induced seismicity related to hydraulic fracturing.
In British Columbia.
The majority of induced seismicity in BC has been attributed to hydraulic fracturing. In northern British Columbia, 272 seismic events related to hydraulic fracturing, ranging between ML 1.0 and ML 3.8, were reported from April 2009 to December 2011. No injuries or property damage resulted from the earthquakes, and only one of these events was felt at the ground surface. The Government of BC is convening a scientific panel aiming to answer inquiries about how regulations affect industrial activities and how these regulations affect First Nations. More monitors have been installed by the BC Oil and Gas Commission (BCOGC) to implement a better monitoring system for induced seismicity. Aside from earthquakes, potential impacts like water and air pollution also pose concerns to the public.
In Alberta.
In Alberta, most of the largest earthquakes attributed to hydraulic fracturing have been observed in the Devonian strata of the Duvernay and Horn River Basin. An ML 4.44 event was triggered on August 4, 2014 in the Horn River Basin. An ML 4.36 event was triggered on January 23, 2015 in the Kaybob area of the Duvernay. Reef complexes control the geology of both formations, and significant subvertical faults are present in both.
An energy company halted its operations while the Alberta Energy Regulator (AER) worked on a review of fracking, after a Mw 4.8 earthquake was registered about 35 km west of Fox Creek. No damage or injury was reported from the earthquake, but it was felt and caused noticeable ground shaking. A site survey was conducted by the AER, and operations were not restarted until mitigation plans were approved by the regulator.
Seismic monitoring and reporting requirements for hydraulic fracturing operators have been implemented in the Fox Creek area by the AER for safety and environmental purposes. Seismic events of a registered magnitude of 4.0 or greater are reported online by the AER, along with a real-time seismic event map. There are more than 40 seismic monitoring stations in Alberta.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "\\tau_c\n"
},
{
"math_id": 2,
"text": "\\tau_c=c+(\\sigma_n-P_p)\\tan\\phi"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "\\tau_c"
},
{
"math_id": 5,
"text": "P_p"
},
{
"math_id": 6,
"text": "\\tan\\phi"
},
{
"math_id": 7,
"text": "P_p\n"
},
{
"math_id": 8,
"text": "\\sigma_n'=\\sigma_n-P_p"
},
{
"math_id": 9,
"text": "\\tau\n"
},
{
"math_id": 10,
"text": "\\sigma_n\n\n"
}
] |
https://en.wikipedia.org/wiki?curid=56607693
|
56608952
|
Algebraic representation
|
Group representation via algebra automorphisms
In mathematics, an algebraic representation of a group "G" on a "k"-algebra "A" is a linear representation formula_0 such that, for each "g" in "G", formula_1 is an algebra automorphism. Equipped with such a representation, the algebra "A" is then called a "G"-algebra.
For example, if "V" is a linear representation of a group "G", then the representation put on the tensor algebra formula_2 is an algebraic representation of "G".
If "A" is a commutative "G"-algebra, then formula_3 is an affine "G"-scheme.
|
[
{
"math_id": 0,
"text": "\\pi: G \\to GL(A)"
},
{
"math_id": 1,
"text": "\\pi(g)"
},
{
"math_id": 2,
"text": "T(A)"
},
{
"math_id": 3,
"text": "\\operatorname{Spec}(A)"
}
] |
https://en.wikipedia.org/wiki?curid=56608952
|
56608953
|
Floristic Quality Assessment
|
Ecological integrity assessment
Floristic Quality Assessment (FQA) is a tool used in the United States to assess an area's ecological integrity based on its plant species composition. Floristic Quality Assessment was originally developed in order to assess the likelihood that impacts to an area "would be irreversible or irretrievable...to make standard comparisons among various open land areas, to set conservation priorities, and to monitor site management or restoration efforts." The concept was developed by Gerould Wilhelm in the 1970s in a report on the natural lands of Kane County, Illinois. In 1979 Wilhelm and Floyd Swink codified this "scoring system"
for the 22-county Chicago Region.
Coefficient of conservatism.
Each plant species in a region is assigned a coefficient of conservatism, also known as a C-value, ranging between 0 and 10. A plant species with a higher score (e.g. 10) has a "lower" tolerance to environmental degradation such as overgrazing or development and therefore is naturally restricted to undisturbed, remnant habitats. Non-native plants are either assigned a C-value of 0 or are excluded from assessments. In the Chicago Region, 84% of the native plant species have a C-value of 4 or greater. Plants with a C-value of 4 or greater rarely naturally move from a remnant area to surrounding degraded land. For example, the federally endangered "Dalea foliosa" has a C-value of 10.
C-values are assigned within specific ecological and geographic regions by botanical experts familiar with the species' autecology within the respective regions. As of February 2018, there were more than 50 different FQA databases ranging from the Gulf Coastal Plain to western Washington, though most databases represented regions in the eastern and central United States and Canada.
The mean C-value (formula_0) is calculated based on an inventory of plants. An area with a native mean C-value of 3.5 or higher likely has "sufficient floristic quality to be of at least marginal natural area quality." Remnant natural areas with mean C-values of 4.0 or greater are unmitigable.
Floristic Quality Index.
The Floristic Quality Index (FQI, or Rating Index according to Swink and Wilhelm) is calculated by multiplying the mean C value by the square root of the total number of species:
formula_1
For example, the FQI for Nelson Lake Marsh was 78 in 1994 and that for Russell R. Kirt Prairie was about 30 in 1999.
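A minimal sketch of the calculation, using an invented species inventory (only the C-value of 10 for "Dalea foliosa" comes from the text above; the remaining C-values are hypothetical):

```python
import math

# Minimal sketch of the mean coefficient of conservatism and the Floristic
# Quality Index (mean C times the square root of the number of species).
# The inventory below is invented for illustration only.

def mean_c(c_values):
    return sum(c_values) / len(c_values)

def fqi(c_values):
    return mean_c(c_values) * math.sqrt(len(c_values))

inventory = {
    "Dalea foliosa": 10,       # conservative, remnant-dependent species (C = 10 per the text)
    "Andropogon gerardii": 5,  # hypothetical mid-range C-value
    "Solidago canadensis": 1,  # hypothetical low C-value
    "introduced weed": 0,      # non-native species scored 0
}
c_values = list(inventory.values())
print(mean_c(c_values))  # 4.0
print(fqi(c_values))     # 4.0 * sqrt(4) = 8.0
```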
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\bar{C}"
},
{
"math_id": 1,
"text": " \\bar{C}\\sqrt{n}"
}
] |
https://en.wikipedia.org/wiki?curid=56608953
|
56616299
|
Kramers–Moyal expansion
|
In stochastic processes, the Kramers–Moyal expansion refers to a Taylor series expansion of the master equation, named after Hans Kramers and José Enrique Moyal. In many textbooks, the expansion is used only to derive the Fokker–Planck equation, and never used again. In general, continuous stochastic processes are essentially all Markovian, and so Fokker–Planck equations are sufficient for studying them. The higher-order Kramers–Moyal expansion only comes into play when the process is jumpy, which usually means it is a Poisson-like process.
For a real stochastic process, one can compute its central moment functions from experimental data on the process, from which one can then compute its Kramers–Moyal coefficients, and thus empirically measure its Kolmogorov forward and backward equations. This is implemented as a Python package.
Statement.
Start with the integro-differential master equation
formula_0
where formula_1 is the transition probability function, and formula_2 is the probability density at time formula_3. The Kramers–Moyal expansion transforms the above to an infinite order partial differential equation
formula_4
and also
formula_5
where formula_6 are the Kramers–Moyal coefficients, defined by
formula_7
and formula_8 are the central moment functions, defined by
formula_9
The Fokker–Planck equation is obtained by keeping only the first two terms of the series in which formula_10 is the drift and formula_11 is the diffusion coefficient.
Also, the moments, assuming they exist, evolve as
formula_12
where angled brackets mean taking the expectation: formula_13.
n-dimensional version.
The above version is the one-dimensional version. It generalizes to n dimensions (Section 4.7).
Proof.
In usual probability, where the probability density does not change, the moments of a probability density function determine the probability density itself by a Fourier transform (details may be found at the characteristic function page):
formula_14
formula_15
Similarly,
formula_16
Now we need to integrate away the Dirac delta function. Fixing a small formula_17, we have by the Chapman-Kolmogorov equation
formula_18
The formula_19 term is just formula_20, so taking the derivative with respect to time,
formula_21
The same computation with formula_1 gives the other equation.
Forward and backward equations.
The equation can be recast into a linear operator form, using the idea of an infinitesimal generator. Define the linear operator
formula_22
then the equations above state
formula_23
In this form, the equations are precisely in the form of a general Kolmogorov forward equation. The backward equation then states that
formula_24
where
formula_25
is the Hermitian adjoint of formula_26.
Computing the Kramers–Moyal coefficients.
By definition,
formula_7
This definition works because formula_27, as those are the central moments of the Dirac delta function. Since the even central moments are nonnegative, we have formula_28 for all formula_29. When the stochastic process is the Markov process formula_30, we can directly solve for formula_31, which is approximated by a normal distribution with mean formula_32 and variance formula_33. This then allows us to compute the central moments, and so
formula_34
This then gives us the 1-dimensional Fokker–Planck equation:
formula_35
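As an illustration of how the first two coefficients can be estimated from data (written from scratch, not using the Python package mentioned earlier), the following sketch bins the conditional increments of a simulated Ornstein–Uhlenbeck path; for this process the estimates should recover formula_10(x) ≈ −x and a constant formula_11 = σ²/2:

```python
import numpy as np

# Minimal sketch: estimate the first two Kramers-Moyal coefficients from data,
# D1(x) ~ <dx | x>/dt and D2(x) ~ <dx**2 | x>/(2*dt) (the squared mean increment
# is negligible for small dt), by binning increments of a simulated
# Ornstein-Uhlenbeck path dX = -X dt + sigma dW.

rng = np.random.default_rng(0)
dt, sigma, n = 1e-3, 0.5, 2_000_000
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + sigma * noise[i]

dx = np.diff(x)
bins = np.linspace(-1.0, 1.0, 21)
centers = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(x[:-1], bins)
for k, c in enumerate(centers, start=1):
    sel = idx == k
    if sel.sum() > 10_000:
        d1 = dx[sel].mean() / dt               # should be close to -c
        d2 = (dx[sel] ** 2).mean() / (2 * dt)  # should be close to sigma**2 / 2 = 0.125
        print(f"x ~ {c:+.2f}: D1 ~ {d1:+.3f}, D2 ~ {d2:.3f}")
```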
Pawula theorem.
Pawula theorem states that either the sequence formula_36 becomes zero at the third term, or all its even terms are positive.
Proof.
By the Cauchy–Schwarz inequality, the central moment functions satisfy formula_37. So, taking the limit, we have formula_38. If formula_39 for some formula_40, then formula_41. In particular, formula_42. So the existence of any nonzero coefficient of order formula_43 implies the existence of nonzero coefficients of arbitrarily large order. Also, if formula_44, then formula_45. So the existence of any nonzero coefficient of order formula_46 implies all coefficients of order formula_47 are positive.
Interpretation.
Let the operator formula_48 be defined such that formula_49. The probability density evolves by formula_50. Different orders of formula_51 give different levels of approximation.
Pawula theorem means that if truncating to the second term is not exact, that is, formula_56, then truncating to any term is still not exact. Usually, this means that for any truncation formula_57, there exists a probability density function formula_58 that can become negative during its evolution formula_59 (and thus fail to be a probability density function). However, this doesn't mean that Kramers-Moyal expansions truncated at other choices of formula_51 are useless. Though the solution must have negative values at least for sufficiently small times, the resulting approximate probability density may still be better than the formula_54 approximation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\partial p(x,t)}{\\partial t} =\\int p(x,t|x_0, t_0)p(x_0, t_0) dx_0"
},
{
"math_id": 1,
"text": "p(x, t|x_0, t_0)"
},
{
"math_id": 2,
"text": "p(x,t)"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "\\partial_t p(x,t) = \\sum_{n=1}^\\infty (-\\partial_x)^n[D_n(x,t) p(x,t)]"
},
{
"math_id": 5,
"text": "\\partial_t p(x, t|x_0, t_0) = \n\\sum_{n=1}^\\infty (-\\partial_x)^n [D_n(x, t) p(x, t|x_0, t_0) ] "
},
{
"math_id": 6,
"text": "D_n(x, t)"
},
{
"math_id": 7,
"text": "D_n(x, t) = \\frac{1}{n!}\\lim_{\\tau\\to 0} \\frac{1}{\\tau} \\mu_n(t|x, t-\\tau)"
},
{
"math_id": 8,
"text": "\\mu_n"
},
{
"math_id": 9,
"text": "\\mu_n(t' | x, t) = \\int_{-\\infty}^\\infty (x'-x)^n p(x', t'\\mid x, t) \\ dx'."
},
{
"math_id": 10,
"text": "D_1"
},
{
"math_id": 11,
"text": "D_2"
},
{
"math_id": 12,
"text": "\\frac{\\partial}{\\partial t}\\left\\langle x^n\\right\\rangle=\\sum_{k=1}^n \\frac{n !}{(n-k) !}\\left\\langle x^{n-k} D^{(k)}(x, t)\\right\\rangle"
},
{
"math_id": 13,
"text": "\\left\\langle f\\right\\rangle = \\int f(x) p(x, t)dx"
},
{
"math_id": 14,
"text": "p(x) = \\frac{1}{2\\pi} \\int e^{-ikx}\\tilde p(k)dk \n= \\sum_{n=0}^\\infty \\frac{(-1)^n}{n!}\\delta^{(n)}(x)\\mu_n \n"
},
{
"math_id": 15,
"text": "\n\\tilde p(k) = \\int e^{ikx} p(x) dx = \\sum_{n=0}^\\infty\\frac{(ik)^n}{n!} \\mu_n "
},
{
"math_id": 16,
"text": "p(x, t| x_0, t_0 ) = \\sum_{n=0}^\\infty \\frac{(-1)^n}{n!}\\delta^{(n)}(x-x_0) \\mu_n(t|x_0, t_0)"
},
{
"math_id": 17,
"text": "\\tau > 0"
},
{
"math_id": 18,
"text": "\\begin{align}\np(x, t) &= \\int p(x,t|x', t-\\tau) p(x', t-\\tau) dx' \\\\\n&= \\sum_{n=0}^\\infty \\frac{(-1)^n}{n!}\\int p(x', t-\\tau) \\delta^{(n)}(x-x') \\mu_n(t|x', t-\\tau) dx' \\\\\n&= \\sum_{n=0}^\\infty \\frac{(-1)^n}{n!} \\partial_x^n (p(x, t-\\tau) \\mu_n(t|x, t-\\tau))\n\\end{align}\n "
},
{
"math_id": 19,
"text": "n=0"
},
{
"math_id": 20,
"text": "p(x, t-\\tau)"
},
{
"math_id": 21,
"text": "\\partial_t p(x, t) = \\lim_{\\tau \\to 0^+}\\frac 1\\tau \\sum_{n=1}^\\infty \\frac{(-1)^n}{n!} \\partial_x^n (p(x, t-\\tau) \\mu_n(t|x, t-\\tau)) = \n\\sum_{n=1}^\\infty (-\\partial_x)^n (p(x, t) D_n(x, t)) "
},
{
"math_id": 22,
"text": "\\mathcal A f := \\sum_{n=1}^\\infty (-\\partial_x)^n[D_n(x,t) f(x,t)] "
},
{
"math_id": 23,
"text": "\\begin{align}\n\\partial_t p(x, t) &= \\mathcal{A} p(x, t) \\\\\n\\partial_t p(x, t|x_0, t_0) &= \\mathcal{A} p(x, t|x_0, t_0)\n\\end{align}\n"
},
{
"math_id": 24,
"text": "\\partial_t p(x_1, t_1|x, t) = -\\mathcal{A}^\\dagger p(x_1, t_1|x, t)\n"
},
{
"math_id": 25,
"text": "\\mathcal A^\\dagger f := \\sum_{n=1}^\\infty D_n(x,t) \\partial_x^n[f(x,t)] "
},
{
"math_id": 26,
"text": "\\mathcal A"
},
{
"math_id": 27,
"text": "\\mu_n(t|x, t) = 0"
},
{
"math_id": 28,
"text": "D_{2n} \\geq 0"
},
{
"math_id": 29,
"text": "n\\geq 1"
},
{
"math_id": 30,
"text": "dX = bdt + \\sigma dW_t"
},
{
"math_id": 31,
"text": "p(x, t|x, t-\\tau)"
},
{
"math_id": 32,
"text": "x + b(x)\\tau"
},
{
"math_id": 33,
"text": "\\sigma^2\\tau"
},
{
"math_id": 34,
"text": "D_1 = b, \\quad D_2 = \\frac 12 \\sigma^2, \\quad D_3=D_4=\\cdots = 0"
},
{
"math_id": 35,
"text": "\\partial_t p = -\\partial_x(bp) + \\frac 12 \\partial_x^2(\\sigma^2 p)"
},
{
"math_id": 36,
"text": "D_1, D_2, D_3, ..."
},
{
"math_id": 37,
"text": "\\mu_{n+m}^2 \\leq \\mu_{2n}\\mu_{2m}"
},
{
"math_id": 38,
"text": "D_{n+m}^2 \\leq \\frac{(2n)!(2m)!}{(n+m)!^2}D_{2n}D_{2m}"
},
{
"math_id": 39,
"text": "D_{2+n} \\neq 0"
},
{
"math_id": 40,
"text": "n \\geq 1"
},
{
"math_id": 41,
"text": "D_2 D_{2+2n}> 0"
},
{
"math_id": 42,
"text": "D_{2+n}, D_{2+2n}, D_{2+4n}, ... > 0"
},
{
"math_id": 43,
"text": "\\geq 3"
},
{
"math_id": 44,
"text": "D_n \\neq 0"
},
{
"math_id": 45,
"text": "D_2D_{2n-2} > 0, D_4D_{2n-4} > 0, ..."
},
{
"math_id": 46,
"text": "n"
},
{
"math_id": 47,
"text": "2, 4, ..., 2n-2"
},
{
"math_id": 48,
"text": "\\mathcal A_m "
},
{
"math_id": 49,
"text": "\\mathcal A_m f := \\sum_{n=1}^m (-\\partial_x)^n[D_n(x,t) f(x,t)] "
},
{
"math_id": 50,
"text": "\\partial_t\\rho \\approx \\mathcal A_m \\rho"
},
{
"math_id": 51,
"text": "m"
},
{
"math_id": 52,
"text": "m = 0"
},
{
"math_id": 53,
"text": "m=1"
},
{
"math_id": 54,
"text": "m=2"
},
{
"math_id": 55,
"text": "m=\\infty"
},
{
"math_id": 56,
"text": "\\mathcal A_2 \\neq \\mathcal A"
},
{
"math_id": 57,
"text": "\\mathcal A_m"
},
{
"math_id": 58,
"text": "\\rho"
},
{
"math_id": 59,
"text": "\\partial_t\\rho \\approx\\mathcal A_m \\rho"
}
] |
https://en.wikipedia.org/wiki?curid=56616299
|
56616570
|
G-algebra
|
In mathematics, a G-algebra can mean either
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists topics associated with the title G-algebra.
|
[
{
"math_id": 0,
"text": "\\pi: G \\to \\mathsf{Aut}(A)"
}
] |
https://en.wikipedia.org/wiki?curid=56616570
|
5662011
|
Thompson subgroup
|
In mathematical finite group theory, the Thompson subgroup formula_0 of a finite "p"-group "P" refers to one of several characteristic subgroups of "P". John G. Thompson (1964) originally defined formula_0 to be the subgroup generated by the abelian subgroups of "P" of maximal rank. More often the Thompson subgroup formula_0 is defined to be the subgroup generated by the abelian subgroups of "P" of maximal order or the subgroup generated by the elementary abelian subgroups of "P" of maximal rank. In general these three subgroups can be different, though they are all called the Thompson subgroup and denoted by formula_0.
|
[
{
"math_id": 0,
"text": "J(P)"
}
] |
https://en.wikipedia.org/wiki?curid=5662011
|
56621706
|
Alternating conditional expectations
|
Alternating conditional expectations (ACE) is an algorithm to find the optimal transformations between the response variable and predictor variables in regression analysis.
Introduction.
In statistics, a nonlinear transformation of variables is commonly used in practice in regression problems. Alternating conditional expectations (ACE) is one of the methods to find those transformations that produce the best fitting additive model. Knowledge of such transformations aids in the interpretation and understanding of the relationship between the response and predictors.
ACE transforms the response variable formula_0 and its predictor variables, formula_1 to minimize the fraction of variance not explained. The transformation is nonlinear and is iteratively obtained from data.
Mathematical description.
Let formula_2 be random variables. We use formula_3 to predict formula_0. Suppose formula_4 are zero-mean functions. With these transformation functions, the fraction of the variance of formula_5 not explained is
formula_6
Generally, the optimal transformations that minimize the unexplained part are difficult to compute directly. As an alternative, ACE is an iterative method to calculate the optimal transformations. The procedure of ACE has the following steps: holding formula_7 fixed, minimize formula_8 with respect to formula_5 by computing formula_9 and rescaling formula_10 to unit variance; then, holding formula_5 fixed, minimize formula_8 with respect to each transformation formula_12 by replacing it, for each index formula_11, with the conditional expectation formula_13; the two steps are alternated until formula_8 fails to decrease.
Bivariate case.
The optimal transformation formula_14 for formula_15 satisfies
formula_16
where formula_17 is Pearson correlation coefficient. formula_18 is known as the maximal correlation between formula_19 and formula_0. It can be used as a general measure of dependence.
In the bivariate case, the ACE algorithm can also be regarded as a method for estimating the maximal correlation between two variables.
Software implementation.
The ACE algorithm was developed in the context of known distributions. In practice, data distributions are seldom known, and the conditional expectations must be estimated from data. The R language has a package, acepack, which implements the ACE algorithm. The following example shows its usage:
library(acepack)
TWOPI <- 8 * atan(1)
x <- runif(200, 0, TWOPI)
y <- exp(sin(x) + rnorm(200)/2)
a <- ace(x, y)
par(mfrow=c(3,1))
plot(a$y, a$ty) # view the response transformation
plot(a$x, a$tx) # view the carrier transformation
plot(a$tx, a$ty) # examine the linearity of the fitted model
Discussion.
The ACE algorithm provides a fully automated method for estimating optimal transformations in multiple regression. It also provides a method for estimating the maximal correlation between random variables. Since the iteration usually terminates after a limited number of runs, the time complexity of the algorithm is formula_20, where formula_21 is the number of samples and "p" the number of predictors. The algorithm is reasonably efficient computationally.
A strong advantage of the ACE procedure is the ability to incorporate variables of quite different types in terms of the set of values they can assume. The transformation functions formula_22 assume values on the real line. Their arguments can, however, assume values on any set. For example, ordered real and unordered categorical variables can be incorporated in the same regression equation. Variables of mixed type are admissible.
As a tool for data analysis, the ACE procedure provides graphical output to indicate a need for transformations as well as to guide in their choice. If a particular plot suggests a familiar functional form for a transformation, then the data can be pre-transformed using this functional form and the ACE algorithm can be rerun.
As with any regression procedure, a high degree of association between predictor variables can sometimes cause the individual transformation estimates to be highly variable, even though the complete model is reasonably stable. When this is suspected, running the algorithm on randomly selected subsets of the data, or on bootstrap samples, can assist in assessing the variability.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Y"
},
{
"math_id": 1,
"text": "X_i"
},
{
"math_id": 2,
"text": "Y,X_1,\\dots,X_p"
},
{
"math_id": 3,
"text": "X_1,\\dots,X_p"
},
{
"math_id": 4,
"text": "\\theta(Y),\\varphi_1(X_1),\\dots,\\varphi_p(X_p)"
},
{
"math_id": 5,
"text": "\\theta(Y)"
},
{
"math_id": 6,
"text": " e^2(\\theta,\\varphi_1,\\dots,\\varphi_p)=\\frac{\\mathbb{E}\\left[\\theta(Y)-\\sum_{i=1}^p \\varphi_i(X_i)\\right]^2}{\\mathbb{E}[\\theta^2(Y)]}"
},
{
"math_id": 7,
"text": "\\varphi_1(X_1),\\dots,\\varphi_p(X_p)"
},
{
"math_id": 8,
"text": "e^2"
},
{
"math_id": 9,
"text": "\\theta_1(Y)=\\mathbb{E}\\left[\\sum_{i=1}^p \\varphi_i(X_i)\\Bigg|Y\\right]"
},
{
"math_id": 10,
"text": "\\theta_1(Y)"
},
{
"math_id": 11,
"text": "k"
},
{
"math_id": 12,
"text": "\\varphi_i(X_i)"
},
{
"math_id": 13,
"text": "\\tilde{\\varphi}_k = \\mathbb{E}\\left[\\theta(Y)-\\sum_{i\\neq k} \\varphi_i(X_i) \\Bigg| X_k\\right]"
},
{
"math_id": 14,
"text": "\\theta^*(Y), \\varphi^*(X)"
},
{
"math_id": 15,
"text": "p=1"
},
{
"math_id": 16,
"text": " \\rho^*(X, Y) = \\rho^*(\\theta^*, \\varphi^*) = \\max_{\\theta, \\varphi} \\rho(\\theta(Y), \\varphi(X))"
},
{
"math_id": 17,
"text": "\\rho"
},
{
"math_id": 18,
"text": " \\rho^*(X, Y)"
},
{
"math_id": 19,
"text": "X"
},
{
"math_id": 20,
"text": "O(np)"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "\\theta(y), \\varphi_i(x_i)"
}
] |
https://en.wikipedia.org/wiki?curid=56621706
|
5662276
|
Backward Euler method
|
Numerical method for ordinary differential equations
In numerical analysis and scientific computing, the backward Euler method (or implicit Euler method) is one of the most basic numerical methods for the solution of ordinary differential equations. It is similar to the (standard) Euler method, but differs in that it is an implicit method. The backward Euler method has error of order one in time.
Description.
Consider the ordinary differential equation
formula_0
with initial value formula_1 Here the function formula_2 and the initial data formula_3 and formula_4 are known; the function formula_5 depends on the real variable formula_6 and is unknown. A numerical method produces a sequence formula_7 such that formula_8 approximates formula_9, where formula_10 is called the step size.
The backward Euler method computes the approximations using
formula_11
This differs from the (forward) Euler method in that the forward method uses formula_12 in place of formula_13.
The backward Euler method is an implicit method: the new approximation formula_14 appears on both sides of the equation, and thus the method needs to solve an algebraic equation for the unknown formula_14. For non-stiff problems, this can be done with fixed-point iteration:
formula_15
If this sequence converges (within a given tolerance), then the method takes its limit as the new approximation formula_14.
Alternatively, one can use (some modification of) the Newton–Raphson method to solve the algebraic equation.
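As a concrete illustration, the following Python sketch implements the backward Euler method with the fixed-point iteration described above; the test equation, step size and tolerances are assumptions chosen only for the example, not part of the method itself.
import numpy as np
def backward_euler(f, t0, y0, h, n_steps, max_iter=100, tol=1e-12):
    # Solve y' = f(t, y) with the backward Euler method.
    # The implicit relation y_{k+1} = y_k + h f(t_{k+1}, y_{k+1}) is solved
    # at each step by fixed-point iteration started from the previous value y_k.
    t = t0 + h * np.arange(n_steps + 1)
    y = np.empty(n_steps + 1)
    y[0] = y0
    for k in range(n_steps):
        yk1 = y[k]                      # initial guess y_{k+1}^[0] = y_k
        for _ in range(max_iter):
            yk1_next = y[k] + h * f(t[k + 1], yk1)
            if abs(yk1_next - yk1) < tol:
                yk1 = yk1_next
                break
            yk1 = yk1_next
        y[k + 1] = yk1
    return t, y
# Assumed test problem: y' = -15 y, y(0) = 1, whose exact solution is exp(-15 t)
t, y = backward_euler(lambda t, y: -15.0 * y, 0.0, 1.0, h=0.05, n_steps=20)
print(y[-1], np.exp(-15.0 * t[-1]))     # numerical and exact values at t = 1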
Derivation.
Integrating the differential equation formula_0 from formula_16 to formula_17 yields
formula_18
Now approximate the integral on the right by the right-hand rectangle method (with one rectangle):
formula_19
Finally, use that formula_20 is supposed to approximate formula_21 and the formula for the backward Euler method follows.
The same reasoning leads to the (standard) Euler method if the left-hand rectangle rule is used instead of the right-hand one.
Analysis.
The local truncation error (defined as the error made in one step) of the backward Euler method is formula_22, using the big O notation. The error at a specific time formula_23 is of order formula_10, which means that this method has order one. In general, a method with formula_24 LTE (local truncation error) is said to be of "k"th order.
The region of absolute stability for the backward Euler method is the complement in the complex plane of the disk with radius 1 centered at 1, depicted in the figure. This includes the whole left half of the complex plane, making it suitable for the solution of stiff equations. In fact, the backward Euler method is even L-stable.
In the z-plane, the region corresponding to a stable discrete system under the backward Euler method is the disk with radius 0.5 centered at (0.5, 0).
Extensions and modifications.
The backward Euler method is a variant of the (forward) Euler method. Other variants are the semi-implicit Euler method and the exponential Euler method.
The backward Euler method can be seen as a Runge–Kutta method with one stage, described by the Butcher tableau:
formula_25
The method can also be seen as a linear multistep method with one step. It is the first method of the family of Adams–Moulton methods, and also of the family of backward differentiation formulas.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\frac{\\mathrm{d} y}{\\mathrm{d} t} = f(t,y) "
},
{
"math_id": 1,
"text": " y(t_0) = y_0. "
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "t_0"
},
{
"math_id": 4,
"text": "y_0"
},
{
"math_id": 5,
"text": "y"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": " y_0, y_1, y_2, \\ldots "
},
{
"math_id": 8,
"text": " y_k "
},
{
"math_id": 9,
"text": " y(t_0+kh) "
},
{
"math_id": 10,
"text": " h "
},
{
"math_id": 11,
"text": " y_{k+1} = y_k + h f(t_{k+1}, y_{k+1}). "
},
{
"math_id": 12,
"text": " f(t_k, y_k) "
},
{
"math_id": 13,
"text": "f(t_{k+1}, y_{k+1})"
},
{
"math_id": 14,
"text": " y_{k+1} "
},
{
"math_id": 15,
"text": " y_{k+1}^{[0]} = y_k, \\quad y_{k+1}^{[i+1]} = y_k + h f(t_{k+1}, y_{k+1}^{[i]}). "
},
{
"math_id": 16,
"text": " t_n "
},
{
"math_id": 17,
"text": " t_{n+1} = t_n + h "
},
{
"math_id": 18,
"text": " y(t_{n+1}) - y(t_n) = \\int_{t_n}^{t_{n+1}} f(t, y(t)) \\,\\mathrm{d}t. "
},
{
"math_id": 19,
"text": " y(t_{n+1}) - y(t_n) \\approx h f(t_{n+1}, y(t_{n+1})). "
},
{
"math_id": 20,
"text": " y_n "
},
{
"math_id": 21,
"text": " y(t_n) "
},
{
"math_id": 22,
"text": " O(h^2) "
},
{
"math_id": 23,
"text": " t "
},
{
"math_id": 24,
"text": " O(h^{k+1}) "
},
{
"math_id": 25,
"text": "\n\\begin{array}{c|c}\n1 & 1 \\\\\n\\hline\n & 1 \\\\\n\\end{array}\n"
}
] |
https://en.wikipedia.org/wiki?curid=5662276
|
56623290
|
Thrombus perviousness
|
Thrombus perviousness is an imaging biomarker which is used to estimate clot permeability from CT imaging. It reflects the ability of artery-occluding thrombi to let fluid seep into and through them. The more pervious a thrombus, the more fluid it lets through. Thrombus perviousness can be measured using radiological imaging routinely performed in the clinical management of acute ischemic stroke: CT scans without intravenous contrast (also called non-contrast CT, in short NCCT) combined with CT scans after intravenously administered contrast fluid (CT-angiography, in short CTA). Pervious thrombi may let more blood pass through to the ischemic brain tissue, and/or have a larger contact surface and histopathology more sensitive for thrombolytic medication. Thus, patients with pervious thrombi may have less brain tissue damage by stroke. The value of thrombus perviousness in acute ischemic stroke treatment is currently being researched.
Etymology.
Emilie Santos et al. introduced the term thrombus perviousness in 2016 to estimate thrombus permeability in ischemic stroke patients. Before, Mishra et al. used ‘residual flow within the clot’, and Frölich et al. used ‘antegrade flow across incomplete vessel occlusions’ to describe an estimate of thrombus permeability. Permeability is the physical measure of the ability of a material to transmit fluids over time. To measure thrombus permeability, one needs to measure contrast flow through a clot over time and the pressure drop caused by the occlusion, which is commonly not possible in the acute management of a patient with acute ischemic stroke. Current standard diagnostic protocol for acute ischemic stroke only requires single-phase imaging, visualizing the thrombus at a snapshot in time. Therefore, thrombus perviousness was introduced as a derivative measure of permeability.
Measurement.
The amount of contrast that seeps into a thrombus can be quantified by the density difference of thrombi between non-contrast computed tomography (NCCT) and CT angiography (CTA) images. Two measures for thrombus perviousness have been introduced: (1) the void fraction and (2) thrombus attenuation increase (TAI).
Void fraction (ε).
The void fraction represents the ratio of the void volume within a thrombus, filled with a volume of blood (Vblood) and the volume of thrombus material (Vthrombus):formula_0Void fraction can be estimated by measuring the attenuation increase (Δ) between NCCT and CTA in the thrombus (Δthrombus) and in the contralateral artery, filling with contrast on CTA (Δblood), and subsequently compute the ratio of these Δs:formula_1
Thrombus attenuation increase.
To measure TAI, the mean attenuation (density, in Hounsfield Units) of a clot is measured on NCCT (ρthrombusNCCT) and subtracted from the thrombus density measured on CTA (ρthrombusCTA). CTA thrombus density increases after administration of the high-density contrast fluid used in CTA:
Δthrombus = ρthrombusCTA – ρthrombusNCCT
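As a small worked example (all Hounsfield-unit values below are hypothetical and chosen purely for illustration), the thrombus attenuation increase and the void fraction follow directly from the four density measurements, here written in Python:
# Hypothetical mean densities in Hounsfield units (HU), for illustration only
rho_thrombus_ncct = 45.0    # thrombus on non-contrast CT
rho_thrombus_cta = 70.0     # thrombus on CT angiography
rho_blood_ncct = 40.0       # contralateral artery on non-contrast CT
rho_blood_cta = 290.0       # contrast-filled contralateral artery on CTA
delta_thrombus = rho_thrombus_cta - rho_thrombus_ncct   # thrombus attenuation increase (TAI)
delta_blood = rho_blood_cta - rho_blood_ncct
void_fraction = delta_thrombus / delta_blood             # perviousness estimate (epsilon)
print(delta_thrombus, round(void_fraction, 2))            # 25.0 HU and 0.1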
A manual (region of interest [ROI]-based) method and a semi-automated (full thrombus segmentation) method have been described to measure thrombus density.
Manual 3-ROI TAI assessment.
In the manual thrombus perviousness assessment, spherical ROIs with a diameter of 2 mm are manually placed in the thrombus, both on NCCT and on CTA. To better reflect possible thrombus heterogeneity, three ROIs are placed per imaging modality rather than one. The average of the three ROIs per modality is calculated and used as ρthrombusNCCT and ρthrombusCTA.
Semi-automated full thrombus segmentation.
In automated measurements, the thrombus on CTA images is semi-automatically segmented in three steps.
Comparison between 3-ROI and semi-automated full thrombus measurement.
It has been shown that manual measurement tends to overestimate the actual density of the entire thrombus, especially in low-density thrombi. Measurements based on the full thrombus show a wider range of thrombus densities, better discrimination of high- and low-density thrombi, and a stronger correlation with outcome measures than measurements based on 3 ROIs.
Influence of imaging parameters.
TAI measurements performed on CT scans with thicker slices will be less accurate, because volume averaging results in a reduction of thrombus density on NCCT. Therefore, it has been suggested to only use thin-slice CT images (≤2.5 mm) to measure thrombus perviousness.
Additional permeability measures.
Alternative measures of similar thrombus permeability characteristics have been introduced and are still being introduced. Mishra et al. introduced the residual flow grade, which distinguishes no contrast penetration (grade 0); contrast permeating diffusely through thrombus (grade 1); and tiny hairline lumen or streak of well-defined contrast within the thrombus extending either through its entire length or part of the thrombus (grade 2).
Clinical relevance.
Currently, treatment for acute ischemic stroke due to an occlusion of one of the arteries of the proximal anterior intracranial circulation consists of intravenous thrombolysis followed by endovascular thrombectomy for patients that arrive at the hospital within 4.5 hours of stroke onset. Patients that arrive later than 4.5 hours after onset, or have contra-indications for intravenous thrombolysis can still be eligible for endovascular thrombectomy only. Even with treatment, not all patients recover after their stroke; many are left with permanent brain damage. Increased thrombus perviousness may decrease brain damage during stroke by allowing more blood to reach the ischemic tissue. Furthermore, level of perviousness may reflect histopathological composition of clots or size of contact surface for thrombolytic medication, thereby influencing effectiveness of thrombolysis.
Thrombus perviousness in research.
A number of studies have been conducted on the effects of thrombus perviousness on NCCT and CTA. In addition, dynamic imaging modalities have been used to investigate thrombus perviousness/permeability in animal and laboratory studies and in humans using digital subtraction angiography (DSA) and CT Perfusion/4D-CTA. 4D-CTA may enable more accurate measurement of TAI, since it overcomes the influence of varying scan timing and contrast arrival in single phase CTA.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{ε}=\\frac{\\text{V}_\\text{blood}}{\\text{V}_\\text{thrombus}}"
},
{
"math_id": 1,
"text": "\\text{ε}=\\frac{\\text{ρ}_\\text{CTA}\\,^\\text{thrombus}-\\text{ρ}_\\text{NCCT}\\,^\\text{thrombus}}{\\text{ρ}_\\text{CTA}\\,^\\text{blood}-\\text{ρ}_\\text{NCCT}\\,^\\text{blood}}=\\frac{\\text{Δ}_\\text{thrombus}}{\\text{Δ}_\\text{blood}}"
}
] |
https://en.wikipedia.org/wiki?curid=56623290
|
56625587
|
Product of exponentials formula
|
The product of exponentials (POE) method is a robotics convention for mapping the links of a spatial kinematic chain. It is an alternative to Denavit–Hartenberg parameterization. While the latter method uses the minimal number of parameters to represent joint motions, the former method has a number of advantages: uniform treatment of prismatic and revolute joints, definition of only two reference frames, and an easy geometric interpretation from the use of screw axes for each joint.
The POE method was introduced by Roger W. Brockett in 1984.
Method.
The following method is used to determine the product of exponentials for a kinematic chain, with the goal of parameterizing an affine transformation matrix between the base and tool frames in terms of the joint angles formula_0
Define "zero configuration".
The first step is to select a "zero configuration" where all the joint angles are defined as being zero. The 4x4 matrix formula_1 describes the transformation from the base frame to the tool frame in this configuration. It is an affine transform consisting of the 3x3 rotation matrix "R" and the 3x1 translation vector "p". The matrix is augmented to create a 4x4 square matrix.
formula_2
Calculate matrix exponential for each joint.
The following steps should be followed for each of "N" joints to produce an affine transform for each.
Define the origin and axis of action.
For each joint of the kinematic chain, an origin point "q" and an axis of action are selected for the zero configuration, using the coordinate frame of the base. In the case of a prismatic joint, the axis of action "v" is the vector along which the joint extends; in the case of a revolute joint, the axis of action "ω" the vector normal to the rotation.
Find twist for each joint.
A 6x1 twist vector is composed to describe the movement of each joint.
For a revolute joint,formula_3
For a prismatic joint,formula_4
The resulting twist has two 3x1 vector components: linear motion along an axis (formula_5) and rotational motion about the same axis ("ω").formula_6
Calculate rotation matrix.
The 3x1 vector "ω" is rewritten in cross product matrix notation: formula_7
Per Rodrigues' rotation formula, the rotation matrix is calculated from the rotational component:formula_8
Calculate translation.
The 3x1 translation vector is calculated from the components of the twist.formula_9where "I" is the 3x3 identity matrix.
Compose matrix exponential.
For each joint "i", the matrix exponential formula_10 for a given joint angle formula_11 is composed from the rotation matrix and translation vector, combined into an augmented 4x4 matrix:formula_12
Compose structure equation.
The matrix exponentials are multiplied to produce a 4 × 4 affine transform formula_13 from the base frame to the tool frame in a given configuration.formula_14
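A minimal numerical sketch of the whole procedure is given below in Python using NumPy and SciPy; the planar two-revolute-joint arm, its link lengths, and the joint angles are assumptions chosen only to illustrate the method, not part of the general formulation.
import numpy as np
from scipy.linalg import expm
def hat(w):
    # Cross-product (skew-symmetric) matrix of a 3-vector
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])
def twist_matrix(v, w):
    # 4x4 matrix form of the twist xi = (v, w)
    xi = np.zeros((4, 4))
    xi[:3, :3] = hat(w)
    xi[:3, 3] = v
    return xi
# Assumed example: planar 2R arm in the x-y plane with link lengths L1 = L2 = 1
L1, L2 = 1.0, 1.0
g_st0 = np.eye(4)
g_st0[0, 3] = L1 + L2                       # zero-configuration tool pose
# Joint 1: revolute about z through the origin; joint 2: revolute about z through (L1, 0, 0)
w1, q1 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.0])
w2, q2 = np.array([0.0, 0.0, 1.0]), np.array([L1, 0.0, 0.0])
xi1 = twist_matrix(-np.cross(w1, q1), w1)
xi2 = twist_matrix(-np.cross(w2, q2), w2)
def forward_kinematics(theta1, theta2):
    # Product of exponentials: g(theta) = exp(xi1 th1) exp(xi2 th2) g_st(0)
    return expm(xi1 * theta1) @ expm(xi2 * theta2) @ g_st0
g = forward_kinematics(np.pi / 2, 0.0)
print(np.round(g[:3, 3], 6))                # end effector at (0, 2, 0) for this pose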
Application to kinematics.
Forward kinematics may be computed directly from the POE chain for a given manipulator. This allows generating of complex trajectories of the end-effector in Cartesian space (Cartesian coordinate system) given trajectories in the joint space. Inverse kinematics for most common robot manipulators can be solved with the use of Paden–Kahan subproblems. The problem of inverse kinematics can also be approached with the use of nonlinear root-finding methods, such as the Newton-Raphson iterative method (Newton's method).
Relationship to Denavit–Hartenberg parameters.
Advantages.
The product of exponentials method uses only two frames of reference: the base frame "S" and the tool frame "T". Constructing the Denavit–Hartenberg parameters for a robot requires the careful selection of tool frames in order to enable particular cancellations, such that the twists can be represented by four parameters instead of six. In the product of exponentials method, the joint twists can be constructed directly without considering adjacent joints in the chain. This makes the joint twists easier to construct, and easier to process by computer. In addition, revolute and prismatic joints are treated uniformly in the POE method, while they are treated separately when using the Denavit–Hartenberg parameters. Moreover, there are multiple conventions for assigning link frames when using the Denavit–Hartenberg parameters.
Conversion.
There is no one-to-one correspondence between the twist coordinates of the two methods, but an algorithmic mapping from the POE representation to Denavit–Hartenberg parameters has been demonstrated.
Application to parallel robots.
When analyzing parallel robots, the kinematic chain of each leg is analyzed individually and the tool frames are set equal to one another. This method is extensible to grasp analyses.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\theta_1...\\theta_N."
},
{
"math_id": 1,
"text": "g_{st}(0)"
},
{
"math_id": 2,
"text": "g_{st}(0) = \n \\left[ \n \\begin{array}{cc}\nR & p \\\\\n0 & 1 \\\\\n\\end{array} \n\\right]"
},
{
"math_id": 3,
"text": "\\xi_i=\\left(\\begin{array}{c} - \\omega_i \\times q_i \\\\ \\omega_i \\\\ \\end{array}\\right) ."
},
{
"math_id": 4,
"text": "\\xi_i=\\left(\\begin{array}{c} v_i \\\\ 0 \\\\ \\end{array}\\right) ."
},
{
"math_id": 5,
"text": "v"
},
{
"math_id": 6,
"text": "\\xi= \\left(\\begin{array}{c} v \\\\ \\omega \\\\ \\end{array}\\right)."
},
{
"math_id": 7,
"text": "\\hat{\\omega} = \n\\left[ \\begin{array}{ccc}\n0 & -\\omega_3 & \\omega_2 \\\\\n\\omega_3 & 0 & -\\omega_1 \\\\\n-\\omega_2 & \\omega_1 & 0 \\\\\n \\end{array} \\right]. \n"
},
{
"math_id": 8,
"text": "e^{\\hat{\\omega} \\theta} = I + \\hat{\\omega}\\sin{\\theta} + \\hat{\\omega}^2(1-\\cos{\\theta})."
},
{
"math_id": 9,
"text": "\n t = (I-e^{\\hat{\\omega} \\theta})(\\omega \\times v) + \\omega \\omega^T v \\theta\n "
},
{
"math_id": 10,
"text": "e^{\\hat{\\xi}_{i} \\theta _{i}}"
},
{
"math_id": 11,
"text": "\\theta"
},
{
"math_id": 12,
"text": "e^{\\hat{\\xi}_{i} \\theta _{i}} = \n\\left[ \\begin{array}{cc}\n e^{\\hat{\\omega} \\theta} & t \\\\\n0 & 1 \\\\\n \\end{array} \\right]"
},
{
"math_id": 13,
"text": "g_d(\\theta_i,\\ldots,\\theta_n)"
},
{
"math_id": 14,
"text": "g_d=e^{\\hat{\\xi}_1\\theta_1}\\cdots e^{\\hat{\\xi}_n\\theta_n}g_{st}(0)."
}
] |
https://en.wikipedia.org/wiki?curid=56625587
|
56625736
|
Paden–Kahan subproblems
|
Set of solved geometric problems
Paden–Kahan subproblems are a set of solved geometric problems which occur frequently in inverse kinematics of common robotic manipulators. Although the set of problems is not exhaustive, it may be used to simplify inverse kinematic analysis for many industrial robots. Beyond the three classical subproblems several others have been proposed.
Simplification strategies.
For a structure equation defined by the product of exponentials method, Paden–Kahan subproblems may be used to simplify and solve the inverse kinematics problem. Notably, the matrix exponentials are non-commutative.
Generally, subproblems are applied to solve for particular points in the inverse kinematics problem (e.g., the intersection of joint axes) in order to solve for joint angles.
Eliminating revolute joints.
Simplification is accomplished by the principle that a rotation has no effect on a point lying on its axis. For example, if the point formula_0 is on the axis of a revolute twist formula_1, its position is unaffected by the actuation of the twist. To wit:formula_2
Thus, for a structure equationformula_3where formula_4, formula_5 and formula_6 are all zero-pitch twists, applying both sides of the equation to a point formula_0 which is on the axis of formula_6 (but not on the axes of formula_4 or formula_5) yieldsformula_7By the cancellation of formula_6, this yieldsformula_8which, if formula_4 and formula_5 intersect, may be solved by Subproblem 2.
Norm.
In some cases, the problem may also be simplified by subtracting a point from both sides of the equation and taking the norm of the result.
For example, to solveformula_3for formula_6, where formula_4 and formula_5 intersect at the point formula_9, both sides of the equation may be applied to a point formula_0 that is not on the axis of formula_6. Subtracting formula_9 and taking the norm of both sides yieldsformula_10
This may be solved using Subproblem 3.
List of subproblems.
Each subproblem is presented as an algorithm based on a geometric proof. Code to solve a given subproblem, which should be written to account for cases with multiple solutions or no solution, may be integrated into inverse kinematics algorithms for a wide range of robots.
Let formula_1 be a zero-pitch twist with unit magnitude and formula_11 be two points. Find formula_12 such that formula_13
Subproblem 1: Rotation about a single axis.
In this subproblem, a point formula_0 is rotated around a given axis formula_1 such that it coincides with a second point formula_9.
Solution.
Let formula_14 be a point on the axis of formula_1. Define the vectors formula_15 and formula_16. Since formula_14 is on the axis of formula_1, formula_17 Therefore, formula_18
Next, the vectors formula_19 and formula_20 are defined to be the projections of formula_21 and formula_22 onto the plane perpendicular to the axis of formula_1. For a vector formula_23 in the direction of the axis of formula_1,formula_24andformula_25In the event that formula_26, formula_27 and both points lie on the axis of rotation. The subproblem therefore yields an infinite number of possible solutions in that case.
In order for the problem to have a solution, it is necessary that the projections of formula_21 and formula_22 onto the formula_28 axis and onto the plane perpendicular to formula_28 have equal lengths. It is necessary to check, to wit, that:formula_29and thatformula_30
If these equations are satisfied, the value of the joint angle formula_12 may be found using the atan2 function:formula_31Provided that formula_32, this subproblem should yield one solution for formula_12.
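A minimal numerical sketch of this solution is given below in Python; the axis, the points and the tolerance are assumptions chosen only for illustration.
import numpy as np
def subproblem1(omega, r, p, q, tol=1e-9):
    # Paden–Kahan subproblem 1: find theta so that rotating p about the axis
    # (unit direction omega through the point r) maps it onto q.
    u, v = p - r, q - r
    u_p = u - omega * (omega @ u)       # projections onto the plane normal to omega
    v_p = v - omega * (omega @ v)
    # Solvability: equal axial components and equal projected lengths
    if abs(omega @ u - omega @ v) > tol or abs(np.linalg.norm(u_p) - np.linalg.norm(v_p)) > tol:
        raise ValueError("no exact solution")
    return np.arctan2(omega @ np.cross(u_p, v_p), u_p @ v_p)
# Assumed check: rotate p by 60 degrees about the z-axis through the origin
omega = np.array([0.0, 0.0, 1.0])
r = np.zeros(3)
p = np.array([1.0, 0.0, 0.5])
theta_true = np.pi / 3
Rz = np.array([[np.cos(theta_true), -np.sin(theta_true), 0.0],
               [np.sin(theta_true), np.cos(theta_true), 0.0],
               [0.0, 0.0, 1.0]])
q = Rz @ p
print(subproblem1(omega, r, p, q))      # ~1.0472, i.e. pi/3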
Let formula_4 and formula_5 be two zero-pitch twists with unit magnitude and intersecting axes. Let formula_11 be two points. Find formula_33 and formula_34 such that formula_35
Subproblem 2: Rotation about two subsequent axes.
This problem corresponds to rotating formula_0 around the axis of formula_5 by formula_34, then rotating it around the axis of formula_4 by formula_33, so that the final location of formula_0 is coincident with formula_9. (If the axes of formula_4 and formula_5 are coincident, then this problem reduces to Subproblem 1, admitting all solutions such that formula_36.)
Solution.
Provided that the two axes are not parallel (i.e., formula_37), let formula_38 be a point such that formula_39 In other words, formula_38 represents the point to which formula_0 is rotated around one axis before it is rotated around the other axis to be coincident with formula_9. Each individual rotation is equivalent to Subproblem 1, but it is necessary to identify one or more valid solutions for formula_38 in order to solve for the rotations.
Let formula_14 be the point of intersection of the two axes:formula_40Define the vectors formula_15, formula_16 and formula_41. Therefore,formula_42
This implies that formula_43, formula_44, and formula_45. Since formula_46, formula_47 and formula_48 are linearly independent, formula_49 can be written asformula_50
The values of the coefficients may be solved thus:formula_51formula_52, andformula_53The subproblem yields two solutions in the event that the circles intersect at two points; one solution if the circles are tangential; and no solution if the circles fail to intersect.
Let formula_1 be a zero-pitch twist with unit magnitude; let formula_11 be two points; and let formula_54 be a real number greater than 0. Find formula_12 such that formula_55
Subproblem 3: Rotation to a given distance.
In this problem, a point formula_0 is rotated about an axis formula_1 until the point is a distance formula_54 from a point formula_9. In order for a solution to exist, the circle defined by rotating formula_0 around formula_1 must intersect a sphere of radius formula_54 centered at formula_9.
Solution.
Let formula_14 be a point on the axis of formula_1. The vectors formula_15 and formula_16 are defined so thatformula_56
The projections of formula_21 and formula_22 are formula_57 and formula_58 The “projection” of the line segment defined by formula_54 is found by subtracting the component of formula_59 in the formula_28 direction:formula_60The angle formula_61 between the vectors formula_19 and formula_20 is found using the atan2 function:formula_62The joint angle formula_12 is found by the formulaformula_63This subproblem may yield zero, one, or two solutions, depending on the number of points at which the circle of radius formula_64 intersects the circle of radius formula_65.
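A corresponding numerical sketch in Python, under the same illustrative assumptions as the Subproblem 1 sketch above:
import numpy as np
def subproblem3(omega, r, p, q, delta):
    # Paden–Kahan subproblem 3: find the angles theta for which rotating p about
    # the axis (omega, r) leaves it at distance delta from q. Returns 0, 1 or 2 angles.
    u, v = p - r, q - r
    u_p = u - omega * (omega @ u)
    v_p = v - omega * (omega @ v)
    dp2 = delta ** 2 - (omega @ (p - q)) ** 2        # squared distance in the normal plane
    theta0 = np.arctan2(omega @ np.cross(u_p, v_p), u_p @ v_p)
    nu, nv = np.linalg.norm(u_p), np.linalg.norm(v_p)
    c = (nu ** 2 + nv ** 2 - dp2) / (2.0 * nu * nv)
    if abs(c) > 1.0:
        return []                                    # circle and sphere do not intersect
    phi = np.arccos(c)
    return [theta0 + phi] if phi == 0.0 else [theta0 + phi, theta0 - phi]
# Assumed check, reusing the 60-degree rotation of the previous sketch
omega = np.array([0.0, 0.0, 1.0])
r = np.zeros(3)
p = np.array([1.0, 0.0, 0.5])
q = np.array([0.0, 2.0, 0.5])
theta = np.pi / 3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta), np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
delta = np.linalg.norm(q - Rz @ p)
print(subproblem3(omega, r, p, q, delta))            # one of the returned angles is ~pi/3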
Let formula_4 and formula_5 be two zero-pitch twists with unit magnitude and intersecting axes. Let formula_66 be points. Find formula_33 and formula_34 such that formula_67 and formula_68
Subproblem 4: Rotation about two axes to a given distance.
This problem is analogous to Subproblem 2, except that the final point is constrained by distances to two known points.
Let formula_1 be an infinite-pitch unit magnitude twist; formula_11 two points; and formula_54 a real number greater than 0. Find formula_12 such that formula_69
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "\\xi"
},
{
"math_id": 2,
"text": "e^{\\widehat{\\xi}\\theta}p=p"
},
{
"math_id": 3,
"text": "e^{\\widehat{\\xi}_1\\theta_1}e^{\\widehat{\\xi}_2\\theta_2}e^{\\widehat{\\xi}_3\\theta_3}=g"
},
{
"math_id": 4,
"text": "\\xi_1"
},
{
"math_id": 5,
"text": "\\xi_2"
},
{
"math_id": 6,
"text": "\\xi_3"
},
{
"math_id": 7,
"text": "e^{\\widehat{\\xi}_1\\theta_1}e^{\\widehat{\\xi}_2\\theta_2}e^{\\widehat{\\xi}_3\\theta_3}p=gp"
},
{
"math_id": 8,
"text": "e^{\\widehat{\\xi}_1\\theta_1}e^{\\widehat{\\xi}_2\\theta_2}p=gp"
},
{
"math_id": 9,
"text": "q"
},
{
"math_id": 10,
"text": "\\begin{aligned}\n\\delta_i = \\|gp-q\\| = \\|e^{\\widehat{\\xi}_1\\theta_1}e^{\\widehat{\\xi}_2\\theta_2}e^{\\widehat{\\xi}_3\\theta_3}p-q\\|\n = \\|e^{\\widehat{\\xi}_1\\theta_1}e^{\\widehat{\\xi}_2\\theta_2}(e^{\\widehat{\\xi}_3\\theta_3}p-q)\\|\n = \\|e^{\\widehat{\\xi}_3\\theta_3}p-q\\|\n \\end{aligned}"
},
{
"math_id": 11,
"text": "p, q \\in \\R^3"
},
{
"math_id": 12,
"text": "\\theta"
},
{
"math_id": 13,
"text": "e^{\\widehat{\\xi}\\theta}p=q."
},
{
"math_id": 14,
"text": "r"
},
{
"math_id": 15,
"text": "u=(p-r)"
},
{
"math_id": 16,
"text": "v=(q-r)"
},
{
"math_id": 17,
"text": "e^{\\widehat{\\xi}\\theta}r=r."
},
{
"math_id": 18,
"text": "e^{\\widehat{\\omega}\\theta}u=v."
},
{
"math_id": 19,
"text": "u'"
},
{
"math_id": 20,
"text": "v'"
},
{
"math_id": 21,
"text": "u"
},
{
"math_id": 22,
"text": "v"
},
{
"math_id": 23,
"text": "\\omega \\in \\R"
},
{
"math_id": 24,
"text": "u'=u-\\omega\\omega^Tu"
},
{
"math_id": 25,
"text": "v'=v-\\omega\\omega^Tv."
},
{
"math_id": 26,
"text": "u'=0"
},
{
"math_id": 27,
"text": "p=q"
},
{
"math_id": 28,
"text": "\\omega"
},
{
"math_id": 29,
"text": "\\omega^Tu=\\omega^Tv"
},
{
"math_id": 30,
"text": "\\|u'\\|=\\|v'\\|"
},
{
"math_id": 31,
"text": "\\theta=\\mathrm{atan2}(\\omega^T(u'\\times v'), u'^Tv')."
},
{
"math_id": 32,
"text": "u'\\neq 0"
},
{
"math_id": 33,
"text": "\\theta_1"
},
{
"math_id": 34,
"text": "\\theta_2"
},
{
"math_id": 35,
"text": "e^{\\widehat{\\xi}_1\\theta_1}e^{\\widehat{\\xi}_2\\theta_2}p=q."
},
{
"math_id": 36,
"text": "\\theta_1+\\theta_2=\\theta"
},
{
"math_id": 37,
"text": "\\omega_1 \\neq \\omega_2"
},
{
"math_id": 38,
"text": "c"
},
{
"math_id": 39,
"text": "e^{\\widehat{\\xi}_2\\theta_2} p=c=e^{-\\widehat{\\xi}_1\\theta_1}q."
},
{
"math_id": 40,
"text": "e^{\\widehat{\\xi}_2\\theta_2}(p-r)=c-r=e^{-\\widehat{\\xi}_1\\theta_1}(q-r)."
},
{
"math_id": 41,
"text": "z=(c-r)"
},
{
"math_id": 42,
"text": "e^{\\widehat{\\xi}_2\\theta_2}u=z=e^{-\\widehat{\\xi}_1\\theta_1}v."
},
{
"math_id": 43,
"text": "\\omega^T_2u=\\omega^T_2z"
},
{
"math_id": 44,
"text": "\\omega^T_1 v=\\omega^T_1z"
},
{
"math_id": 45,
"text": "\\|u\\|^2=\\|z\\|^2=\\|v\\|^2"
},
{
"math_id": 46,
"text": "\\omega_1"
},
{
"math_id": 47,
"text": "\\omega_2"
},
{
"math_id": 48,
"text": "\\omega_1 \\times \\omega_2"
},
{
"math_id": 49,
"text": "z"
},
{
"math_id": 50,
"text": "z=\\alpha \\omega_1 + \\beta\\omega_2 + \\gamma(\\omega_1 \\times \\omega_2)."
},
{
"math_id": 51,
"text": "\\alpha = \\frac{(\\omega^T_1 \\omega_2)\\omega^T_2u-\\omega^T_1v}{(\\omega^T_1\\omega_2)^2 - 1}"
},
{
"math_id": 52,
"text": "\\beta = \\frac{(\\omega^T_1 \\omega_2)\\omega^T_1v-\\omega^T_2u}{(\\omega^T_1\\omega_2)^2 - 1}"
},
{
"math_id": 53,
"text": "\\gamma^2=\\frac{\\|u\\|^2-\\alpha^2-\\beta^2-2\\alpha\\beta\\omega^T_1 \\omega_2}{\\|\\omega_1 \\times \\omega_2\\|^2.}"
},
{
"math_id": 54,
"text": "\\delta"
},
{
"math_id": 55,
"text": "\\|q-e^{\\widehat{\\xi} \\theta}p\\| = \\delta."
},
{
"math_id": 56,
"text": "\\|v-e^{\\widehat{\\xi}\\theta}u\\|^2=\\delta^2."
},
{
"math_id": 57,
"text": "u'=u-\\omega\\omega^Tu"
},
{
"math_id": 58,
"text": "v'=v-\\omega\\omega^Tv."
},
{
"math_id": 59,
"text": "p-q"
},
{
"math_id": 60,
"text": "\\delta'^2 = \\delta^2 - |\\omega^T(p-q)|^2."
},
{
"math_id": 61,
"text": "\\theta_0"
},
{
"math_id": 62,
"text": "\\theta_0 = atan2(\\omega^T(u'\\times v'), u'^Tv')."
},
{
"math_id": 63,
"text": "\\theta=\\theta_0 \\pm \\cos^{-1} \\left( \\frac{\\|u'\\|^2 + \\|v'\\|^2 - \\delta'^2}{2\\|u'\\| \\|v'\\|} \\right)."
},
{
"math_id": 64,
"text": "\\|u'\\|"
},
{
"math_id": 65,
"text": "\\delta'"
},
{
"math_id": 66,
"text": "p, q_1, q_2 \\in \\R^3"
},
{
"math_id": 67,
"text": "\\|e^{\\widehat{\\xi}_1\\theta_1}e^{\\widehat{\\xi}_2\\theta_2}p-q_1\\|=\\delta_1"
},
{
"math_id": 68,
"text": "\\|e^{\\widehat{\\xi}_1\\theta_1}e^{\\widehat{\\xi}_2\\theta_2}p-q_2\\|=\\delta_2."
},
{
"math_id": 69,
"text": "\\|q-e^{\\widehat{\\xi}\\theta}p\\|=\\delta."
}
] |
https://en.wikipedia.org/wiki?curid=56625736
|
56628682
|
Subfield of an algebra
|
In algebra, a subfield of an algebra "A" over a field "F" is an "F"-subalgebra that is also a field. A maximal subfield is a subfield that is not contained in a strictly larger subfield of "A".
If "A" is a finite-dimensional central simple algebra, then a subfield "E" of "A" is called a strictly maximal subfield if formula_0.
|
[
{
"math_id": 0,
"text": "[E : F] = (\\dim_F A)^{1/2}"
}
] |
https://en.wikipedia.org/wiki?curid=56628682
|
5663113
|
Flue-gas stack
|
Stack
A flue-gas stack, also known as a smoke stack, chimney stack or simply as a stack, is a type of chimney, a vertical pipe, channel or similar structure through which combustion product gases called flue gases are exhausted to the outside air. Flue gases are produced when coal, oil, natural gas, wood or any other fuel is combusted in an industrial furnace, a power plant's steam-generating boiler, or other large combustion device. Flue gas is usually composed of carbon dioxide (CO2) and water vapor, as well as nitrogen and excess oxygen remaining from the intake combustion air. It also contains a small percentage of pollutants such as particulate matter, carbon monoxide, nitrogen oxides and sulfur oxides. Flue-gas stacks are often quite tall in order to increase the stack effect and the dispersion of pollutants.
When the flue gases are exhausted from stoves, ovens, fireplaces, heating furnaces and boilers, or other small sources within residential abodes, restaurants, hotels, or other public buildings and small commercial enterprises, their flue gas stacks are referred to as chimneys.
History.
The first industrial chimneys were built in the mid-17th century, when it was first understood how they could improve the combustion of a furnace by increasing the draft of air into the combustion zone. As such, they played an important part in the development of reverberatory furnaces and a coal-based metallurgical industry, one of the key sectors of the early Industrial Revolution. Most 18th-century industrial chimneys (now commonly referred to as flue gas stacks) were built into the walls of the furnace, much like a domestic chimney. The first freestanding industrial chimneys were probably those erected at the end of the long condensing flues associated with smelting lead.
The powerful association between industrial chimneys and the characteristic smoke-filled landscapes of the industrial revolution was due to the universal application of the steam engine for most manufacturing processes. The chimney is part of a steam-generating boiler, and its evolution is closely linked to increases in the power of the steam engine. The chimneys of Thomas Newcomen’s steam engine were incorporated into the walls of the engine house. The taller, freestanding industrial chimneys that appeared in the early 19th century were related to the changes in boiler design associated with James Watt’s "double-powered" engines, and they continued to grow in stature throughout the Victorian period. Decorative embellishments are a feature of many industrial chimneys from the 1860s, with over-sailing caps and patterned brickwork.
The invention of fan-assisted forced draft in the early 20th century removed the industrial chimney's original function, that of drawing air into the steam-generating boilers or other furnaces. With the replacement of the steam engine as a prime mover, first by diesel engines and then by electric motors, the early industrial chimneys began to disappear from the industrial landscape. Building materials changed from stone and brick to steel and later reinforced concrete, and the height of the industrial chimney was determined by the need to disperse combustion flue gases to comply with governmental air pollution control regulations.
Flue-gas stack draft.
The combustion flue gases inside the flue gas stacks are much hotter than the ambient outside air and therefore less dense than the ambient air. That causes the bottom of the vertical column of hot flue gas to have a lower pressure than the pressure at the bottom of a corresponding column of outside air. That higher pressure outside the chimney is the driving force that moves the required combustion air into the combustion zone and also moves the flue gas up and out of the chimney. That movement or flow of combustion air and flue gas is called "natural draft", "natural ventilation", "chimney effect", or "stack effect". The taller the stack, the more draft is created.
The equation below provides an approximation of the pressure difference, Δ"P", (between the bottom and the top of the flue gas stack) that is created by the draft:
formula_0
where ΔP is the available pressure difference (in Pa), "C" is a constant equal to about 0.0342 K/m (combining the gravitational acceleration, the molar mass of air and the universal gas constant), "a" is the atmospheric pressure (in Pa), "h" is the height of the flue-gas stack (in m), "T"o is the absolute outside air temperature (in K) and "T"i is the absolute average temperature of the flue gas inside the stack (in K).
The above equation is an approximation because it assumes that the molar mass of the flue gas and the outside air are equal and that the pressure drop through the flue gas stack is quite small. Both assumptions are fairly good but not exactly accurate.
Flue-gas flow-rate induced by the draft.
As a "first guess" approximation, the following equation can be used to estimate the flue-gas flow-rate induced by the draft of a flue-gas stack. The equation assumes that the molar mass of the flue gas and the outside air are equal and that the frictional resistance and heat losses are negligible:.
formula_1
where "Q" is the flue-gas flow rate (in m³/s), "A" is the cross-sectional area of the chimney (in m², assuming a constant cross-section), "C" is the discharge coefficient (usually taken to be 0.65–0.70), "g" is the gravitational acceleration (9.807 m/s²), "H" is the height of the chimney (in m), "T"i is the absolute average temperature of the flue gas in the stack (in K) and "T"o is the absolute outside air temperature (in K).
Also, this equation is only valid when the resistance to the draft flow is caused by a single orifice characterized by the discharge coefficient C. In many, if not most situations, the resistance is primarily imposed by the flue stack itself. In these cases, the resistance is proportional to the stack height H. This causes a cancellation of the H in the above equation predicting Q to be invariant with respect to the flue height.
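As a small worked example of the two relations above (all numerical values are assumed purely for illustration), in Python:
import math
# Assumed illustrative values for a natural-draft stack
a = 101325.0        # atmospheric pressure, Pa
h = 50.0            # stack height, m
T_o = 288.0         # absolute outside air temperature, K (about 15 C)
T_i = 423.0         # absolute average flue-gas temperature, K (about 150 C)
# Draft (pressure difference) between the bottom and the top of the stack
C_draft = 0.0342                                   # K/m
dP = C_draft * a * h * (1.0 / T_o - 1.0 / T_i)
print(round(dP, 1), "Pa")                          # about 192 Pa
# Draft-induced flue-gas flow rate through the stack
C_d = 0.65          # discharge coefficient, assumed
A = 2.0             # stack cross-sectional area, m^2
g = 9.807           # gravitational acceleration, m/s^2
Q = C_d * A * math.sqrt(2.0 * g * h * (T_i - T_o) / T_i)
print(round(Q, 1), "m^3/s")                        # about 23 m^3/s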
Designing chimneys and stacks to provide the correct amount of natural draft involves a great many factors such as:
The calculation of many of the above design factors requires trial-and-error reiterative methods.
Government agencies in most countries have specific codes which govern how such design calculations must be performed. Many non-governmental organizations also have codes governing the design of chimneys and stacks (notably, the ASME codes).
Stack design.
The design of large stacks poses considerable engineering challenges. Vortex shedding in high winds can cause dangerous oscillations in the stack, and may lead to its collapse. The use of helical strakes is common to prevent this process from occurring at or close to the resonant frequency of the stack.
Other items of interest.
Some fuel-burning industrial equipment does not rely upon natural draft. Many such equipment items use large fans or blowers to accomplish the same objectives, namely: the flow of combustion air into the combustion chamber and the flow of the hot flue gas out of the chimney or stack.
A great many power plants are equipped with facilities for the removal of sulfur dioxide (i.e., flue-gas desulfurization), nitrogen oxides (i.e., selective catalytic reduction, exhaust gas recirculation, thermal deNOx, or low NOx burners) and particulate matter (i.e., electrostatic precipitators). At such power plants, it is possible to use a cooling tower as a flue gas stack. Examples can be seen in Germany at the Power Station Staudinger Grosskrotzenburg and at the Rostock Power Station. Power plants without flue gas purification would experience serious corrosion in such stacks.
In the United States and a number of other countries, atmospheric dispersion modeling studies are required to determine the flue gas stack height needed to comply with the local air pollution regulations. The United States also limits the maximum height of a flue gas stack to what is known as the "Good Engineering Practice" (GEP) stack height. In the case of existing flue gas stacks that exceed the GEP stack height, any air pollution dispersion modelling studies for such stacks must use the GEP stack height rather than the actual stack height.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta P =Cah\\bigg(\\frac {1}{T_o} - \\frac {1}{T_i}\\bigg)"
},
{
"math_id": 1,
"text": "Q = CA\\sqrt {2gH\\frac{T_i - T_o}{T_i}}"
}
] |
https://en.wikipedia.org/wiki?curid=5663113
|
56631455
|
Yulij Ilyashenko
|
Russian mathematician (born 1943)
Yulij Sergeevich Ilyashenko (Юлий Сергеевич Ильяшенко, 4 November 1943, Moscow) is a Russian mathematician, specializing in dynamical systems, differential equations, and complex foliations.
In 1969 Ilyashenko received his Russian candidate degree (Ph.D.) from Moscow State University under Evgenii Landis and Vladimir Arnold. Ilyashenko was a professor at Moscow State University, an academic at the Steklov Institute, and also taught at the Independent University of Moscow. He later became a professor at Cornell University.
His research deals with, among other things, what he calls the "infinitesimal Hilbert's sixteenth problem", which asks what one can say about the number and location of the limit cycles of planar polynomial vector fields. The problem is not yet completely solved. Ilyashenko attacked the problem using new techniques of complex analysis (such as functional cochains). He proved that planar polynomial vector fields have only finitely many limit cycles. Jean Écalle independently proved the same result, and an earlier attempted proof by Henri Dulac (in 1923) was shown to be defective by Ilyashenko in the 1970s.
He was an Invited Speaker of the ICM in 1978 at Helsinki and in 1990 with talk "Finiteness theorems for limit cycles" at Kyoto. In 2017 he was elected a Fellow of the American Mathematical Society.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C^2"
}
] |
https://en.wikipedia.org/wiki?curid=56631455
|
56638029
|
Repeated median regression
|
In robust statistics, repeated median regression, also known as the repeated median estimator, is a robust linear regression algorithm.
The estimator has a breakdown point of 50%. Although it is equivariant under scaling, or under linear transformations of either its explanatory variable or its response variable, it is not equivariant under affine transformations that combine both variables. It can be calculated in formula_0 time by brute force, in formula_1 time using more sophisticated techniques, or in formula_2 randomized expected time. It may also be calculated using an on-line algorithm with formula_3 update time.
Method.
The repeated median method estimates the slope of the regression line formula_4 for a set of points formula_5 as
formula_6
where formula_7 is defined as formula_8.
The estimated Y-axis intercept is defined as
formula_9
where formula_10 is defined as formula_11.
A simpler and faster alternative to estimate the intercept formula_12 is to use the value formula_13 just estimated, thus:
formula_14
Note: The direct and hierarchical methods of estimating formula_12 give slightly different values, with the hierarchical method normally being the best estimate. This latter hierarchical approach is identical to the method of estimating formula_12 in Theil–Sen estimator regression.
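A brute-force Python sketch of the estimator described above (the formula_0 approach, using the hierarchical intercept; the data set and the outlier are assumptions chosen only for illustration):
import numpy as np
def repeated_median(x, y):
    # Repeated median slope and intercept, computed by brute force in O(n^2) time
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    inner = []
    for i in range(n):
        slopes_i = [(y[j] - y[i]) / (x[j] - x[i]) for j in range(n) if j != i]
        inner.append(np.median(slopes_i))        # median over j for fixed i
    B = np.median(inner)                         # median over i: slope estimate
    A = np.median(y - B * x)                     # hierarchical intercept estimate
    return A, B
# Assumed data: y = 2 + 3x with small noise and one gross outlier
rng = np.random.default_rng(1)
x = np.arange(20, dtype=float)
y = 2.0 + 3.0 * x + rng.normal(scale=0.1, size=20)
y[5] = 100.0                                     # outlier
print(repeated_median(x, y))                     # close to (2, 3)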
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O(n^2)"
},
{
"math_id": 1,
"text": "O(n \\log^2 n)"
},
{
"math_id": 2,
"text": "O(n\\log n)"
},
{
"math_id": 3,
"text": "O(n)"
},
{
"math_id": 4,
"text": "y = A + Bx"
},
{
"math_id": 5,
"text": "(X_i, Y_i)"
},
{
"math_id": 6,
"text": "\\widehat B = \\underset{i}{\\operatorname{median}} \\ \\underset{j\\,\\ne\\,i}{\\operatorname{median}} \\ \\operatorname{slope}(i, j)"
},
{
"math_id": 7,
"text": "\\operatorname{slope}(i,j)"
},
{
"math_id": 8,
"text": "(Y_j - Y_i) / (X_j - X_i)"
},
{
"math_id": 9,
"text": "\\widehat A = \\underset{i}{\\operatorname{median}} \\ \\underset{j\\,\\ne\\,i}{\\operatorname{median}} \\ \\operatorname{intercept}(i, j)"
},
{
"math_id": 10,
"text": "\\operatorname{intercept}(i, j)"
},
{
"math_id": 11,
"text": "(X_j Y_i - X_i Y_j ) / (X_j - X_i)"
},
{
"math_id": 12,
"text": "\\widehat A"
},
{
"math_id": 13,
"text": "\\widehat B"
},
{
"math_id": 14,
"text": "\\widehat A = \\underset{i}{\\operatorname{median}} \\ (y_i - \\widehat {B} x_i)"
}
] |
https://en.wikipedia.org/wiki?curid=56638029
|
5664126
|
Henyey track
|
The Henyey track is a path taken by pre-main-sequence stars with masses greater than 0.5 solar masses in the Hertzsprung–Russell diagram after the end of the Hayashi track. The astronomer Louis G. Henyey and his colleagues in the 1950s showed that the pre-main-sequence star can remain in radiative equilibrium throughout some period of its contraction to the main sequence.
The Henyey track is characterized by a slow collapse in near hydrostatic equilibrium, approaching the main sequence almost horizontally in the Hertzsprung–Russell diagram (i.e. the luminosity remains almost constant).
Deviation from Hayashi Track.
formula_0
The equation for radiative heat transfer relates the opacity (κ) to the radial temperature gradient. Stars with high opacity will be convective, while those with low opacity will transport heat radiatively.
Protostars on the Hayashi track are fully convective and, owing to the large abundance of H− ions, are optically thick. These stars continue to contract until the central core reaches a temperature threshold at which the H− ions break apart, causing a decrease in opacity.
When and for how long a star moves from the Hayashi track to the Henyey track depends heavily on its initial mass. Stars that are massive enough (above roughly 0.6 solar masses) deviate onto the Henyey track, depicted as a near-horizontal line on an HR diagram. A core that becomes sufficiently hot becomes less opaque, making convection inefficient; the core instead becomes fully radiative to transfer its thermal energy. During this phase the luminosity stays constant or gradually increases, with the temperature increasing as the core undergoes radiative contraction. At the end of the track the star begins nuclear burning, but experiences a dip in luminosity before it reaches the main sequence.
Larger-mass stars evolve off the Hayashi track quickly, while lower-mass stars enter the Henyey track later. Stars that are not sufficiently massive, on the other hand, never develop a radiative core, as the core does not become hot enough; instead, they remain on the Hayashi track until they reach the main sequence.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\n \\frac {dT}{dt} &= \\frac {-3\\rho\\kappa l(r)}{64\\pi\\sigma _{sb}T^3r^2} \\\\[4pt]\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=5664126
|
5664494
|
Neutral-beam injection
|
Method used to heat plasma inside a fusion device
Neutral-beam injection (NBI) is one method used to heat plasma inside a fusion device. It consists of a beam of high-energy neutral particles that can enter the magnetic confinement field. When these neutral particles are ionized by collision with the plasma particles, they are kept in the plasma by the confining magnetic field and can transfer most of their energy by further collisions with the plasma. By tangential injection in the torus, neutral beams also provide momentum to the plasma and current drive, an essential feature for long pulses of burning plasmas. Neutral-beam injection is a flexible and reliable technique, which has been the main heating system on a large variety of fusion devices. Until the 1990s, all NBI systems were based on positive precursor ion beams. In the 1990s impressive progress was made in negative-ion sources and accelerators, with the construction of multi-megawatt negative-ion-based NBI systems at LHD (H0, 180 keV) and JT-60U (D0, 500 keV). The NBI designed for ITER is a substantial challenge (D0, 1 MeV, 40 A), and a prototype is being constructed to optimize its performance in view of future ITER operations. Other ways to heat plasma for nuclear fusion include RF heating, electron cyclotron resonance heating (ECRH), ion cyclotron resonance heating (ICRH), and lower hybrid resonance heating (LH).
Mechanism.
This is typically done by:
It is critical to inject neutral material into the plasma, because if it is charged, it can start harmful plasma instabilities. Most fusion devices inject isotopes of hydrogen, such as pure deuterium or a mix of deuterium and tritium. This material becomes part of the fusion plasma. It also transfers its energy into the existing plasma within the machine. This hot stream of material should raise the overall temperature. Although the beam has no electrostatic charge when it enters, as it passes through the plasma the atoms are ionized. This happens as the beam particles collide with ions already in the plasma.
Neutral-beam injectors installed in fusion experiments.
At present, all main fusion experiments use NBIs. Traditional positive-ion-based injectors (P-NBI) are installed for instance in JET and in ASDEX-U. To allow power deposition in the center of the burning plasma in larger devices, a higher neutral-beam energy is required. High-energy (>100 keV) systems require the use of negative ion technology (N-NBI).
<templatestyles src="Col-float/styles.css" />
<templatestyles src="Legend/styles.css" /> Active
<templatestyles src="Legend/styles.css" /> In development
<templatestyles src="Col-float/styles.css" />
<templatestyles src="Legend/styles.css" /> Retired
<templatestyles src="Legend/styles.css" /> Active, NBI being updated and revised
Coupling with fusion plasma.
Because the magnetic field inside the torus is circular, these fast ions are confined to the background plasma. The confined fast ions mentioned above are slowed down by the background plasma, in a similar way to how air resistance slows down a baseball. The energy transfer from the fast ions to the plasma increases the overall plasma temperature.
It is very important that the fast ions are confined within the plasma long enough for them to deposit their energy. Magnetic fluctuations are a big problem for plasma confinement in this type of device (see plasma stability) by scrambling what were initially well-ordered magnetic fields. If the fast ions are susceptible to this type of behavior, they can escape very quickly. However, some evidence suggests that they are not susceptible.
The interaction of fast neutrals with the plasma consists of
Design of neutral beam systems.
Beam energy.
The absorption length formula_0 for neutral-beam ionization in a plasma is roughly
formula_1
with formula_0 in m, particle density "n" in units of 10¹⁹ m⁻³, atomic mass "M" in amu, and particle energy "E" in keV. Depending on the plasma minor diameter and density, a minimum particle energy can be defined for the neutral beam in order to deposit sufficient power in the plasma core rather than at the plasma edge.
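As a rough illustration of this scaling, the following sketch evaluates the absorption length for assumed beam and plasma parameters (the numerical values are hypothetical examples, not taken from the article):

```python
def absorption_length(E_keV, n_1e19_m3, M_amu):
    """Rough neutral-beam absorption length (m) from the scaling
    lambda = E / (18 * n * M), with E in keV, n in units of 1e19 m^-3,
    and M the atomic mass in amu."""
    return E_keV / (18.0 * n_1e19_m3 * M_amu)

# Illustrative (assumed) numbers: a 1 MeV deuterium beam (M = 2 amu)
# entering a plasma of density 1e20 m^-3 (i.e. n = 10 in these units).
print(absorption_length(1000.0, 10.0, 2.0))  # ~2.8 m
```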
For a fusion-relevant plasma, the required fast neutral energy is in the range of 1 MeV. With increasing energy, it is increasingly difficult to obtain fast hydrogen atoms starting from precursor beams composed of positive ions. For that reason, recent and future heating neutral beams will be based on negative-ion beams. In the interaction with background gas, it is much easier to detach the extra electron from a negative ion (H− has a binding energy of 0.75 eV and a very large cross-section for electron detachment in this energy range) than to attach one electron to a positive ion.
Charge state of the precursor ion beam.
A neutral beam is obtained by neutralisation of a precursor ion beam, commonly accelerated in large electrostatic accelerators. The precursor beam could be either a positive-ion beam or a negative-ion beam: in order to obtain a sufficiently high current, it is produced by extracting charges from a plasma discharge.
However, few negative hydrogen ions are created in a hydrogen plasma discharge. In order to generate a sufficiently high negative-ion density and obtain a decent negative-ion beam current, caesium vapors are added to the plasma discharge (surface-plasma negative-ion sources). Caesium, deposited at the source walls, is an efficient electron donor; atoms and positive ions scattered at the caesiated surface have a relatively high probability of being scattered as negatively charged ions. Operation of caesiated sources is complex and not very reliable. The development of alternative concepts for negative-ion beam sources is mandatory for the use of neutral beam systems in future fusion reactors.
Existing and future negative-ion-based neutral beam systems (N-NBI) are listed in the following table:
Ion beam neutralisation.
Neutralisation of the precursor ion beam is commonly performed by passing the beam through a gas cell. For a precursor negative-ion beam at fusion-relevant energies, the key collisional processes are:
D⁻ + D₂ → D⁰ + e + D₂ (single-electron detachment, with formula_2−10 = 1.13×10⁻²⁰ m² at 1 MeV)
D⁻ + D₂ → D⁺ + e + D₂ (double-electron detachment, with formula_2−11 = 7.22×10⁻²² m² at 1 MeV)
D⁰ + D₂ → D⁺ + e + D₂ (reionization, with formula_201 = 3.79×10⁻²¹ m² at 1 MeV)
D⁺ + D₂ → D⁰ + D₂⁺ (charge exchange, formula_210 negligible at 1 MeV)
Underline indicates the fast particles, while subscripts "i", "j" of the cross-section formula_2ij indicate the charge state of fast particle before and after collision.
Cross-sections at 1 MeV are such that, once created, a fast positive ion cannot be converted into a fast neutral, and this is the cause of the limited achievable efficiency of gas neutralisers.
The fractions of negatively charged, positively charged, and neutral particles exiting the neutraliser gas cells depend on the integrated gas density or target thickness formula_3 with formula_4 the gas density along the beam path formula_5. In the case of D− beams, the maximum neutralisation yield occurs at a target thickness formula_6 m−2.
Typically, the background gas density shall be minimised all along the beam path (i.e. within the accelerating electrodes, along the duct connecting to the fusion plasma) to minimise losses except in the neutraliser cell. Therefore, the required target thickness for neutralisation is obtained by injecting gas in a cell with two open ends. A peaked density profile is realised along the cell, when injection occurs at mid-length. For a given gas throughput formula_7 [Pa·m3/s], the maximum gas pressure at the centre of the cell depends on the gas conductance formula_8 [m3/s]:
formula_9
and formula_8 in molecular-flow regime can be calculated as
formula_10
with formula_11, formula_12, formula_13 the geometric parameters of the cell, formula_14 the gas molecule mass, and formula_15 the gas temperature.
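A minimal sketch of these two relations follows, assuming consistent units for the empirical 9.7 prefactor; the input numbers are placeholders rather than values from the article:

```python
import math

def conductance(L, a, b, T, m):
    """Molecular-flow conductance of the neutraliser cell,
    C = (9.7 / (L/2)) * sqrt(T/m) * a^2 * b^2 / (a + b),
    with geometric parameters L, a, b, gas temperature T and molecule
    mass m in whatever consistent units the 9.7 prefactor assumes."""
    return (9.7 / (L / 2.0)) * math.sqrt(T / m) * (a**2 * b**2) / (a + b)

def peak_pressure(P_tank, Q, C):
    """Pressure at the cell centre for gas injected at mid-length:
    P0 = P_tank + Q / (2 C)."""
    return P_tank + Q / (2.0 * C)

# Placeholder example: a 3 m long cell with an assumed aperture and throughput.
C = conductance(L=3.0, a=0.2, b=0.5, T=300.0, m=4.0)
print(peak_pressure(P_tank=0.01, Q=10.0, C=C))
```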
Very high gas throughput is commonly adopted, and neutral-beam systems feature custom vacuum pumps among the largest ever built, with pumping speeds in the range of millions of liters per second. If there are no space constraints, a large gas cell length formula_11 is adopted, but this solution is unlikely in future devices due to the limited volume inside the bioshield protecting against the energetic neutron flux (for instance, in the case of JT-60U the N-NBI neutraliser cell is about 15 m long, while in the ITER HNB its length is limited to 3 m).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\lambda"
},
{
"math_id": 1,
"text": "\\lambda = \\frac{E}{18 \\cdot n \\cdot M},"
},
{
"math_id": 2,
"text": "\\sigma"
},
{
"math_id": 3,
"text": "\\tau = \\int n \\,dl,"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "l"
},
{
"math_id": 6,
"text": "\\tau_{\\text{D}^-, \\text{1 MeV}} \\approx 1.4 \\cdot 10^{-16}"
},
{
"math_id": 7,
"text": "Q"
},
{
"math_id": 8,
"text": "C"
},
{
"math_id": 9,
"text": "P_0 = P_\\text{tank} + \\frac{Q}{2C},"
},
{
"math_id": 10,
"text": "C = \\frac{9.7}{L/2} \\sqrt{\\frac{T}{m}} \\frac{a^2 \\cdot b^2}{a + b},"
},
{
"math_id": 11,
"text": "L"
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": "b"
},
{
"math_id": 14,
"text": "m"
},
{
"math_id": 15,
"text": "T"
}
] |
https://en.wikipedia.org/wiki?curid=5664494
|
5665228
|
Quasi-open map
|
Function that maps non-empty open sets to sets that have non-empty interior in its codomain
In topology, a branch of mathematics, a quasi-open map or quasi-interior map is a function which has similar properties to continuous maps.
However, continuous maps and quasi-open maps are not related.
Definition.
A function "f" : "X" → "Y" between topological spaces X and Y is quasi-open if, for any non-empty open set "U" ⊆ "X", the interior of "f" ("'U") in Y is non-empty.
Properties.
Let formula_0 be a map between topological spaces.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f:X\\to Y"
},
{
"math_id": 1,
"text": "f"
}
] |
https://en.wikipedia.org/wiki?curid=5665228
|
56652332
|
Schlömilch's series
|
Fourier series type expansion
Schlömilch's series is a Fourier series type expansion of a twice continuously differentiable function in the interval formula_0 in terms of the Bessel function of the first kind, named after the German mathematician Oskar Schlömilch, who derived the series in 1857. The real-valued function formula_1 has the following expansion:
formula_2
where
formula_3
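As a numerical illustration (a sketch, not part of the original article), the coefficients above can be evaluated by double quadrature and the partial sums compared against the function being expanded. The test function f(x) = x² is an assumed choice; it is convenient because its expansion is also listed in the Examples section below.

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import j0

def schlomilch_coeffs(f, fprime, n_max):
    """Numerically evaluate the Schlomilch coefficients a_0, ..., a_{n_max}
    from the double-integral formulas above; dblquad integrates over
    theta (inner variable) and u (outer variable)."""
    def a0_integrand(theta, u):
        return u * fprime(u * np.sin(theta))
    coeffs = []
    val, _ = dblquad(a0_integrand, 0.0, np.pi, 0.0, np.pi / 2.0)
    coeffs.append(f(0.0) + val / np.pi)
    for n in range(1, n_max + 1):
        def an_integrand(theta, u, n=n):
            return u * np.cos(n * u) * fprime(u * np.sin(theta))
        val, _ = dblquad(an_integrand, 0.0, np.pi, 0.0, np.pi / 2.0)
        coeffs.append(2.0 * val / np.pi)
    return coeffs

# Check with f(x) = x^2, f'(x) = 2x: the partial sums of
# a_0 + sum_n a_n J_0(n x) should approach x^2 on (0, pi).
coeffs = schlomilch_coeffs(lambda x: x**2, lambda x: 2 * x, 10)
x = 1.0
approx = coeffs[0] + sum(coeffs[n] * j0(n * x) for n in range(1, 11))
print(approx)  # approaches x^2 = 1.0 as more terms are included
```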
Examples.
Some examples of Schlömilch's series are the following:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(0,\\pi)"
},
{
"math_id": 1,
"text": "f(x)"
},
{
"math_id": 2,
"text": "f(x) = a_0 + \\sum_{n=1}^\\infty a_n J_0(nx),"
},
{
"math_id": 3,
"text": "\\begin{align}\na_0 &= f(0) + \\frac{1}{\\pi} \\int_0^\\pi \\int_0^{\\pi/2} u f'(u\\sin\\theta)\\ d\\theta\\ du, \\\\\na_n &= \\frac{2}{\\pi} \\int_0^\\pi \\int_0^{\\pi/2} u\\cos nu \\ f'(u\\sin\\theta)\\ d\\theta\\ du.\n\\end{align}"
},
{
"math_id": 4,
"text": "0 = \\frac{1}{2}+\\sum_{n=1}^\\infty (-1)^n J_0(nx)"
},
{
"math_id": 5,
"text": "0<x<\\pi"
},
{
"math_id": 6,
"text": "x=0"
},
{
"math_id": 7,
"text": "x=\\pi"
},
{
"math_id": 8,
"text": "0 = \\frac{1}{2\\Gamma(\\nu+1)}+\\sum_{n=1}^\\infty (-1)^n J_0(nx)/(nx/2)^\\nu"
},
{
"math_id": 9,
"text": "-1/2<\\nu\\leq 1/2"
},
{
"math_id": 10,
"text": "\\nu> 1/2"
},
{
"math_id": 11,
"text": "0<x\\leq \\pi"
},
{
"math_id": 12,
"text": "x = \\frac{\\pi^2}{4}-2\\sum_{n=1,3,...}^\\infty \\frac{J_0(nx)}{n^2}, \\quad 0<x<\\pi."
},
{
"math_id": 13,
"text": "x^2 = \\frac{2\\pi^2}{3} + 8 \\sum_{n=1}^\\infty \\frac{(-1)^n}{n^2}J_0(nx), \\quad -\\pi<x<\\pi."
},
{
"math_id": 14,
"text": "\\frac{1}{x} + \\sum_{m=1}^k\\frac{2}{\\sqrt{x^2-4m^2\\pi^2}} = \\frac{1}{2} + \\sum_{n=1}^\\infty J_0(nx), \\quad 2k\\pi<x<2(k+1)\\pi."
},
{
"math_id": 15,
"text": "(r,z)"
},
{
"math_id": 16,
"text": "1+\\sum_{n=1}^\\infty e^{-nz}J_0(nr)"
},
{
"math_id": 17,
"text": "z>0"
}
] |
https://en.wikipedia.org/wiki?curid=56652332
|
56652586
|
Highly powerful number
|
Positive integers that have a property about their divisors
In elementary number theory, a highly powerful number is a positive integer that satisfies a property introduced by the Indo-Canadian mathematician Mathukumalli V. Subbarao. The set of highly powerful numbers is a proper subset of the set of powerful numbers.
Define prodex(1) = 1. Let formula_0 be a positive integer, such that formula_1, where formula_2 are formula_3 distinct primes in increasing order and formula_4 is a positive integer for formula_5. Define formula_6. (sequence in the OEIS) The positive integer formula_0 is defined to be a highly powerful number if and only if, for every positive integer formula_7, it holds that formula_8
The first 25 highly powerful numbers are: 1, 4, 8, 16, 32, 64, 128, 144, 216, 288, 432, 864, 1296, 1728, 2592, 3456, 5184, 7776, 10368, 15552, 20736, 31104, 41472, 62208, 86400. (sequence in the OEIS)
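A short brute-force sketch (not from the source) computes prodex by trial-division factorization and recovers the start of the list above; it exploits the fact that a new running maximum of prodex is exactly a new highly powerful number.

```python
def prodex(n):
    """Product of the exponents in the prime factorization of n (prodex(1) = 1)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            result *= e
        d += 1
    # A remaining prime factor > 1 has exponent 1 and does not change the product.
    return result

def highly_powerful(limit):
    """Highly powerful numbers up to `limit`: n qualifies when
    prodex(m) < prodex(n) for every m with 1 <= m < n."""
    record, out = 0, []
    for n in range(1, limit + 1):
        p = prodex(n)
        if p > record:
            out.append(n)
            record = p
    return out

print(highly_powerful(1000))
# [1, 4, 8, 16, 32, 64, 128, 144, 216, 288, 432, 864]
```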
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\nn = \\prod_{i=1}^k p_i^{e_{p_i}(n)}\n"
},
{
"math_id": 2,
"text": "p_1, \\ldots , p_k"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "e_{p_i}(n)"
},
{
"math_id": 5,
"text": "i = 1, \\ldots ,k"
},
{
"math_id": 6,
"text": "\\operatorname{prodex}(n) = \\prod_{i=1}^k e_{p_i}(n)"
},
{
"math_id": 7,
"text": "m,\\, 1 \\le m < n"
},
{
"math_id": 8,
"text": "\\operatorname{prodex}(m) < \\operatorname{prodex}(n)."
}
] |
https://en.wikipedia.org/wiki?curid=56652586
|
56660684
|
Dittert conjecture
|
The Dittert conjecture, or Dittert–Hajek conjecture, is a mathematical hypothesis in combinatorics concerning the maximum achieved by a particular function formula_0 of matrices with real, nonnegative entries satisfying a summation condition. The conjecture is due to Eric Dittert and (independently) Bruce Hajek.
Let formula_1 be a square matrix of order formula_2 with nonnegative entries and with formula_3. Its permanent is defined as
formula_4
where the sum extends over all elements formula_5 of the symmetric group.
The Dittert conjecture asserts that the function formula_6 defined by formula_7 is (uniquely) maximized when formula_8, where formula_9 is defined to be the square matrix of order formula_2 with all entries equal to 1.
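As a small numerical check (an illustrative sketch, not part of the article), the permanent and the function formula_6 can be computed directly from their definitions for small matrices; the identity matrix below is just an arbitrary comparison matrix satisfying the summation condition.

```python
import itertools
import numpy as np

def permanent(A):
    """Permanent of a square matrix via the permutation-sum definition."""
    n = A.shape[0]
    return sum(np.prod([A[i, sigma[i]] for i in range(n)])
               for sigma in itertools.permutations(range(n)))

def phi(A):
    """Dittert's function: product of row sums + product of column sums - per(A)."""
    return np.prod(A.sum(axis=1)) + np.prod(A.sum(axis=0)) - permanent(A)

n = 3
J = np.ones((n, n)) / n   # conjectured maximizer; total entry sum equals n
I = np.eye(n)             # another nonnegative matrix with total entry sum n
print(phi(J))  # 2 - n!/n^n = 1.777...
print(phi(I))  # 1.0, smaller, as the conjecture predicts
```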
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "A = [a_{ij}]"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": " \\sum_{i=1}^n \\left ( \\sum_{j=1}^n a_{ij} \\right ) = n "
},
{
"math_id": 4,
"text": " \\operatorname{per}(A)=\\sum_{\\sigma\\in S_n}\\prod_{i=1}^n a_{i,\\sigma(i)}, "
},
{
"math_id": 5,
"text": "\\sigma"
},
{
"math_id": 6,
"text": "\\operatorname{\\phi}(A)"
},
{
"math_id": 7,
"text": "\\prod_{i=1}^n \\left ( \\sum_{j=1}^n a_{ij} \\right ) + \\prod_{j=1}^n \\left ( \\sum_{i=1}^n a_{ij} \\right ) - \\operatorname{per}(A)"
},
{
"math_id": 8,
"text": "A = (1/n) J_n"
},
{
"math_id": 9,
"text": "J_n"
}
] |
https://en.wikipedia.org/wiki?curid=56660684
|
56661772
|
Automorphism group
|
Mathematical group formed from the automorphisms of an object
In mathematics, the automorphism group of an object "X" is the group consisting of automorphisms of "X" under composition of morphisms. For example, if "X" is a finite-dimensional vector space, then the automorphism group of "X" is the group of invertible linear transformations from "X" to itself (the general linear group of "X"). If instead "X" is a group, then its automorphism group formula_0 is the group consisting of all group automorphisms of "X".
Especially in geometric contexts, an automorphism group is also called a symmetry group. A subgroup of an automorphism group is sometimes called a transformation group.
Automorphism groups are studied in a general way in the field of category theory.
Examples.
If "X" is a set with no additional structure, then any bijection from "X" to itself is an automorphism, and hence the automorphism group of "X" in this case is precisely the symmetric group of "X". If the set "X" has additional structure, then it may be the case that not all bijections on the set preserve this structure, in which case the automorphism group will be a subgroup of the symmetric group on "X". Some examples of this include the following:
If "G" is a group acting on a set "X", the action amounts to a group homomorphism from "G" to the automorphism group of "X" and conversely. Indeed, each left "G"-action on a set "X" determines formula_7, and, conversely, each homomorphism formula_8 defines an action by formula_9. This extends to the case when the set "X" has more structure than just a set. For example, if "X" is a vector space, then a group action of "G" on "X" is a "group representation" of the group "G", representing "G" as a group of linear transformations (automorphisms) of "X"; these representations are the main object of study in the field of representation theory.
Here are some other facts about automorphism groups:
In category theory.
Automorphism groups appear very naturally in category theory.
If "X" is an object in a category, then the automorphism group of "X" is the group consisting of all the invertible morphisms from "X" to itself. It is the unit group of the endomorphism monoid of "X". (For some examples, see PROP.)
If formula_10 are objects in some category, then the set formula_11 of all formula_12 is a left formula_13-torsor. In practical terms, this says that a different choice of a base point of formula_11 differs unambiguously by an element of formula_13, or that each choice of a base point is precisely a choice of a trivialization of the torsor.
If formula_15 and formula_16 are objects in categories formula_17 and formula_18, and if formula_19 is a functor mapping formula_15 to formula_16, then formula_20 induces a group homomorphism formula_21, as it maps invertible morphisms to invertible morphisms.
In particular, if "G" is a group viewed as a category with a single object * or, more generally, if "G" is a groupoid, then each functor formula_22, "C" a category, is called an action or a representation of "G" on the object formula_23, or the objects formula_24. Those objects are then said to be formula_3-objects (as they are acted by formula_3); cf. formula_25-object. If formula_26 is a module category like the category of finite-dimensional vector spaces, then formula_3-objects are also called formula_3-modules.
Automorphism group functor.
Let formula_27 be a finite-dimensional vector space over a field "k" that is equipped with some algebraic structure (that is, "M" is a finite-dimensional algebra over "k"). It can be, for example, an associative algebra or a Lie algebra.
Now, consider "k"-linear maps formula_28 that preserve the algebraic structure: they form a vector subspace formula_29 of formula_30. The unit group of formula_29 is the automorphism group formula_31. When a basis on "M" is chosen, formula_30 is the space of square matrices and formula_29 is the zero set of some polynomial equations, and the invertibility is again described by polynomials. Hence, formula_31 is a linear algebraic group over "k".
Now base extensions applied to the above discussion determines a functor: namely, for each commutative ring "R" over "k", consider the "R"-linear maps formula_32 preserving the algebraic structure: denote it by formula_33. Then the unit group of the matrix ring formula_33 over "R" is the automorphism group formula_34 and formula_35 is a group functor: a functor from the category of commutative rings over "k" to the category of groups. Even better, it is represented by a scheme (since the automorphism groups are defined by polynomials): this scheme is called the automorphism group scheme and is denoted by formula_31.
In general, however, an automorphism group functor may not be represented by a scheme.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\operatorname{Aut}(X)"
},
{
"math_id": 1,
"text": "L/K"
},
{
"math_id": 2,
"text": "\\operatorname{PGL}_n(k)."
},
{
"math_id": 3,
"text": "G"
},
{
"math_id": 4,
"text": "(\\mathbb{Z}/n\\mathbb{Z})^\\times"
},
{
"math_id": 5,
"text": "\\overline{a} \\mapsto \\sigma_a \\in G, \\, \\sigma_a(x) = x^a"
},
{
"math_id": 6,
"text": "\\mathfrak{g}"
},
{
"math_id": 7,
"text": "G \\to \\operatorname{Aut}(X), \\, g \\mapsto \\sigma_g, \\, \\sigma_g(x) = g \\cdot x"
},
{
"math_id": 8,
"text": "\\varphi: G \\to \\operatorname{Aut}(X)"
},
{
"math_id": 9,
"text": "g \\cdot x = \\varphi(g)x"
},
{
"math_id": 10,
"text": "A, B"
},
{
"math_id": 11,
"text": "\\operatorname{Iso}(A, B)"
},
{
"math_id": 12,
"text": "A \\mathrel{\\overset{\\sim}\\to} B"
},
{
"math_id": 13,
"text": "\\operatorname{Aut}(B)"
},
{
"math_id": 14,
"text": "\\operatorname{Aut}(P) \\hookrightarrow \\operatorname{GL}_n(R)"
},
{
"math_id": 15,
"text": "X_1"
},
{
"math_id": 16,
"text": "X_2"
},
{
"math_id": 17,
"text": "C_1"
},
{
"math_id": 18,
"text": "C_2"
},
{
"math_id": 19,
"text": "F: C_1 \\to C_2"
},
{
"math_id": 20,
"text": "F"
},
{
"math_id": 21,
"text": "\\operatorname{Aut}(X_1) \\to \\operatorname{Aut}(X_2)"
},
{
"math_id": 22,
"text": "F: G \\to C"
},
{
"math_id": 23,
"text": "F(*)"
},
{
"math_id": 24,
"text": "F(\\operatorname{Obj}(G))"
},
{
"math_id": 25,
"text": "\\mathbb{S}"
},
{
"math_id": 26,
"text": "C"
},
{
"math_id": 27,
"text": "M"
},
{
"math_id": 28,
"text": "M \\to M"
},
{
"math_id": 29,
"text": "\\operatorname{End}_{\\text{alg}}(M)"
},
{
"math_id": 30,
"text": "\\operatorname{End}(M)"
},
{
"math_id": 31,
"text": "\\operatorname{Aut}(M)"
},
{
"math_id": 32,
"text": "M \\otimes R \\to M \\otimes R"
},
{
"math_id": 33,
"text": "\\operatorname{End}_{\\text{alg}}(M \\otimes R)"
},
{
"math_id": 34,
"text": "\\operatorname{Aut}(M \\otimes R)"
},
{
"math_id": 35,
"text": "R \\mapsto \\operatorname{Aut}(M \\otimes R)"
}
] |
https://en.wikipedia.org/wiki?curid=56661772
|
56669979
|
Rudin's conjecture
|
Mathematical conjecture
Rudin's conjecture is a mathematical conjecture in additive combinatorics and elementary number theory about an upper bound for the number of squares in finite arithmetic progressions. The conjecture, which has applications in the theory of trigonometric series, was first stated by Walter Rudin in his 1960 paper "Trigonometric series with gaps".
For positive integers formula_0 define the expression formula_1 to be the number of perfect squares in the arithmetic progression formula_2, for formula_3, and define formula_4 to be the maximum of the set {"Q"("N"; "q", "a") : "q", "a" ≥ 1}. The conjecture asserts (in big O notation) that formula_5 and in its stronger form that, if formula_6, formula_7.
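A brute-force sketch (illustrative only) of the quantity formula_1: it counts squares in the progression directly, and scanning small values of "q" and "a" gives only a lower bound on formula_4, since the true maximum ranges over all "q", "a" ≥ 1.

```python
import math

def Q(N, q, a):
    """Number of perfect squares among a, a+q, ..., a+(N-1)q."""
    count = 0
    for n in range(N):
        v = q * n + a
        r = math.isqrt(v)
        if r * r == v:
            count += 1
    return count

def Q_probe(N, search_bound=200):
    """Lower bound for Q(N) obtained by scanning q, a up to `search_bound`."""
    return max(Q(N, q, a)
               for q in range(1, search_bound + 1)
               for a in range(1, search_bound + 1))

# The strong form of the conjecture singles out the progression 24n + 1:
print(Q(8, 24, 1))  # squares 1, 25, 49, 121, 169 -> 5
```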
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N, q, a"
},
{
"math_id": 1,
"text": "Q(N; q, a)"
},
{
"math_id": 2,
"text": "qn + a"
},
{
"math_id": 3,
"text": "n = 0, 1, \\ldots, N-1"
},
{
"math_id": 4,
"text": "Q(N)"
},
{
"math_id": 5,
"text": "Q(N) = O(\\sqrt { N })"
},
{
"math_id": 6,
"text": "N > 6"
},
{
"math_id": 7,
"text": "Q(N) = Q(N; 24, 1)"
}
] |
https://en.wikipedia.org/wiki?curid=56669979
|
5667758
|
Signal-to-noise statistic
|
In mathematics, the signal-to-noise statistic distance between two vectors "a" and "b", with mean values formula_0 and formula_1 and standard deviations formula_2 and formula_3 respectively, is:
formula_4
In the case of Gaussian-distributed data and unbiased class distributions, this statistic can be related to classification accuracy given an ideal linear discrimination, and a decision boundary can be derived.
This distance is frequently used to identify vectors that differ significantly. One usage is in bioinformatics, to locate genes that are differentially expressed in microarray experiments.
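A minimal sketch of the statistic follows; the sample data and the use of the sample standard deviation (ddof=1) are illustrative assumptions, since the definition above does not fix a particular estimator.

```python
import numpy as np

def signal_to_noise_distance(a, b):
    """Signal-to-noise statistic distance:
    (mean(a) - mean(b)) / (std(a) + std(b))."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return (a.mean() - b.mean()) / (a.std(ddof=1) + b.std(ddof=1))

# Toy example: expression levels of one gene under two conditions.
print(signal_to_noise_distance([5.1, 4.8, 5.3, 5.0], [3.9, 4.2, 4.0, 4.1]))
```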
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mu _a"
},
{
"math_id": 1,
"text": "\\mu _b"
},
{
"math_id": 2,
"text": "\\sigma _a"
},
{
"math_id": 3,
"text": "\\sigma _b"
},
{
"math_id": 4,
"text": "D_{sn} = {(\\mu _a - \\mu _b) \\over (\\sigma _a + \\sigma _b)}"
}
] |
https://en.wikipedia.org/wiki?curid=5667758
|
56679268
|
Eleny Ionel
|
Romanian American mathematician
Eleny-Nicoleta Ionel (born April 1969) is a Romanian mathematician whose research concerns symplectic geometry, including the study of the Gromov–Witten invariants and Gopakumar–Vafa invariants. Among her most significant results are the proofs of the Gopakumar–Vafa conjectures (joint with Thomas H. Parker et al.), and the proof of Getzler's conjecture, asserting vanishing in codimension at least "g" of the tautological ring of the moduli space of genus-"g" curves.
She is a professor of mathematics at Stanford University, where she was chair of the mathematics department from 2016 to 2019.
Education and career.
Ionel is from Iași. She is the daughter of Adrian Ionel, a professor at the Ion Ionescu de la Brad University of Agricultural Sciences and Veterinary Medicine of Iași. She attended the prestigious Costache Negruzzi National College, graduating in 1987. She earned a bachelor's degree from Alexandru Ioan Cuza University in 1991, and completed her Ph.D. in 1996 from Michigan State University. Her dissertation, "Genus One Enumerative Invariants in formula_0", was supervised by Thomas H. Parker.
After postdoctoral research at the Mathematical Sciences Research Institute in Berkeley, California and a position as C. L. E. Moore instructor at the Massachusetts Institute of Technology, she joined the University of Wisconsin–Madison faculty in 1998, and moved to Stanford in 2004.
Recognition.
Ionel is a Sloan Research Fellow and a Simons Fellow. She was an invited speaker at the International Congress of Mathematicians in 2002. She was selected as a Fellow of the American Mathematical Society in the 2020 Class, for "contributions to symplectic geometry and the geometric analysis approach to Gromov–Witten Theory".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{P}^n"
}
] |
https://en.wikipedia.org/wiki?curid=56679268
|
5667970
|
Side reaction
|
A side reaction is a chemical reaction that occurs at the same time as the actual main reaction, but to a lesser extent. It leads to the formation of by-products, so that the yield of the main product is reduced:
<chem>{A} + B ->[{k_1}] P1</chem>
<chem>{A} + C ->[{k_2}] P2</chem>
P1 is the main product if k1 > k2. The by-product P2 is generally undesirable and must be separated from the actual main product (usually in a costly process).
In organic synthesis.
B and C from the above equations usually represent different compounds. However, they could also just be different positions in the same molecule.
A side reaction is also referred to as a competing reaction when different compounds (B, C) compete for another reactant (A). If the side reaction occurs about as often as the main reaction, one speaks of parallel reactions (especially in kinetics, see below).
There may also be more complicated relationships: compound A could react reversibly but quickly to substance B (with rate k1), or irreversibly but slowly (k1 > k−1 ≫ k2) to substance C:
<chem>B <=> A ->[{k_2}] C</chem>
The reaction to substance C is assumed to be irreversible, as C is thermodynamically very stable. In this case, B is the kinetic product and C is the thermodynamic product of the reaction. If the reaction is carried out at low temperature and stopped after a short time, one speaks of kinetic control; primarily the kinetic product B is formed. If the reaction is carried out at high temperature and for a long time (in which case the activation energy necessary for the reaction to C is available, and C is progressively formed over time), one speaks of thermodynamic control; the thermodynamic product C is primarily formed.
Conditions for side reactions.
In organic synthesis, elevated temperatures usually lead to more side products. Side products are usually undesirable, so low temperatures are preferred ("mild conditions"). The ratio between competing reactions may be influenced by a change in temperature because their activation energies are different in most cases: reactions with high activation energy are accelerated more strongly by an increase in temperature than those with low activation energy. The state of equilibrium also depends on temperature.
Detection reactions can be distorted by side reactions.
Kinetics.
Side reactions are also described in reaction kinetics, a branch of physical chemistry. Side reactions are understood as complex reactions, since the overall reaction (main reaction + side reaction) is composed of several (at least two) elementary reactions. Other complex reactions are competing reactions, parallel reactions, consecutive reactions, chain reactions, reversible reactions, etc.
If one reaction occurs much faster than the other (k1 > k2), the faster one (k1) is called the main reaction and the other (k2) the side reaction. If both reactions proceed at roughly the same speed (k1 ≅ k2), one speaks of parallel reactions.
If the reactions <chem>{A} + B ->[{k_1}] P1</chem> and <chem>{A} + C ->[{k_2}] P2</chem> are irreversible (without reverse reaction), then the ratio of P1 to P2 corresponds to the relative reactivity of B and C towards A:
formula_0
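A one-line sketch of this ratio follows; the rate constants and concentrations are made-up illustration values, not data from the article.

```python
def product_ratio(k1, k2, conc_B, conc_C):
    """Ratio [P1]/[P2] for two irreversible competing reactions
    A + B -> P1 (rate constant k1) and A + C -> P2 (rate constant k2)."""
    return (k1 * conc_B) / (k2 * conc_C)

# Assumed example: k1 ten times larger than k2 and equal concentrations
# of B and C, so P1 is favoured 10:1.
print(product_ratio(1.0, 0.1, 0.5, 0.5))  # 10.0
```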
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac\\ce{[P1]}\\ce{[P2]} = \\frac{k_1 [\\ce B]}{k_2 [\\ce C]}"
}
] |
https://en.wikipedia.org/wiki?curid=5667970
|
56680625
|
Eva Viehmann
|
German mathematician
Eva Viehmann (born in 1980) is a German mathematician who holds a professorial chair in the arithmetic geometry and representation theory research group at the University of Münster. Before that she was a professor working on arithmetic geometry at the Technical University of Munich.
Viehmann studied at the University of Bonn, where her 2005 doctoral thesis, "On affine Deligne-Lusztig varieties for formula_0" (supervised by Michael Rapoport), won the Felix Hausdorff Memorial Award. She earned her habilitation in 2010, and in 2012 was appointed to her professorship at the Technical University of Munich.
Viehmann won the 2012 von Kaven Award in mathematics of the Deutsche Forschungsgemeinschaft for her work on the Langlands program.
She was an invited speaker at the 2018 International Congress of Mathematicians, speaking in the section on Lie Theory and Generalizations.
She was also the Emmy Noether Lecturer of the German Mathematical Society in 2018. In 2021 she became a member of the German Academy of Sciences Leopoldina.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\operatorname{GL}_n"
}
] |
https://en.wikipedia.org/wiki?curid=56680625
|
56681224
|
Virginia Vassilevska Williams
|
Theoretical computer scientist
Virginia Vassilevska Williams (née Virginia Panayotova Vassilevska) is a theoretical computer scientist and mathematician known for her research in computational complexity theory and algorithms. She is currently the Steven and Renee Finn Career Development Associate Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. She is notable for her breakthrough results in fast matrix multiplication, for her work on dynamic algorithms, and for helping to develop the field of fine-grained complexity.
Education and career.
Williams is originally from Bulgaria, and attended a German-language high school in Sofia. She graduated from the California Institute of Technology in 2003, and completed her Ph.D. at Carnegie Mellon University in 2008. Her dissertation, "Efficient Algorithms for Path Problems in Weighted Graphs", was supervised by Guy Blelloch.
After postdoctoral research at the Institute for Advanced Study and University of California, Berkeley, Williams became an assistant professor of computer science at Stanford University in 2013. She moved to MIT as an associate professor in 2017.
Research.
In 2011, Williams found an algorithm for multiplying two formula_0 matrices in time formula_1. This improved a previous time bound for matrix multiplication algorithms, the Coppersmith–Winograd algorithm, that had stood as the best known for 24 years. Her initial improvement was independent of Andrew Stothers, who also improved the same bound a year earlier; after learning of Stothers' work, she combined ideas from both methods to improve his bound as well. As of 2023, her work with her collaborators also establishes the current best-known time bound for matrix multiplication, formula_2.
Recognition.
Williams was an NSF Computing Innovation Fellow for 2009–2011, and won a Sloan Research Fellowship in 2017. She was an invited speaker at the 2018 International Congress of Mathematicians, speaking in the section on Mathematical Aspects of Computer Science.
Personal life.
Williams is the daughter of applied mathematicians Panayot Vassilevski and Tanya Kostova-Vassilevska. She is married to Ryan Williams, also a computer science professor at MIT; they have worked together in the field of fine-grained complexity.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n\\times n"
},
{
"math_id": 1,
"text": "O(n^{2.373})"
},
{
"math_id": 2,
"text": "O(n^{2.371552})"
}
] |
https://en.wikipedia.org/wiki?curid=56681224
|
56682148
|
Stochastic scheduling
|
Problems involving random attributes
Stochastic scheduling concerns scheduling problems involving random attributes, such as random processing times, random due dates, random weights, and stochastic machine breakdowns. Major applications arise in manufacturing systems, computer systems, communication systems, logistics and transportation, and machine learning, among others.
Introduction.
The objective of the stochastic scheduling problems can be regular objectives such as minimizing the total flowtime, the makespan, or the total tardiness cost of missing the due dates; or can be irregular objectives such as minimizing both earliness and tardiness costs of completing the jobs, or the total cost of scheduling tasks under likely arrival of a disastrous event such as a severe typhoon.
The performance of such systems, as evaluated by a regular performance measure or an irregular performance measure, can be significantly affected by the scheduling policy adopted to prioritize over time the access of jobs to resources. The goal of stochastic scheduling is to identify scheduling policies that can optimize the objective.
Stochastic scheduling problems can be classified into three broad types: problems concerning the scheduling of a batch of stochastic jobs, multi-armed bandit problems, and problems concerning the scheduling of queueing systems. These three types are usually studied under the assumption that complete information is available, in the sense that the probability distributions of the random variables involved are known in advance. When such distributions are not fully specified and there are multiple competing distributions to model the random variables of interest, the problem is referred to as one of incomplete information. The Bayesian method has been applied to treat stochastic scheduling problems with incomplete information.
Scheduling of a batch of stochastic jobs.
In this class of models, a fixed batch of formula_0 jobs with random process times, whose distributions are known, have to be completed by a set of formula_1 machines to optimize a given performance objective.
The simplest model in this class is the problem of sequencing a set of formula_0 jobs on a single machine to minimize the expected weighted flowtime. Job processing times are independent random variables with a general distribution formula_2 with mean formula_3 for job formula_4. Admissible policies must be nonanticipative (scheduling decisions are based on the system's history up to and including the present time) and nonpreemptive (processing of a job must proceed uninterruptedly to completion once started).
Let formula_5 denote the cost rate incurred per unit time in the system for job formula_4, and let formula_6 denote its random completion time. Let formula_7 denote the class of all admissible policies, and let formula_8 denote expectation under policy formula_9. The problem can be stated as
formula_10
The optimal solution in the special deterministic case is given by the Shortest Weighted Processing Time rule of Smith: sequence jobs in nonincreasing order of the priority index formula_11. The natural extension of Smith's rule is also optimal to the above stochastic model.
In general, the rule that assigns higher priority to jobs with shorter expected processing time is optimal for the flowtime objective under the following assumptions: when all the job processing time distributions are exponential; when all the jobs have a common general processing time distribution with a nondecreasing hazard rate function; and when job processing time distributions are stochastically ordered.
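A minimal sketch of the unweighted version of this rule follows (the job data are assumed for illustration): for a fixed nonpreemptive sequence on a single machine, linearity of expectation gives each E[C_i] as the sum of the expected processing times of the jobs scheduled up to and including job i, so only the means enter, and the shortest-expected-processing-time (SEPT) order minimizes the expected total flowtime under the assumptions listed above.

```python
def expected_total_flowtime(expected_times, sequence):
    """Expected total flowtime sum_i E[C_i] on a single machine for a fixed,
    nonpreemptive sequence; by linearity of expectation it depends only on
    the mean processing times."""
    total, elapsed = 0.0, 0.0
    for job in sequence:
        elapsed += expected_times[job]
        total += elapsed
    return total

expected_times = [3.0, 1.0, 2.0]  # assumed E[p_i] for jobs 0, 1, 2
sept = sorted(range(3), key=lambda i: expected_times[i])  # SEPT order
print(sept, expected_total_flowtime(expected_times, sept))  # [1, 2, 0] 10.0
print(expected_total_flowtime(expected_times, [0, 1, 2]))   # 13.0, worse
```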
Multi-armed bandit problems.
Multi-armed bandit models form a particular type of optimal resource allocation (usually working with time assignment), in which a number of machines or processors are to be allocated to serve a set of competing projects (termed as arms). In the typical framework, the system consists of a single machine and a set of stochastically independent projects, which will contribute random rewards continuously or at certain discrete time points, when they are served. The objective is to maximize the expected total discounted rewards over all dynamically revisable policies.
The first version of multi-armed bandit problems was formulated in the area of sequential designs by Robbins (1952). For two decades afterwards there was no essential progress, until Gittins and his collaborators made celebrated research achievements in Gittins (1979), Gittins and Jones (1974), Gittins and Glazebrook (1977), and Whittle (1980) under the Markov and semi-Markov settings. In this early model, each arm is modeled by a Markov or semi-Markov process in which the time points of making state transitions are decision epochs. The machine can at each epoch pick an arm to serve, with a reward represented as a function of the current state of the arm being processed, and the solution is characterized by allocation indices assigned to each state that depend only on the states of the arms. These indices are therefore known as Gittins indices and the optimal policies are usually called Gittins index policies, in recognition of his contributions.
Soon after the seminal paper of Gittins, the extension to branching bandit problem to model stochastic arrivals (also known as the open bandit or arm acquiring bandit problem) was investigated by Whittle (1981). Other extensions include the models of restless bandit, formulated by Whittle (1988), in which each arm evolves restlessly according to two different mechanisms (idle fashion and busy fashion), and the models with switching costs/delays by Van Oyen et al. (1992), who showed that no index policy is optimal when switching between arms incurs costs/delays.
Scheduling of queueing systems.
Models in this class are concerned with the problems of designing optimal service disciplines in queueing systems, where the jobs to be completed arrive at random epochs over time, instead of being available at the start. The main class of models in this setting is that of multiclass queueing networks (MQNs), widely applied as versatile models of computer communications and manufacturing systems.
The simplest types of MQNs involve scheduling a number of job classes in a single server. Similarly as in the two model categories discussed previously, simple priority-index rules have been shown to be optimal for a variety of such models.
More general MQN models involve features such as changeover times for changing service from one job class to another (Levy and Sidi, 1990), or multiple processing stations, which provide service to corresponding nonoverlapping subsets of job classes. Due to the intractability of such models, researchers have aimed to design relatively simple heuristic policies which achieve a performance close to optimal.
Stochastic scheduling with incomplete information.
The majority of studies on stochastic scheduling models have largely been established based on the assumption of complete information, in the sense that the probability distributions of the random variables involved, such as the processing times and the machine up/downtimes, are completely specified a priori.
However, there are circumstances where the information is only partially available. Examples of scheduling with incomplete information can be found in environmental clean-up, project management, petroleum exploration, sensor scheduling in mobile robots, and cycle time modeling, among many others.
As a result of incomplete information, there may be multiple competing distributions to model the random variables of interest. An effective approach is developed by Cai et al. (2009), to tackle this problem, based on Bayesian information update. It identifies each competing distribution by a realization of a random variable, say formula_12. Initially, formula_12 has a prior distribution based on historical information or assumption (which may be non-informative if no historical information is available). Information on formula_12 may be updated after realizations of the random variables are observed. A key concern in decision making is how to utilize the updated information to refine and enhance the decisions. When the scheduling policy is static in the sense that it does not change over time, optimal sequences are identified to minimize the expected discounted reward and stochastically minimize the number of tardy jobs under a common exponential due date. When the scheduling policy is dynamic in the sense that it can make adjustments during the process based on up-to-date information, posterior Gittins index is developed to find the optimal policy that minimizes the expected discounted reward in the class of dynamic policies.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " n "
},
{
"math_id": 1,
"text": "m "
},
{
"math_id": 2,
"text": " G_i(\\cdot) "
},
{
"math_id": 3,
"text": "p_i"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "w_i\\ge 0"
},
{
"math_id": 6,
"text": "\\tilde{C}_i"
},
{
"math_id": 7,
"text": "\\Pi"
},
{
"math_id": 8,
"text": "E_{\\pi}[\\cdot]"
},
{
"math_id": 9,
"text": "\\pi\\in \\Pi"
},
{
"math_id": 10,
"text": "\n\\min_{\\pi\\in \\Pi} w_1E_{\\pi}[\\tilde{C}_1]+\\cdots+w_nE_{\\pi}[\\tilde{C}_n].\n"
},
{
"math_id": 11,
"text": "w_ip_i"
},
{
"math_id": 12,
"text": "\\Theta"
}
] |
https://en.wikipedia.org/wiki?curid=56682148
|
56683458
|
Brennan conjecture
|
The Brennan conjecture is a mathematical hypothesis (in complex analysis) for estimating (under specified conditions) the integral powers of the moduli of the derivatives of conformal maps into the open unit disk. The conjecture was formulated by James E. Brennan in 1978.
Let W be a simply connected open subset of formula_0 with at least two boundary points in the extended complex plane. Let formula_1 be a conformal map of W onto the open unit disk. The Brennan conjecture states that
formula_2 whenever formula_3. Brennan proved the result when formula_4 for some constant formula_5. Bertilsson proved in 1999 that the result holds when formula_6, but the full result remains open.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbb{C}"
},
{
"math_id": 1,
"text": "\\varphi"
},
{
"math_id": 2,
"text": "\\int_W |\\varphi\\ '|^p\\, \\mathrm{d}x\\, \\mathrm{d}y < \\infty"
},
{
"math_id": 3,
"text": "4/3 < p < 4"
},
{
"math_id": 4,
"text": "4/3 < p < p_0"
},
{
"math_id": 5,
"text": "p_0 > 3"
},
{
"math_id": 6,
"text": "4/3 < p < 3.422"
}
] |
https://en.wikipedia.org/wiki?curid=56683458
|
56685825
|
Unscented optimal control
|
Mathematics concept
In mathematics, unscented optimal control combines the notion of the unscented transform with deterministic optimal control to address a class of uncertain optimal control problems. It is a specific application of tychastic optimal control theory, which is a generalization of Riemann–Stieltjes optimal control theory, a concept introduced by Ross and his coworkers.
Mathematical description.
Suppose that the initial state formula_0 of a dynamical system,
formula_1
is an uncertain quantity. Let formula_2 be the sigma points. Then sigma-copies of the dynamical system are given by,
formula_3
Applying standard deterministic optimal control principles to this ensemble generates an unscented optimal control. Unscented optimal control is a special case of tychastic optimal control theory. According to Aubin and Ross, tychastic processes differ from stochastic processes in that a tychastic process is conditionally deterministic.
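A minimal sketch of the construction is shown below. The symmetric 2n+1 sigma-point scheme, the toy dynamics, and all numerical values are assumptions chosen for illustration; the article does not prescribe a particular sigma-point set.

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Symmetric 2n+1 sigma-point set for an uncertain initial state
    (one common unscented-transform construction)."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean]
    for j in range(n):
        pts.append(mean + S[:, j])
        pts.append(mean - S[:, j])
    return np.array(pts)

def propagate_ensemble(points, f, u, t0, t1, steps=100):
    """Propagate every sigma copy of dx/dt = f(x, u, t) with the same control u
    using forward Euler; a cost over the resulting ensemble could then be
    optimized exactly as in deterministic optimal control."""
    dt = (t1 - t0) / steps
    X = points.copy()
    for k in range(steps):
        t = t0 + k * dt
        X = X + dt * np.array([f(x, u, t) for x in X])
    return X

# Toy controlled dynamics (assumed for illustration): x1' = x2, x2' = u - 0.1 x2.
f = lambda x, u, t: np.array([x[1], u - 0.1 * x[1]])
pts = sigma_points(np.array([1.0, 0.0]), np.diag([0.04, 0.01]))
print(propagate_ensemble(pts, f, u=-0.5, t0=0.0, t1=1.0))
```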
Applications.
Unscented optimal control theory has been applied to UAV guidance, spacecraft attitude control, air-traffic control, and low-thrust trajectory optimization.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x^0"
},
{
"math_id": 1,
"text": "\\dot{x} = f(x, u, t)"
},
{
"math_id": 2,
"text": "\\Chi^i"
},
{
"math_id": 3,
"text": "\\dot\\Chi^i = f(\\Chi^i, u, t)"
}
] |
https://en.wikipedia.org/wiki?curid=56685825
|
566869
|
Quasi-arithmetic mean
|
Generalization of means
In mathematics and statistics, the quasi-arithmetic mean or generalised "f"-mean or Kolmogorov-Nagumo-de Finetti mean is one generalisation of the more familiar means such as the arithmetic mean and the geometric mean, using a function formula_0. It is also called Kolmogorov mean after Soviet mathematician Andrey Kolmogorov. It is a broader generalization than the regular generalized mean.
Definition.
If "f" is a function which maps an interval formula_1 of the real line to the real numbers, and is both continuous and injective, the "f"-mean of formula_2 numbers
formula_3
is defined as formula_4, which can also be written
formula_5
We require "f" to be injective in order for the inverse function formula_6 to exist. Since formula_0 is defined over an interval, formula_7 lies within the domain of formula_6.
Since "f" is injective and continuous, it follows that "f" is a strictly monotonic function, and therefore that the "f"-mean is neither larger than the largest number of the tuple formula_8 nor smaller than the smallest number in formula_8.
Properties.
The following properties hold for formula_21 for any single function formula_0:
Symmetry: The value of formula_21 is unchanged if its arguments are permuted.
Idempotency: for all "x", formula_22.
Monotonicity: formula_21 is monotonic in each of its arguments (since formula_0 is monotonic).
Continuity: formula_21 is continuous in each of its arguments (since formula_0 is continuous).
Replacement: Subsets of elements can be averaged a priori, without altering the mean, given that the multiplicity of elements is maintained. With formula_23 it holds:
formula_24
Partitioning: The computation of the mean can be split into computations of equal-sized sub-blocks: formula_25
Self-distributivity: For any quasi-arithmetic mean formula_26 of two variables: formula_27.
Mediality: For any quasi-arithmetic mean formula_26 of two variables: formula_28.
Balancing: For any quasi-arithmetic mean formula_26 of two variables: formula_29.
Central limit theorem: Under regularity conditions, for a sufficiently large sample, formula_30 is approximately normal.
A similar result is available for Bajraktarević means, which are generalizations of quasi-arithmetic means.
Scale-invariance: The quasi-arithmetic mean is invariant with respect to offsets and scaling of formula_0: formula_31.
Characterization.
There are several different sets of properties that characterize the quasi-arithmetic mean (i.e., each function that satisfies these properties is an "f"-mean for some function "f").
Homogeneity.
Means are usually homogeneous, but for most functions formula_0, the "f"-mean is not.
Indeed, the only homogeneous quasi-arithmetic means are the power means (including the geometric mean); see Hardy–Littlewood–Pólya, page 68.
The homogeneity property can be achieved by normalizing the input values by some (homogeneous) mean formula_32.
formula_33
However this modification may violate monotonicity and the partitioning property of the mean.
Generalizations.
Consider a Legendre-type strictly convex function formula_34. Then the gradient map formula_35 is globally invertible and the weighted multivariate quasi-arithmetic mean is defined by
formula_36, where formula_37 is a normalized weight vector (formula_38 by default for a balanced average). From the convex duality, we get a dual quasi-arithmetic mean formula_39 associated to the quasi-arithmetic mean formula_40.
For example, take formula_41 for formula_42 a symmetric positive-definite matrix.
The pair of matrix quasi-arithmetic means yields the matrix harmonic mean:
formula_43
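A short numerical sketch of this construction (the two positive-definite matrices are arbitrary test values): since the gradient map of F(X) = −log det(X) sends X to −X⁻¹, the weighted mean reduces to the (weighted) matrix harmonic mean, matching the formula above.

```python
import numpy as np

def matrix_quasi_arithmetic_mean(thetas, weights=None):
    """Quasi-arithmetic mean induced by F(X) = -log det(X) on symmetric
    positive-definite matrices: grad F(X) = -X^{-1}, so the mean equals
    (sum_i w_i X_i^{-1})^{-1}, i.e. the weighted matrix harmonic mean."""
    if weights is None:
        weights = [1.0 / len(thetas)] * len(thetas)
    acc = sum(w * np.linalg.inv(X) for w, X in zip(weights, thetas))
    return np.linalg.inv(acc)

A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.0, 0.0], [0.0, 3.0]])
print(matrix_quasi_arithmetic_mean([A, B]))
# Equal weights reproduce 2 (A^{-1} + B^{-1})^{-1}:
print(2 * np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B)))
```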
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "I"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "x_1, \\dots, x_n \\in I"
},
{
"math_id": 4,
"text": "M_f(x_1, \\dots, x_n) = f^{-1}\\left( \\frac{f(x_1)+ \\cdots + f(x_n)}n \\right)"
},
{
"math_id": 5,
"text": " M_f(\\vec x)= f^{-1}\\left(\\frac{1}{n} \\sum_{k=1}^{n}f(x_k) \\right)"
},
{
"math_id": 6,
"text": "f^{-1}"
},
{
"math_id": 7,
"text": "\\frac{f(x_1)+ \\cdots + f(x_n)}n"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "I = \\mathbb{R}"
},
{
"math_id": 10,
"text": "f(x) = x"
},
{
"math_id": 11,
"text": "x\\mapsto a\\cdot x + b"
},
{
"math_id": 12,
"text": "a"
},
{
"math_id": 13,
"text": "I = \\mathbb{R}^+"
},
{
"math_id": 14,
"text": "f(x) = \\log(x)"
},
{
"math_id": 15,
"text": "f(x) = \\frac{1}{x}"
},
{
"math_id": 16,
"text": "f(x) = x^p"
},
{
"math_id": 17,
"text": "p"
},
{
"math_id": 18,
"text": "f(x) = \\exp(x)"
},
{
"math_id": 19,
"text": "M_f(x_1, \\dots, x_n) = \\mathrm{LSE}(x_1, \\dots, x_n)-\\log(n)"
},
{
"math_id": 20,
"text": "-\\log(n)"
},
{
"math_id": 21,
"text": "M_f"
},
{
"math_id": 22,
"text": "M_f(x,\\dots,x) = x"
},
{
"math_id": 23,
"text": "m=M_f(x_1,\\dots,x_k)"
},
{
"math_id": 24,
"text": "M_f(x_1,\\dots,x_k,x_{k+1},\\dots,x_n) = M_f(\\underbrace{m,\\dots,m}_{k \\text{ times}},x_{k+1},\\dots,x_n)"
},
{
"math_id": 25,
"text": "\nM_f(x_1,\\dots,x_{n\\cdot k}) =\n M_f(M_f(x_1,\\dots,x_{k}),\n M_f(x_{k+1},\\dots,x_{2\\cdot k}),\n \\dots,\n M_f(x_{(n-1)\\cdot k + 1},\\dots,x_{n\\cdot k}))\n"
},
{
"math_id": 26,
"text": "M"
},
{
"math_id": 27,
"text": "M(x,M(y,z))=M(M(x,y),M(x,z))"
},
{
"math_id": 28,
"text": "M(M(x,y),M(z,w))=M(M(x,z),M(y,w))"
},
{
"math_id": 29,
"text": "M\\big(M(x, M(x, y)), M(y, M(x, y))\\big)=M(x, y)"
},
{
"math_id": 30,
"text": "\\sqrt{n}\\{M_f(X_1, \\dots, X_n) - f^{-1}(E_f(X_1, \\dots, X_n))\\}"
},
{
"math_id": 31,
"text": "\\forall a\\ \\forall b\\ne0 ((\\forall t\\ g(t)=a+b\\cdot f(t)) \\Rightarrow \\forall x\\ M_f (x) = M_g (x)"
},
{
"math_id": 32,
"text": "C"
},
{
"math_id": 33,
"text": "M_{f,C} x = C x \\cdot f^{-1}\\left( \\frac{f\\left(\\frac{x_1}{C x}\\right) + \\cdots + f\\left(\\frac{x_n}{C x}\\right)}{n} \\right)"
},
{
"math_id": 34,
"text": "F"
},
{
"math_id": 35,
"text": "\\nabla F"
},
{
"math_id": 36,
"text": "\nM_{\\nabla F}(\\theta_1,\\ldots,\\theta_n;w) = {\\nabla F}^{-1}\\left(\\sum_{i=1}^n w_i \\nabla F(\\theta_i)\\right)\n"
},
{
"math_id": 37,
"text": "w"
},
{
"math_id": 38,
"text": "w_i=\\frac{1}{n}"
},
{
"math_id": 39,
"text": "M_{\\nabla F^*}"
},
{
"math_id": 40,
"text": "M_{\\nabla F}"
},
{
"math_id": 41,
"text": "F(X)=-\\log\\det(X)"
},
{
"math_id": 42,
"text": "X"
},
{
"math_id": 43,
"text": "M_{\\nabla F}(\\theta_1,\\theta_2)=2(\\theta_1^{-1}+\\theta_2^{-1})^{-1}.\n"
}
] |
https://en.wikipedia.org/wiki?curid=566869
|
5669330
|
Bhabha scattering
|
Electron-positron scattering
In quantum electrodynamics, Bhabha scattering is the electron-positron scattering process:
formula_0
There are two leading-order Feynman diagrams contributing to this interaction: an annihilation process and a scattering process. Bhabha scattering is named after the Indian physicist Homi J. Bhabha.
The Bhabha scattering rate is used as a luminosity monitor in electron-positron colliders.
Differential cross section.
To leading order, the spin-averaged differential cross section for this process is
formula_1
where "s","t", and "u" are the Mandelstam variables, formula_2 is the fine-structure constant, and formula_3 is the scattering angle.
This cross section is calculated neglecting the electron mass relative to the collision energy and including only the contribution from photon exchange. This is a valid approximation at collision energies small compared to the mass scale of the Z boson, about 91 GeV; at higher energies the contribution from Z boson exchange also becomes important.
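For a rough numerical illustration (a sketch under the stated approximations; the collision energy is an assumed example), the Mandelstam invariants in the massless limit can be written as t = −s(1 − cos θ)/2 and u = −s(1 + cos θ)/2, and the leading-order formula evaluated directly:

```python
import math

ALPHA = 1.0 / 137.035999  # fine-structure constant

def bhabha_dsigma_dcos(s, cos_theta):
    """Leading-order spin-averaged Bhabha differential cross section
    d(sigma)/d(cos theta) in natural units (hbar = c = 1), neglecting the
    electron mass, so that s + t + u = 0.  Diverges as cos_theta -> 1
    (forward scattering, t -> 0)."""
    t = -0.5 * s * (1.0 - cos_theta)
    u = -0.5 * s * (1.0 + cos_theta)
    return (math.pi * ALPHA**2 / s) * (u**2 * (1.0 / s + 1.0 / t)**2
                                       + (t / s)**2 + (s / t)**2)

# Assumed example: sqrt(s) = 10 GeV (well below the Z mass), 90-degree scattering.
print(bhabha_dsigma_dcos(100.0, 0.0))  # in GeV^-2; 1 GeV^-2 ~ 0.3894 mb
```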
Mandelstam variables.
In this article, the Mandelstam variables are defined by
where the approximations are for the high-energy (relativistic) limit.
Deriving unpolarized cross section.
Matrix elements.
Both the scattering and annihilation diagrams contribute to the transition matrix element. By letting "k" and "k' " represent the four-momentum of the positron, while letting "p" and "p' " represent the four-momentum of the electron, and by using Feynman rules one can show the following diagrams give these matrix elements:
Notice that there is a relative sign difference between the two diagrams.
Square of matrix element.
To calculate the unpolarized cross section, one must "average" over the spins of the incoming particles ("s"e- and "s"e+ possible values) and "sum" over the spins of the outgoing particles. That is,
First, calculate formula_4:
Scattering term (t-channel).
Sum over spins.
Next, we'd like to sum over spins of all four particles. Let "s" and "s' " be the spin of the electron and "r" and "r' " be the spin of the positron.
That is the exact form; in the case of electrons one is usually interested in energy scales that far exceed the electron mass. Neglecting the electron mass yields the simplified form:
Annihilation term (s-channel).
The process for finding the annihilation term is similar to the above. Since the two diagrams are related by crossing symmetry, and the initial and final state particles are the same, it is sufficient to permute the momenta, yielding
(This is proportional to
formula_5
where formula_3 is the scattering angle in the center-of-mass frame.)
Solution.
Evaluating the interference term along the same lines and adding the three terms yields the final result
formula_6
Simplifying steps.
Completeness relations.
The completeness relations for the four-spinors "u" and "v" are
formula_7
formula_8
where
formula_9 (see Feynman slash notation)
formula_10
Trace identities.
To simplify the trace of the Dirac gamma matrices, one must use trace identities. Three used in this article are:
Using these two one finds that, for example,
Uses.
Bhabha scattering has been used as a luminosity monitor in a number of e+e− collider physics experiments. The accurate measurement of luminosity is necessary for accurate measurements of cross sections.
Small-angle Bhabha scattering was used to measure the luminosity of the 1993 run of the Stanford Large Detector (SLD), with a relative uncertainty of less than 0.5%.
Electron-positron colliders operating in the region of the low-lying hadronic resonances (about 1 GeV to 10 GeV), such as the Beijing Electron–Positron Collider II and the Belle and BaBar "B-factory" experiments, use large-angle Bhabha scattering as a luminosity monitor. To achieve the desired precision at the 0.1% level, the experimental measurements must be compared to a theoretical calculation including next-to-leading-order radiative corrections. The high-precision measurement of the total hadronic cross section at these low energies is a crucial input into the theoretical calculation of the anomalous magnetic dipole moment of the muon, which is used to constrain supersymmetry and other models of physics beyond the Standard Model.
|
[
{
"math_id": 0,
"text": "e^+ e^- \\rightarrow e^+ e^-"
},
{
"math_id": 1,
"text": "\\frac{\\mathrm{d} \\sigma}{\\mathrm{d} (\\cos\\theta)} = \\frac{\\pi \\alpha^2}{s} \\left( u^2 \\left( \\frac{1}{s} + \\frac{1}{t} \\right)^2 + \\left( \\frac{t}{s} \\right)^2 + \\left( \\frac{s}{t} \\right)^2 \\right) \\,"
},
{
"math_id": 2,
"text": "\\alpha"
},
{
"math_id": 3,
"text": "\\theta"
},
{
"math_id": 4,
"text": "|\\mathcal{M}|^2 \\,"
},
{
"math_id": 5,
"text": "(1 + \\cos^2\\theta)"
},
{
"math_id": 6,
"text": "\\frac{\\overline{|\\mathcal{M}|^2}}{2e^4} = \\frac{u^2 + s^2}{t^2} + \\frac{2 u^2}{st} + \\frac{u^2 + t^2}{s^2} \\,"
},
{
"math_id": 7,
"text": "\\sum_{s=1,2}{u^{(s)}_p \\bar{u}^{(s)}_p} = p\\!\\!\\!/ + m \\,"
},
{
"math_id": 8,
"text": "\\sum_{s=1,2}{v^{(s)}_p \\bar{v}^{(s)}_p} = p\\!\\!\\!/ - m \\,"
},
{
"math_id": 9,
"text": "p\\!\\!\\!/ = \\gamma^\\mu p_\\mu \\,"
},
{
"math_id": 10,
"text": "\\bar{u} = u^{\\dagger} \\gamma^0 \\,"
},
{
"math_id": 11,
"text": "\\gamma_\\mu \\,"
},
{
"math_id": 12,
"text": "\\operatorname{Tr} (\\gamma^\\mu\\gamma^\\nu) = 4\\eta^{\\mu\\nu}"
},
{
"math_id": 13,
"text": "\\operatorname{Tr}\\left( \\gamma_\\rho \\gamma_\\mu \\gamma_\\sigma \\gamma_\\nu \\right) = 4 \\left( \\eta_{\\rho\\mu}\\eta_{\\sigma\\nu}-\\eta_{\\rho\\sigma}\\eta_{\\mu\\nu}+\\eta_{\\rho\\nu}\\eta_{\\mu\\sigma} \\right) \\,"
}
] |
https://en.wikipedia.org/wiki?curid=5669330
|
56694947
|
IOTA (technology)
|
Open-source distributed ledger and cryptocurrency
Cryptocurrency
IOTA is an open-source distributed ledger and cryptocurrency designed for the Internet of things (IoT). It uses a directed acyclic graph to store transactions on its ledger, motivated by a potentially higher scalability over blockchain-based distributed ledgers. IOTA does not use miners to validate transactions; instead, nodes that issue a new transaction on the network must approve two previous transactions. Transactions can therefore be issued without fees, facilitating microtransactions. The network currently achieves consensus through a coordinator node, operated by the IOTA Foundation. As the coordinator is a single point of failure, the network is currently centralized.
IOTA has been criticized for its unusual design, and it is unclear whether it will work in practice. As a result, IOTA was rewritten from the ground up for a network update called Chrysalis, or IOTA 1.5, which launched on 28 April 2021. In this update, controversial design choices such as ternary encoding and quantum-proof cryptography were abandoned and replaced with established standards. A testnet for a follow-up update called Coordicide, or IOTA 2.0, was deployed in late 2020, with the aim of releasing in 2021 a distributed network that no longer relies on the coordinator for consensus.
History.
The value transfer protocol IOTA, named after the smallest letter of the Greek alphabet, was created in 2015 by David Sønstebø, Dominik Schiener, Sergey Ivancheglo, and Serguei Popov. Initial development was funded by an online public crowdsale, with the participants buying the IOTA value token with other digital currencies. Approximately 1,300 BTC were raised, corresponding to approximately US$500,000 at that time, and the total token supply was distributed pro rata among the initial investors. The IOTA network went live in 2016.
IOTA foundation.
In 2017, early IOTA token investors donated 5% of the total token supply to fund continued development and to endow what later became the IOTA Foundation. In 2018, the IOTA Foundation was chartered as a Stiftung in Berlin, with the goal of assisting in the research and development, education, and standardisation of IOTA technology. The IOTA Foundation is a board member of the International Association for Trusted Blockchain Applications (INATBA), and a founding member of the Trusted IoT Alliance and the Mobility Open Blockchain Initiative (MOBI), promoting blockchain and distributed ledgers in regulatory approaches, the IoT ecosystem, and mobility.
Following a dispute between IOTA founders David Sønstebø and Sergey Ivancheglo, Ivancheglo resigned from the board of directors on 23 June 2019. On 10 December 2020 the IOTA Foundation Board of Directors and supervisory board announced that the Foundation officially parted ways with David Sønstebø.
In November 2023, the IOTA Ecosystem DLT Foundation was created in the United Arab Emirates. The purpose of the foundation is to facilitate the growth of IOTA's distributed ledger technology in the Middle East. It was the first crypto-centric organization to be approved by regulators of the Abu Dhabi Global Market.
In 2024, the Imperial IOTA Infrastructures Lab (otherwise known as the I3-Lab) at Imperial College London was launched. The IOTA Foundation committed £1 million to the lab while Imperial College London provided additional funding. The I3-Lab focuses on circular economy research, sustainable business models, and translational research based on IOTA's technology.
DCI vulnerability disclosure.
On 8 September 2017, researchers Ethan Heilman from Boston University and Neha Narula et al. from MIT's Digital Currency Initiative (DCI) reported on potential security flaws with IOTA's former Curl-P-27 hash function. The IOTA Foundation received considerable backlash for its handling of the incident. FT Alphaville reported legal posturing by an IOTA founder against a security researcher for his involvement in the DCI report, as well as instances of aggressive language levelled against a Forbes contributor and other unnamed journalists covering the DCI report. The Centre for Blockchain Technologies at University College London severed ties with the IOTA Foundation due to legal threats against security researchers involved in the report.
Attacks.
As a speculative blockchain and cryptocurrency-related technology, IOTA has been the target of phishing, scamming, and hacking attempts, which have resulted in the thefts of user tokens and extended periods of downtime. In January 2018, more than US$10 million worth of IOTA tokens were stolen from users that used a malicious online seed-creator, a password that protects their ownership of IOTA tokens. The seed-generator scam was the largest fraud in IOTA history to date, with over 85 victims. In January 2019, the UK and German law enforcement agencies arrested a 36-year-old man from Oxford, England believed to be behind the theft.
On 26 November 2019 a hacker discovered a vulnerability in a third-party payment service, provided by "MoonPay", integrated into the mobile and desktop wallet managed by the "IOTA Foundation". The attacker compromised over 50 IOTA seeds, resulting in the theft of approximately US$2 million worth of IOTA tokens. After receiving reports that hackers were stealing funds from user wallets, the IOTA Foundation shut down the coordinator on 12 February 2020. This had the side effect of effectively shutting down the entire IOTA cryptocurrency. Users at risk were given seven days to migrate their potentially compromised seed to a new seed, until 7 March 2020. The coordinator was restarted on 10 March 2020.
IOTA 1.5 (Chrysalis) and IOTA 2.0 (Coordicide).
The IOTA network is currently centralized: a transaction on the network is considered valid if and only if it is referenced by a milestone issued by a node operated by the IOTA Foundation called the coordinator. In 2019 the IOTA Foundation announced that it would like to operate the network without a coordinator in the future, using a two-stage network update, termed Chrysalis for IOTA 1.5 and Coordicide for IOTA 2.0. The Chrysalis update went live on 28 April 2021 and removed controversial design choices such as ternary encoding and Winternitz one-time signatures, to create an enterprise-ready blockchain solution. In parallel, Coordicide is being developed to create a distributed network that no longer relies on the coordinator for consensus. A testnet of Coordicide was deployed in late 2020, with the aim of releasing a final version in 2021.
Characteristics.
The Tangle.
The Tangle is the moniker used to describe IOTA's directed acyclic graph (DAG) transaction settlement and data integrity layer. It is structured as a string of individual transactions that are interlinked with each other and stored through a network of node participants. The Tangle does not have miners validating transactions; rather, network participants are jointly responsible for transaction validation, and must confirm two transactions already submitted to the network for every one transaction they issue. Transactions can therefore be issued to the network at no cost, facilitating micropayments. To avoid spam, every transaction requires computational resources based on Proof of Work (PoW) algorithms, to find the answer to a simple cryptographic puzzle.
IOTA supports both value and data transfers. A second-layer protocol provides encryption and authentication of messages, or data streams, transmitted and stored on the Tangle as zero-value transactions. Each message holds a reference to the address of a follow-up message, connecting the messages in a data stream and providing forward secrecy. Authorised parties with the correct decryption key can therefore follow a data stream only from their point of entry onward. When the owner of the data stream wants to revoke access, they can change the decryption key when publishing a new message. This gives the owner granular control over how data is shared with authorised parties.
IOTA token.
The IOTA token is a unit of value in the IOTA network. There is a fixed supply of 2,779,530,283,277,761 IOTA tokens in circulation on the IOTA network. IOTA tokens are stored in IOTA wallets protected by an 81-character seed, similar to a password. To access and spend the tokens, IOTA provides a cryptocurrency wallet. A hardware wallet can be used to keep credentials offline while facilitating transactions.
Coordinator node.
IOTA currently requires a majority of honest actors to prevent network attacks. However, as the concept of mining does not exist on the IOTA network, it is unlikely that this requirement will always be met. Therefore, consensus is currently obtained through referencing of transactions issued by a special node operated by the IOTA Foundation, called the coordinator. The coordinator issues zero-value transactions at given time intervals, called milestones. Any transaction directly or indirectly referenced by such a milestone is considered valid by the nodes in the network. The coordinator is an authority operated by the IOTA Foundation and as such a single point of failure for the IOTA network, which makes the network centralized.
Markets.
IOTA is traded in megaIOTA units (1,000,000 IOTA) on digital currency exchanges such as Bitfinex, and listed under the MIOTA ticker symbol. Like other digital currencies, IOTA's token value has soared and fallen.
Fast Probabilistic Consensus (FPC).
The crux of cryptocurrencies is to stop double spends, the ability to spend the same money twice in two simultaneous transactions. Bitcoin's solution has been to use Proof of Work (PoW), making it a significant financial burden to have a minted block be rejected for a double spend. IOTA has designed a voting algorithm called Fast Probabilistic Consensus to form a consensus on double spends. Instead of starting from scratch, the IOTA Foundation started with Simple Majority Consensus (SMC), where the first opinion update is defined by,
formula_0
where formula_1 is the opinion of node formula_2 at time formula_3, the function formula_4 is the percentage of all the nodes that hold the opinion formula_3, and formula_5 is the threshold for majority, set by the implementation. After the first round, successive opinions at time formula_6 are updated according to the function,
formula_7
However, this model is fragile against malicious attackers, which is why the IOTA Foundation decided not to use it. Instead, the IOTA Foundation decided to augment the leaderless consensus mechanism called "Random neighbors majority consensus" (RMC), which is similar to SMC except that the nodes whose opinions are queried are randomized. They then augmented RMC to create FPC by having the threshold for majority be a random number generated from a decentralized random number generator (dRNG). For FPC, the first round is the same,
formula_0
For successive rounds, though,
formula_8
where formula_9, with formula_10, is a randomized threshold for majority. Randomizing the threshold for majority makes it extremely difficult for adversaries to manipulate the consensus by either making it converge to a specific value or prolonging consensus. Note that FPC is only invoked to form consensus on a transaction during a double spend.
Ultimately, IOTA uses Fast Probabilistic Consensus for consensus and uses Proof of Work as a rate controller. Because IOTA does not use PoW for consensus, its overall network energy use per transaction is very small.
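The following Python sketch simulates the FPC opinion-update rule described above for a set of honest nodes. The function name and the sampling size, round count, first-round threshold, and threshold window used here are illustrative assumptions, not parameters taken from the IOTA specification.

```python
import random

def fpc_simulation(initial_opinions, beta=0.3, k=10, rounds=20, tau=0.66):
    """Toy simulation of Fast Probabilistic Consensus on a binary opinion."""
    opinions = list(initial_opinions)
    n = len(opinions)
    for r in range(rounds):
        # The first round uses a fixed threshold tau; afterwards the majority
        # threshold is drawn uniformly from [beta, 1 - beta] each round.
        threshold = tau if r == 0 else random.uniform(beta, 1 - beta)
        new_opinions = []
        for i in range(n):
            sample = random.choices(opinions, k=k)   # query k random nodes
            mu = sum(sample) / k                     # fraction holding opinion 1
            if r == 0:
                new_opinions.append(1 if mu >= tau else 0)
            elif mu > threshold:
                new_opinions.append(1)
            elif mu < threshold:
                new_opinions.append(0)
            else:
                new_opinions.append(opinions[i])     # tie: keep previous opinion
        opinions = new_opinions
    return opinions

# Example: 100 nodes, 60% initially consider the transaction valid
result = fpc_simulation([1] * 60 + [0] * 40)
print(sum(result), "of", len(result), "nodes hold opinion 1 after the run")
```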
Applications and testbeds.
Proof-of-concepts building on IOTA technology are being developed in the automotive and IoT industries by corporations such as Jaguar Land Rover, STMicroelectronics, and Bosch. IOTA is a participant in smart city testbeds, to establish digital identity, waste management, and local trade of energy. In project Alvarium, formed under the Linux Foundation, IOTA is used as an immutable storage and validation mechanism. The privacy-centered search engine Xayn uses IOTA as a trust anchor for its aggregated AI model.
On 11 February 2020, the Eclipse Foundation and IOTA Foundation jointly launched the Tangle EE (Enterprise Edition) Working Group. Tangle EE is aimed at enterprise users that can take IOTA technology and enable larger organizations to build applications on top of the project, with the Eclipse Foundation providing a vendor-neutral governance framework.
Announcements of partners were critically received. In 2017, IOTA released the data marketplace, a pilot for a market where connected sensors or devices can store, sell, or purchase data. The data marketplace was received critically by the cryptocurrency community over the extent of the involvement of its participants, with reports suggesting that "the IOTA Foundation was actively asking publications to use Microsoft's name following the data marketplace announcement". Izabella Kaminska criticized a Jaguar press release: "our interpretation is that it's very unlikely Jaguar will be bringing a smart-wallet-enabled marketplace any time soon."
Criticism.
IOTA promises to achieve the same benefits that blockchain-based DLTs bring — decentralization, distribution, immutability and trust — but removes the downsides of wasted resources associated with mining as well as transaction costs. However, several of the design features of IOTA are unusual, and it is unclear whether they work in practice.
The security of IOTA's consensus mechanism against double-spending attacks is unclear while the network is immature. Essentially, in the IoT, where heterogeneous devices have varying levels of low computational power, an attacker with sufficiently strong computational resources could render the tangle insecure. This is a problem in traditional proof-of-work blockchains as well; however, they provide a much greater degree of security through higher fault tolerance and transaction fees. At the beginning, when there is a lower number of participants and incoming transactions, a central coordinator is needed to prevent an attack on the IOTA tangle.
Critics have opposed the role of the coordinator as the single source of consensus in the IOTA network. Polychain Capital founder Olaf Carlson-Wee says: "IOTA is not decentralized, even though IOTA makes that claim, because it has a central "coordinator node" that the network needs to operate. If a regulator or a hacker shut down the coordinator node, the network would go down." This was demonstrated during the Trinity attack incident, when the IOTA Foundation shut down the coordinator to prevent further thefts. Following a discovered vulnerability in October 2017, the IOTA Foundation transferred potentially compromised funds to addresses under its control, providing a process for users to later apply to the IOTA Foundation to reclaim their funds.
Additionally, IOTA has seen several network outages as a result of bugs in the coordinator as well as DDoS attacks. Early in the seed generator scam, a DDoS network attack distracted IOTA admins, leaving initial thefts undetected.
In 2020, the IOTA Foundation announced that it would like to operate the network without a coordinator in the future, but implementation of this is still in an early development phase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "s_i(1) = \\begin{cases}1 & \\mu_i(1) \\geq \\tau\\\\0 & \\text{otherwise} \\end{cases}\n"
},
{
"math_id": 1,
"text": "s_i(\\cdot)"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "1"
},
{
"math_id": 4,
"text": "\\mu_i(1)"
},
{
"math_id": 5,
"text": "\\tau \\in (0.5,1] "
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "s_i(t+1) = \\begin{cases}1 & \\mu_i(t+1) > 0.5 \\\\0 & \\mu_i(t+1) < 0.5 \\\\ s_i(t) & \\text{otherwise}\\end{cases}\n"
},
{
"math_id": 8,
"text": "s_i(t+1) = \\begin{cases}1 & \\mu_i(t+1) > U_t \\\\0 & \\mu_i(t+1) < U_t \\\\ s_i(t) & \\text{otherwise}\\end{cases}\n"
},
{
"math_id": 9,
"text": "U_t \\sim \\textbf{U}(\\beta,1-\\beta)"
},
{
"math_id": 10,
"text": "\\beta \\in [0,1/2]"
}
] |
https://en.wikipedia.org/wiki?curid=56694947
|
56695515
|
Majda's model
|
Majda's model is a qualitative model (in mathematical physics) introduced by Andrew Majda in 1981 for the study of interactions in the combustion theory of shock waves and explosive chemical reactions.
The following definitions are with respect to a Cartesian coordinate system with 2 variables. For functions formula_0, formula_1 of one spatial variable formula_2 representing the Lagrangian specification of the fluid flow field and the time variable formula_3, functions formula_4, formula_5 of one variable formula_6, and positive constants formula_7, the Majda model is a pair of coupled partial differential equations:
formula_8
formula_9
the unknown function formula_10 is a lumped variable, a scalar variable formed from a complicated nonlinear average of various aspects of density, velocity, and temperature in the exploding gas;
the unknown function formula_11 is the mass fraction in a simple one-step chemical reaction scheme;
the given flux function formula_12 is a nonlinear convex function;
the given ignition function formula_13 is the starter for the chemical reaction scheme;
formula_14 is the constant reaction rate;
formula_15 is the constant heat release;
formula_16 is the constant diffusivity.
<templatestyles src="Template:Blockquote/styles.css" />Since its introduction in the early 1980s, Majda's simplified "qualitative" model for detonation ... has played an important role in the mathematical literature as test-bed for both the development of mathematical theory and computational techniques. Roughly, the model is a formula_17 system consisting of a Burgers equation coupled to a chemical kinetics equation. For example, Majda (with Colella & Roytburd) used the model as a key diagnostic tool in the development of fractional-step computational schemes for the Navier-Stokes equations of compressible reacting fluids ...
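As a purely illustrative numerical sketch (and not the fractional-step schemes referred to in the quotation above), the following Python code integrates the Majda model with a naive explicit finite-difference, operator-splitting step. The Burgers flux f(u) = u²/2, the on/off ignition function φ, the constants, grid, and initial data are all assumed choices made only for this example.

```python
import numpy as np

# Illustrative constants, not taken from any particular study
k, q, B = 1.0, 1.0, 0.05      # reaction rate, heat release, diffusivity
u_ign = 0.5                   # ignition threshold used in phi

def f(u):                     # assumed convex flux: Burgers flux
    return 0.5 * u ** 2

def phi(u):                   # assumed ignition function: simple on/off switch
    return (u > u_ign).astype(float)

# Periodic grid and initial data
nx, L = 400, 20.0
dx = L / nx
x = np.linspace(0.0, L, nx, endpoint=False)
u = np.where(x < 2.0, 1.0, 0.0)     # lumped gas-dynamic variable
z = np.ones(nx)                     # unreacted mass fraction

dt = 0.2 * dx                       # small time step for the explicit scheme
for step in range(500):
    # Reaction step: dz/dt = -k * phi(u) * z  (exact exponential update)
    z_new = z * np.exp(-k * phi(u) * dt)
    # Convection-diffusion step for u, with the heat-release source -q * dz/dt
    dfdx = (np.roll(f(u), -1) - np.roll(f(u), 1)) / (2 * dx)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx ** 2
    u = u - dt * dfdx + B * dt * d2udx2 - q * (z_new - z)
    z = z_new

print("max u:", u.max(), "mean remaining fuel:", z.mean())
```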
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "u(x,t)"
},
{
"math_id": 1,
"text": "z(x,t)"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "f(w)"
},
{
"math_id": 5,
"text": "\\phi (w)"
},
{
"math_id": 6,
"text": "w"
},
{
"math_id": 7,
"text": "k, q, B"
},
{
"math_id": 8,
"text": "\\frac{\\partial u(x,t)}{\\partial t} + q \\cdot \\frac{\\partial z(x,t)}{\\partial t} + \\frac{\\partial f(u(x,t))}{\\partial x} = B \\cdot \\frac{\\partial^2u(x,t)}{\\partial x^2}"
},
{
"math_id": 9,
"text": "\\frac{\\partial z(x,t)}{\\partial t} = - k \\cdot \\phi (u(x,t)) \\cdot z(x,t)"
},
{
"math_id": 10,
"text": "u = u(x,t)"
},
{
"math_id": 11,
"text": "z = z(x,t) \\in [0,1]"
},
{
"math_id": 12,
"text": "f = f(w)"
},
{
"math_id": 13,
"text": "\\phi = \\phi (w)"
},
{
"math_id": 14,
"text": "k"
},
{
"math_id": 15,
"text": "q"
},
{
"math_id": 16,
"text": "B"
},
{
"math_id": 17,
"text": "2 \\times 2"
}
] |
https://en.wikipedia.org/wiki?curid=56695515
|
566959
|
Standard electrode potential
|
Electromotive force of a half reaction cell versus standard hydrogen electrode
In electrochemistry, standard electrode potential formula_0, or formula_1, is a measure of the reducing power of any element or compound. The IUPAC "Gold Book" defines it as: "the value of the standard emf (electromotive force) of a cell in which molecular hydrogen under standard pressure is oxidized to solvated protons at the left-hand electrode".
Background.
The basis for an electrochemical cell, such as the galvanic cell, is always a redox reaction which can be broken down into two half-reactions: oxidation at anode (loss of electron) and reduction at cathode (gain of electron). Electricity is produced due to the difference of electric potential between the individual potentials of the two metal electrodes with respect to the electrolyte.
Although the overall potential of a cell can be measured, there is no simple way to accurately measure the electrode/electrolyte potentials in isolation. The electric potential also varies with temperature, concentration and pressure. Since the oxidation potential of a half-reaction is the negative of the reduction potential in a redox reaction, it is sufficient to calculate either one of the potentials. Therefore, standard electrode potential is commonly written as standard reduction potential.
Calculation.
The electrode potential cannot be obtained empirically. The galvanic cell potential results from a "pair" of electrodes. Thus, only one empirical value is available for a pair of electrodes, and it is not possible to determine the value for each electrode in the pair from the empirically obtained galvanic cell potential alone. A reference electrode, the standard hydrogen electrode (SHE), for which the potential is "defined" or agreed upon by convention, needed to be established. The standard hydrogen electrode is set to 0.00 V, and any electrode whose potential is not yet known can be paired with the standard hydrogen electrode to form a galvanic cell; the galvanic cell potential then gives the unknown electrode's potential. Using this process, any electrode with an unknown potential can be paired with either the standard hydrogen electrode or another electrode whose potential has already been derived, and the unknown value can be established.
Since the electrode potentials are conventionally defined as reduction potentials, the sign of the potential for the metal electrode being oxidized must be reversed when calculating the overall cell potential. The electrode potentials are independent of the number of electrons transferred (they are expressed in volts, which measure energy per electron transferred), and so the two electrode potentials can be simply combined to give the overall "cell" potential even if different numbers of electrons are involved in the two electrode reactions.
For practical measurements, the electrode in question is connected to the positive terminal of the electrometer, while the standard hydrogen electrode is connected to the negative terminal.
Reversible electrode.
A reversible electrode is an electrode that owes its potential to changes of a reversible nature. A first condition to be fulfilled is that the system is close to chemical equilibrium. A second set of conditions is that the system is subjected to very small solicitations spread over a sufficient period of time, so that the chemical equilibrium conditions nearly always prevail. In theory, it is very difficult to experimentally achieve reversible conditions, because any perturbation imposed on a system near equilibrium in a finite time forces it out of equilibrium. However, if the solicitations exerted on the system are sufficiently small and applied slowly, one can consider an electrode to be reversible. By nature, electrode reversibility depends on the experimental conditions and the way the electrode is operated. For example, electrodes used in electroplating are operated with a high over-potential to force the reduction of a given metal cation to be deposited onto a metallic surface to be protected. Such a system is far from equilibrium and is continuously subjected to large and constant changes over a short period of time.
Standard reduction potential table.
The larger the value of the standard reduction potential, the easier it is for the element to be reduced (gain electrons); in other words, it is a better oxidizing agent.
For example, F2 has a standard reduction potential of +2.87 V and Li+ has −3.05 V:
F2("g") + 2 "e"−⇌ 2 F- = +2.87 V
Li+ + "e"−⇌ Li("s") = −3.05 V
The highly positive standard reduction potential of F2 means it is reduced easily and is therefore a good oxidizing agent. In contrast, the greatly negative standard reduction potential of Li+ indicates that it is not easily reduced. Instead, Li("s") would rather undergo oxidation (hence it is a good reducing agent).
Zn2+ has a standard reduction potential of −0.76 V and thus can be oxidized by any other electrode whose standard reduction potential is greater than −0.76 V (e.g., H+ (0 V), Cu2+ (0.34 V), F2 (2.87 V)) and can be reduced by any electrode with standard reduction potential less than −0.76 V (e.g. H2 (−2.23 V), Na+ (−2.71 V), Li+ (−3.05 V)).
In a galvanic cell, where a spontaneous redox reaction drives the cell to produce an electric potential, Gibbs free energy formula_2 must be negative, in accordance with the following equation:
formula_3 (unit: Joule = Coulomb × Volt)
where n is the number of moles of electrons per mole of products and F is the Faraday constant, approximately 96 485 C/mol.
As such, the following rules apply:
If formula_4 > 0, then the process is spontaneous (galvanic cell): formula_5 < 0, and energy is liberated.
If formula_4 < 0, then the process is non-spontaneous (electrolytic cell): formula_5 > 0, and energy is consumed.
Thus, in order to have a spontaneous reaction (formula_5 < 0), formula_4 must be positive, where:
formula_6
where formula_7 is the standard potential at the cathode (called the standard cathodic potential or standard reduction potential) and formula_8 is the standard potential at the anode (called the standard anodic potential or standard oxidation potential), as given in the table of standard electrode potentials.
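As a minimal numerical illustration of these relations, the following Python sketch computes the standard cell potential and the standard Gibbs free energy change for the familiar Daniell cell (Zn/Cu). The half-cell potentials are common textbook values, and the function names and dictionary keys are merely illustrative.

```python
FARADAY = 96485.0  # C per mole of electrons

# Standard reduction potentials in volts (common textbook values)
E_STANDARD = {
    "Cu2+/Cu": +0.34,
    "Zn2+/Zn": -0.76,
}

def cell_potential(cathode, anode):
    """E0_cell = E0_cathode - E0_anode (both given as reduction potentials)."""
    return E_STANDARD[cathode] - E_STANDARD[anode]

def delta_g(n_electrons, e_cell):
    """Standard Gibbs free energy change, Delta G0 = -n * F * E0_cell, in joules."""
    return -n_electrons * FARADAY * e_cell

# Daniell cell: Zn(s) + Cu2+ -> Zn2+ + Cu(s), two electrons transferred
e_cell = cell_potential("Cu2+/Cu", "Zn2+/Zn")   # +1.10 V
print("E0_cell =", e_cell, "V")
print("Delta G0 =", delta_g(2, e_cell) / 1000, "kJ/mol (negative => spontaneous)")
```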
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "E^\\ominus"
},
{
"math_id": 1,
"text": "E^\\ominus_{red}"
},
{
"math_id": 2,
"text": "\\Delta G^\\ominus"
},
{
"math_id": 3,
"text": "\\Delta G^\\ominus_{cell} = -n F E^\\ominus_{cell}"
},
{
"math_id": 4,
"text": "E^\\ominus_{cell}"
},
{
"math_id": 5,
"text": "\\Delta G^\\ominus_{cell}"
},
{
"math_id": 6,
"text": "E^\\ominus_{cell} = E^\\ominus_{cathode} - E^\\ominus_{anode}"
},
{
"math_id": 7,
"text": "E^\\ominus_{cathode}"
},
{
"math_id": 8,
"text": "E^\\ominus_{anode}"
}
] |
https://en.wikipedia.org/wiki?curid=566959
|
5670370
|
Specific speed
|
Specific speed "N""s", is used to characterize turbomachinery speed. Common commercial and industrial practices use dimensioned versions which are of equal utility. Specific speed is most commonly used in pump applications to define the suction specific speed —a quasi non-dimensional number that categorizes pump impellers as to their type and proportions. In Imperial units it is defined as the speed in revolutions per minute at which a geometrically similar impeller would operate if it were of such a size as to deliver one gallon per minute against one foot of hydraulic head. In metric units flow may be in l/s or m3/s and head in m, and care must be taken to state the units used.
Performance is defined as the ratio of the pump or turbine to a reference pump or turbine, dividing the actual performance figure by that of the reference to provide a unitless figure of merit. The resulting figure would more descriptively be called the "ideal-reference-device-specific performance." This unitless ratio may loosely be expressed as a "speed" only because the performance of the reference ideal pump is linearly dependent on its speed, so that the ratio of device performance to reference-device performance is "also" the increased speed at which the reference device would need to operate in order to produce the same performance, instead of its reference speed of "1 unit."
Specific speed is an index used to predict desired pump or turbine performance, i.e. it predicts the general shape of a pump's impeller. It is this impeller "shape" that determines the flow and head characteristics, so that the designer can then select a pump or turbine most appropriate for a particular application. Once the desired specific speed is known, basic dimensions of the unit's components can be easily calculated.
Several mathematical definitions of specific speed (all of them actually ideal-device-specific) have been created for different devices and applications.
Pump specific speed.
Low-specific speed radial flow impellers develop hydraulic head principally through centrifugal force. Pumps of higher specific speeds develop head partly by centrifugal force and partly by axial force. An axial flow or propeller pump with a specific speed of 10,000 or greater generates its head exclusively through axial forces. Radial impellers are generally low flow/high head designs whereas axial flow impellers are high flow/low head designs. In theory, the discharge of a "purely" centrifugal machine (pump, turbine, fan, etc.) is tangential to the rotation of the impeller whereas a "purely" axial-flow machine's discharge will be parallel to the axis of rotation. There are also machines that exhibit a combination of both properties and are specifically referred to as "mixed-flow" machines.
Centrifugal pump impellers have specific speed values ranging from 500 to 10,000 (English units), with radial flow pumps at 500 to 4,000, mixed flow at 2,000 to 8,000, and axial flow pumps at 7,000 to 20,000. Values of specific speed less than 500 are associated with positive displacement pumps.
As the specific speed increases, the ratio of the impeller outlet diameter to the inlet or eye diameter decreases. This ratio becomes 1.0 for a true axial flow impeller.
The following equation gives a dimensionless specific speed:
formula_0
where:
formula_1 is specific speed (dimensionless)
formula_2 is pump rotational speed (rad/sec)
formula_3 is flowrate (m3/s) at the point of best efficiency
formula_4 is total head (m) per stage at the point of best efficiency
Note that the units used affect the specific speed value in the above equation and consistent units should be used for comparisons. Pump specific speed can be calculated using British gallons or using Metric units (m3/s and metres head), changing the values listed above.
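A small Python helper implementing the dimensionless definition above might look as follows; the function name and the example numbers are arbitrary and only meant to show the order of magnitude of the result.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def pump_specific_speed(omega_rad_s, flow_m3_s, head_m):
    """Dimensionless pump specific speed Ns = omega * sqrt(Q) / (g*H)^(3/4),
    with rotational speed in rad/s, flow in m^3/s, and head in metres,
    all taken at the best-efficiency point."""
    return omega_rad_s * math.sqrt(flow_m3_s) / (G * head_m) ** 0.75

# Example: a 1450 rpm pump delivering 0.05 m^3/s against 20 m of head
omega = 1450 * 2 * math.pi / 60          # convert rpm to rad/s
print(round(pump_specific_speed(omega, 0.05, 20.0), 3))   # about 0.65
```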
Suction specific speed.
The suction specific speed is mainly used to see if there will be problems with cavitation during the pump's operation on the suction side. It is defined by centrifugal and axial pumps' inherent physical characteristics and operating point. The suction specific speed of a pump will define the range of operation in which a pump will experience stable operation. The higher the suction specific speed, then the smaller the range of stable operation, up to the point of cavitation at 8500 (unitless). The envelope of stable operation is defined in terms of the best efficiency point of the pump.
The suction specific speed is defined as:
formula_5
where:
formula_6suction specific speed
formula_7rotational speed of pump in rpm
formula_8flow of pump in US gallons per minute
formula_9 Net positive suction head (NPSH) required in feet at pump's best efficiency point
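The following short Python function is a direct, illustrative translation of the definition above, keeping the US-customary units stated (rpm, US gallons per minute, feet of NPSH); the example operating point is invented.

```python
def suction_specific_speed(rpm, flow_gpm, npsh_r_ft):
    """Nss = n * sqrt(Q) / NPSHr^0.75, with n in rpm, Q in US gpm,
    and NPSH required in feet, all at the best-efficiency point."""
    return rpm * flow_gpm ** 0.5 / npsh_r_ft ** 0.75

# Example: 3560 rpm, 800 US gpm, 30 ft NPSH required
print(round(suction_specific_speed(3560, 800, 30)))   # about 7,855
```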
Turbine specific speed.
The specific speed value for a turbine is the speed of a geometrically similar turbine which would produce unit power (one kilowatt) under unit head (one meter). The specific speed of a turbine is given by the manufacturer (along with other ratings) and will always refer to the point of maximum efficiency. This allows accurate calculations to be made of the turbine's performance for a range of heads.
Well-designed efficient machines typically use the following values: Impulse turbines have the lowest "n""s" values, typically ranging from 1 to 10, a Pelton wheel is typically around 4, Francis turbines fall in the range of 10 to 100, while Kaplan turbines are at least 100 or more, all in imperial units.
Deriving the Turbine Specific Speed.
To derive the turbine specific speed equation, start with the power formula for water; since η, ρ, and g are constant, they can be removed using proportionality. The power of the turbine is therefore dependent only on the head H and flow Q.
formula_10
so formula_11
let:
formula_12 = Diameter of the turbine runner
formula_13 = Width of the turbine runner
formula_14 = Speed of the turbine (rpm)
formula_15 = Tangential velocity of the turbine blade (m/s)
formula_1 = Specific Speed of the Turbine
formula_16 = Velocity of water at turbine (m/s)
Now, utilising the constant speed ratio at the turbine tip, the tangential velocity of the turbine blade can be shown to be proportional to the square root of the head.
formula_17
Speed ratio formula_18
so formula_19
But to convert rotational speed in rpm to linear blade speed in m/s, the following equation and proportionality can be used.
formula_20
so formula_21
The flow through a turbine is the product of flow velocity and area so the flow through a turbine can be quantified.
formula_22
with formula_23
and as shown previously:
formula_24
So using the above 2, the following is obtained
formula_25
By combining the relationship between diameter and tangential speed with that between tangential speed and head, a relationship between flow, head, and rotational speed can be reached.
formula_26
Substituting this back into the power equation gives:
formula_27
To convert this proportionality into an equation a factor of proportionality, say K, must be introduced which gives:
formula_28
Now, assuming our original proposition of producing 1 kilowatt under 1 m of head, the speed N becomes the specific speed formula_29. Substituting these values into the equation gives:
formula_30
Now that formula_31 is known, we have a complete formula for the specific speed formula_1:
formula_32
Rearranging for the specific speed gives the final result:
formula_33
where formula_14 is the rotational speed, formula_34 is the power, and formula_4 is the head.
English units.
Expressed in English units, the "specific speed" is defined as "n""s" = "n" √"P"/"h"5/4
Metric units.
Expressed in metric units, the "specific speed" is "n""s" = 0.2626 "n" √"P"/"h"5/4
The factor 0.2626 is only required when the specific speed is to be adjusted to English units. In countries which use the metric system, the factor is omitted, and quoted specific speeds are correspondingly larger.
Example.
Given a flow and head for a specific hydro site, and the RPM requirement of the generator, calculate the specific speed. The result is the main criteria for turbine selection or the starting point for analytical design of a new turbine. Once the desired specific speed is known, basic dimensions of the turbine parts can be easily calculated.
Turbine calculations:
formula_35
formula_36
formula_37 = Runner diameter (m)
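The following Python sketch strings the example relations above together for a hypothetical site. The empirical formulas formula_35 and formula_36 are transcribed as written, while the head, generator speed, and the interpretation of Ω as the rotational speed in rad/s are assumptions made only for this illustration.

```python
import math

def turbine_specific_speed(head_m):
    """Empirical relation given above: Ns = 2.294 / Hn^0.486."""
    return 2.294 / head_m ** 0.486

def runner_diameter(ns, head_m, omega):
    """De = 84.5 * (0.79 + 1.602*Ns) * sqrt(Hn) / (60 * omega),
    transcribed directly from the relation above."""
    return 84.5 * (0.79 + 1.602 * ns) * math.sqrt(head_m) / (60 * omega)

# Hypothetical site: 15 m net head, generator requiring 200 rpm
head = 15.0
omega = 200 * 2 * math.pi / 60     # assumed interpretation: Omega in rad/s
ns = turbine_specific_speed(head)
print("specific speed:", round(ns, 3))
print("runner diameter:", round(runner_diameter(ns, head, omega), 2), "m")
```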
|
[
{
"math_id": 0,
"text": "N_s = \\frac { n \\sqrt Q } { (gH)^{ 3/4 } } "
},
{
"math_id": 1,
"text": "N_s"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "Q"
},
{
"math_id": 4,
"text": "H"
},
{
"math_id": 5,
"text": "\nN_{ss} = \\frac{n\\sqrt{Q}} {{NPSH}_R^{0.75}}\n"
},
{
"math_id": 6,
"text": "N_{ss} = "
},
{
"math_id": 7,
"text": "n = "
},
{
"math_id": 8,
"text": "Q = "
},
{
"math_id": 9,
"text": "{NPSH}_R = "
},
{
"math_id": 10,
"text": " P=\\eta \\rho gQH "
},
{
"math_id": 11,
"text": " P \\propto QH "
},
{
"math_id": 12,
"text": "D"
},
{
"math_id": 13,
"text": "B"
},
{
"math_id": 14,
"text": "N"
},
{
"math_id": 15,
"text": "u"
},
{
"math_id": 16,
"text": "V"
},
{
"math_id": 17,
"text": " V = \\sqrt{2gH} "
},
{
"math_id": 18,
"text": " = \\frac{u}{V} = \\frac{u}{\\sqrt{2gH}}"
},
{
"math_id": 19,
"text": "u \\propto \\sqrt{H} "
},
{
"math_id": 20,
"text": " u = \\frac{\\pi DN}{60} "
},
{
"math_id": 21,
"text": " D \\propto \\frac{u}{N} "
},
{
"math_id": 22,
"text": " Q = \\pi DBV_{flow}"
},
{
"math_id": 23,
"text": " B \\propto D "
},
{
"math_id": 24,
"text": " V_{flow} \\propto V \\propto \\sqrt{2gH} \\propto \\sqrt{H}"
},
{
"math_id": 25,
"text": "Q \\propto D^2\\sqrt{H}"
},
{
"math_id": 26,
"text": "Q \\propto \\left ( \\frac{\\sqrt{H}}{N} \\right )^2 \\sqrt{H} \\therefore Q \\propto \\frac{H^{3/2}}{N^2}"
},
{
"math_id": 27,
"text": " P \\propto \\frac{H^{3/2}}{N^2} H \\therefore P \\propto \\frac{H^{5/2}}{N^2}"
},
{
"math_id": 28,
"text": " P = K \\frac{H^{5/2}}{N^2}"
},
{
"math_id": 29,
"text": " N_s"
},
{
"math_id": 30,
"text": " 1 = K \\frac{1^{5/2}}{{N_s}^2} \\therefore K = {N_s}^2"
},
{
"math_id": 31,
"text": "K"
},
{
"math_id": 32,
"text": " P = {N_s}^2 \\frac{H^{5/2}}{N^2}"
},
{
"math_id": 33,
"text": "N_s=\\frac{N\\sqrt{P}}{H^{5/4}} "
},
{
"math_id": 34,
"text": "P"
},
{
"math_id": 35,
"text": " N_s=\\frac{2.294}{H_n^{0.486}} "
},
{
"math_id": 36,
"text": " D_e=84.5(0.79+1.602 N_s) \\frac{\\sqrt{H_n}}{60 * \\Omega} "
},
{
"math_id": 37,
"text": " D_e"
}
] |
https://en.wikipedia.org/wiki?curid=5670370
|
56707200
|
Patricia Clark Kenschaft
|
American mathematician
Patricia Clark Kenschaft (March 25, 1940 – November 20, 2022) was an American mathematician. She was a professor of mathematics at Montclair State University. She is known as a prolific author of books on mathematics, as a founder of PRIMES, the Project for Resourceful Instruction of Mathematics in the Elementary School, and for her work for equity and diversity in mathematics.
Early life, education and career.
Kenschaft was born in 1940 to political activist and history teacher Bertha Francis Clark and organic chemist John Randolph Clark. The family lived in Nutley.
The eldest of four children, Kenschaft took on a nurturing role in the household. Her brother, Bruce, was born with cognitive disabilities. This led her mother and family to become advocates for special education and for state income taxes to better support school systems.
Kenschaft graduated from Swarthmore College in 1961, earning a bachelor's degree with honors in mathematics and with minors in English, philosophy, and education. She earned a master's degree from the University of Pennsylvania in 1963, and then returned to the University of Pennsylvania for doctoral studies, completing a Ph.D. in 1973, while in the same period raising a child and founding a nursery school in Concord, Massachusetts. Her dissertation, in functional analysis, was "Homogeneous formula_0-Algebras over formula_1", and was supervised by Edward Effros.
After working in adjunct positions at St. Elizabeth's College and Bloomfield College, she joined the Montclair State faculty in 1973, and was promoted to full professor in 1988. She retired in 2005 and died in 2022.
Service to the profession.
Kenschaft became the founding president of the New Jersey Association for Women in Mathematics in 1981 and of the New Jersey Faculty Forum in 1988. She chaired the Committee on Participation of Women of the Mathematical Association of America (MAA) from 1987 to 1993, the Committee on Mathematics and the Environment of the MAA from 2000 to 2004, and the Equity and Diversity Integration Task Force of the National Council of Teachers of Mathematics in 2003. She served as director of several projects to improve mathematics education in public schools, including PRIMES (Project for Resourceful Instruction of Mathematics in the Elementary School). Kenschaft hosted a live weekly radio talk show called "Math Medley" from 1998 to 2004.
Books.
Kenschaft was the author of:
Additionally, she edited
Recognition.
Kenschaft was the 2006 winner of the Louise Hay Award of the Association for Women in Mathematics "in recognition of her long career of dedicated service to mathematics and mathematics education" and for her work "writing about, speaking about, and working for mathematics and mathematics education in the areas of K–12 education, the environment, affirmative action and equity, and public awareness of the importance of mathematics in society".
In 2012, she received the Sr. Stephanie Sloyan Distinguished Service Award from the New Jersey section of the Mathematical Association of America.
She was the 2013 Falconer Lecturer of the Association for Women in Mathematics and Mathematical Association of America, speaking on "Improving Equity and Education: Why and How".
Kenschaft was selected a Fellow of the Association for Women in Mathematics in the Class of 2021 "for almost 50 years of sustained and lasting commitment to the advancement of underrepresented groups in the mathematical sciences, especially girls, women, and African Americans. Her extensive service, publications, and outreach bring to light racism, sexism, and inequities, always delivered with the message that positive change is possible".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "C^*"
},
{
"math_id": 1,
"text": "S^n"
}
] |
https://en.wikipedia.org/wiki?curid=56707200
|
56729165
|
Tammo tom Dieck
|
German mathematician
Tammo tom Dieck (29 May 1938, São Paulo) is a German mathematician, specializing in algebraic topology.
Tammo tom Dieck studied mathematics from 1957 at the University of Göttingen
and at Saarland University, where he received his promotion (Ph.D.) in 1964 under Dieter Puppe with thesis "Zur formula_0-Theorie und ihren Kohomologie-Operationen". In 1969 tom Dieck received his habilitation at Heidelberg University under Albrecht Dold. From 1970 to 1975 he was a professor at Saarland University. In 1975 he became a professor at the University of Göttingen.
Tammo tom Dieck is a world-class expert in algebraic topology and author of several widely-used textbooks in topology. He has done research on Lie groups, G-structures, and cobordism. In the 1990s and 2000s, his research dealt with knot theory (and its algebras) and quantum groups.
In 1986 he was an Invited Speaker with talk "Geometric representation theory of compact Lie groups" at the ICM in Berkeley, California. In 1984 he was elected a full member of the Akademie der Wissenschaften zu Göttingen.
His doctoral students include Stefan Bauer and Wolfgang Lück.
Tammo tom Dieck is a grandson of the architect Walter Klingenberg, a brother of the chemist Heindirk tom Dieck, and the father of the pianist Wiebke tom Dieck.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K"
}
] |
https://en.wikipedia.org/wiki?curid=56729165
|
567292
|
Mental calculation
|
Arithmetical calculations using only the human brain
Mental calculation consists of arithmetical calculations using only the human brain, with no help from any supplies (such as pencil and paper) or devices such as a calculator. People may use mental calculation when computing tools are not available, when it is faster than other means of calculation (such as conventional educational institution methods), or even in a competitive context. Mental calculation often involves the use of specific techniques devised for specific types of problems. People with unusually high ability to perform mental calculations are called mental calculators or "lightning calculators".
Many of these techniques take advantage of or rely on the decimal numeral system.
Methods and techniques.
Casting out nines.
After applying an arithmetic operation to two operands and getting a result, the following procedure can be used to improve confidence in the correctness of the result:
1. Sum the digits of each operand, repeating until a single digit remains for each (nines, or groups of digits summing to nine, may be treated as zero).
2. Apply the original operation to these single-digit values and reduce the answer to a single digit in the same way.
3. Reduce the claimed result to a single digit in the same way; if it does not match the digit from step 2, the original calculation was wrong (a match does not guarantee correctness).
Example: to check 6338 × 79 = 500702, reduce 6338 to 2 (6 + 3 + 3 + 8 = 20, 2 + 0 = 2) and 79 to 7 (7 + 9 = 16, 1 + 6 = 7); then 2 × 7 = 14 reduces to 5, and the claimed result also reduces to 5 (5 + 0 + 0 + 7 + 0 + 2 = 14, 1 + 4 = 5), so the check passes.
The same procedure can be used with multiple operations, repeating steps 1 and 2 for each operation.
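A short Python sketch of this digit-sum check (the function names are of course illustrative):

```python
def digital_root(n):
    """Reduce a non-negative integer to a single digit by repeated digit sums."""
    n = abs(n)
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def check_multiplication(a, b, claimed_product):
    """Casting-out-nines sanity check: necessary but not sufficient."""
    return digital_root(digital_root(a) * digital_root(b)) == digital_root(claimed_product)

print(check_multiplication(6338, 79, 500702))  # True  (consistent with the claim)
print(check_multiplication(6338, 79, 500712))  # False (caught as wrong)
```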
Factors.
When multiplying, a useful thing to remember is that the factors of the operands still remain. For example, to say that 14 × 15 was 201 would be unreasonable. Since 15 is a multiple of 5, the product should be as well. Likewise, 14 is a multiple of 2, so the product should be even. Furthermore, any number which is a multiple of both 5 and 2 is necessarily a multiple of 10, and in the decimal system would end with a 0. The correct answer is 210. It is a multiple of 10, 7 (the other prime factor of 14) and 3 (the other prime factor of 15).
Calculating differences: "a" − "b".
Direct calculation.
When the digits of "b" are all smaller than the corresponding digits of "a", the calculation can be done digit by digit. For example, evaluate 872 − 41 simply by subtracting 1 from 2 in the units place, and 4 from 7 in the tens place: 831.
Indirect calculation.
When the above situation does not apply, there is another method known as indirect calculation.
Look-ahead borrow method.
This method can be used to subtract numbers left to right, and if all that is required is to read the result aloud, it requires little of the user's memory even to subtract numbers of arbitrary size.
One place at a time is handled, left to right.
Example:
4075
− 1844
Thousands: 4 − 1 = 3, look to right, 075 < 844, need to borrow.
3 − 1 = 2, say "Two thousand".
One is performing 3 - 1 rather than 4 - 1 because the column to the right is
going to borrow from the thousands place.
Hundreds: 0 − 8 = negative numbers not allowed here.
One is going to increase this place by using the number one borrowed from the
column to the left. Therefore:
10 − 8 = 2. It is 10 rather than 0, because one borrowed from the Thousands
place. 75 > 44 so no need to borrow,
say "two hundred"
Tens: 7 − 4 = 3, 5 > 4, so 5 - 4 = 1
Hence, the result is 2231.
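The left-to-right borrowing logic can be written out as a short illustrative Python sketch: at each position one peeks at the remaining digits of both numbers to decide whether the current column must lend a 1 to the column on its right.

```python
def subtract_left_to_right(a, b):
    """Left-to-right subtraction with look-ahead borrowing (assumes a >= b >= 0)."""
    width = len(str(a))
    sa, sb = str(a).zfill(width), str(b).zfill(width)
    digits = []
    for i in range(width):
        # Will this column itself have to borrow from the column on its left?
        borrow_in = 10 if int(sa[i:]) < int(sb[i:]) else 0
        # Will the columns to the right have to borrow from this column?
        lend_right = 1 if i + 1 < width and int(sa[i + 1:]) < int(sb[i + 1:]) else 0
        digits.append(str(int(sa[i]) - int(sb[i]) - lend_right + borrow_in))
    return int("".join(digits))

print(subtract_left_to_right(4075, 1844))  # 2231, matching the worked example
```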
Calculating products: "a" × "b".
Many of these methods work because of the distributive property.
The "Ends of Five" Formula.
For any 2-digit by 2-digit multiplication problem, if both numbers end in five, the following algorithm can be used to quickly multiply them together:
formula_0
As a preliminary step simply round the smaller number down and the larger up to the nearest multiple of ten. In this case:
formula_1
formula_2
The algorithm reads as follows:
formula_3
Where t1 is the tens unit of the original larger number (75) and t2 is the tens unit of the original smaller number (35).
formula_4
Multiplying any 2-digit numbers.
To easily multiply any 2-digit numbers together a simple algorithm is as follows (where a is the tens digit of the first number, b is the ones digit of the first number, c is the tens digit of the second number and d is the ones digit of the second number):
formula_5
formula_6
For example,
formula_7
800
+120
+140
+ 21
1081
Note that this is the same thing as the conventional sum of partial products, just restated with brevity. To minimize the number of elements being retained in one's memory, it may be convenient to perform the sum of the "cross" multiplication product first, and then add the other two elements:
formula_8
formula_9 [of which only the tens digit will interfere with the first term]
formula_10
i.e., in this example
(12 + 14) = 26, 26 × 10 = 260,
to which it is easy to add 21: 281 and then 800: 1081
An easy mnemonic to remember for this would be FOIL. F meaning first, O meaning outer, I meaning inner and L meaning last. For example:
formula_11
and
formula_12
where 7 is "a", 5 is "b", 2 is "c" and 3 is "d".
Consider
formula_13
this expression is analogous to any number in base 10 with a hundreds, tens and ones place. FOIL can also be looked at as a number with F being the hundreds, OI being the tens and L being the ones.
formula_14 is the product of the first digit of each of the two numbers; F.
formula_15 is the addition of the product of the outer digits and the inner digits; OI.
formula_16 is the product of the last digit of each of the two numbers; L.
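The FOIL decomposition above translates directly into a few lines of Python (a purely illustrative sketch):

```python
def multiply_2digit_foil(x, y):
    """Multiply two 2-digit numbers via (10a + b)(10c + d) = 100ac + 10(ad + bc) + bd."""
    a, b = divmod(x, 10)   # tens and ones digits of the first number
    c, d = divmod(y, 10)   # tens and ones digits of the second number
    first = a * c * 100                   # F: product of the tens digits
    outer_inner = (a * d + b * c) * 10    # O + I: the "cross" products
    last = b * d                          # L: product of the ones digits
    return first + outer_inner + last

print(multiply_2digit_foil(23, 47))  # 1081, as in the worked example above
```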
Multiplying by 9.
Since 9 = 10 − 1, to multiply a number by nine, multiply it by 10 and then subtract the original number from the result. For example, 9 × 27 = 270 − 27 = 243.
This method can be adjusted to multiply by eight instead of nine, by doubling the number being subtracted; 8 × 27 = 270 − (2×27) = 270 − 54 = 216.
Similarly, by adding instead of subtracting, the same methods can be used to multiply by 11 and 12, respectively (although simpler methods to multiply by 11 exist).
Multiplying by 11.
For single digit numbers simply duplicate the number into the tens digit, for example: 1 × 11 = 11, 2 × 11 = 22, up to 9 × 11 = 99.
The product for any larger non-zero integer can be found by a series of additions to each of its digits from right to left, two at a time.
First take the ones digit and copy that to the temporary result. Next, starting with the ones digit of the multiplier, add each digit to the digit to its left. Each sum is then added to the left of the result, in front of all others. If a pair sums to 10 or higher, take the tens digit, which will always be 1, and carry it over to the next addition. Finally, copy the multiplier's left-most (highest valued) digit to the front of the result, adding in the carried 1 if necessary, to get the final product.
In the case of a negative 11, a negative multiplier, or both, apply the sign to the final product as per normal multiplication of two numbers.
A step-by-step example of 759 × 11:
Further examples:
Another method is to simply multiply the number by 10, and add the original number to the result.
For example:
17 × 11
17 × 10 = 170
170 + 17 = 187
17 × 11 = 187
One last easy way:
If one has a two-digit number, add its two digits together and put that sum in the middle to get the answer.
For example: 24 x 11 = 264 because 2 + 4 = 6 and the 6 is placed in between the 2 and the 4.
Second example: 87 x 11 = 957 because 8 + 7 = 15 so the 5 goes in between the 8 and the 7 and the 1 is carried to the 8. So it is basically 857 + 100 = 957.
Or, for 43 x 11: first 4 + 3 = 7 (the tens digit); then 4 gives the hundreds digit and 3 the ones digit, so the answer is 473.
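The pairwise-addition rule generalizes to integers of any length; a small Python sketch of it:

```python
def multiply_by_11(n):
    """Multiply a non-negative integer by 11 using the pairwise digit-sum trick."""
    d = [int(c) for c in str(n)]
    # Sums taken right to left: last digit, then each adjacent pair, then first digit.
    sums = [d[-1]] + [d[i] + d[i + 1] for i in range(len(d) - 2, -1, -1)] + [d[0]]
    out, carry = [], 0
    for s in sums:                 # propagate carries right to left
        s += carry
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return int("".join(str(x) for x in reversed(out)))

print(multiply_by_11(759))   # 8349
print(multiply_by_11(87))    # 957, matching the example above
```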
Multiplying two numbers close to and below 100.
This technique allows easy multiplication of numbers close to and below 100 (90 to 99). The variables will be the two numbers one multiplies.
The product of two variables ranging from 90-99 will result in a 4-digit number. The first step is to find the ones-digit and the tens digit.
Subtract both variables from 100, which will result in two one-digit numbers. The product of the two one-digit numbers will be the last two digits of the final product.
Next, subtract one of the two variables from 100. Then subtract the difference from the other variable. That difference will be the first two digits of the final product, and the resulting 4 digit number will be the final product.
Example:
95
x 97
Last two digits: 100-95=5 (subtract first number from 100)
100-97=3 (subtract second number from 100)
5*3=15 (multiply the two differences)
Final Product- yx15
First two digits: 100-95=5 (Subtract the first number of the equation from 100)
97-5=92 (Subtract that answer from the second number of the equation)
"Now, the difference will be the first two digits"
Final Product- 9215
"Alternate for first two digits"
5+3=8 (Add the two single digits derived when calculating "Last two digits" in previous step)
100-8=92 (Subtract that answer from 100)
"Now, the difference will be the first two digits"
Final Product- 9215
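In code, the two steps above amount to the identity (100 − x)(100 − y) = 100(100 − x − y) + xy; a hedged, illustrative Python sketch:

```python
def multiply_near_100(a, b):
    """Multiply two numbers just below 100 using complements to 100:
    (100 - x)(100 - y) = 100*(100 - x - y) + x*y."""
    x, y = 100 - a, 100 - b          # the two complements
    head = a - y                     # equivalently b - x, or 100 - (x + y)
    tail = x * y                     # may itself spill over into the head
    return head * 100 + tail

print(multiply_near_100(95, 97))   # 9215, as in the worked example
print(multiply_near_100(91, 93))   # 8463
```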
Using square numbers.
The products of small numbers may be calculated by using the squares of integers; for example, to calculate 13 × 17, one can remark that 15 is the mean of the two factors, and think of it as (15 − 2) × (15 + 2), "i.e." 152 − 22. Knowing that 152 is 225 and 22 is 4, simple subtraction shows that 225 − 4 = 221, which is the desired product.
This method requires knowing by heart a certain number of squares:
Squaring numbers.
It may be useful to be aware that the difference between two successive square numbers is the sum of their respective square roots. Hence, if one knows that 12 × 12 = 144 and wish to know 13 × 13, calculate 144 + 12 + 13 = 169.
This is because ("x" + 1)2 − "x"2 = "x"2 + 2"x" + 1 − "x"2 = "x" + ("x" + 1)
"x"2 = ("x" − 1)2 + (2"x" − 1)
Squaring any number.
Take a given number, and add and subtract a certain value to it that will make it easier to multiply. For example:
4922
492 is close to 500, which is easy to multiply by. Add and subtract 8 (the difference between 500 and 492) to get
492 -> 484, 500
Multiply these numbers together to get 242,000 (This can be done efficiently by dividing 484 by 2 = 242 and multiplying by 1000). Finally, add the difference (8) squared (82 = 64) to the result:
4922 = 242,064
The proof follows:
formula_17
formula_18
formula_19
formula_20
Squaring any 2-digit integer.
This method requires memorization of the squares of the one-digit numbers 1 to 9.
The square of "mn", "mn" being a two-digit integer, can be calculated as
10 × "m"("mn" + "n") + "n"2
Meaning the square of "mn" can be found by adding "n" to "mn", multiplied by "m", adding 0 to the end and finally adding the square of "n".
For example, 232:
232
= 10 × 2(23 + 3) + 32
= 10 × 2(26) + 9
= 520 + 9
= 529
So 232 = 529.
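A direct Python transcription of this rule (with the two-digit number written as 10m + n):

```python
def square_2digit(x):
    """Square a two-digit integer using x^2 = 10*m*(x + n) + n^2,
    where m and n are the tens and ones digits of x."""
    m, n = divmod(x, 10)
    return 10 * m * (x + n) + n * n

print(square_2digit(23))   # 529, as in the example above
print(square_2digit(87))   # 7569
```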
Squaring numbers very close to 50.
Suppose one needs to square a number "n" near 50.
The number may be expressed as "n" = 50 − "a", so its square is (50−"a")2 = 502 − 100"a" + "a"2. One knows that 502 is 2500. So one subtracts 100"a" from 2500, and then adds "a"2.
For example, say one wants to square 48, which is 50 − 2. One subtracts 200 from 2500 and add 4, and get "n"2 = 2304. For numbers larger than 50 ("n" = 50 + "a"), add 100×"a" instead of subtracting it.
Squaring an integer from 26 to 74.
This method requires the memorization of squares from 1 to 24.
The square of "n" (most easily calculated when "n" is between 26 and 74 inclusive) is
(50 − "n")2 + 100("n" − 25)
In other words, the square of a number is the square of its difference from fifty added to one hundred times the difference of the number and twenty five. For example, to square 62:
(−12)2 + [(62-25) × 100]
= 144 + 3,700
= 3,844
Squaring an integer near 100 (e.g., from 76 to 124).
This method requires the memorization of squares from 1 to "a" where "a" is the absolute difference between "n" and 100. For example, students who have memorized their squares from 1 to 24 can apply this method to any integer from 76 to 124.
The square of "n" (i.e., 100 ± "a") is
100(100 ± 2"a") + "a"2
In other words, the square of a number is the square of its difference from one hundred, added to one hundred times the difference between twice the number and one hundred. For example, to square 93:
100(100 − 2(7)) + 72
= 100 × 86 + 49
= 8,600 + 49
= 8,649
Another way to look at it would be like this:
932 = ? (is −7 from 100)
93 − 7 = 86 (this gives the first two digits)
(−7)2 = 49 (these are the second two digits)
932 = 8649
Another example:
822 = ? (is −18 from 100)
82 − 18 = 64 (subtract. First digits.)
(−18)2 = 324 (second pair of digits. One will need to carry the 3.)
822 = 6724
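A small Python sketch covering both sides of 100 (writing the number as 100 + a, where a may be negative):

```python
def square_near_100(x):
    """Square an integer near 100 using (100 + a)^2 = 100*(100 + 2a) + a^2,
    where a = x - 100 may be negative."""
    a = x - 100
    return 100 * (100 + 2 * a) + a * a

print(square_near_100(93))    # 8649
print(square_near_100(82))    # 6724
print(square_near_100(107))   # 11449
```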
Squaring any integer near 10"n" (e.g., 976 to 1024, 9976 to 10024, etc.).
This method is a straightforward extension of the explanation given above for squaring an integer near 100.
10122 = ? (1012 is +12 from 1000)
(+12)2 = 144 ("n" trailing digits)
1012 + 12 = 1024 (leading digits)
10122 = 1024144
99972 = ? (9997 is -3 from 10000)
(-3)2 = 0009 ("n" trailing digits)
9997 - 3 = 9994 (leading digits)
99972 = 99940009
Squaring any integer near "m ×" 10"n" (e.g., 276 to 324, 4976 to 5024, 79976 to 80024).
This method is a straightforward extension of the explanation given above for integers near 10"n".
4072 = ? (407 is +7 from 400)
(+7)2 = 49 ("n" trailing digits)
407 + 7 = 414
414 × 4 = 1656 (leading digits; note this multiplication by "m" was not needed for integers from 76 to 124 because their "m" = 1)
4072 = 165649
799912 = ? (79991 is -9 from 80000)
(-9)2 = 0081 ("n" trailing digits)
79991 - 9 = 79982
79982 × 8 = 639856 (leading digits)
799912 = 6398560081
Finding roots.
Approximating square roots.
An easy way to approximate the square root of a number is to use the following equation:
formula_21
The closer the known square is to the unknown, the more accurate the approximation. For instance, to estimate the square root of 15, one could start with the knowledge that the nearest perfect square is 16 (42).
formula_22
So the estimated square root of 15 is 3.875. The actual square root of 15 is 3.872983... One thing to note is that, no matter what the original guess was, the estimated answer will always be larger than the actual answer due to the inequality of arithmetic and geometric means. Thus, one should try rounding the estimated answer down.
Note that if "n"2 is the closest perfect square to the desired square "x" and "d" = "x" - "n"2 is their difference, it is more convenient to express this approximation in the form of mixed fraction as formula_23. Thus, in the previous example, the square root of 15 is formula_24 As another example, square root of 41 is formula_25 while the actual value is 6.4031...
It may simplify mental calculation to notice that this method is equivalent to the mean of the known square and the unknown square, divided by the known square root:
formula_26
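A hedged Python sketch of this estimate, picking the nearest perfect square automatically (the helper name is illustrative, and the call to math.sqrt is used only as a shortcut for finding the nearest integer root):

```python
import math

def approx_sqrt(x):
    """Estimate sqrt(x) as (n*n + x) / (2*n), where n*n is the nearest perfect square.
    The estimate never undershoots the true root (AM-GM inequality)."""
    n = round(math.sqrt(x))          # nearest integer root, fine for an estimate
    return (n * n + x) / (2 * n)

print(approx_sqrt(15))   # 3.875      (true value 3.8729...)
print(approx_sqrt(41))   # 6.4166...  (true value 6.4031...)
```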
Derivation.
By definition, if "r" is the square root of x, then
formula_27
One then redefines the root
formula_28
where "a" is a known root (4 from the above example) and "b" is the difference between the known root and the answer one seeks.
formula_29
Expanding yields
formula_30
If 'a' is close to the target, 'b' will be a small enough number to render the formula_31 element of the equation negligible. Thus, one can drop formula_31 out and rearrange the equation to
formula_32
and therefore
formula_33
that can be reduced to
formula_34
Extracting roots of perfect powers.
Extracting roots of perfect powers is often practiced. The difficulty of the task does not depend on the number of digits of the perfect power but on the precision, i.e. the number of digits of the root. In addition, it also depends on the order of the root; finding perfect roots whose order is coprime with 10 is somewhat easier, since the digits are scrambled in consistent ways, as in the next section.
Extracting cube roots.
An easy task for the beginner is extracting cube roots from the cubes of 2-digit numbers. For example, given 74088, determine what two-digit number, when multiplied by itself once and then multiplied by the number again, yields 74088. One who knows the method will quickly know the answer is 42, as 423 = 74088.
Before learning the procedure, it is required that the performer memorize the cubes of the numbers 1-10:
Observe that there is a pattern in the rightmost digit: adding and subtracting with 1 or 3. Starting from zero:
There are two steps to extracting the cube root from the cube of a two-digit number. For example, extracting the cube root of 29791. Determine the one's place (units) of the two-digit number. Since the cube ends in 1, as seen above, it must be 1.
Note that every digit corresponds to itself except for 2, 3, 7 and 8, which are just subtracted from ten to obtain the corresponding digit.
The second step is to determine the first digit of the two-digit cube root by looking at the magnitude of the given cube. To do this, remove the last three digits of the given cube (29791 → 29) and find the greatest cube it is greater than (this is where knowing the cubes of numbers 1-10 is needed). Here, 29 is greater than 1 cubed, greater than 2 cubed, greater than 3 cubed, but not greater than 4 cubed. The greatest cube it is greater than is 3, so the first digit of the two-digit cube must be 3.
Therefore, the cube root of 29791 is 31.
Another example: for 456533, the cube ends in 3, so the units digit of the root is 7; removing the last three digits leaves 456, and the greatest cube not exceeding it is 343 = 7³, so the tens digit is 7. The cube root of 456533 is therefore 77.
This process can be extended to find cube roots that are 3 digits long, by using arithmetic modulo 11.
These types of tricks can be used for any root whose order is coprime to 10; they fail for square roots, since the order, 2, shares a factor with 10. Because 3 shares no factor with 10, cube roots work.
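The two steps can also be sketched in Python; the helper name and the way the digit map is built are illustrative choices, not part of the traditional presentation:

```python
def cube_root_of_two_digit_cube(c):
    """Recover the two-digit n from c = n**3 using only the last digit of c
    and the group of digits in front of the last three, as described above."""
    # The last digit of n**3 determines the last digit of n: 2, 3, 7 and 8
    # swap with 8, 7, 3 and 2, and every other digit maps to itself.
    last_digit_of_root = {d ** 3 % 10: d for d in range(10)}
    units = last_digit_of_root[c % 10]
    # Drop the last three digits, then find the largest single-digit cube
    # that does not exceed what remains.
    leading = c // 1000
    tens = max(d for d in range(1, 10) if d ** 3 <= leading)
    return 10 * tens + units

print(cube_root_of_two_digit_cube(29791))  # 31
print(cube_root_of_two_digit_cube(74088))  # 42
```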
Approximating common logarithms (log base 10).
To approximate a common logarithm (to at least one decimal point of accuracy), a few logarithm rules and the memorization of a few logarithms are required. One must know:
From this information, one can find the logarithm of any number 1-9.
The first step in approximating the common logarithm is to put the number given in scientific notation. For example, the number 45 in scientific notation is 4.5 × 10^1, but one will call it a × 10^b. Next, find the logarithm of a, which is between 1 and 10. Since 4.5 lies between 4 and 5, start with the logarithm of 4, which is .60, and the logarithm of 5, which is .70. Next, and skill at this comes with practice, place 4.5 on a logarithmic scale between .6 and .7, somewhere around .653. (Note that the value obtained this way will always be greater than it would be on a linear scale: one would expect it to go at .650 because 4.5 is halfway, but instead it will be a little larger, in this case .653.) Once one has obtained the logarithm of a, simply add b to it to get the approximation of the common logarithm: log a + b = .653 + 1 = 1.653. The actual value of log(45) ≈ 1.65321.
The same process applies for numbers between 0 and 1. For example, 0.045 would be written as 4.5 × 10^−2. The only difference is that b is now negative, so when adding one is really subtracting. This would yield the result 0.653 − 2, or −1.347.
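A minimal Python sketch of this procedure is shown below; the rounded table of single-digit logarithms and the linear interpolation step are assumptions made for illustration only:

```python
import math

def approx_log10(x):
    """Estimate log10(x): write x as a * 10**b with 1 <= a < 10, interpolate
    log10(a) between memorised single-digit values, then add b."""
    b = math.floor(math.log10(x))          # exponent in scientific notation
    a = x / 10 ** b                        # mantissa between 1 and 10
    memorised = {1: 0.00, 2: 0.30, 3: 0.48, 4: 0.60, 5: 0.70,
                 6: 0.78, 7: 0.85, 8: 0.90, 9: 0.95, 10: 1.00}
    lo, hi = math.floor(a), math.ceil(a)
    if lo == hi:
        frac = memorised[lo]
    else:
        # Linear placement between the two memorised values; the true value
        # sits slightly higher, as noted above.
        frac = memorised[lo] + (a - lo) * (memorised[hi] - memorised[lo])
    return frac + b

print(approx_log10(45))     # about 1.65  (actual: 1.6532)
print(approx_log10(0.045))  # about -1.35 (actual: -1.3468)
```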
Mental arithmetic as a psychological skill.
Physical exertion of the proper level can lead to an increase in performance of a mental task, like doing mental calculations, performed afterward. It has been shown that during high levels of physical activity there is a negative effect on mental task performance. This means that too much physical work can decrease accuracy and output of mental math calculations. Physiological measures, specifically EEG, have been shown to be useful in indicating mental workload. Using an EEG as a measure of mental workload after different levels of physical activity can help determine the level of physical exertion that will be the most beneficial to mental performance. Previous work done at Michigan Technological University by Ranjana Mehta includes a recent study that involved participants engaging in concurrent mental and physical tasks. This study investigated the effects of mental demands on physical performance at different levels of physical exertion and ultimately found a decrease in physical performance when mental tasks were completed concurrently, with a more significant effect at the higher level of physical workload. The Brown–Peterson procedure is a widely known task using mental arithmetic. This procedure, mostly used in cognitive experiments, suggests mental subtraction is useful in testing the effects maintenance rehearsal can have on how long short-term memory lasts.
Mental Calculations World Championship.
The first Mental Calculations World Championship took place in 1998. This event repeats every year and now occurs online. It consists of a range of different tasks such as addition, subtraction, multiplication, division, irrational and exact square roots, cube roots and deeper roots, factorizations, fractions, and calendar dates.
Mental Calculation World Cup.
The first Mental Calculation World Cup took place in 2004. It is an in-person competition that occurs every other year in Germany. It consists of four standard tasks: addition of ten ten-digit numbers, multiplication of two eight-digit numbers, calculation of square roots, and calculation of weekdays for given dates, as well as a variety of "surprise" tasks.
Memoriad – World Memory, Mental Calculation & Speed Reading Olympics.
The first international Memoriad was held in Istanbul, Turkey, in 2008.
The second Memoriad took place in Antalya, Turkey, on 24–25 November 2012. 89 competitors from 20 countries participated. Awards and money prizes were given for 10 categories in total, 5 of which concerned mental calculation (Mental Addition, Mental Multiplication, Mental Square Roots (non-integer), Mental Calendar Dates calculation and Flash Anzan). The third Memoriad was held in Las Vegas, USA, from 8 to 10 November 2016.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{Ex}: 35 \\times 75"
},
{
"math_id": 1,
"text": "35 - 5 = 30 = X"
},
{
"math_id": 2,
"text": "75 + 5 = 80 = Y"
},
{
"math_id": 3,
"text": "(X \\times Y) + 50(t_1-t_2) + 25"
},
{
"math_id": 4,
"text": "= 30 \\times 80 + 50(7-3)+25 = 2625"
},
{
"math_id": 5,
"text": "(10a+b) \\cdot (10c+d)"
},
{
"math_id": 6,
"text": "= 100 (a\\cdot c) + 10 (b\\cdot c) + 10 (a\\cdot d)+ b\\cdot d"
},
{
"math_id": 7,
"text": "23\\cdot 47 = 100 (2\\cdot 4) + 10 (3\\cdot 4) + 10 (2\\cdot 7)+ 3\\cdot 7"
},
{
"math_id": 8,
"text": " (a\\cdot d+b\\cdot c)\\cdot 10 "
},
{
"math_id": 9,
"text": " {} + b\\cdot d"
},
{
"math_id": 10,
"text": " {} + a\\cdot c\\cdot 100"
},
{
"math_id": 11,
"text": "75\\cdot 23"
},
{
"math_id": 12,
"text": "ab\\cdot cd"
},
{
"math_id": 13,
"text": "a\\cdot c\\cdot 100 + (a\\cdot d+b\\cdot c)\\cdot 10 + b\\cdot d"
},
{
"math_id": 14,
"text": "a\\cdot c"
},
{
"math_id": 15,
"text": " (a\\cdot d+b\\cdot c)"
},
{
"math_id": 16,
"text": "b\\cdot d"
},
{
"math_id": 17,
"text": "n^2 = n^2"
},
{
"math_id": 18,
"text": "n^2 = (n^2 - a^2) + a^2"
},
{
"math_id": 19,
"text": "n^2 = (n^2 - an + an - a^2) + a^2"
},
{
"math_id": 20,
"text": "n^2 = (n-a)(n+a) + a^2"
},
{
"math_id": 21,
"text": "\\text{root }\\simeq\\text{ known square root} - \\frac{\\text{known square} - \\text{unknown square}}{2 \\times \\text{known square root}}\\,"
},
{
"math_id": 22,
"text": "\\begin{align}\n\\text{root} & \\simeq 4 - \\frac{16 - 15}{2 \\times 4} \\\\\n& \\simeq 4 - 0.125 \\\\\n& \\simeq 3.875 \\\\\n\\end{align}\\,\\!\n"
},
{
"math_id": 23,
"text": "n\\tfrac{d}{2n}"
},
{
"math_id": 24,
"text": "4\\tfrac{-1}{8}."
},
{
"math_id": 25,
"text": "6\\tfrac{5}{12} = 6.416"
},
{
"math_id": 26,
"text": "\\text{root }\\simeq \\frac{\\text{mean}(\\text{known square}, \\text{unknown square})}{\\text{known square root}}\\,"
},
{
"math_id": 27,
"text": "\\mathrm{r}^2 = x\\,\\!"
},
{
"math_id": 28,
"text": "\\mathrm{r} = a - b\\,\\!"
},
{
"math_id": 29,
"text": "(a-b)^2 = x\\,\\!"
},
{
"math_id": 30,
"text": "a^2 - 2ab + b^2 = x\\,\\!"
},
{
"math_id": 31,
"text": "{} + b^2\\,"
},
{
"math_id": 32,
"text": "b \\simeq \\frac{a^2 - x}{2a}\\,\\!"
},
{
"math_id": 33,
"text": "\\mathrm{root} \\simeq a - \\frac{a^2 - x}{2a}\\,\\!"
},
{
"math_id": 34,
"text": "\\mathrm{root} \\simeq \\frac{a^2 + x}{2a}\\,\\!"
}
] |
https://en.wikipedia.org/wiki?curid=567292
|
56731778
|
Dirac matter
|
The term Dirac matter refers to a class of condensed matter systems which can be effectively described by the Dirac equation. Even though the Dirac equation itself was formulated for fermions, the quasi-particles present within Dirac matter can be of any statistics. As a consequence, Dirac matter can be classified as fermionic, bosonic or anyonic Dirac matter. Prominent examples of Dirac matter are graphene and other Dirac semimetals, topological insulators, Weyl semimetals, various high-temperature superconductors with formula_0-wave pairing and liquid helium-3. The effective theory of such systems is classified by a specific choice of the Dirac mass, the Dirac velocity, the gamma matrices and the space-time curvature. The universal treatment of the class of Dirac matter in terms of an effective theory leads to common features with respect to the density of states, the heat capacity and impurity scattering.
Definition.
Members of the class of Dirac matter differ significantly in nature. However, all examples of Dirac matter are unified by similarities within the algebraic structure of an effective theory describing them.
General.
The general definition of Dirac matter is a condensed matter system where the quasi-particle excitations can be described in curved spacetime by the generalised Dirac equation:
formula_1
In the above definition formula_2 denotes a covariant vector depending on the formula_3-dimensional momentum formula_4 (formula_0 space formula_5 time dimension), formula_6 is the vierbein describing the curvature of the space, formula_7 the quasi-particle mass and formula_8 the Dirac velocity. Note that since in Dirac matter the Dirac equation gives the effective theory of the quasiparticles, the energy from the mass term is formula_9, not the rest mass formula_10 of a massive particle. formula_11 refers to a set of Dirac matrices, whose defining property for the construction is the anticommutation relation,
formula_12
formula_13 is the Minkowski metric with signature (+ - - -) and formula_14 is the formula_15-dimensional unit matrix.
In all equations, implicit summation over formula_16 and formula_17 is used (Einstein convention). Furthermore, formula_18 is the wavefunction. The unifying feature of all Dirac matter is the matrix structure of the equation describing the quasi-particle excitations.
In the limit where formula_19, i.e. the covariant derivative, conventional Dirac matter is obtained. However, this general definition allows the description of matter with higher order dispersion relations and in curved spacetime as long as the effective Hamiltonian exhibits the matrix structure specific to the Dirac equation.
Common (conventional).
The majority of experimental realisations of Dirac matter to date are in the limit of formula_20 which therefore defines conventional Dirac matter in which the quasiparticles are described by the Dirac equation in curved space-time,
formula_21
Here, formula_22 denotes the covariant derivative. As an example, for the flat metric, the energy of a free Dirac particle differs significantly from the classical kinetic energy where energy is proportional to momentum squared:
formula_23
The Dirac velocity formula_8 gives the gradient of the formula_24 dispersion at large momenta formula_25, and formula_7 is the mass of the particle or object. In the case of massless Dirac matter, such as the fermionic quasiparticles in graphene or Weyl semimetals, the energy-momentum relation is linear,
formula_26
Therefore, conventional Dirac matter includes all systems that have a linear crossing or linear behavior in some region of the energy-momentum relation. They are characterised by features that resemble an 'X', sometimes tilted or skewed and sometimes with a gap between the upper formula_27 and lower formula_28 parts (the turning points of which become rounded if the origin of the gap is a mass term).
The general features and some specific examples of conventional Dirac matter are discussed in the following sections.
General properties of Dirac matter.
Technological relevance and tuning of Dirac matter.
Dirac matter, especially fermionic Dirac matter has much potential for technological applications. For example, 2010's Nobel Prize in Physics was awarded to Andre Geim and Konstantin Novoselov "for groundbreaking experiments regarding the material graphene". Within the official press release of the Swedish Royal Academy of Science it is stated that
<templatestyles src="Template:Blockquote/styles.css" />[...] a vast variety of practical applications now appear possible including the creation of new materials and the manufacture of innovative electronics. Graphene transistors are predicted to be substantially faster than today’s silicon transistors and result in more efficient computers.
In general, the properties of massless fermionic Dirac matter can be controlled by shifting the chemical potential by means of doping or within a field effect setup. By tuning the chemical potential, it is possible to have a precise control of the number of states present, since the density of states varies in a well-defined way with energy.
Additionally, depending on the specific realization of the Dirac material, it may be possible to introduce a mass term formula_7 that opens a gap in the spectrum - a band gap. In general, the mass term is the result of breaking a specific symmetry of the system. The size of the band gap can be controlled precisely by controlling the strength of the mass term.
Density of states.
The density of states of formula_0-dimensional Dirac matter near the Dirac point scales as formula_29 where formula_30 is the particle energy. The vanishing density of states for quasiparticles in Dirac matter mimics semimetal physics for physical dimension formula_31. In the two-dimensional systems such as graphene and topological insulators, the density of states gives a V shape, compared with the constant value for massive particles with dispersion formula_32.
Experimental measurement of the density of states near the Dirac point by standard techniques such as scanning tunnelling microscopy often differ from the theoretical form due to the effects of disorder and interactions.
Specific heat.
Specific heat, the heat capacity per unit mass, describes the energy required to change the temperature of a sample. The low-temperature electronic specific heat of Dirac matter is formula_33 which is different from formula_34 encountered for normal metals. Therefore, for systems whose physical dimension is greater than 1, the specific heat can provide a clear signature of the underlying Dirac nature of the quasiparticles.
Landau quantization.
Landau quantization refers to the quantization of the cyclotron orbits of charged particles in magnetic fields. As a result, the charged particles can only occupy orbits with discrete energy values, called Landau levels. For 2-dimensional systems with a perpendicular magnetic field, the energies of the Landau levels for ordinary matter, described by the Schrödinger equation, and for Dirac matter are given by
formula_35
Here, formula_36 is the cyclotron frequency, which depends linearly on the applied magnetic field and the charge of the particle. There are two distinct features between the Landau level quantization for 2D Schrödinger fermions (ordinary matter) and 2D Dirac fermions. First, the energy for Schrödinger fermions depends linearly on the integer quantum number formula_37, whereas it exhibits a square-root dependence for Dirac fermions. This key difference plays an important role in the experimental verification of Dirac matter. Furthermore, for formula_38 there exists a zero-energy level for Dirac fermions which is independent of the cyclotron frequency formula_36 and hence of the applied magnetic field. For example, the existence of the zeroth Landau level gives rise to a quantum Hall effect where the Hall conductance is quantized at half-integer values.
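A brief numerical sketch of the two expressions above, with the cyclotron energy formula_36 set to one in arbitrary units purely for illustration, makes the different level spacings explicit:

```python
import math

hbar_omega_c = 1.0   # cyclotron energy in arbitrary units (assumption for illustration)

for n in range(6):
    schrodinger = hbar_omega_c * (n + 0.5)   # equally spaced levels
    dirac = hbar_omega_c * math.sqrt(n)      # square-root spacing, zero mode at n = 0
    print(n, round(schrodinger, 2), round(dirac, 2))
```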
Fermionic Dirac matter.
In the context of fermionic quasiparticles, the Dirac velocity is identical to the Fermi velocity; in bosonic systems, no Fermi velocity exists, so the Dirac velocity is a more general property of such systems.
Graphene.
Graphene is a 2-dimensional crystalline allotrope of carbon, where the carbon atoms are arranged in a honeycomb lattice.
Each carbon atom forms formula_39-bonds to the three neighboring atoms that lie in the graphene plane at angles of 120formula_40. These bonds are mediated by three of carbon's four electrons while the fourth electron, which occupies a formula_41 orbital, mediates an out-of-plane π-bond that leads to the electronic bands at the Fermi level. The unique transport properties and the semimetallic state of graphene are the result of the delocalized electrons occupying these pz orbitals.
The semimetallic state corresponds to a linear crossing of energy bands at the formula_42 and formula_43 points of graphene's hexagonal Brillouin zone. At these two points, the electronic structure can be effectively described by the Hamiltonian
formula_44
Here, formula_45 and formula_46 are two of the three Pauli matrices.
The factor formula_47 indicates whether the Hamiltonian is centred on the formula_42 or formula_43 valley at the corner of the hexagonal Brillouin zone. For graphene, the Dirac velocity is about formula_48 eV formula_49. An energy gap in the dispersion of graphene can be obtained from a low-energy Hamiltonian of the form
formula_50
which now contains a mass term formula_51. There are several distinct ways of introducing a mass term, and the results have different characteristics. The most practical approach for creating a gap (introducing a mass term) is to break the sublattice symmetry of the lattice, so that each carbon atom is slightly different to its nearest neighbours but identical to its next-nearest neighbours, an effect that may result from substrate effects.
Topological insulators.
A topological insulator is a material that behaves as an insulator in its interior (bulk) but whose surface contains conducting states. This property represents a non-trivial, symmetry protected topological order. As a consequence, electrons in topological insulators can only move along the surface of the material. In the bulk of a non-interacting topological insulator, the Fermi level is positioned within the gap between the conduction and valence bands. On the surface, there are special states within the bulk energy gap which can be effectively described by a Dirac Hamiltonian:
formula_52
where formula_53 is normal to the surface and formula_54 is in the real spin basis. However, if the spin is rotated by a unitary operator, formula_55, one ends up with the standard form of the Dirac Hamiltonian, formula_56.
Such Dirac cones emerging on the surface of 3-dimensional crystals have been observed experimentally, e.g. in bismuth selenide (Biformula_57Seformula_58), tin telluride (SnTe) and many other materials.
Transition metal dichalcogenides (TMDCs).
The low-energy properties of some semiconducting transition metal dichalcogenide monolayers can be described by a two-dimensional massive (gapped) Dirac Hamiltonian with an additional term describing a strong spin–orbit coupling:
formula_60
The spin-orbit coupling formula_61 provides a large spin-splitting in the valence band and formula_62 indicates the spin degree of freedom. As for graphene, formula_63 gives the valley degree of freedom - whether near the formula_42 or formula_59 point of the hexagonal Brillouin zone. Transition metal dichalcogenide monolayers are often discussed in reference to potential applications in valleytronics.
Weyl semimetals.
Weyl semimetals, for example tantalum arsenide (TaAs) and related materials, or strontium silicide (SrSiformula_57), have a Hamiltonian that is very similar to that of graphene, but which now includes all three Pauli matrices, and the linear crossings occur in 3D:
formula_64
Since all three Pauli matrices are present, there is no further Pauli matrix that could open a gap in the spectrum and Weyl points are therefore topologically protected. Tilting of the linear cones so the Dirac velocity varies leads to type II Weyl semimetals.
One distinct, experimentally observable feature of Weyl semimetals is that the surface states form Fermi arcs since the Fermi surface does not form a closed loop.
While the Weyl equation was originally derived for odd spatial dimensions, the generalization of a 3D Weyl fermion state in 2D leads to a distinct topological state of matter, labeled as 2D Weyl semimetals. 2D Weyl semimetals are spin-polarized analogues of graphene that promise access to topological properties of Weyl fermions in (2+1)-dim spacetime. In 2024, an intrinsic 2D Weyl semimetal with spin-polarized Weyl cones and topological Fermi strings (1D analog of Fermi arcs) was discovered in epitaxial monolayer bismuthene.
Dirac semimetals.
In crystals that are symmetric under inversion and time reversal, electronic energy bands are two-fold degenerate. This degeneracy is referred to as Kramers degeneracy. Therefore, semimetals with linear crossings of two energy bands (two-fold degeneracy) at the Fermi energy exhibit a four-fold degeneracy at the crossing point. The effective Hamiltonian for these states can be written as
formula_65
This has exactly the matrix structure of Dirac matter. Examples of experimentally realised Dirac semimetals are sodium bismuthide (Naformula_58Bi) and cadmium arsenide (Cdformula_58Asformula_57).
Bosonic Dirac matter.
While historical interest focussed on fermionic quasiparticles that have potential for technological applications, particularly in electronics, the mathematical structure of the Dirac equation is not restricted to the statistics of the particles. This has led to the recent development of the concept of bosonic Dirac matter.
In the case of bosons, there is no Pauli exclusion principle to confine excitations close to the chemical potential (Fermi energy for fermions) so the entire Brillouin zone must be included. At low temperatures, the bosons will collect at the lowest energy point, the formula_66-point of the lower band. Energy must be added to excite the quasiparticles to the vicinity of the linear crossing point.
Several examples of Dirac matter with fermionic quasi-particles occur in systems where there is a hexagonal crystal lattice; so bosonic quasiparticles on an hexagonal lattice are the natural candidates for bosonic Dirac matter. In fact, the underlying symmetry of a crystal structure strongly constrains and protects the emergence of linear band crossings. Typical bosonic quasiparticles in condensed matter are magnons, phonons, polaritons and plasmons.
Existing examples of bosonic Dirac matter include transition metal halides such as CrXformula_58 (X= Cl, Br, I), where the magnon spectrum exhibits linear crossings, granular superconductors in a honeycomb lattice and hexagonal arrays of semiconductor microcavities hosting microcavity polaritons with linear crossings. Like graphene, all these systems have an hexagonal lattice structure.
Anyonic Dirac materials.
Anyonic Dirac matter is a hypothetical field which remains largely unexplored to date. An anyon is a type of quasiparticle that can only occur in two-dimensional systems. For bosons and fermions, the interchange of two particles contributes a factor of 1 or -1 to the wave function. In contrast, the operation of exchanging two identical anyons causes a global phase shift. Anyons are generally classified as "abelian" or "non-abelian", according to whether the elementary excitations of the theory transform under an abelian representation of the braid group or a non-abelian one. Abelian anyons have been detected in connection with the fractional quantum Hall effect. The possible construction of anyonic Dirac matter relies on the symmetry protection of crossings of anyonic energy bands. In comparison to bosons and fermions the situation gets more complicated, as translations in space do not necessarily commute. Additionally, for given spatial symmetries, the group structure describing the anyon depends strongly on the specific phase of the anyon interchange. For example, for bosons, a rotation of a particle by 2π, i.e. 360formula_40, will not change its wave function. For fermions, a rotation of a particle by 2π will contribute a factor of formula_67 to its wave function, whereas a 4π rotation, i.e. a rotation by 720formula_40, will give the same wave function as before. For anyons, an even higher degree of rotation can be necessary, e.g. 6π, 8π, etc., to leave the wave function invariant.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "\n \\left[i \\hbar v_{\\rm D} \\gamma^a e_a^\\mu d_\\mu(p) - m v_{\\rm D}^2\\right]\\Psi = 0.\n"
},
{
"math_id": 2,
"text": "d_\\mu"
},
{
"math_id": 3,
"text": "(d+1)"
},
{
"math_id": 4,
"text": "p"
},
{
"math_id": 5,
"text": "+ 1"
},
{
"math_id": 6,
"text": "e_a^\\mu"
},
{
"math_id": 7,
"text": "m"
},
{
"math_id": 8,
"text": "v_{\\rm D}"
},
{
"math_id": 9,
"text": "m v_{\\rm D}^2"
},
{
"math_id": 10,
"text": "m c^2"
},
{
"math_id": 11,
"text": "\\gamma^\\mu"
},
{
"math_id": 12,
"text": "\n \\left\\{\\gamma^\\mu,\\gamma^\\nu \\right\\} = \\gamma^\\mu\\gamma^\\nu + \\gamma^\\nu\\gamma^\\mu = \\eta^{\\mu\\nu} I_d.\n"
},
{
"math_id": 13,
"text": "\\eta^{\\mu\\nu}"
},
{
"math_id": 14,
"text": "I_d"
},
{
"math_id": 15,
"text": "d\\times d"
},
{
"math_id": 16,
"text": "a"
},
{
"math_id": 17,
"text": "\\mu"
},
{
"math_id": 18,
"text": "\\Psi"
},
{
"math_id": 19,
"text": "d_\\mu(p) = D_\\mu"
},
{
"math_id": 20,
"text": "d_\\mu (p) = D_\\mu"
},
{
"math_id": 21,
"text": "\n \\left[i \\hbar v_{\\rm D} \\gamma^a e_a^\\mu D_\\mu - m v_{\\rm D}^2\\right]\\Psi = 0.\n"
},
{
"math_id": 22,
"text": "D_\\mu"
},
{
"math_id": 23,
"text": "\n\\begin{align}\n\\mathrm{Free~Dirac~particle: \\;} E &= \\pm \\sqrt{\\hbar^2 v_{\\rm D}^2\\mathbf{k}^2+m^2c^4} \\\\\n\\mathrm{Kinetic\\; energy: \\;} E &= \\frac{m |\\mathbf{v}|^2}{2} = \\frac{|\\mathbf{k}|^2}{2m}.\n\\end{align}\n"
},
{
"math_id": 24,
"text": "E-k"
},
{
"math_id": 25,
"text": "k"
},
{
"math_id": 26,
"text": "\n E(\\mathbf{k}) = \\hbar v_{\\rm D} |\\mathbf{k}|\n"
},
{
"math_id": 27,
"text": "\\vee"
},
{
"math_id": 28,
"text": "\\wedge"
},
{
"math_id": 29,
"text": "N(\\epsilon)\\propto |\\epsilon|^{d-1}"
},
{
"math_id": 30,
"text": "\\epsilon"
},
{
"math_id": 31,
"text": "d>1"
},
{
"math_id": 32,
"text": "E=\\hbar^2k^2/2m"
},
{
"math_id": 33,
"text": "C(T\\to 0)\\sim T^d"
},
{
"math_id": 34,
"text": "C(T\\to 0)\\sim T"
},
{
"math_id": 35,
"text": "\n\\begin{align}\n\\mathrm{Ordinary\\;matter: \\;} E &= \\hbar \\omega_c \\left(n+\\frac{1}{2} \\right), \\\\\n\\mathrm{Dirac\\;Matter: \\;} E &= \\hbar \\omega_c \\sqrt{|n|}.\n\\end{align}\n"
},
{
"math_id": 36,
"text": "\\omega_c"
},
{
"math_id": 37,
"text": "n"
},
{
"math_id": 38,
"text": "n=0"
},
{
"math_id": 39,
"text": "\\sigma"
},
{
"math_id": 40,
"text": "^\\circ"
},
{
"math_id": 41,
"text": "\\mathrm{p}_z"
},
{
"math_id": 42,
"text": "K"
},
{
"math_id": 43,
"text": "K'"
},
{
"math_id": 44,
"text": "\n{\\cal H} = \\hbar v_{\\rm D} \\left(\\tau k_x \\sigma_x + k_y \\sigma_y \\right). \n"
},
{
"math_id": 45,
"text": "\\sigma_x"
},
{
"math_id": 46,
"text": "\\sigma_y"
},
{
"math_id": 47,
"text": "\\tau=+/-"
},
{
"math_id": 48,
"text": "\\hbar v_{\\rm D} \\approx 5.8"
},
{
"math_id": 49,
"text": "\\AA"
},
{
"math_id": 50,
"text": "\n\\begin{align}\n{\\cal H} = \\hbar v_{\\rm D} (\\tau k_x \\sigma_x + k_y\\sigma_y) + M \\sigma_z,\n\\end{align}\n"
},
{
"math_id": 51,
"text": "M"
},
{
"math_id": 52,
"text": "\n\\begin{align}\n{\\cal H} = \\hbar v_{\\rm D} (\\mathbf{k}\\times \\boldsymbol{\\sigma})\\cdot\\hat{\\mathbf{z}}\n\\end{align}\n"
},
{
"math_id": 53,
"text": "\\hat{\\mathbf{z}}"
},
{
"math_id": 54,
"text": "{\\mathbf{\\sigma}}"
},
{
"math_id": 55,
"text": "U={\\rm diag}[1,i]"
},
{
"math_id": 56,
"text": "{\\cal H} = \\hbar v_{\\rm D} {\\boldsymbol{\\sigma}}\\cdot {\\mathbf{k}}"
},
{
"math_id": 57,
"text": "_2"
},
{
"math_id": 58,
"text": "_3"
},
{
"math_id": 59,
"text": "K^\\prime"
},
{
"math_id": 60,
"text": "\n\\begin{align}\n{\\cal H} = \\hbar v_{\\rm D} (\\tau k_x \\sigma_x+k_y\\sigma_y)+\\Delta\\sigma_z+\\lambda(1-\\sigma_z)\\tau s+(\\alpha+\\beta\\sigma_z)(k_x^2+k_y^2).\n\\end{align}\n"
},
{
"math_id": 61,
"text": "\\lambda"
},
{
"math_id": 62,
"text": "s"
},
{
"math_id": 63,
"text": "\\tau"
},
{
"math_id": 64,
"text": "\n{\\cal H} = \\hbar v_{\\rm D} (k_x\\sigma_x + k_y\\sigma_y + k_z\\sigma_z).\n"
},
{
"math_id": 65,
"text": "\n{\\cal H} = \\hbar v_{\\rm D} \\left(\n\\begin{array}{cc}\n \\mathbf{k}\\cdot\\boldsymbol{\\sigma} & 0 \\\\\n 0 & -\\mathbf{k}\\cdot\\boldsymbol{\\sigma}\n\\end{array}\n\\right).\n"
},
{
"math_id": 66,
"text": "\\Gamma"
},
{
"math_id": 67,
"text": "-1"
}
] |
https://en.wikipedia.org/wiki?curid=56731778
|
56733354
|
Experiment to Detect the Global EoR Signature
|
Experimental telescope in Australia
The Experiment to Detect the Global EoR Signature (EDGES) is an experiment and radio telescope located in a radio quiet zone at the Murchison Radio-astronomy Observatory in Western Australia. It is a collaboration between Arizona State University and Haystack Observatory, with infrastructure provided by CSIRO. EoR stands for epoch of reionization, a time in cosmic history when neutral atomic hydrogen gas became ionised due to ultraviolet light from the first stars.
Low-band instruments.
The experiment has two low-band instruments, each of which has a dipole antenna pointed to the zenith and observing a single polarisation. The antenna is around in size, sat on a ground shield. It is coupled with a radio receiver, with a 100m cable run to a digital spectrometer. The instruments operate at , and are separated by 150m. Observations started in August 2015.
In 2023, a new version of the low-band antenna in which the electronics are built into the antenna was installed on a larger ground plane of 50 x 50 metres (164 ft x 164 ft) to further reduce the effects of scattering from nearby objects and observations started in June 2023.
78 MHz absorption profile.
In March 2018, the collaboration published a paper in "Nature" announcing the discovery of a broad absorption profile centered at a frequency of formula_0MHz in the sky-averaged signal after subtracting Galactic synchrotron emission. The absorption profile has a width of formula_1MHz and an amplitude of formula_2K, against a background RMS of 0.025K, giving it a signal-to-noise ratio of 37. The equivalent redshift is centered at formula_3, spanning z=20–15. The signal is possibly due to ultraviolet light from the first stars in the Universe altering the emission of the 21cm line by lowering the temperature of the hydrogen relative to the cosmic microwave background (the mechanism is Wouthuysen–Field coupling). A "more exotic scenario," encouraged by the unexpected strength of the absorption, is that the signal is due to interactions between dark matter and baryons.
In 2021, Melia reported that the deeper absorption is compatible with the alternative Friedmann–Lemaître–Robertson–Walker (FLRW) cosmology known as the "Rh = ct" universe.
In 2022, an experiment called Shaped Antenna Measurement of the Background Radio Spectrum (SARAS) led by the Raman Research Institute reported that their measurements did not replicate the EDGES result, rejecting it at the 95.3% confidence level.
High-band instruments.
The high-band instrument is of similar design, and operates at .
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "78\\pm1"
},
{
"math_id": 1,
"text": "19_{-2}^{+4}"
},
{
"math_id": 2,
"text": "0.5_{-0.2}^{+0.5}"
},
{
"math_id": 3,
"text": "z\\approx17"
}
] |
https://en.wikipedia.org/wiki?curid=56733354
|
56733844
|
Lorenz energy cycle
|
The Lorenz energy cycle describes the generation, conversion and dissipation of energy in the general atmospheric circulation. It is named after the meteorologist Edward N. Lorenz who worked on its mathematical formulation in the 1950s.
Description.
Introduction.
Any atmospheric circulation system, whether it is a small-scale weather system or a large-scale zonal wind system, is maintained by the supply of kinetic energy. The development of such a system requires either a transformation of some other form of energy into kinetic energy, or the conversion of the kinetic energy of another system into that of the developing system.
On a global scale, the atmospheric circulation must carry energy polewards, because there is a net gain of energy in the tropics through incoming solar radiation and net loss of energy in high latitudes through thermal emission. At low latitudes, where the Hadley cell takes shape, the poleward transport of energy is done by the mean meridional circulation. At mid-latitudes in contrast, the influence of longitudinally asymmetric features, referred to as eddies, is dominant over the mean flow. For a closer examination, it is useful to split all parameters (e.g. P) into their zonal-mean (denoted by an overline, e.g. P) and their departures from the zonal mean due to orography, land-sea contrasts, weather systems and any other eddy-like features (denoted by a prime, e.g. P').
Energy reservoirs.
The available potential energy is the amount of potential energy in the atmosphere that can be converted into kinetic energy. In a statically stable atmosphere, the zonal-mean available potential energy P is approximated as:
formula_0
where formula_1 is the integral over the Earth's entire atmosphere, ρ0 is the mean density of air, N is the buoyancy frequency, a measure of static stability, Φ is the geopotential and z* denotes a log-pressure coordinate.
Eddy available potential energy P' is approximated as:
formula_2
Zonal-mean kinetic energy K is approximated as:
formula_3
where u and v are the zonal and meridional components of air velocity.
Eddy kinetic energy K' is approximated as:
formula_4
Sources, sinks and conversion of energy.
The description of the Lorenz Energy Cycle is completed by a mathematical formalism for the generation of potential energy through diabatic heating, its conversion to kinetic energy through vertical motion of air and the dissipation of kinetic energy through friction. A conversion of zonal-mean energy to eddy energy and vice versa is possible where eddies interact with the mean flow and displace warm/cold air.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\overline{P}=\\frac{1}{2}\\int_A \\frac{\\rho_0}{N^2}\\left(\\frac{\\partial\\overline{\\Phi}}{\\partial z^{*}}\\right)^2\\,\\mathrm{d}V"
},
{
"math_id": 1,
"text": "\\int_A\\mathrm{d}V"
},
{
"math_id": 2,
"text": "P'=\\frac{1}{2}\\int_A \\frac{\\rho_0}{N^2}\\overline{\\left(\\frac{\\partial\\Phi'}{\\partial z^{*}}\\right)^2}\\,\\mathrm{d}V"
},
{
"math_id": 3,
"text": "\\overline{K}=\\int_A\\rho_0\\frac{\\overline{u}^2+\\overline{v}^2}{2}\\,\\mathrm{d}V"
},
{
"math_id": 4,
"text": "K'=\\int_A\\rho_0\\frac{\\overline{u'^2}+\\overline{v'^2}}{2}\\,\\mathrm{d}V"
}
] |
https://en.wikipedia.org/wiki?curid=56733844
|
56734151
|
HR 2562 B
|
Brown dwarf
HR 2562 B is a substellar companion orbiting the star HR 2562. Discovered in 2016 by a team led by Quinn M. Konopacky by direct imaging, HR 2562 B orbits within the inner edge of HR 2562's circumstellar disc—as of April 2023, it is one of only two known brown dwarfs to do so. Separated by roughly from its primary companion, HR 2562 B has drawn interest for its potential dynamical interactions with the outer circumstellar disc.
Discovery.
HR 2562 B was discovered using the Gemini Planet Imager (GPI), which first observed the star HR 2562 in January 2016. In the initial data set, Konopacky and collaborators identified a candidate companion object. As a result, followup observations were conducted within the following month in the infrared K1-, K2-, and J-bands. Within the processed data set, HR 2562 B was confirmed to share a common proper motion with HR 2562, with Konopacky and collaborators announcing its discovery in a paper published on 14 September 2016.
Host star.
HR 2562 B's parent star, HR 2562 (alternatively designated HD 50571 or HIP32775), has a mass of M☉ and a radius of R☉. With an estimated effective temperature of 6597 ± 81K, it is a main-sequence star with the spectral type F5V. It is located from the Sun in the constellation Pictor. HR 2562 is not known to belong to a moving group or stellar cluster.
As with many mid F-type stars, the age of HR 2562 is poorly constrained. Between 1999 and 2011, estimates from various teams of astronomers determined ages ranging from roughly 300 Myr to 1.6 Gyr. In 2018, a team of astronomers led by D. Mesa derived an age of Myr using measurements of the star's lithium-temperature relationship.
Properties.
Orbital properties.
Initial observations of HR 2562 B by Konopacky and collaborators yielded a separation of , placing it interior to and coplanar with the inner edge of HR 2562's observed debris disc. Further observations of HR 2562 B by the Atacama Large Millimeter Array (ALMA) supported this, yielding a semi-major axis of AU, an orbital period of yr, and an orbital eccentricity of . With a probable orbital inclination of °, HR 2562 B's misalignment angle with the debris disc is either ° or °. However, the limited coverage of observations still leaves a wide range of possible orbits; both low-eccentricity, coplanar orbits and high-eccentricity, misaligned orbits would be consistent with the observational data. A highly misaligned orbit, however, would significantly perturb the disc, suggesting that low-eccentricity, coplanar solutions are likelier.
Any additional companions around HR 2562 with a mass on the order of 10 MJ should be visible at separations larger than 10 AU, and any companion a few times more massive than Jupiter should be visible to SPHERE's infrared dual-band spectrograph (IRDIS) instrument—thus placing mass restrictions on any additional companions.
Physical properties.
HR 2562 B's exact mass is unknown. The brown dwarf was estimated to be 29 ± 15 MJ in 2021. However, subsequent observations placed an upper mass limit of < 18.5 MJ. Its luminosity is about formula_0 solar luminosity. Its spectral type is L7±3.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{1}{50 000}"
}
] |
https://en.wikipedia.org/wiki?curid=56734151
|
56737160
|
Józef Schreier
|
Józef Schreier (; 18 February 1909, Drohobycz, Austria-Hungary – April 1943, Drohobycz, Occupied Poland) was a Polish mathematician of Jewish origin, known for his work in functional analysis, group theory and combinatorics. He was a member of the Lwów School of Mathematics and a victim of the Holocaust.
Józef Schreier was born on 18 February 1909 in Drohobycz. His father was a rabbi and doctor of philosophy. His cousin was the musician Alfred Schreyer. From 1927-31 he studied at the Jan Kazimierz University in Lwów.
In his first published paper, he defined what later came to be known as Schreier sets in order to show that not all Banach spaces possess the weak Banach-Saks property, disproving a conjecture of Stefan Banach and Stanisław Saks. Schreier sets were later discovered independently by researchers in Ramsey theory.
Schreier completed his master's degree "On tournament elimination systems" in 1932 under the direction of Hugo Steinhaus. Schreier correctly conjectured that to determine the second largest number in an unordered list requires at least formula_0 comparisons. In 1934, he completed his doctorate, "On finite base in topological groups" under Banach.
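The stated number of formula_0 comparisons is attained by the knockout-tournament method: the largest element is found with n − 1 comparisons, and the runner-up must be among the at most ⌈log2 n⌉ elements that lost directly to it. A small Python sketch of that method (the function name and bookkeeping are illustrative assumptions, not Schreier's notation) is:

```python
def second_largest(values):
    """Knockout tournament: n - 1 comparisons find the champion while noting
    who lost to it directly; at most ceil(log2 n) - 1 more comparisons then
    pick the runner-up from those direct losers."""
    players = [(v, []) for v in values]        # (value, values it beat directly)
    comparisons = 0
    while len(players) > 1:
        nxt = []
        for i in range(0, len(players) - 1, 2):
            (a, beat_a), (b, beat_b) = players[i], players[i + 1]
            comparisons += 1
            if a >= b:
                nxt.append((a, beat_a + [b]))
            else:
                nxt.append((b, beat_b + [a]))
        if len(players) % 2:                   # odd player out gets a bye
            nxt.append(players[-1])
        players = nxt
    champion, losers = players[0]
    runner_up = losers[0]
    for v in losers[1:]:
        comparisons += 1
        if v > runner_up:
            runner_up = v
    return runner_up, comparisons

print(second_largest([31, 4, 15, 92, 65, 35, 89, 79]))  # (89, 9): 8 + 3 - 2 comparisons
```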
In 1932 he married Zofia Rosenblatt. Schreier often played blindfold chess.
He was a friend of Stanisław Ulam and co-authored eight papers with him. They were the only two undergraduates who attended the meetings at the Scottish Café in Lwów. (Schreier contributed ten questions to the Scottish Book.) Together they proved the Baire–Schreier–Ulam theorem and Schreier–Ulam theorem.
According to Ulam,
<templatestyles src="Template:Blockquote/styles.css" />We would meet almost every day, occasionally at the coffee house but more often at my house. His home was in Drohobycz, a little town and petroleum center south of Lwów. What a variety of problems and methods we discussed together! Our work, while still inspired by the methods then current in Lwów, branched into new fields: groups of topological transformations, groups of permutations, pure set theory, general algebra. I believe that some of our papers were among the first to show applications to a wider class of mathematical objects of modern set theoretical methods combined with a more algebraic point of view. We started work on the theory of groupoids, as we called them, or semi-groups, as they are called now.
With the outbreak of World War II, Eastern Poland including Drohobycz was occupied by the USSR in accordance with the Molotov–Ribbentrop Pact. After Operation Barbarossa, this territory was invaded by Nazi Germany. The Jews of Drohobycz were confined to the Drohobycz Ghetto. In April 1943, the Germans discovered—or were informed of—an underground bunker in which Schreier was hiding with other Jews. It took three days for them to force their way in. Schreier committed suicide by cyanide rather than be captured. Of a prewar Jewish population of 10,000 in Drohobycz, approximately 400 survived the war. Schreier's wife was one of them and later moved to Israel, where she remarried.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n + \\lceil \\log_2 n \\rceil -2"
}
] |
https://en.wikipedia.org/wiki?curid=56737160
|
567391
|
Trachtenberg system
|
System of rapid mental calculation
The Trachtenberg system is a system of rapid mental calculation. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. It was developed by the Russian engineer Jakow Trachtenberg in order to keep his mind occupied while he was held in a Nazi concentration camp.
The rest of this article presents some methods devised by Trachtenberg. Some of the algorithms Trachtenberg developed are for general multiplication, division and addition. The Trachtenberg system also includes specialised methods for multiplying small numbers between 5 and 13 (the methods shown here cover 2–12).
The section on addition demonstrates an effective method of checking calculations that can also be applied to multiplication.
General multiplication.
The method for general multiplication is a method to achieve multiplications formula_0 with low space complexity, i.e. as few temporary results as possible to be kept in memory. This is achieved by noting that the final digit is completely determined by multiplying the last digits of the multiplicands. This is held as a temporary result. To find the next-to-last digit, we need everything that influences this digit: the temporary result, the last digit of formula_1 times the next-to-last digit of formula_2, as well as the next-to-last digit of formula_1 times the last digit of formula_2. This calculation is performed, and we have a temporary result that is correct in the final two digits.
In general, for each position formula_3 in the final result, we sum for all formula_4:
formula_5
People can learn this algorithm and thus multiply four-digit numbers in their head – writing down only the final result. They would write it out starting with the rightmost digit and finishing with the leftmost.
Trachtenberg defined this algorithm with a kind of pairwise multiplication where two digits are multiplied by one digit, essentially only keeping the middle digit of the result. By performing the above algorithm with this pairwise multiplication, even fewer temporary results need to be held.
Example: formula_6
To find the first (rightmost) digit of the answer, start at the first digit of the multiplicand
The units digit of formula_7 is formula_8
The first digit of the answer is formula_9. The tens digit formula_10 is ignored.
To find the second digit of the answer, start at the second digit of the multiplicand:
The units digit of formula_11 plus the tens digit of formula_12 plus
The units digit of formula_13.
formula_14.
The second digit of the answer is formula_15 and carry formula_16 to the third digit.
To find the third digit of the answer, start at the third digit of the multiplicand:
The units digit of formula_17 plus the tens digit of formula_11 plus
The units digit of formula_18 plus the tens digit of formula_13 plus
The units digit of formula_19
formula_20
The third digit of the answer is formula_21 and carry formula_16 to the next digit.
To find the fourth digit of the answer, start at the fourth digit of the multiplicand:
The units digit of formula_22 plus the tens digit of formula_17 plus
The units digit of formula_23 plus the tens digit of formula_18 plus
The units digit of formula_24 plus the tens digit of formula_19.
formula_25 carried from the third digit.
The fourth digit of the answer is formula_26 and carry formula_27 to the next digit.
Continue with the same method to obtain the remaining digits.
Trachtenberg called this the 2 Finger Method. The calculations for finding the fourth digit from the example above are illustrated at right. The arrow from the nine will always point to the digit of the multiplicand directly above the digit of the answer you wish to find, with the other arrows each pointing one digit to the right. Each arrow head points to a UT Pair, or Product Pair. The vertical arrow points to the product where we will get the Units digit, and the sloping arrow points to the product where we will get the Tens digits of the Product Pair. If an arrow points to a space with no digit there is no calculation for that arrow. As you solve for each digit you will move each of the arrows over the multiplicand one digit to the left until all of the arrows point to prefixed zeros.
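A compact Python sketch of the general method is given below: for each answer position it sums the digit products that contribute to that position and carries the excess. It reproduces the arithmetic of the method rather than the two-finger bookkeeping, and the function name and digit handling are illustrative choices, not Trachtenberg's notation:

```python
def multiply(a, b):
    """Work right to left: for each answer position, add up the digit
    products that land there plus the carry, write one digit, carry the rest."""
    a_digits = [int(d) for d in str(a)][::-1]   # least significant digit first
    b_digits = [int(d) for d in str(b)][::-1]
    answer = []
    carry = 0
    for position in range(len(a_digits) + len(b_digits) - 1):
        total = carry
        for i, da in enumerate(a_digits):
            j = position - i
            if 0 <= j < len(b_digits):
                total += da * b_digits[j]
        answer.append(total % 10)               # the digit written down
        carry = total // 10                     # carried to the next column
    if carry:
        answer.append(carry)
    return int("".join(str(d) for d in reversed(answer)))

print(multiply(123456, 789))  # 97406784
```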
Division.
Division in the Trachtenberg System is done much the same as in multiplication but with subtraction instead of addition. Splitting the dividend into smaller Partial Dividends, then dividing this Partial Dividend by only the left-most digit of the divisor will provide the answer one digit at a time. As you solve each digit of the answer you then subtract Product Pairs (UT pairs) and also NT pairs (Number-Tens) from the Partial Dividend to find the next Partial Dividend. The Product Pairs are found between the digits of the answer so far and the divisor. If a subtraction results in a negative number you have to back up one digit and reduce that digit of the answer by one. With enough practice this method can be done in your head.
General addition.
A method of adding columns of numbers and accurately checking the result without repeating the first operation. An intermediate sum, in the form of two rows of digits, is produced. The answer is obtained by taking the sum of the intermediate results with an L-shaped algorithm. As a final step, the checking method that is advocated both removes the risk of repeating any original errors and identifies the precise column in which an error occurs all at once. It is based on check (or digit) sums, such as the nines-remainder method.
For the procedure to be effective, the different operations used in each stage must be kept distinct, otherwise there is a risk of interference.
Other multiplication algorithms.
When performing any of these multiplication algorithms the following "steps" should be applied.
The answer must be found one digit at a time starting at the least significant digit and moving left. The last calculation is on the leading zero of the multiplicand.
Each digit has a "neighbor", i.e., the digit on its right. The rightmost digit's neighbor is the trailing zero.
The 'halve' operation has a particular meaning to the Trachtenberg system. It is intended to mean "half the digit, rounded down" but for speed reasons people following the Trachtenberg system are encouraged to make this halving process instantaneous. So instead of thinking "half of seven is three and a half, so three" it's suggested that one thinks "seven, three". This speeds up calculation considerably. In this same way the tables for subtracting digits from 10 or 9 are to be memorized.
And whenever the rule calls for adding half of the neighbor, always add 5 if the current digit is odd. This makes up for dropping 0.5 in the next digit's calculation.
Numbers and digits (base 10).
Digits and numbers are two different notions. The number T consists of n digits c_n ... c_1.
formula_28
Multiplying by 2.
Proof
formula_29
Rule:
Example: 8624 × 2
Working from left to right:
8+8=16,
6+6=12 (carry the 1),
2+2=4
4+4=8;
8624 × 2 = 17248
Example: 76892 × 2
Working from left to right:
7+7=14
6+6=12
8+8=16
9+9=18
2+2=4;
76892 × 2 = 153784
Multiplying by 3.
Proof
formula_30
Rule:
Example: 492 × 3 = 1476
Working from right to left:
(10 − 2) × 2 + Half of 0 (0) = 16. Write 6, carry 1.
(9 − 9) × 2 + Half of 2 (1) + 5 (since 9 is odd) + 1 (carried) = 7. Write 7.
(9 − 4) × 2 + Half of 9 (4) = 14. Write 4, carry 1.
Half of 4 (2) − 2 + 1 (carried) = 1. Write 1.
Multiplying by 4.
Proof
formula_31
Rule:
Example: 346 × 4 = 1384
Working from right to left:
(10 − 6) + Half of 0 (0) = 4. Write 4.
(9 − 4) + Half of 6 (3) = 8. Write 8.
(9 − 3) + Half of 4 (2) + 5 (since 3 is odd) = 13. Write 3, carry 1.
Half of 3 (1) − 1 + 1 (carried) = 1. Write 1.
Multiplying by 5.
Proof
formula_32
Rule:
Example: 42×5=210
Half of 2's neighbor, the trailing zero, is 0.
Half of 4's neighbor is 1.
Half of the leading zero's neighbor is 2.
43×5 = 215
Half of 3's neighbor is 0, plus 5 because 3 is odd, is 5.
Half of 4's neighbor is 1.
Half of the leading zero's neighbor is 2.
93×5=465
Half of 3's neighbor is 0, plus 5 because 3 is odd, is 5.
Half of 9's neighbor is 1, plus 5 because 9 is odd, is 6.
Half of the leading zero's neighbor is 4.
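The examples above follow the rule "take half of the neighbor, and add 5 if the digit itself is odd", worked from the rightmost digit through the leading zero. A short Python sketch of that rule (the function name and zero padding are illustrative assumptions) is:

```python
def times_5(t):
    """Each answer digit is half of the right-hand neighbour (rounded down),
    plus 5 if the digit itself is odd; the trailing neighbour is zero."""
    digits = [0] + [int(d) for d in str(t)] + [0]   # leading and trailing zero
    out = []
    for i in range(len(digits) - 1, 0, -1):
        half_neighbour = digits[i] // 2
        out.append(half_neighbour + (5 if digits[i - 1] % 2 else 0))
    return int("".join(str(d) for d in reversed(out)))

print(times_5(42))  # 210
print(times_5(93))  # 465
```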
Multiplying by 6.
Proof
formula_33
Rule:
Example: 357 × 6 = 2142
Working right to left:
7 has no neighbor, add 5 (since 7 is odd) = 12. Write 2, carry the 1.
5 + half of 7 (3) + 5 (since the starting digit 5 is odd) + 1 (carried) = 14. Write 4, carry the 1.
3 + half of 5 (2) + 5 (since 3 is odd) + 1 (carried) = 11. Write 1, carry 1.
0 + half of 3 (1) + 1 (carried) = 2. Write 2.
Multiplying by 7.
Proof
formula_34
Rule:
Example: 693 × 7 = 4,851
Working from right to left:
(3×2) + 0 + 5 + 0 = 11 = carryover 1, result 1.
(9×2) + 1 + 5 + 1 = 25 = carryover 2, result 5.
(6×2) + 4 + 0 + 2 = 18 = carryover 1, result 8.
(0×2) + 3 + 0 + 1 = 4 = result 4.
Multiplying by 8.
Proof
formula_35
Rule:
Example: 456 × 8 = 3648
Working from right to left:
(10 − 6) × 2 + 0 = 8. Write 8.
(9 − 5) × 2 + 6 = 14, Write 4, carry 1.
(9 − 4) × 2 + 5 + 1 (carried) = 16. Write 6, carry 1.
4 − 2 + 1 (carried) = 3. Write 3.
Multiplying by 9.
Proof
formula_36
Rule:
For rules 9, 8, 4, and 3 only the first digit is subtracted from 10. After that each digit is subtracted from nine instead.
Example: 2,130 × 9 = 19,170
Working from right to left:
(10 − 0) + 0 = 10. Write 0, carry 1.
(9 − 3) + 0 + 1 (carried) = 7. Write 7.
(9 − 1) + 3 = 11. Write 1, carry 1.
(9 − 2) + 1 + 1 (carried) = 9. Write 9.
2 − 1 = 1. Write 1.
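A Python sketch of the multiply-by-9 rule as used in the example above (subtract the rightmost digit from 10, subtract each following digit from 9 and add its right-hand neighbor, then finish with one less than the leading digit) could look like this; the function name and carry handling are illustrative assumptions:

```python
def times_9(t):
    """Rightmost digit: subtract from 10. Middle digits: subtract from 9 and
    add the right-hand neighbour. Final step: leading digit minus one."""
    d = [int(c) for c in str(t)]
    out, carry = [], 0
    for i in range(len(d) - 1, -1, -1):
        if i == len(d) - 1:
            s = (10 - d[i]) + carry            # rightmost digit
        else:
            s = (9 - d[i]) + d[i + 1] + carry  # middle digits plus neighbour
        out.append(s % 10)
        carry = s // 10
    out.append(d[0] - 1 + carry)               # leading step
    return int("".join(str(x) for x in reversed(out)))

print(times_9(2130))  # 19170
```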
Multiplying by 10.
Add 0 (zero) as the rightmost digit.
Proof
formula_37
Multiplying by 11.
Proof
formula_38
Rule:
Example: formula_39
(0 + 3) (3 + 4) (4 + 2) (2 + 5) (5 + 0)
3 7 6 7 5
To illustrate:
11=10+1
Thus,
formula_40
formula_41
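The pattern in this example ("add each digit to its right-hand neighbor") can be sketched in Python as follows; the padding with a leading and trailing zero mirrors the leading-zero and trailing-zero convention described earlier, while the function name is an illustrative assumption:

```python
def times_11(t):
    """Add each digit to its right-hand neighbour, working right to left,
    treating the positions before and after the number as zeros."""
    digits = [0] + [int(d) for d in str(t)] + [0]   # leading and trailing zero
    out, carry = [], 0
    for i in range(len(digits) - 1, 0, -1):
        s = digits[i - 1] + digits[i] + carry       # digit + neighbour
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return int("".join(str(d) for d in reversed(out)))

print(times_11(3425))  # 37675
```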
Multiplying by 12.
Proof
formula_42
Rule: to multiply by 12, starting from the rightmost digit, double each digit and add the neighbor. (The "neighbor" is the digit on the right.)
If the answer is greater than a single digit, simply carry over the extra digit (which will be a 1 or 2) to the next operation.
The remaining digit is one digit of the final result.
Example: formula_43
Determine neighbors in the multiplicand 0316:
formula_44
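A Python sketch of the multiply-by-12 rule stated above (double each digit and add its right-hand neighbor, carrying as needed) could look like this; the function name and zero padding are illustrative assumptions:

```python
def times_12(t):
    """Starting from the rightmost digit, double each digit and add its
    right-hand neighbour, carrying as needed (trailing neighbour is zero)."""
    digits = [0] + [int(d) for d in str(t)] + [0]   # leading and trailing zero
    out, carry = [], 0
    for i in range(len(digits) - 1, 0, -1):
        s = 2 * digits[i - 1] + digits[i] + carry   # double the digit, add neighbour
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return int("".join(str(d) for d in reversed(out)))

print(times_12(316))  # 3792
```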
Multiplying by 13.
Proof
formula_45
Publications.
The book contains specific algebraic explanations for each of the above operations.
Most of the information in this article is from the original book.
The algorithms/operations for multiplication, etc., can be expressed in other more compact ways that the book does not specify, despite the chapter on algebraic description.
Other systems.
There are many other methods of calculation in mental mathematics. The list below shows a few other methods of calculating, though they may not be entirely mental.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " a\\times b "
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "i"
},
{
"math_id": 5,
"text": "a \\text{ (digit at } i\\text{ )} \\times b \\text{ (digit at } (n-i)\\text{)}."
},
{
"math_id": 6,
"text": "123456 \\times 789"
},
{
"math_id": 7,
"text": "9 \\times 6 "
},
{
"math_id": 8,
"text": " 4. "
},
{
"math_id": 9,
"text": "4"
},
{
"math_id": 10,
"text": "5"
},
{
"math_id": 11,
"text": "9 \\times 5"
},
{
"math_id": 12,
"text": "9 \\times 6"
},
{
"math_id": 13,
"text": "8 \\times 6"
},
{
"math_id": 14,
"text": "5 + 5 + 8 = 18"
},
{
"math_id": 15,
"text": "8"
},
{
"math_id": 16,
"text": "1"
},
{
"math_id": 17,
"text": "9 \\times 4"
},
{
"math_id": 18,
"text": "8 \\times 5"
},
{
"math_id": 19,
"text": "7 \\times 6"
},
{
"math_id": 20,
"text": "1 + 6 + 4 + 0 + 4 + 2 = 17"
},
{
"math_id": 21,
"text": "7"
},
{
"math_id": 22,
"text": "9 \\times 3"
},
{
"math_id": 23,
"text": "8 \\times 4"
},
{
"math_id": 24,
"text": "7 \\times 5"
},
{
"math_id": 25,
"text": "1 + 7 + 3 + 2 + 4 + 5 + 4 = 26"
},
{
"math_id": 26,
"text": "6"
},
{
"math_id": 27,
"text": "2"
},
{
"math_id": 28,
"text": "\nT = 10^{n-1}*c_n + ... + 10^0*c_1\n"
},
{
"math_id": 29,
"text": "\n\\begin{align}\nR & = T*2 \\Leftrightarrow \\\\\nR & = 2*(10^{n-1}*c_{n} + \\ldots + 10^0*c_{1}) \\Leftrightarrow \\\\\nR & = 10^{n-1}*2*c_{n} + \\ldots + 10^0*2*c_{1} \\\\\n\\\\\n & QED\n\\end{align}\n"
},
{
"math_id": 30,
"text": "\n\\begin{align}\nR &= T*3 \\Leftrightarrow \\\\\n\nR &= 3*(10^{n-1}*c_n + \\ldots + 10^0*c_1) \\Leftrightarrow \\\\\n\nR &= (10/2 - 2)*(10^{n-1}*c_n + 10^{n-2}*c_{n-1} + \\ldots + 10^0*c_1) \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n/2 - 2) + 10^n*2 + 10^{n-1}*(c_{n-1}/2 - 2) + 10^{n-1}*2 + \\ldots + 10^1*(c_1/2 - 2) + 10^1*2 \\\\\n &-2*(10^{n-1}*c_n + 10^{n-2}*c_{n-1} + \\ldots + 10^1*c_2 + 10^0*c_1) \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n/2 - 2) + 10^{n-1}*(c_{n-1}/2 + 20 - 2 - 2*c_n) + 10^{n-2}*(c_{n-2}/2 + 20 - 2 - 2*c_{n-1}) \\\\\n &+ \\ldots + 10^1*(c_1/2 + 20 - 2 - 2*c_2) + 10^0*(20 - 2*c_1) \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n/2 - 2) + 10^{n-1}*(2*(9 - c_n) + c_{n-1}/2) + 10^{n-2}*(2*(9 - c_{n-1}) + c_{n-2}/2) \\\\\n &+ \\ldots + 10^1*(2*(9 - c_2) + c_1/2) + 10^0*(2*(10 - c_1)) \\Leftrightarrow\n\\vdots \\Re \\to \\aleph\\text{: a } = (a \\text{ div } b)*b + (a \\bmod b) \\\\\n\nR &= 10^n*(((c_n \\text{ div } 2)*2 + (c_n \\bmod 2))/2 - 2) + 10^{n-1}*(2*(9 - c_n) + c_{n-1}/2) + 10^{n-2}*(2*(9 - c_{n-1}) + c_{n-2}/2) \\\\\n &+ \\ldots + 10^1*(2*(9 - c_2) + c_1/2) + 10^0*(2*(10 - c_1)) \\Leftrightarrow \\\\\n\nR &= 10^n*((c_n \\text{ div } 2) - 2) + 10^{n-1}*(10*(c_n \\bmod 2)/2 + 2*(9 - c_n) + c_{n-1}/2) + 10^{n-2}*(2*(9 - c_{n-1}) + c_{n-2}/2) \\\\\n &+ \\ldots + 10^1*(2*(9 - c_2) + c_1/2) + 10^0*(2*(10 - c_1)) \\Leftrightarrow \\\\\n\nR &= 10^n*((c_n \\text{ div } 2) - 2) + 10^{n-1}*(2*(9 - c_n) + c_{n-1}/2 + (c_n \\bmod 2)*5) + 10^{n-2}*(2*(9 - c_{n-1}) + c_{n-2}/2) \\\\\n &+ \\ldots + 10^1*(2*(9 - c_2) + c_1/2) + 10^0*(2*(10 - c_1)) \\Leftrightarrow \\\\\n\nR &= 10^n*((c_n \\text{ div } 2) - 2) + 10^{n-1}*(2*(9 - c_n) + (c_{n-1} \\text{ div } 2) + \\text{ if}(c_n \\bmod 2 <> 0; 5;0)) \\\\\n &+ \\ldots + 10^1*(2*(9 - c_2) + (c_1 \\text{ div } 2) + \\text{ if}(c_2 \\bmod 2 <> 0; 5;0)) \\\\\n &+ 10^0*(2*(10 - c_1) + \\text{ if}(c_1 \\bmod 2 <> 0; 5;0)) \\\\\n\\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 31,
"text": "\n\\begin{align}\nR &= T*4 \\Leftrightarrow \\\\\n\nR &= 4*(10^{n-1}*c_n + \\ldots + 10^0*c_1) \\Leftrightarrow \\\\\n\nR &= (10/2 - 1)*(10^{n-1}*c_n + 10^{n-2}*c_{n-1} + \\ldots + 10^0*c_1) \\Leftrightarrow \\vdots \\mbox{ see proof of method 3} \\\\\n\nR &= 10^n*((c_n \\text{ div } 2) - 1) + 10^{n-1}*((9 - c_n) + (c_{n-1} \\text{ div } 2) + \\text{ if}(c_n \\bmod 2 <> 0; 5;0)) \\\\\n &+ \\ldots + 10^1*((9 - c_2) + (c_1 \\text{ div } 2) + \\text{ if}(c_2 \\bmod 2 <> 0; 5;0)) \\\\\n &+ 10^0*((10 - c_1) + \\text{ if}(c_1 \\bmod 2 <> 0; 5;0)) \\\\\n\\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 32,
"text": "\n\\begin{align}\nR &= T*5 \\Leftrightarrow \\\\\n\nR &= 5*(10^{n-1}*c_n + \\ldots + 10^0*c_1) \\Leftrightarrow \\\\\n\nR &= (10/2)*(10^{n-1}*c_n + 10^{n-2}*c_{n-1} + \\ldots + 10^0*c_1) \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n/2) + 10^{n-1}*(c_{n-1}/2) + \\ldots + 10^1*(c_1/2) \\Leftrightarrow \\vdots \\Re \\to \\aleph\\text{: a } = (a \\text{ div } b)*b + (a \\bmod b) \\\\\n\nR &= 10^n*((c_n \\text{ div } 2) * 2 + (c_n \\bmod 2))/2 + 10^{n-1}*((c_{n - 1} \\text{ div } 2) * 2 + (c_{n - 1} \\bmod 2))/2 \\\\\n &+ \\ldots + 10^2*((c_2 \\text{ div } 2) * 2 + (c_2 \\bmod 2))/2 + 10^1*((c_1 \\text{ div } 2) * 2 + (c_1 \\bmod 2))/2 \\Leftrightarrow \\\\\n\nR &= 10^n*((c_n \\text{ div } 2) + (c_n \\bmod 2)/2) + 10^{n-1}*((c_{n - 1} \\text{ div } 2) + (c_{n - 1} \\bmod 2)/2) \\\\\n &+ \\ldots + 10^2*((c_2 \\text{ div } 2) + (c_2 \\bmod 2)/2) + 10^1*((c_1 \\text{ div } 2) + (c_1 \\bmod 2)/2) \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n \\text{ div } 2) + 10^{n-1}*10*(c_n \\bmod 2)/2 + 10^{n-1}*(c_{n - 1} \\text{ div } 2) + 10^{n-2}*10*(c_{n-1} \\bmod 2)/2 + 10^{n-2}*(c_{n-2} \\text{ div } 2) \\\\\n &+ \\ldots + 10^2*(c_2 \\text{ div } 2) + 10^1*10*(c_2 \\bmod 2)/2 + (c_1 \\text{ div } 2)) + 10^0*10*(c_1 \\bmod 2)/2 \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n \\text{ div } 2) + 10^{n-1}*(c_{n - 1} \\text{ div } 2) + 10^{n-1}*(c_n \\bmod 2)*5 + 10^{n-2}*(c_{n-2} \\text{ div } 2) + 10^{n-2}*(c_{n-1} \\bmod 2)*5 \\\\\n &+ \\ldots + 10^2*(c_2 \\text{ div } 2) + 10^2*(c_3 \\bmod 2)*5 + 10^1*(c_1 \\text{ div } 2) + 10^1*(c_2 \\bmod 2)*5 + 10^0*(c_1 \\bmod 2)*5 \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n \\text{ div } 2) + 10^{n-1}*((c_{n - 1} \\text{ div } 2) + \\text{ if}(c_n \\bmod 2 <> 0; 5; 0)) + 10^{n-2}*((c_{n-2} \\text{ div } 2) + \\text{ if}(c_{n-1} \\bmod 2 <> 0; 5; 0)) \\\\\n &+ \\ldots + 10^2*((c_2 \\text{ div } 2) + \\text{ if}(c_3 \\bmod 2 <> 0; 5; 0)) + 10^1*((c_1 \\text{ div } 2) + \\text{ if}(c_2 \\bmod 2 <> 0; 5; 0)) + 10^0*\\text{ if}(c_1 \\bmod 2 <> 0; 5; 0)\n \\\\\n \\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 33,
"text": "\n\\begin{align}\nR &= T*6 \\Leftrightarrow \\\\\n\nR &= 6*(10^{n-1}*c_n + \\ldots + 10^0*c_1) \\Leftrightarrow \\\\\n\nR &= (10/2 + 1)*(10^{n-1}*c_n + 10^{n-2}*c_{n-1} + \\ldots + 10^0*c_1) \\Leftrightarrow \\\\\n\nR &= 10^n*c_n/2 + 1*10^{n-1}*c_n + 10^{n-1}*c_{n-1}/2 + 1*10^{n-2}*c_{n-1} + \\ldots + 10^1*c_1/2 + 1*10^0*c_1 \\Leftrightarrow \\\\\n\nR &= 10^n*c_n/2 + 10^{n-1}*(c_n + c_{n-1}/2) + \\ldots + 10^1*c_1/2 + c_1 \\Leftrightarrow \\vdots \\Re \\to \\aleph\\text{: a } = (a \\text{ div } b)*b + (a \\bmod b) \\\\\n\nR &= 10^n*((c_n \\text{ div } 2)*2 + (c_n \\bmod 2))/2 + 10^{n-1}*(c_n + c_{n-1}/2) + \\ldots + 10^1*c_1/2 + c_1 \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n \\text{ div } 2) + 10^{n-1}*(c_n \\bmod 2)*5 + 10^{n-1}*c_n + 10^{n-1}*((c_{n-1} \\text{ div } 2)*2 + (c_{n-1} \\bmod 2))/2 + \\ldots + 10^1*c_1/2 + c_1 \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n \\text{ div } 2) + 10^{n-1}*(c_n + (c_{n-1} \\text{ div } 2) + \\text{ if}((c_n \\bmod 2) <> 0; 5;0)) + 10^{n-2}*(c_{n-1} \\bmod 2)*5 + \\ldots + 10^1*c_1/2 + c_1 \\Leftrightarrow \\\\\n\nR &= 10^n*(c_n \\text{ div } 2) + 10^{n-1}*(c_n + (c_{n-1} \\text{ div } 2) + \\text{ if}((c_n \\bmod 2) <> 0; 5;0)) \\\\\n &+ 10^{n-2}*(c_{n-1} + (c_{n-2} \\text{ div } 2) + \\text{ if}((c_{n-1} \\bmod 2) <> 0; 5;0)) \\\\\n &+ \\ldots + 10^0*(c_1 + \\text{ if}((c_1 \\bmod 2) <> 0; 5;0)) \\\\\n\\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 34,
"text": "\n\\begin{align}\nR &= T*7 \\Leftrightarrow \\\\\n\nR &= 7*(10^{n-1}*c_{n} + \\ldots + 10^0*c_{1}) \\Leftrightarrow \\\\\n\nR &= (10/2 + 2)*(10^{n-1}*c_{n} + \\ldots + 10^0*c_{1}) \\Leftrightarrow \\vdots \\mbox{ see proof of method 6} \\\\\n\nR &= 10^n*(c_n \\text{ div } 2) + 10^{n-1}*(2*c_n + (c_{n-1} \\text{ div } 2) + \\text{ if}(c_n \\bmod 2 <> 0; 5; 0)) \\\\\n &+ 10^{n-2}*(2*c_{n-1} + (c_{n-2} \\text{ div } 2) + \\text{ if}(c_{n-1} \\bmod 2 <> 0; 5; 0)) \\\\\n &+ \\ldots + 10^1*(2*c_2 + (c_1 \\text{ div } 2) + \\text{ if}(c_2 \\bmod 2 <> 0; 5; 0)) + 2*c_1 + \\text{ if}(c_1 \\bmod 2 <> 0; 5; 0) \\\\\n\\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 35,
"text": "\n\\begin{align}\nR &= T*8 \\Leftrightarrow \\\\\nR &= T*4*2 \\Leftrightarrow \\vdots \\mbox{ see proof of method 4} \\\\\nR &= 10^n*2*(c_n/2 - 1) + 10^{n-1}*2*((9 - c_n) + c_{n-1}/2) + 10^{n-2}*2*((9 - c_{n-1}) + c_{n-2}/2) \\\\\n& + \\ldots + 10^1*2*((9 - c_2) + c_1/2) + 10^0*2*(10 - c_1) \\Leftrightarrow \\\\\nR &= 10^n*(c_n - 2) + 10^{n-1}*(2*(9 - c_n) + c_{n-1}) + \\ldots + 10^2*(2*(9 - c_3) + c_2) + 10^1*(2*(9 - c_2) + c_1) + 2*(10 - c_1) \\\\\n\\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 36,
"text": "\n\\begin{align}\nR &= T*9 \\Leftrightarrow \\\\\nR &= (10 - 1)*T \\Leftrightarrow \\\\\nR &= 10^n*(c_n - 1) + 10^n + 10^{n-1}*(c_{n-1} - 1) + 10^{n-1} + \\ldots + 10^1*(c_1 - 1) + 10^1 \\\\\n &- (10^{n-1}*c_n + 10^{n-2}*c_{n-1} + \\ldots + 10^1*c_2 + 10^0*c_1) \\Leftrightarrow \\vdots \\mbox{ see proof of method 4} \\\\\n\nR &= 10^n*(c_n - 1) + 10^{n-1}*(9 - c_n + c_{n-1}) + 10^{n-2}*(9 - c_{n-1} + c_{n-2}) + \\ldots + 10^1*(9 - c_2 + c_1) + 10^0*(10 - c_1) \\\\\n\\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 37,
"text": "\n\\begin{align}\nR &= T*10 \\Leftrightarrow \\\\\nR &= 10*(10^{n-1}*c_{n} + \\ldots + 10^0*c_{1}) \\Leftrightarrow \\\\\nR &= 10^{n}*c_{n} + \\ldots + 10^1*c_{1} \\\\\n\\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 38,
"text": "\n\\begin{align}\nR &= T*11 \\Leftrightarrow \\\\\nR &= T*(10 + 1) \\\\\nR &= 10*(10^{n-1}*c_{n} + \\ldots + 10^0*c_{1}) + (10^{n-1}*c_{n} + \\ldots + 10^0*c_{1})\\Leftrightarrow \\\\\nR &= 10^{n}*c_{n} + 10^{n-1}*(c_{n} + c_{n-1}) + \\ldots + 10^1*(c_{2} + c_{1}) + c_{1} \\\\\n\\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 39,
"text": "3,425 \\times 11 = 37,675"
},
{
"math_id": 40,
"text": "3425 \\times 11 = 3425 \\times (10+1)"
},
{
"math_id": 41,
"text": "\\rightarrow 37675 = 34250 + 3425"
},
{
"math_id": 42,
"text": "\n\\begin{align}\nR &= T*12 \\Leftrightarrow \\\\\nR &= T*(10 + 2) \\\\\nR &= 10*(10^{n-1}*c_{n} + \\ldots + 10^0*c_{1}) + 2*(10^{n-1}*c_{n} + \\ldots + 10^0*c_{1})\\Leftrightarrow \\\\\nR &= 10^{n}*c_{n} + 10^{n-1}*(2*c_{n} + c_{n-1}) + \\ldots + 10^1*(2*c_{2} + c_{1}) + 2*c_{1} \\\\\n\\\\\n &QED\n\\end{align}\n"
},
{
"math_id": 43,
"text": "316 \\times 12"
},
{
"math_id": 44,
"text": "\n\\begin{align}\n6 \\times 2 & = 12 \\text{ (2 carry 1) } \\\\\n1 \\times 2 + 6 + 1 & = 9 \\\\\n3 \\times 2 + 1 & = 7 \\\\\n0 \\times 2 + 3 & = 3 \\\\\n0 \\times 2 + 0 & = 0 \\\\[10pt]\n316 \\times 12 & = 3,792\n\\end{align}\n"
},
{
"math_id": 45,
"text": "\n\\begin{align}\nR &= T*13 \\Leftrightarrow \\\\\nR &= T*(10 + 3) \\\\\nR &= 10*(10^{n-1}*c_{n} + \\ldots + 10^0*c_{1}) + 3*(10^{n-1}*c_{n} + \\ldots + 10^0*c_{1})\\Leftrightarrow \\\\\nR &= 10^{n}*c_{n} + 10^{n-1}*(3*c_{n} + c_{n-1}) + \\ldots + 10^1*(3*c_{2} + c_{1}) + 3*c_{1} \\\\\n\\\\\n &QED\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=567391
|
56747378
|
Resistive pulse sensing
|
Resistive pulse sensing (RPS) is the generic, non-commercial term given for the well-developed technology used to detect, and measure the size of, individual particles in fluid. First invented by Wallace H. Coulter in 1953, the RPS technique is the basic principle behind the Coulter Principle, which is a trademark term. Resistive pulse sensing is also known as the electrical zone sensing technique, reflecting its fundamentally electrical nature, which distinguishes it from other particle sizing technologies such as the optically-based dynamic light scattering (DLS) and nanoparticle tracking analysis (NTA). An international standard has been developed for the use of the resistive pulse sensing technique by the International Organization for Standardization.
Construction and operation.
The basic design principle underlying resistive pulse sensing is shown in Fig. 1. Individual particles, suspended in a conductive fluid, flow one at a time through a constriction. The fluid most commonly used is water containing some amount of dissolved salts, sufficient to carry an electrical current. The salinity of seawater, or of a wide range of concentrations of phosphate-buffered saline, is easily sufficient for this purpose, with electrical conductivity in the mS-S range and salt concentrations of order 1 percent. Typical tap water often contains enough dissolved minerals to conduct adequately for this application as well.
Electrical contact is made with the fluid using metal electrodes, in the best case using platinum or other low electrode potential metals, as are found in electrochemical cell constructions. Biasing the electrodes with an electrical potential of order 1 volt will cause an electrical current to flow through the fluid. If properly designed, the electrical resistance of the constriction will dominate in the total electrical resistance of the circuit. Particles that flow through the constriction while the electrical current is being monitored will cause an obscuration of that current, resulting in an increase in the voltage drop between the two electrodes. In other words, the particle causes a change in the electrical resistance of the constriction. The change in the electrical resistance as a particle passes through a constriction is shown schematically in Fig. 2.
Theory of operation.
The quantitative relationship between the measured change in electrical resistance and the size of the particle that caused that change was worked out by De Blois and Bean in 1970. De Blois and Bean found the very simple result that the resistance change formula_0 is proportional to the ratio of particle volume formula_1 to the effective volume formula_2 of the constriction:
formula_3,
where formula_4 is a factor that depends on the detailed geometry of the constriction and the electrical conductivity of the working fluid.
Hence, by monitoring the electrical resistance as indicated by changes in the voltage drop across the constriction, one can count particles, as each increase in resistance indicates passage of a particle through the constriction, and one can measure the size of that particle, as the magnitude of the resistance change during the particle passage is proportional to that particle's volume. As one can usually calculate the volumetric flow rate of fluid through the constriction, controlled externally by setting the pressure difference across the constriction, one can then calculate the concentration of particles. With a large enough number of particle transients to provide adequate statistical significance, the concentration as a function of particle size, also known as the concentration spectral density, with units of per volume fluid per volume particle, can be calculated.
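To make the counting-and-sizing arithmetic concrete, the sketch below converts a list of measured resistance changes into particle diameters and a number concentration using the De Blois and Bean relation above. It is a minimal illustration rather than an instrument's actual processing pipeline; the numerical values of the geometry factor, constriction volume, flow rate, and acquisition time are arbitrary placeholder assumptions.

```python
import numpy as np

# Minimal sketch: particle sizing and concentration from resistive pulses,
# using Delta_R = A * V_p / V_c (De Blois and Bean, 1970).
# All numerical values below are illustrative assumptions, not instrument data.
A = 1.0e7             # geometry/conductivity factor, ohm (assumed)
V_c = 8.0e-18         # effective constriction volume, m^3 (assumed)
Q = 1.0e-12           # volumetric flow rate through the constriction, m^3/s (assumed)
T = 60.0              # acquisition time, s (assumed)

delta_R = np.array([420.0, 650.0, 510.0, 880.0])   # measured pulse heights, ohm (assumed)

V_p = delta_R * V_c / A                  # particle volumes from the pulse heights
d_p = (6.0 * V_p / np.pi) ** (1.0 / 3)   # equivalent spherical diameters

concentration = len(delta_R) / (Q * T)   # particles per m^3 of fluid sampled

print("diameters (m):", d_p)                      # roughly 90-110 nm for these inputs
print("number concentration (1/m^3):", concentration)
```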
Minimum detectable size and dynamic range.
Two important considerations when evaluating a resistive pulse sensing (RPS) instrument are the minimum detectable particle size and the dynamic range of the instrument. The minimum detectable size is determined by the volume formula_2 of the constriction, the voltage difference applied across that constriction, and the noise of the first-stage amplifier used to detect the particle signal. In other words, one must evaluate the minimum signal-to-noise ratio of the system. The minimum particle size can be defined as the size of the particle that generates a signal whose magnitude is equal to the noise, integrated over the same frequency bandwidth as generated by the signal. The dynamic range of an RPS instrument is set at its upper end by the diameter of the constriction, as that is the largest particle that can pass through the constriction. One can also instead choose a somewhat smaller maximum, perhaps setting it to 70 percent of this maximum volume. The dynamic range is then equal to the ratio of the maximum particle size to the minimum detectable size. This ratio can be quoted either as the ratio of the maximum to minimum particle volume, or as the ratio of the maximum to minimum particle diameter (the volume ratio being the cube of the diameter ratio).
Microfluidic resistive pulse sensing (MRPS).
The original Coulter counter was designed around a specialized technology for fabricating small pores in glass, but the expense and complexity of fabricating these elements means they become a semi-permanent part of the analytic RPS instrument. This also limited the minimum diameter of constrictions that could be reliably fabricated, making the RPS technique challenging to use for particles below roughly 1 micron in diameter.
There was therefore significant interest in applying the fabrication techniques developed for microfluidic circuits to RPS sensing. This translation of RPS technology to the microfluidic domain enables very small constrictions, well below effective diameters of 1 micron; this therefore extends the minimum detectable particle size to the deep sub-micron range. Using microfluidics technology also allows the use of inexpensive cast plastic or elastomer parts for defining the critical constriction component, which also become disposable. The use of a disposable element eliminates concerns about sample cross-contamination as well as obviating the need for time-consuming cleaning of the RPS instrument. Scientific advances demonstrating these capabilities have been published in the scientific literature, such as by Kasianowicz et al., Saleh and Sohn, and Fraikin et al. These together illustrate a variety of methods to fabricate microfluidic or lab-on-a-chip versions of the Coulter counter technology.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta R"
},
{
"math_id": 1,
"text": "V_p"
},
{
"math_id": 2,
"text": "V_c"
},
{
"math_id": 3,
"text": "\\Delta R = A \\frac{V_p}{V_c}"
},
{
"math_id": 4,
"text": "A"
}
] |
https://en.wikipedia.org/wiki?curid=56747378
|
5675
|
Curium
|
Chemical element with atomic number 96 (Cm)
Curium is a synthetic chemical element; it has symbol Cm and atomic number 96. This transuranic actinide element was named after eminent scientists Marie and Pierre Curie, both known for their research on radioactivity. Curium was first intentionally made by the team of Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso in 1944, using the cyclotron at Berkeley. They bombarded the newly discovered element plutonium (the isotope 239Pu) with alpha particles. This was then sent to the Metallurgical Laboratory at University of Chicago where a tiny sample of curium was eventually separated and identified. The discovery was kept secret until after the end of World War II. The news was released to the public in November 1947. Most curium is produced by bombarding uranium or plutonium with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains ~20 grams of curium.
Curium is a hard, dense, silvery metal with a high melting and boiling point for an actinide. It is paramagnetic at ambient conditions, but becomes antiferromagnetic upon cooling, and other magnetic transitions are also seen in many curium compounds. In compounds, curium usually has valence +3 and sometimes +4; the +3 valence is predominant in solutions. Curium readily oxidizes, and its oxides are a dominant form of this element. It forms strongly fluorescent complexes with various organic compounds, but there is no evidence of its incorporation into bacteria and archaea. If it gets into the human body, curium accumulates in bones, lungs, and liver, where it promotes cancer.
All known isotopes of curium are radioactive and have small critical mass for a nuclear chain reaction. The most stable isotope, 247Cm, has a half-life of 15.6 million years; the longest-lived curium isotopes predominantly emit alpha particles. Radioisotope thermoelectric generators can use the heat from this process, but this is hindered by the rarity and high cost of curium. Curium is used in making heavier actinides and the 238Pu radionuclide for power sources in artificial cardiac pacemakers and RTGs for spacecraft. It served as the α-source in the alpha particle X-ray spectrometers of several space probes, including the "Sojourner", "Spirit", "Opportunity", and "Curiosity" Mars rovers and the Philae lander on comet 67P/Churyumov–Gerasimenko, to analyze the composition and structure of the surface. Researchers have proposed using curium as fuel in nuclear reactors.
History.
Though curium had likely been produced in previous nuclear experiments as well as the natural nuclear fission reactor at Oklo, Gabon, it was first intentionally synthesized, isolated and identified in 1944, at University of California, Berkeley, by Glenn T. Seaborg, Ralph A. James, and Albert Ghiorso. In their experiments, they used a cyclotron.
Curium was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory), University of Chicago. It was the third transuranium element to be discovered even though it is the fourth in the series – the lighter element americium was still unknown.
The sample was prepared as follows: first plutonium nitrate solution was coated on a platinum foil of ~0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium(IV) oxide (PuO2) by annealing. Following cyclotron irradiation of the oxide, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid, and further separation was done by ion exchange to yield a certain isotope of curium. The separation of curium and americium was so painstaking that the Berkeley group initially called those elements "pandemonium" (from Greek for "all demons" or "hell") and "delirium" (from Latin for "madness").
Curium-242 was made in July–August 1944 by bombarding 239Pu with α-particles to produce curium with the release of a neutron:
<chem>^{239}_{94}Pu + ^{4}_{2}He -> ^{242}_{96}Cm + ^{1}_{0}n</chem>
Curium-242 was unambiguously identified by the characteristic energy of the α-particles emitted during the decay:
<chem>^{242}_{96}Cm -> ^{238}_{94}Pu + ^{4}_{2}He</chem>
The half-life of this alpha decay was first measured as 150 days and then corrected to 162.8 days.
Another isotope 240Cm was produced in a similar reaction in March 1945:
<chem>^{239}_{94}Pu + ^{4}_{2}He -> ^{240}_{96}Cm + 3^{1}_{0}n</chem>
The α-decay half-life of 240Cm was correctly determined as 26.7 days.
The discovery of curium and americium in 1944 was closely related to the Manhattan Project, so the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children, the "Quiz Kids", five days before the official presentation at an American Chemical Society meeting on November 11, 1945, when one listener asked if any new transuranic element beside plutonium and neptunium had been discovered during the war. The discovery of curium (242Cm and 240Cm), its production, and its compounds was later patented listing only Seaborg as the inventor.
The element was named after Marie Curie and her husband Pierre Curie, who are known for discovering radium and for their work in radioactivity. It followed the example of gadolinium, a lanthanide element above curium in the periodic table, which was named after the explorer of rare-earth elements Johan Gadolin:
As the name for the element of atomic number 96 we should like to propose "curium", with symbol Cm. The evidence indicates that element 96 contains seven 5f electrons and is thus analogous to the element gadolinium, with its seven 4f electrons in the regular rare earth series. On this basis element 96 is named after the Curies in a manner analogous to the naming of gadolinium, in which the chemist Gadolin was honored.
The first curium samples were barely visible, and were identified by their radioactivity. Louis Werner and Isadore Perlman made the first substantial sample of 30 μg curium-242 hydroxide at University of California, Berkeley in 1947 by bombarding americium-241 with neutrons. Macroscopic amounts of curium(III) fluoride were obtained in 1950 by W. W. T. Crane, J. C. Wallmann and B. B. Cunningham. Its magnetic susceptibility was very close to that of GdF3 providing the first experimental evidence for the +3 valence of curium in its compounds. Curium metal was produced only in 1951 by reduction of CmF3 with barium.
Characteristics.
Physical.
A synthetic, radioactive element, curium is a hard, dense metal with a silvery-white appearance and physical and chemical properties resembling gadolinium. Its melting point of 1344 °C is significantly higher than that of the previous elements neptunium (637 °C), plutonium (639 °C) and americium (1176 °C). In comparison, gadolinium melts at 1312 °C. Curium boils at 3556 °C. With a density of 13.52 g/cm3, curium is lighter than neptunium (20.45 g/cm3) and plutonium (19.8 g/cm3), but heavier than most other metals. Of two crystalline forms of curium, α-Cm is more stable at ambient conditions. It has a hexagonal symmetry, space group P63/mmc, lattice parameters "a" = 365 pm and "c" = 1182 pm, and four formula units per unit cell. The crystal consists of double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum. At pressure >23 GPa, at room temperature, α-Cm becomes β-Cm, which has face-centered cubic symmetry, space group Fm3m and lattice constant "a" = 493 pm. On further compression to 43 GPa, curium becomes an orthorhombic γ-Cm structure similar to α-uranium, with no further transitions observed up to 52 GPa. These three curium phases are also called Cm I, II and III.
Curium has peculiar magnetic properties. Its neighbor element americium shows no deviation from Curie-Weiss paramagnetism in the entire temperature range, but α-Cm transforms to an antiferromagnetic state upon cooling to 65–52 K, and β-Cm exhibits a ferrimagnetic transition at ~205 K. Curium pnictides show ferromagnetic transitions upon cooling: 244CmN and 244CmAs at 109 K, 248CmP at 73 K and 248CmSb at 162 K. The lanthanide analog of curium, gadolinium, and its pnictides, also show magnetic transitions upon cooling, but the transition character is somewhat different: Gd and GdN become ferromagnetic, and GdP, GdAs and GdSb show antiferromagnetic ordering.
In accordance with magnetic data, electrical resistivity of curium increases with temperature – about twice between 4 and 60 K – and then is nearly constant up to room temperature. There is a significant increase in resistivity over time (~) due to self-damage of the crystal lattice by alpha decay. This makes uncertain the true resistivity of curium (~). Curium's resistivity is similar to that of gadolinium, and the actinides plutonium and neptunium, but significantly higher than that of americium, uranium, polonium and thorium.
Under ultraviolet illumination, curium(III) ions show strong and stable yellow-orange fluorescence with a maximum in the range of 590–640 nm depending on their environment. The fluorescence originates from the transitions from the first excited state 6D7/2 and the ground state 8S7/2. Analysis of this fluorescence allows monitoring interactions between Cm(III) ions in organic and inorganic complexes.
Chemical.
Curium ion in solution almost always has a +3 oxidation state, the most stable oxidation state for curium. A +4 oxidation state is seen mainly in a few solid phases, such as CmO2 and CmF4. Aqueous curium(IV) is only known in the presence of strong oxidizers such as potassium persulfate, and is easily reduced to curium(III) by radiolysis and even by water itself. Chemical behavior of curium is different from the actinides thorium and uranium, and is similar to americium and many lanthanides. In aqueous solution, the Cm3+ ion is colorless to pale green; Cm4+ ion is pale yellow. The optical absorption of Cm3+ ion contains three sharp peaks at 375.4, 381.2 and 396.5 nm and their strength can be directly converted into the concentration of the ions. The +6 oxidation state has only been reported once in solution in 1978, as the curyl ion (CmO22+): this was prepared from beta decay of americium-242 in the americium(V) ion 242AmO2+. Failure to get Cm(VI) from oxidation of Cm(III) and Cm(IV) may be due to the high Cm4+/Cm3+ ionization potential and the instability of Cm(V).
Curium ions are hard Lewis acids and thus form most stable complexes with hard bases. The bonding is mostly ionic, with a small covalent component. Curium in its complexes commonly exhibits a 9-fold coordination environment, with a tricapped trigonal prismatic molecular geometry.
Isotopes.
About 19 radioisotopes and 7 nuclear isomers, 233Cm to 251Cm, are known; none are stable. The longest half-lives are 15.6 million years (247Cm) and 348,000 years (248Cm). Other long-lived ones are 245Cm (8500 years), 250Cm (8300 years) and 246Cm (4760 years). Curium-250 is unusual: it mostly (~86%) decays by spontaneous fission. The most commonly used isotopes are 242Cm and 244Cm with the half-lives 162.8 days and 18.1 years, respectively.
All isotopes ranging from 242Cm to 248Cm, as well as 250Cm, undergo a self-sustaining nuclear chain reaction and thus in principle can be a nuclear fuel in a reactor. As in most transuranic elements, nuclear fission cross section is especially high for the odd-mass curium isotopes 243Cm, 245Cm and 247Cm. These can be used in thermal-neutron reactors, whereas a mixture of curium isotopes is only suitable for fast breeder reactors since the even-mass isotopes are not fissile in a thermal reactor and accumulate as burn-up increases. The mixed-oxide (MOX) fuel, which is to be used in power reactors, should contain little or no curium because neutron activation of 248Cm will create californium. Californium is a strong neutron emitter, and would pollute the back end of the fuel cycle and increase the dose to reactor personnel. Hence, if minor actinides are to be used as fuel in a thermal neutron reactor, the curium should be excluded from the fuel or placed in special fuel rods where it is the only actinide present.
The adjacent table lists the critical masses for curium isotopes for a sphere, without moderator or reflector. With a metal reflector (30 cm of steel), the critical masses of the odd isotopes are about 3–4 kg. When using water (thickness ~20–30 cm) as the reflector, the critical mass can be as small as 59 grams for 245Cm, 155 grams for 243Cm and 1550 grams for 247Cm. There is significant uncertainty in these critical mass values. While it is usually on the order of 20%, the values for 242Cm and 246Cm were listed as large as 371 kg and 70.1 kg, respectively, by some research groups.
Curium is not currently used as nuclear fuel due to its low availability and high price. 245Cm and 247Cm have very small critical mass and so could be used in tactical nuclear weapons, but none are known to have been made. Curium-243 is not suitable for this purpose, due to its short half-life and strong α emission, which would cause excessive heat. Curium-247 would be highly suitable due to its long half-life, which is 647 times that of plutonium-239 (used in many existing nuclear weapons).
Occurrence.
The longest-lived isotope, 247Cm, has half-life 15.6 million years; so any primordial curium, that is, present on Earth when it formed, should have decayed by now. Its past presence as an extinct radionuclide is detectable as an excess of its primordial, long-lived daughter 235U. Traces of 242Cm may occur naturally in uranium minerals due to neutron capture and beta decay (238U → 239Pu → 240Pu → 241Am → 242Cm), though the quantities would be tiny and this has not been confirmed: even with "extremely generous" estimates for neutron absorption possibilities, the quantity of 242Cm present in 1 × 108 kg of 18% uranium pitchblende would not even be one atom. Traces of 247Cm are also probably brought to Earth in cosmic rays, but this also has not been confirmed. There is also the possibility of 244Cm being produced as the double beta decay daughter of natural 244Pu.
Curium is made artificially in small amounts for research purposes. It also occurs as one of the waste products in spent nuclear fuel. Curium is present in nature in some areas used for nuclear weapons testing. Analysis of the debris at the test site of the United States' first thermonuclear weapon, Ivy Mike (1 November 1952, Enewetak Atoll), besides einsteinium, fermium, plutonium and americium also revealed isotopes of berkelium, californium and curium, in particular 245Cm, 246Cm and smaller quantities of 247Cm, 248Cm and 249Cm.
Atmospheric curium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 4,000 times higher concentration of curium at the sandy soil particles than in water present in the soil pores. An even higher ratio of about 18,000 was measured in loam soils.
The transuranium elements from americium to fermium, including curium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so.
Curium, and other non-primordial actinides, have also been suspected to exist in the spectrum of Przybylski's Star.
Synthesis.
Isotope preparation.
Curium is made in small amounts in nuclear reactors, and by now only kilograms of 242Cm and 244Cm have been accumulated, and grams or even milligrams for heavier isotopes. Hence the high price of curium, which has been quoted at 160–185 USD per milligram, with a more recent estimate at US$2,000/g for 242Cm and US$170/g for 244Cm. In nuclear reactors, curium is formed from 238U in a series of nuclear reactions. In the first chain, 238U captures a neutron and converts into 239U, which via β− decay transforms into 239Np and 239Pu.
Further neutron capture followed by β−-decay gives americium (241Am) which further becomes 242Cm:
For research purposes, curium is obtained by irradiating not uranium but plutonium, which is available in large amounts from spent nuclear fuel. A much higher neutron flux is used for the irradiation that results in a different reaction chain and formation of 244Cm:
Curium-244 alpha decays to 240Pu, but it also absorbs neutrons, producing small amounts of heavier curium isotopes. Of those, 247Cm and 248Cm are popular in scientific research due to their long half-lives. But the production rate of 247Cm in thermal neutron reactors is low because it is prone to fission induced by thermal neutrons. Synthesis of 250Cm by neutron capture is unlikely due to the short half-life of the intermediate 249Cm (64 min), which β− decays to the berkelium isotope 249Bk.
The above cascade of (n,γ) reactions gives a mix of different curium isotopes. Their post-synthesis separation is cumbersome, so a selective synthesis is desired. Curium-248 is favored for research purposes due to its long half-life. The most efficient way to prepare this isotope is by α-decay of the californium isotope 252Cf, which is available in relatively large amounts due to its long half-life (2.65 years). About 35–50 mg of 248Cm is produced thus, per year. The associated reaction produces 248Cm with isotopic purity of 97%.
Another isotope, 245Cm, can be obtained for research, from α-decay of 249Cf; the latter isotope is produced in small amounts from β−-decay of 249Bk.
Metal preparation.
Most synthesis routines yield a mix of actinide isotopes as oxides, from which a given isotope of curium needs to be separated. An example procedure could be to dissolve spent reactor fuel (e.g. MOX fuel) in nitric acid, and remove the bulk of the uranium and plutonium using a PUREX (Plutonium – URanium EXtraction) type extraction with tributyl phosphate in a hydrocarbon. The lanthanides and the remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction to give, after stripping, a mixture of trivalent actinides and lanthanides. A curium compound is then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. "Bis"-triazinyl bipyridine complex has been recently proposed as such reagent which is highly selective to curium. Separation of curium from the very chemically similar americium can also be done by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone at elevated temperature. Both americium and curium are present in solutions mostly in the +3 valence state; americium oxidizes to soluble Am(IV) complexes, but curium stays unchanged and so can be isolated by repeated centrifugation.
Metallic curium is obtained by reduction of its compounds. Initially, curium(III) fluoride was used for this purpose. The reaction was done in an environment free of water and oxygen, in an apparatus made of tantalum and tungsten, using elemental barium or lithium as reducing agents.
formula_0
Another possibility is reduction of curium(IV) oxide using a magnesium-zinc alloy in a melt of magnesium chloride and magnesium fluoride.
Compounds and reactions.
Oxides.
Curium readily reacts with oxygen forming mostly Cm2O3 and CmO2 oxides, but the divalent oxide CmO is also known. Black CmO2 can be obtained by burning curium oxalate (Cm2(C2O4)3), nitrate (Cm(NO3)3), or hydroxide in pure oxygen. Upon heating to 600–650 °C in vacuum (about 0.01 Pa), it transforms into the whitish Cm2O3:
<chem>4CmO2 ->[\Delta T] 2Cm2O3 + O2</chem>.
Or, Cm2O3 can be obtained by reducing CmO2 with molecular hydrogen:
<chem>2CmO2 + H2 -> Cm2O3 + H2O</chem>
Also, a number of ternary oxides of the type M(II)CmO3 are known, where M stands for a divalent metal, such as barium.
Thermal oxidation of trace quantities of curium hydride (CmH2–3) has been reported to give a volatile form of CmO2 and the volatile trioxide CmO3, one of two known examples of the very rare +6 state for curium. Another observed species was reported to behave similar to a supposed plutonium tetroxide and was tentatively characterized as CmO4, with curium in the extremely rare +8 state; but new experiments seem to indicate that CmO4 does not exist, and have cast doubt on the existence of PuO4 as well.
Halides.
The colorless curium(III) fluoride (CmF3) can be made by adding fluoride ions into curium(III)-containing solutions. The brown tetravalent curium(IV) fluoride (CmF4) on the other hand is only obtained by reacting curium(III) fluoride with molecular fluorine:
formula_1
A series of ternary fluorides are known of the form A7Cm6F31 (A = alkali metal).
The colorless curium(III) chloride (CmCl3) is made by reacting curium hydroxide (Cm(OH)3) with anhydrous hydrogen chloride gas. It can be further turned into other halides such as curium(III) bromide (colorless to light green) and curium(III) iodide (colorless), by reacting it with the ammonia salt of the corresponding halide at temperatures of ~400–450 °C:
formula_2
Or, one can heat curium oxide to ~600°C with the corresponding acid (such as hydrobromic for curium bromide). Vapor phase hydrolysis of curium(III) chloride gives curium oxychloride:
formula_3
Chalcogenides and pnictides.
Sulfides, selenides and tellurides of curium have been obtained by treating curium with gaseous sulfur, selenium or tellurium in vacuum at elevated temperature. Curium pnictides of the type CmX are known for nitrogen, phosphorus, arsenic and antimony. They can be prepared by reacting either curium(III) hydride (CmH3) or metallic curium with these elements at elevated temperature.
Organocurium compounds and biological aspects.
Organometallic complexes analogous to uranocene are known also for other actinides, such as thorium, protactinium, neptunium, plutonium and americium. Molecular orbital theory predicts a stable "curocene" complex (η8-C8H8)2Cm, but it has not been reported experimentally yet.
Formation of the complexes of the type Cm(n-C3H7-BTP)3 (BTP = 2,6-di(1,2,4-triazin-3-yl)pyridine), in solutions containing n-C3H7-BTP and Cm3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with curium and thus are useful for separating it from lanthanides and another actinides. Dissolved Cm3+ ions bind with many organic compounds, such as hydroxamic acid, urea, fluorescein and adenosine triphosphate. Many of these compounds are related to biological activity of various microorganisms. The resulting complexes show strong yellow-orange emission under UV light excitation, which is convenient not only for their detection, but also for studying interactions between the Cm3+ ion and the ligands via changes in the half-life (of the order ~0.1 ms) and spectrum of the fluorescence.
Curium has no biological significance. There are a few reports on biosorption of Cm3+ by bacteria and archaea, but no evidence for incorporation of curium into them.
Applications.
Radionuclides.
Curium is one of the most radioactive isolable elements. Its two most common isotopes 242Cm and 244Cm are strong alpha emitters (energy 6 MeV); they have fairly short half-lives, 162.8 days and 18.1 years, and give as much as 120 W/g and 3 W/g of heat, respectively. Therefore, curium can be used in its common oxide form in radioisotope thermoelectric generators like those in spacecraft. This application has been studied for the 244Cm isotope, while 242Cm was abandoned due to its prohibitive price, around 2000 USD/g. 243Cm with a ~30-year half-life and good energy yield of ~1.6 W/g could be a suitable fuel, but it gives significant amounts of harmful gamma and beta rays from radioactive decay products. As an α-emitter, 244Cm needs much less radiation shielding, but it has a high spontaneous fission rate, and thus a lot of neutron and gamma radiation. Compared to a competing thermoelectric generator isotope such as 238Pu, 244Cm emits 500 times more neutrons, and its higher gamma emission requires a shield that is 20 times thicker— of lead for a 1 kW source, compared to for 238Pu. Therefore, this use of curium is currently considered impractical.
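As a rough consistency check of the figures quoted above (alpha energy of about 6 MeV, half-lives of 162.8 days and 18.1 years), the specific decay heat can be estimated from the half-life and decay energy alone; the short sketch below reproduces roughly 120 W/g for 242Cm and about 3 W/g for 244Cm. It is a back-of-the-envelope estimate only, ignoring decay daughters and spontaneous fission.

```python
import math

N_A = 6.022e23          # Avogadro's number, 1/mol
MEV_TO_J = 1.602e-13    # joules per MeV

def specific_power(mass_number, half_life_s, alpha_energy_mev=6.0):
    """Approximate decay heat in W/g from half-life and alpha-particle energy."""
    decay_const = math.log(2) / half_life_s          # decay constant, 1/s
    atoms_per_gram = N_A / mass_number
    activity = decay_const * atoms_per_gram          # decays per second per gram
    return activity * alpha_energy_mev * MEV_TO_J    # watts per gram

print(specific_power(242, 162.8 * 86400))            # ~1.2e2 W/g for 242Cm
print(specific_power(244, 18.1 * 365.25 * 86400))    # ~3 W/g for 244Cm
```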
A more promising use of 242Cm is for making 238Pu, a better radioisotope for thermoelectric generators such as in heart pacemakers. The alternate routes to 238Pu use the (n,γ) reaction of 237Np, or deuteron bombardment of uranium, though both reactions always produce 236Pu as an undesired by-product since the latter decays to 232U with strong gamma emission. Curium is a common starting material for making higher transuranic and superheavy elements. Thus, bombarding 248Cm with neon (22Ne), magnesium (26Mg), or calcium (48Ca) yields isotopes of seaborgium (265Sg), hassium (269Hs and 270Hs), and livermorium (292Lv, 293Lv, and possibly 294Lv). Californium was discovered when a microgram-sized target of curium-242 was irradiated with 35 MeV alpha particles using the cyclotron at Berkeley:
<chem>^{242}_{96}Cm + ^{4}_{2}He -> ^{245}_{98}Cf + ^{1}_{0}n</chem>
Only about 5,000 atoms of californium were produced in this experiment.
The odd-mass curium isotopes 243Cm, 245Cm, and 247Cm are all highly fissile and can release additional energy in a thermal spectrum nuclear reactor. All curium isotopes are fissionable in fast-neutron reactors. This is one of the motives for minor actinide separation and transmutation in the nuclear fuel cycle, helping to reduce the long-term radiotoxicity of used, or spent nuclear fuel.
X-ray spectrometer.
The most practical application of 244Cm—though rather limited in total volume—is as α-particle source in alpha particle X-ray spectrometers (APXS). These instruments were installed on the Sojourner, Mars, Mars 96, Mars Exploration Rovers and Philae comet lander, as well as the Mars Science Laboratory to analyze the composition and structure of the rocks on the surface of planet Mars. APXS was also used in the Surveyor 5–7 moon probes but with a 242Cm source.
An elaborate APXS setup has a sensor head containing six curium sources with a total decay rate of several tens of millicuries (roughly one gigabecquerel). The sources are collimated on a sample, and the energy spectra of the alpha particles and protons scattered from the sample are analyzed (proton analysis is done only in some spectrometers). These spectra contain quantitative information on all major elements in the sample except for hydrogen, helium and lithium.
Safety.
Due to its radioactivity, curium and its compounds must be handled in appropriate labs under special arrangements. While curium itself mostly emits α-particles which are absorbed by thin layers of common materials, some of its decay products emit significant fractions of beta and gamma rays, which require a more elaborate protection. If consumed, curium is excreted within a few days and only 0.05% is absorbed in the blood. From there, ~45% goes to the liver, 45% to the bones, and the remaining 10% is excreted. In bone, curium accumulates on the inside of the interfaces to the bone marrow and does not significantly redistribute with time; its radiation destroys bone marrow and thus stops red blood cell creation. The biological half-life of curium is about 20 years in the liver and 50 years in the bones. Curium is absorbed in the body much more strongly via inhalation, and the allowed total dose of 244Cm in soluble form is 0.3 μCi. Intravenous injection of 242Cm- and 244Cm-containing solutions to rats increased the incidence of bone tumor, and inhalation promoted lung and liver cancer.
Curium isotopes are inevitably present in spent nuclear fuel (about 20 g/tonne). The isotopes 245Cm–248Cm have decay times of thousands of years and must be removed to neutralize the fuel for disposal. Such a procedure involves several steps, where curium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure, nuclear transmutation, while well documented for other elements, is still being developed for curium.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{CmF_3\\ +\\ 3\\ Li\\ \\longrightarrow \\ Cm\\ +\\ 3\\ LiF}"
},
{
"math_id": 1,
"text": "\\mathrm{2\\ CmF_3\\ +\\ F_2\\ \\longrightarrow\\ 2\\ CmF_4}"
},
{
"math_id": 2,
"text": "\\mathrm{CmCl_3\\ +\\ 3\\ NH_4I\\ \\longrightarrow \\ CmI_3\\ +\\ 3\\ NH_4Cl}"
},
{
"math_id": 3,
"text": "\\mathrm{CmCl_3\\ +\\ \\ H_2O\\ \\longrightarrow \\ CmOCl\\ +\\ 2\\ HCl}"
}
] |
https://en.wikipedia.org/wiki?curid=5675
|
56754296
|
Fuglede's conjecture
|
Mathematical problem
Fuglede's conjecture is an open problem in mathematics proposed by Bent Fuglede in 1974. It states that every domain of formula_0 (i.e. subset of formula_0 with positive finite Lebesgue measure) is a spectral set if and only if it tiles formula_0 by translation.
Spectral sets and translational tiles.
Spectral sets in formula_1
A set formula_2 formula_3 formula_0 with positive finite Lebesgue measure is said to be a spectral set if there exists a formula_4 formula_3 formula_1 such that formula_5 is an orthogonal basis of formula_6. The set formula_4 is then said to be a spectrum of formula_2 and formula_7 is called a spectral pair.
Translational tiles of formula_1
A set formula_8 is said to tile formula_1 by translation (i.e. formula_2 is a translational tile) if there exists a discrete set formula_9 such that formula_10 and the Lebesgue measure of formula_11 is zero for all formula_12 in formula_9.
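As a simple worked illustration of both definitions (the unit cube, a standard and easily verified case not taken from the text above):

\[
Q=[0,1]^d,\quad \Lambda=\mathbb{Z}^d:\qquad \left\{ e^{2\pi i\left \langle \lambda, \cdot \right \rangle} \right\}_{\lambda\in\mathbb{Z}^d}\ \text{is an orthogonal basis of } L^2(Q), \qquad \bigcup_{t\in\mathbb{Z}^d}(Q + t)=\mathbb{R}^d,
\]

so \((Q,\mathbb{Z}^d)\) is a spectral pair and \(Q\) tiles \(\mathbb{R}^d\) by translation, consistent with the conjecture.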
|
[
{
"math_id": 0,
"text": "\\mathbb{R}^{d}"
},
{
"math_id": 1,
"text": "\\mathbb{R}^d"
},
{
"math_id": 2,
"text": "\\Omega"
},
{
"math_id": 3,
"text": "\\subset"
},
{
"math_id": 4,
"text": "\\Lambda"
},
{
"math_id": 5,
"text": "\\left \\{ e^{2\\pi i\\left \\langle \\lambda, \\cdot \\right \\rangle} \\right \\}_{\\lambda\\in\\Lambda}"
},
{
"math_id": 6,
"text": "L^2(\\Omega)"
},
{
"math_id": 7,
"text": "(\\Omega, \\Lambda)"
},
{
"math_id": 8,
"text": "\\Omega\\subset\\mathbb{R}^d"
},
{
"math_id": 9,
"text": "\\Tau"
},
{
"math_id": 10,
"text": "\\bigcup_{t\\in\\Tau}(\\Omega + t)=\\mathbb{R}^d"
},
{
"math_id": 11,
"text": "(\\Omega + t) \\cap (\\Omega + t')"
},
{
"math_id": 12,
"text": "t\\neq t'"
},
{
"math_id": 13,
"text": "d\\geq5"
},
{
"math_id": 14,
"text": "d=3 "
},
{
"math_id": 15,
"text": "4"
},
{
"math_id": 16,
"text": "d=1,2"
},
{
"math_id": 17,
"text": "\\mathbb{Z}_{p}\\times\\mathbb{Z}_{p}"
},
{
"math_id": 18,
"text": "\\mathbb{Z}_{p}"
},
{
"math_id": 19,
"text": "\\mathbb{R}^3"
}
] |
https://en.wikipedia.org/wiki?curid=56754296
|
567555
|
Tree-adjoining grammar
|
Grammar formalism
Tree-adjoining grammar (TAG) is a grammar formalism defined by Aravind Joshi. Tree-adjoining grammars are somewhat similar to context-free grammars, but the elementary unit of rewriting is the tree rather than the symbol. Whereas context-free grammars have rules for rewriting symbols as strings of other symbols, tree-adjoining grammars have rules for rewriting the nodes of trees as other trees (see tree (graph theory) and tree (data structure)).
History.
TAG originated in investigations by Joshi and his students into the family of adjunction grammars (AG),
the "string grammar" of Zellig Harris. AGs handle exocentric properties of language in a natural and effective way, but do not have a good characterization of endocentric constructions; the converse is true of rewrite grammars, or phrase-structure grammar (PSG). In 1969, Joshi introduced a family of grammars that exploits this complementarity by mixing the two types of rules. A few very simple rewrite rules suffice to generate the vocabulary of strings for adjunction rules. This family is distinct from the Chomsky-Schützenberger hierarchy but intersects it in interesting and linguistically relevant ways. The center strings and adjunct strings can also be generated by a dependency grammar, avoiding the limitations of rewrite systems entirely.
Description.
The rules in a TAG are trees with a special leaf node known as the "foot node", which is anchored to a word.
There are two types of basic trees in TAG: "initial" trees (often represented as 'formula_0') and "auxiliary" trees ('formula_1'). Initial trees represent basic valency relations, while auxiliary trees allow for recursion.
Auxiliary trees have the root (top) node and foot node labeled with the same symbol.
A derivation starts with an initial tree, which is combined with other trees via either "substitution" or "adjunction". Substitution replaces a frontier node with another tree whose top node has the same label. Adjunction operates at an interior node; the root/foot label of the auxiliary tree must match the label of the node at which it adjoins. Adjunction thus has the effect of inserting an auxiliary tree into the center of another tree.
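The toy sketch below illustrates substitution and adjunction on labeled trees. Trees are represented as nested Python lists ([label, child, ...], with a bare string as a leaf); the representation, function names, and the example sentence are assumptions made purely for illustration, not a standard TAG implementation.

```python
def label(t):
    """Label of a node: the head of a list, or the string itself for a leaf."""
    return t[0] if isinstance(t, list) else t

def subtree_at(tree, path):
    """Follow a path of child indices down the tree."""
    for i in path:
        tree = tree[1 + i]
    return tree

def replace_at(tree, path, new_subtree):
    """Return a copy of `tree` with the node at `path` replaced by `new_subtree`."""
    if not path:
        return new_subtree
    i, *rest = path
    children = list(tree[1:])
    children[i] = replace_at(children[i], rest, new_subtree)
    return [tree[0]] + children

def substitute(tree, path, initial):
    """Substitution: a frontier node is replaced by an initial tree with the same label."""
    assert label(subtree_at(tree, path)) == label(initial)
    return replace_at(tree, path, initial)

def adjoin(tree, path, auxiliary, foot_path):
    """Adjunction: splice `auxiliary` in at `path`, re-attaching the excised subtree
    at the auxiliary tree's foot node (same label at root, foot, and adjunction site)."""
    target = subtree_at(tree, path)
    assert label(auxiliary) == label(target) == label(subtree_at(auxiliary, foot_path))
    return replace_at(tree, path, replace_at(auxiliary, foot_path, target))

# Initial tree for "Harry likes peanuts" and an auxiliary VP tree for "really":
alpha = ["S", ["NP", "Harry"], ["VP", ["V", "likes"], ["NP", "peanuts"]]]
beta = ["VP", ["Adv", "really"], "VP"]          # root and foot both labeled VP

# Adjoining beta at the VP node yields a tree for "Harry really likes peanuts".
print(adjoin(alpha, path=[1], auxiliary=beta, foot_path=[1]))
```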
Other variants of TAG allow multi-component trees, trees with multiple foot nodes, and other extensions.
Complexity and application.
Tree-adjoining grammars are more powerful (in terms of weak generative capacity) than context-free grammars, but less powerful than linear context-free rewriting systems, indexed or context-sensitive grammars.
A TAG can describe the language of squares (in which some arbitrary string is repeated), and the language formula_2. This type of processing can be represented by an embedded pushdown automaton.
Languages with cubes (i.e. triplicated strings) or with more than four distinct character strings of equal length cannot be generated by tree-adjoining grammars.
For these reasons, tree-adjoining grammars are often described as mildly context-sensitive.
These grammar classes are conjectured to be powerful enough to model natural languages while remaining efficiently parsable in the general case.
Equivalences.
Vijay-Shanker and Weir (1994) demonstrate that linear indexed grammars, combinatory categorial grammar, tree-adjoining grammars, and head grammars are weakly equivalent formalisms, in that they all define the same string languages.
Lexicalized.
Lexicalized tree-adjoining grammars (LTAG) are a variant of TAG in which each elementary tree (initial or auxiliary) is associated with a lexical item. A lexicalized grammar for English has been developed by the XTAG Research Group of the Institute for Research in Cognitive Science at the University of Pennsylvania.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\beta"
},
{
"math_id": 2,
"text": "\\{a^n b^n c^n d^n | 1 \\le n \\}"
}
] |
https://en.wikipedia.org/wiki?curid=567555
|
567567
|
Whirlpool (hash function)
|
Cryptographic hash function
In computer science and cryptography, Whirlpool (sometimes styled WHIRLPOOL) is a cryptographic hash function. It was designed by Vincent Rijmen (co-creator of the Advanced Encryption Standard) and Paulo S. L. M. Barreto, who first described it in 2000.
The hash has been recommended by the NESSIE project. It has also been adopted by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) as part of the joint ISO/IEC 10118-3 international standard.
Design features.
Whirlpool is a hash designed after the Square block cipher, and is considered to be in that family of block cipher functions.
Whirlpool is a Miyaguchi-Preneel construction based on a substantially modified Advanced Encryption Standard (AES).
Whirlpool takes a message of any length less than 2256 bits and returns a 512-bit message digest.
The authors have declared that
"WHIRLPOOL is not (and will never be) patented. It may be used free of charge for any purpose."
Version changes.
The original Whirlpool will be called "Whirlpool-0", the first revision of Whirlpool will be called "Whirlpool-T" and the latest version will be called "Whirlpool" in the following test vectors.
Internal structure.
The Whirlpool hash function is a Merkle–Damgård construction based on an AES-like block cipher W in Miyaguchi–Preneel mode.
The block cipher W consists of an 8×8 state matrix formula_0 of bytes, for a total of 512 bits.
The encryption process consists of updating the state with four round functions over 10 rounds. The four round functions are SubBytes (SB), ShiftColumns (SC), MixRows (MR) and AddRoundKey (AK). During each round the new state is computed as
formula_1.
SubBytes.
The SubBytes operation applies a non-linear permutation (the S-box) to each byte of the state independently. The 8-bit S-box is composed of 3 smaller 4-bit S-boxes.
ShiftColumns.
The ShiftColumns operation cyclically shifts each byte in each column of the state. Column "j" has its bytes shifted downwards by "j" positions.
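A minimal sketch of the ShiftColumns step as just described, operating on an 8×8 state held as a list of rows; it is illustrative only and not taken from the reference implementation.

```python
def shift_columns(state):
    """ShiftColumns: column j of the 8x8 byte state is rotated downwards by j positions."""
    out = [[0] * 8 for _ in range(8)]
    for j in range(8):          # column index
        for r in range(8):      # row index
            out[(r + j) % 8][j] = state[r][j]
    return out

# Example: the byte originally at row 0 of column 3 ends up at row 3 of column 3.
state = [[8 * r + c for c in range(8)] for r in range(8)]
assert shift_columns(state)[3][3] == state[0][3]
```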
MixRows.
The MixRows operation is a right-multiplication of each row by an 8×8 matrix over formula_2. The matrix is chosen such that the branch number (an important property when looking at resistance to differential cryptanalysis) is 9, which is maximal.
AddRoundKey.
The AddRoundKey operation uses bitwise xor to add a key calculated by the key schedule to the current state. The key schedule is identical to the encryption itself, except the AddRoundKey function is replaced by an AddRoundConstant function that adds a predetermined constant in each round.
Whirlpool hashes.
The Whirlpool algorithm has undergone two revisions since its original 2000 specification.
People incorporating Whirlpool will most likely use the most recent revision of Whirlpool; while there are no known security weaknesses in earlier versions of Whirlpool, the most recent revision has better hardware implementation efficiency characteristics, and is also likely to be more secure. As mentioned earlier, it is also the version adopted in the ISO/IEC 10118-3 international standard.
The 512-bit (64-byte) Whirlpool hashes (also termed "message digests") are typically represented as 128-digit hexadecimal numbers.
The following demonstrates a 43-byte ASCII input (not including quotes) and the corresponding Whirlpool hashes:
Implementations.
The authors provide reference implementations of the Whirlpool algorithm, including a version written in C and a version written in Java. These reference implementations have been released into the public domain.
Research on the security analysis of the Whirlpool function has, however, revealed that on average the introduction of 8 random faults is sufficient to compromise the 512-bit Whirlpool hash of the message being processed, as well as the secret key of HMAC-Whirlpool, within the context of Cloud of Things (CoTs). This emphasizes the need for increased security measures in its implementation.
Adoption.
Two of the first widely used mainstream cryptographic programs that started using Whirlpool were FreeOTFE, followed by TrueCrypt in 2005.
VeraCrypt (a fork of TrueCrypt) included Whirlpool (the final version) as one of its supported hash algorithms.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "S=AK \\circ MR \\circ SC \\circ SB(S) "
},
{
"math_id": 2,
"text": "GF({2^8})"
}
] |
https://en.wikipedia.org/wiki?curid=567567
|
567580
|
Gaussian integral
|
Integral of the Gaussian function, equal to sqrt(π)
The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function formula_0 over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is
formula_1
Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function.
Although no elementary function exists for the error function, as can be proven by the Risch algorithm, the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary "indefinite integral" for
formula_2
but the definite integral
formula_3
can be evaluated. The definite integral of an arbitrary Gaussian function is
formula_4
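As a quick numerical sanity check of this closed form (a sketch only, assuming SciPy is available; the values of a and b are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 1.5   # arbitrary illustrative values, with a > 0

numeric, _ = quad(lambda x: np.exp(-a * (x + b) ** 2), -np.inf, np.inf)
closed_form = np.sqrt(np.pi / a)

print(numeric, closed_form)   # the two values agree to quadrature precision
```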
Computation.
By polar coordinates.
A standard way to compute the Gaussian integral, the idea of which goes back to Poisson, is to make use of the property that:
formula_5
Consider the function formula_6 on the plane formula_7, and compute its integral two ways:
# on one hand, by double integration in the Cartesian coordinate system, its integral is a square: formula_8
# on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be formula_9
Comparing these two computations yields the integral, though one should take care about the improper integrals involved.
formula_10
where the factor of r is the Jacobian determinant which appears because of the transform to polar coordinates ("r" "dr" "dθ" is the standard measure on the plane, expressed in polar coordinates ), and the substitution involves taking "s" = −"r"2, so "ds" = −2"r" "dr".
Combining these yields
formula_11
so
formula_12
Complete proof.
To justify the improper double integrals and equating the two expressions, we begin with an approximating function:
formula_13
If the integral
formula_14
were absolutely convergent we would have that its Cauchy principal value, that is, the limit
formula_15
would coincide with
formula_16
To see that this is the case, consider that
formula_17
So we can compute
formula_14
by just taking the limit
formula_18
Taking the square of formula_19 yields
formula_20
Using Fubini's theorem, the above double integral can be seen as an area integral
formula_21
taken over a square with vertices {(−"a", "a"), ("a", "a"), ("a", −"a"), (−"a", −"a")} on the "xy"-plane.
Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than formula_22, and similarly the integral taken over the square's circumcircle must be greater than formula_22. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:
formula_23
formula_24
formula_25
formula_26
Integrating,
formula_27
By the squeeze theorem, this gives the Gaussian integral
formula_28
By Cartesian coordinates.
A different technique, which goes back to Laplace (1812), is the following. Let
formula_29
Since the limits on s as "y" → ±∞ depend on the sign of x, it simplifies the calculation to use the fact that "e"−"x"2 is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,
formula_30
Thus, over the range of integration, "x" ≥ 0, and the variables y and s have the same limits. This yields:
formula_31
Then, using Fubini's theorem to switch the order of integration:
formula_32
Therefore, formula_33, as expected.
By Laplace's method.
In the Laplace approximation, we deal only with terms up to second order in the Taylor expansion, so we consider formula_34.
In fact, since formula_35 for all formula_36, we have the exact bounds:
formula_37
Then we can take the bound in the Laplace approximation limit:
formula_38
That is,
formula_39
By trigonometric substitution, we exactly compute those two bounds: formula_40 and formula_41
By taking the square root of the Wallis formula, formula_42, we have formula_43, the desired lower bound limit. Similarly we can get the desired upper bound limit.
Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
Relation to the gamma function.
The integrand is an even function,
formula_44
Thus, after the change of variable formula_45, this turns into the Euler integral
formula_46
where formula_47 is the gamma function. This shows why the factorial of a half-integer is a rational multiple of formula_48. More generally,
formula_49
which can be obtained by substituting formula_50 in the integrand of the gamma function to get formula_51.
Generalizations.
The integral of a Gaussian function.
The integral of an arbitrary Gaussian function is
formula_4
An alternative form is
formula_52
This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example.
Complex form.
formula_53
and more generally,
formula_54
for any positive-definite symmetric matrix formula_55.
"n"-dimensional and functional generalization.
Suppose "A" is a symmetric positive-definite (hence invertible) "n" × "n" precision matrix, which is the matrix inverse of the covariance matrix. Then,
formula_56
By completing the square, this generalizes to
formula_57
This fact is applied in the study of the multivariate normal distribution.
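A small numerical check of the base formula in two dimensions (a sketch assuming SciPy and NumPy are available; the matrix below is an arbitrary symmetric positive-definite choice):

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.6],
              [0.6, 1.5]])          # arbitrary symmetric positive-definite matrix

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v)

numeric, _ = dblquad(integrand, -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
closed_form = np.sqrt((2 * np.pi) ** 2 / np.linalg.det(A))

print(numeric, closed_form)         # the two values agree to quadrature precision
```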
Also,
formula_58
where "σ" is a permutation of {1, …, 2"N"} and the extra factor on the right-hand side is the sum over all combinatorial pairings of {1, …, 2"N"} of "N" copies of "A"−1.
Alternatively,
formula_59
for some analytic function "f", provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series.
While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can "define" a Gaussian functional integral in analogy to the finite-dimensional case. There is still the problem, though, that formula_60 is infinite and that, in general, the functional determinant would also be infinite. This can be taken care of if we only consider ratios:
formula_61
In the DeWitt notation, the equation looks identical to the finite-dimensional case.
"n"-dimensional with linear term.
If A is again a symmetric positive-definite matrix, then (assuming all are column vectors)
formula_62
Integrals of similar form.
formula_63
formula_64
formula_65
formula_66
formula_67
where formula_68 is a positive integer.
An easy way to derive these is by differentiating under the integral sign.
formula_69
One could also integrate by parts and find a recurrence relation to solve this.
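These moment formulas can likewise be spot-checked numerically; the following sketch (assuming SciPy, with arbitrary test values of "n" and "b") verifies the even-moment formula formula_65.
import numpy as np
from scipy.integrate import quad
from scipy.special import factorial2

n, b = 3, 1.4   # arbitrary test values
numeric, _ = quad(lambda x: x**(2 * n) * np.exp(-b * x**2), 0, np.inf)
closed_form = factorial2(2 * n - 1) / (b**n * 2**(n + 1)) * np.sqrt(np.pi / b)
print(numeric, closed_form)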
Higher-order polynomials.
Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in "n" variables may depend only on SL("n")-invariants of the polynomial. One such invariant is the discriminant, whose zeros mark the singularities of the integral. However, the integral may also depend on other invariants.
Exponentials of other even polynomials can be solved numerically using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is
formula_70
The "n" + "p" = 0 mod 2 requirement is because the integral from −∞ to 0 contributes a factor of (−1)"n"+"p"/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "f(x) = e^{-x^2}"
},
{
"math_id": 1,
"text": "\\int_{-\\infty}^\\infty e^{-x^2}\\,dx = \\sqrt{\\pi}."
},
{
"math_id": 2,
"text": "\\int e^{-x^2}\\,dx,"
},
{
"math_id": 3,
"text": "\\int_{-\\infty}^\\infty e^{-x^2}\\,dx"
},
{
"math_id": 4,
"text": "\\int_{-\\infty}^{\\infty} e^{-a(x+b)^2}\\,dx= \\sqrt{\\frac{\\pi}{a}}."
},
{
"math_id": 5,
"text": "\\left(\\int_{-\\infty}^{\\infty} e^{-x^2}\\,dx\\right)^2 = \\int_{-\\infty}^{\\infty} e^{-x^2}\\,dx \\int_{-\\infty}^{\\infty} e^{-y^2}\\,dy = \\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} e^{-\\left(x^2+y^2\\right)}\\, dx\\,dy. "
},
{
"math_id": 6,
"text": "e^{-\\left(x^2 + y^2\\right)} = e^{-r^{2}}"
},
{
"math_id": 7,
"text": "\\mathbb{R}^2"
},
{
"math_id": 8,
"text": "\\left(\\int e^{-x^2}\\,dx\\right)^2;"
},
{
"math_id": 9,
"text": "\\pi"
},
{
"math_id": 10,
"text": "\\begin{align}\n \\iint_{\\R^2} e^{-\\left(x^2 + y^2\\right)}dx\\,dy\n &= \\int_0^{2\\pi} \\int_0^{\\infty} e^{-r^2}r\\,dr\\,d\\theta\\\\[6pt]\n &= 2\\pi \\int_0^\\infty re^{-r^2}\\,dr\\\\[6pt]\n &= 2\\pi \\int_{-\\infty}^0 \\tfrac{1}{2} e^s\\,ds && s = -r^2\\\\[6pt]\n &= \\pi \\int_{-\\infty}^0 e^s\\,ds \\\\[6pt]\n &= \\lim_{x\\to-\\infty}\\pi \\left(e^0 - e^x\\right) \\\\[6pt]\n &=\\pi,\n \\end{align}"
},
{
"math_id": 11,
"text": "\\left ( \\int_{-\\infty}^\\infty e^{-x^2}\\,dx \\right )^2=\\pi,"
},
{
"math_id": 12,
"text": "\\int_{-\\infty}^\\infty e^{-x^2} \\, dx = \\sqrt{\\pi}."
},
{
"math_id": 13,
"text": "I(a) = \\int_{-a}^a e^{-x^2}dx."
},
{
"math_id": 14,
"text": "\\int_{-\\infty}^\\infty e^{-x^2} \\, dx"
},
{
"math_id": 15,
"text": "\\lim_{a\\to\\infty} I(a) "
},
{
"math_id": 16,
"text": "\\int_{-\\infty}^\\infty e^{-x^2}\\,dx."
},
{
"math_id": 17,
"text": "\\int_{-\\infty}^\\infty \\left|e^{-x^2}\\right| dx < \\int_{-\\infty}^{-1} -x e^{-x^2}\\, dx + \\int_{-1}^1 e^{-x^2}\\, dx+ \\int_{1}^{\\infty} x e^{-x^2}\\, dx < \\infty ."
},
{
"math_id": 18,
"text": "\\lim_{a\\to\\infty} I(a)."
},
{
"math_id": 19,
"text": "I(a)"
},
{
"math_id": 20,
"text": "\\begin{align}\nI(a)^2 & = \\left ( \\int_{-a}^a e^{-x^2}\\, dx \\right ) \\left ( \\int_{-a}^a e^{-y^2}\\, dy \\right ) \\\\[6pt]\n& = \\int_{-a}^a \\left ( \\int_{-a}^a e^{-y^2}\\, dy \\right )\\,e^{-x^2}\\, dx \\\\[6pt]\n& = \\int_{-a}^a \\int_{-a}^a e^{-\\left(x^2+y^2\\right)}\\,dy\\,dx.\n\\end{align}"
},
{
"math_id": 21,
"text": "\\iint_{[-a, a] \\times [-a, a]} e^{-\\left(x^2+y^2\\right)}\\,d(x,y),"
},
{
"math_id": 22,
"text": "I(a)^2"
},
{
"math_id": 23,
"text": "\\begin{align}\nx & = r \\cos \\theta \\\\\ny & = r \\sin\\theta\n\\end{align}"
},
{
"math_id": 24,
"text": "\n\\mathbf J(r, \\theta) = \n\\begin{bmatrix}\n \\dfrac{\\partial x}{\\partial r} & \\dfrac{\\partial x}{\\partial\\theta}\\\\[1em]\n \\dfrac{\\partial y}{\\partial r} & \\dfrac{\\partial y}{\\partial\\theta} \\end{bmatrix}\n= \\begin{bmatrix}\n \\cos\\theta & - r\\sin \\theta \\\\\n \\sin\\theta & r\\cos \\theta\n\\end{bmatrix}\n"
},
{
"math_id": 25,
"text": "d(x,y) = |J(r, \\theta)|d(r,\\theta) = r\\, d(r,\\theta)."
},
{
"math_id": 26,
"text": "\\int_0^{2\\pi} \\int_0^a re^{-r^2} \\, dr \\, d\\theta < I^2(a) < \\int_0^{2\\pi} \\int_0^{a\\sqrt{2}} re^{-r^2} \\, dr\\, d\\theta."
},
{
"math_id": 27,
"text": "\\pi \\left(1-e^{-a^2}\\right) < I^2(a) < \\pi \\left(1 - e^{-2a^2}\\right). "
},
{
"math_id": 28,
"text": "\\int_{-\\infty}^\\infty e^{-x^2}\\, dx = \\sqrt{\\pi}."
},
{
"math_id": 29,
"text": "\\begin{align}\ny & = xs \\\\\ndy & = x\\,ds.\n\\end{align}"
},
{
"math_id": 30,
"text": "\\int_{-\\infty}^{\\infty} e^{-x^2} \\, dx = 2\\int_{0}^{\\infty} e^{-x^2}\\,dx."
},
{
"math_id": 31,
"text": "\\begin{align}\nI^2 &= 4 \\int_0^\\infty \\int_0^\\infty e^{-\\left(x^2 + y^2\\right)} dy\\,dx \\\\[6pt]\n&= 4 \\int_0^\\infty \\left( \\int_0^\\infty e^{-\\left(x^2 + y^2\\right)} \\, dy \\right) \\, dx \\\\[6pt]\n&= 4 \\int_0^\\infty \\left( \\int_0^\\infty e^{-x^2\\left(1+s^2\\right)} x\\,ds \\right) \\, dx \\\\[6pt]\n\\end{align}"
},
{
"math_id": 32,
"text": "\\begin{align}\nI^2 &= 4 \\int_0^\\infty \\left( \\int_0^\\infty e^{-x^2\\left(1 + s^2\\right)} x \\, dx \\right) \\, ds \\\\[6pt]\n&= 4 \\int_0^\\infty \\left[ \\frac{e^{-x^2\\left(1+s^2\\right)} }{-2 \\left(1+s^2\\right)} \\right]_{x=0}^{x=\\infty} \\, ds \\\\[6pt]\n&= 4 \\left (\\frac{1}{2} \\int_0^\\infty \\frac{ds}{1+s^2} \\right) \\\\[6pt]\n&= 2 \\arctan(s)\\Big |_0^\\infty \\\\[6pt]\n&= \\pi.\n\\end{align}"
},
{
"math_id": 33,
"text": "I = \\sqrt{\\pi}"
},
{
"math_id": 34,
"text": "e^{-x^2}\\approx 1-x^2 \\approx (1+x^2)^{-1}"
},
{
"math_id": 35,
"text": "(1+t)e^{-t} \\leq 1"
},
{
"math_id": 36,
"text": "t"
},
{
"math_id": 37,
"text": "1-x^2 \\leq e^{-x^2} \\leq (1+x^2)^{-1}"
},
{
"math_id": 38,
"text": "\\int_{[-1, 1]}(1-x^2)^n dx \\leq \\int_{[-1, 1]}e^{-nx^2} dx \\leq \\int_{[-1, 1]}(1+x^2)^{-n} dx"
},
{
"math_id": 39,
"text": "2\\sqrt n\\int_{[0, 1]}(1-x^2)^n dx \\leq \\int_{[-\\sqrt n, \\sqrt n]}e^{-x^2} dx \\leq 2\\sqrt n\\int_{[0, 1]}(1+x^2)^{-n} dx"
},
{
"math_id": 40,
"text": "2\\sqrt n(2n)!!/(2n+1)!!"
},
{
"math_id": 41,
"text": "2\\sqrt n (\\pi/2)(2n-3)!!/(2n-2)!!"
},
{
"math_id": 42,
"text": "\\frac \\pi 2 = \\prod_{n=1} \\frac{(2n)^2}{(2n-1)(2n+1)}"
},
{
"math_id": 43,
"text": "\\sqrt \\pi = \\lim_{n\\to \\infty} 2\\sqrt{n} \\frac{(2n)!!}{(2n+1)!!}"
},
{
"math_id": 44,
"text": "\\int_{-\\infty}^{\\infty} e^{-x^2} dx = 2 \\int_0^\\infty e^{-x^2} dx"
},
{
"math_id": 45,
"text": "x = \\sqrt{t}"
},
{
"math_id": 46,
"text": "2 \\int_0^\\infty e^{-x^2} dx=2\\int_0^\\infty \\frac{1}{2}\\ e^{-t} \\ t^{-\\frac{1}{2}} dt = \\Gamma\\left(\\frac{1}{2}\\right) = \\sqrt{\\pi}"
},
{
"math_id": 47,
"text": " \\Gamma(z) = \\int_{0}^{\\infty} t^{z-1} e^{-t} dt "
},
{
"math_id": 48,
"text": "\\sqrt \\pi"
},
{
"math_id": 49,
"text": "\\int_0^\\infty x^n e^{-ax^b} dx = \\frac{\\Gamma\\left((n+1)/b\\right)}{ba^{(n+1)/b}}, "
},
{
"math_id": 50,
"text": "t=a x^b"
},
{
"math_id": 51,
"text": " \\Gamma(z) = a^z b \\int_0^{\\infty} x^{bz-1} e^{-a x^b} dx "
},
{
"math_id": 52,
"text": "\\int_{-\\infty}^{\\infty}e^{- (a x^2 + b x + c)}\\,dx=\\sqrt{\\frac{\\pi}{a}}\\,e^{\\frac{b^2}{4a}-c}."
},
{
"math_id": 53,
"text": "\\int_{-\\infty}^{\\infty} e^{\\frac 12 it^2} dt = e^{i\\pi/4} \\sqrt{2\\pi}"
},
{
"math_id": 54,
"text": "\\int_{\\mathbb{R}^N} e^{\\frac 12 i x^T A x}dx = \\det(A)^{-\\frac 12 } (e^{i\\pi/4} \\sqrt{2\\pi})^N"
},
{
"math_id": 55,
"text": "A"
},
{
"math_id": 56,
"text": "\\int_{\\mathbb{R}^n} \\exp{\\left(-\\frac 1 2 \\sum\\limits_{i,j=1}^{n}A_{ij} x_i x_j \\right)} \\, d^n x = \\int_{\\mathbb{R}^n} \\exp{\\left(-\\frac 1 2 x^\\mathsf{T} A x \\right)} \\, d^n x = \\sqrt{\\frac{(2\\pi)^n}{\\det A}} =\\sqrt{\\frac{1}{\\det (A / 2\\pi)}} =\\sqrt{\\det (2 \\pi A^{-1})}"
},
{
"math_id": 57,
"text": "\\int_{\\mathbb{R}^n} \\exp{\\left(-\\frac 1 2 x^\\mathsf{T} A x + b^\\mathsf{T} x + c\\right)} \\, d^n x = \\sqrt{\\det (2 \\pi A^{-1})} e^{\\frac 12 b^\\mathsf{T} A^{-1}b + c}"
},
{
"math_id": 58,
"text": "\\int x_{k_1}\\cdots x_{k_{2N}} \\, \\exp{\\left( -\\frac{1}{2} \\sum\\limits_{i,j=1}^{n}A_{ij} x_i x_j \\right)} \\, d^nx =\\sqrt{\\frac{(2\\pi)^n}{\\det A}} \\, \\frac{1}{2^N N!} \\, \\sum_{\\sigma \\in S_{2N}}(A^{-1})_{k_{\\sigma(1)}k_{\\sigma(2)}} \\cdots (A^{-1})_{k_{\\sigma(2N-1)}k_{\\sigma(2N)}}"
},
{
"math_id": 59,
"text": "\\int f(\\vec x) \\exp{\\left( - \\frac 1 2 \\sum_{i,j=1}^{n}A_{ij} x_i x_j \\right)} d^nx=\\sqrt{(2\\pi)^n\\over \\det A} \\, \\left. \\exp{\\left({1\\over 2} \\sum_{i,j=1}^{n}\\left(A^{-1}\\right)_{ij}{\\partial \\over \\partial x_i}{\\partial \\over \\partial x_j}\\right)} f(\\vec{x})\\right|_{\\vec{x}=0}"
},
{
"math_id": 60,
"text": "(2\\pi)^\\infty"
},
{
"math_id": 61,
"text": "\\begin{align}\n& \\frac{\\displaystyle\\int f(x_1)\\cdots f(x_{2N}) \\exp\\left[{-\\iint \\frac{1}{2}A(x_{2N+1},x_{2N+2}) f(x_{2N+1}) f(x_{2N+2}) \\, d^dx_{2N+1} \\, d^dx_{2N+2}}\\right] \\mathcal{D}f}{\\displaystyle\\int \\exp\\left[{-\\iint \\frac{1}{2} A(x_{2N+1}, x_{2N+2}) f(x_{2N+1}) f(x_{2N+2}) \\, d^dx_{2N+1} \\, d^dx_{2N+2}}\\right] \\mathcal{D}f} \\\\[6pt]\n= {} & \\frac{1}{2^N N!}\\sum_{\\sigma \\in S_{2N}}A^{-1}(x_{\\sigma(1)},x_{\\sigma(2)})\\cdots A^{-1}(x_{\\sigma(2N-1)},x_{\\sigma(2N)}).\n\\end{align}"
},
{
"math_id": 62,
"text": "\\int \\exp\\left(-\\frac{1}{2}\\sum_{i,j=1}^{n}A_{ij} x_i x_j+\\sum_{i=1}^{n}B_i x_i\\right) d^n x\n=\\int e^{-\\frac{1}{2}\\vec{x}^\\mathsf{T} \\mathbf{A} \\vec{x}+\\vec{B}^\\mathsf{T} \\vec{x}} d^n x\n= \\sqrt{ \\frac{(2\\pi)^n}{\\det{A}} }e^{\\frac{1}{2}\\vec{B}^\\mathsf{T}\\mathbf{A}^{-1}\\vec{B}}."
},
{
"math_id": 63,
"text": "\\int_0^\\infty x^{2n} e^{-\\frac{x^2}{a^2}}\\,dx = \\sqrt{\\pi}\\frac{a^{2n+1} (2n-1)!!}{2^{n+1}}"
},
{
"math_id": 64,
"text": "\\int_0^\\infty x^{2n+1} e^{-\\frac{x^2}{a^2}}\\,dx = \\frac{n!}{2} a^{2n+2}"
},
{
"math_id": 65,
"text": "\\int_0^\\infty x^{2n}e^{-bx^2}\\,dx = \\frac{(2n-1)!!}{b^n 2^{n+1}} \\sqrt{\\frac{\\pi}{b}}"
},
{
"math_id": 66,
"text": "\\int_0^\\infty x^{2n+1}e^{-bx^2}\\,dx = \\frac{n!}{2b^{n+1}}"
},
{
"math_id": 67,
"text": "\\int_0^\\infty x^{n}e^{-bx^2}\\,dx = \\frac{\\Gamma(\\frac{n+1}{2})}{2b^{\\frac{n+1}{2}}}"
},
{
"math_id": 68,
"text": "n"
},
{
"math_id": 69,
"text": "\\begin{align}\n\\int_{-\\infty}^\\infty x^{2n} e^{-\\alpha x^2}\\,dx\n&= \\left(-1\\right)^n\\int_{-\\infty}^\\infty \\frac{\\partial^n}{\\partial \\alpha^n} e^{-\\alpha x^2}\\,dx \\\\\n&= \\left(-1\\right)^n\\frac{\\partial^n}{\\partial \\alpha^n} \\int_{-\\infty}^\\infty e^{-\\alpha x^2}\\,dx\\\\[6pt]\n&= \\sqrt{\\pi} \\left(-1\\right)^n\\frac{\\partial^n}{\\partial \\alpha^n}\\alpha^{-\\frac{1}{2}} \\\\\n&= \\sqrt{\\frac{\\pi}{\\alpha}}\\frac{(2n-1)!!}{\\left(2\\alpha\\right)^n}\n\\end{align}"
},
{
"math_id": 70,
"text": "\\int_{-\\infty}^{\\infty} e^{a x^4+b x^3+c x^2+d x+f}\\,dx = \\frac{1}{2} e^f \\sum_{\\begin{smallmatrix}n,m,p=0 \\\\ n+p=0 \\mod 2\\end{smallmatrix}}^{\\infty} \\frac{b^n}{n!} \\frac{c^m}{m!} \\frac{d^p}{p!} \\frac{\\Gamma \\left (\\frac{3n+2m+p+1}{4} \\right)}{(-a)^{\\frac{3n+2m+p+1}4}}."
}
] |
https://en.wikipedia.org/wiki?curid=567580
|
5676427
|
Estimation lemma
|
In mathematics the estimation lemma, also known as the ML inequality, gives an upper bound for a contour integral. If f is a complex-valued, continuous function on the contour Γ and if its absolute value is bounded by a constant M for all z on Γ, then
formula_0
where "l"(Γ) is the arc length of Γ. In particular, we may take the maximum
formula_1
as the upper bound. Intuitively, the lemma is simple to understand. If a contour is thought of as many smaller contour segments connected together, then there will be a maximum value of |"f"("z")| for each segment. Out of all these maxima, there will be an overall largest one. Hence, if that overall largest value is summed (integrated) over the entire path, then the integral of "f"("z") over the path must be less than or equal to it.
Formally, the inequality can be shown to hold using the definition of contour integral, the absolute value inequality for integrals and the formula for the length of a curve as follows:
formula_2
The estimation lemma is most commonly used as part of the methods of contour integration, with the intent to show that the integral over part of a contour goes to zero as the radius of the contour goes to infinity. An example of such a case is shown below.
Example.
Problem.
Find an upper bound for
formula_3
where Γ is the upper half-circle with radius "a" > 1 traversed once in the counterclockwise direction.
Solution.
First observe that the length of the path of integration is half the circumference of a circle with radius a, hence
formula_4
Next we seek an upper bound "M" for the integrand when |"z"| = "a". By the triangle inequality we see that
formula_5
therefore
formula_6
because |"z"| = "a" on Γ. Hence
formula_7
Therefore, we apply the estimation lemma with "M" = 1/("a"2 − 1)2. The resulting bound is
formula_8
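The bound can be checked numerically by parametrizing Γ as "z" = "a""e""i""θ", 0 ≤ "θ" ≤ π, and comparing the modulus of the integral with the right-hand side; a minimal sketch assuming SciPy (the radius "a" = 2 is an arbitrary choice):
import numpy as np
from scipy.integrate import quad

a = 2.0   # any radius a > 1

def integrand(theta, part):
    z = a * np.exp(1j * theta)
    dz = 1j * a * np.exp(1j * theta)          # dz/d(theta) along the upper half-circle
    value = dz / (z**2 + 1)**2
    return value.real if part == "re" else value.imag

re_part, _ = quad(integrand, 0, np.pi, args=("re",))
im_part, _ = quad(integrand, 0, np.pi, args=("im",))
# modulus of the contour integral versus the ML bound pi*a/(a^2-1)^2
print(abs(re_part + 1j * im_part), np.pi * a / (a**2 - 1)**2)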
|
[
{
"math_id": 0,
"text": "\\left|\\int_\\Gamma f(z) \\, dz\\right| \\le M\\, l(\\Gamma), "
},
{
"math_id": 1,
"text": "M:= \\sup_{z\\in\\Gamma}|f(z)|"
},
{
"math_id": 2,
"text": "\\left|\\int_\\Gamma f(z)\\, dz \\right| \n= \\left|\\int_\\alpha^\\beta f(\\gamma(t))\\gamma'(t)\\, dt \\right| \n\\leq \\int_\\alpha^\\beta \\left|f(\\gamma(t))\\right|\\left|\\gamma'(t)\\right|\\, dt \n\\leq M \\int_\\alpha^\\beta \\left|\\gamma'(t)\\right|\\, dt = M\\, l(\\Gamma)"
},
{
"math_id": 3,
"text": "\\left|\\int_\\Gamma \\frac{1}{(z^2+1)^2} \\, dz\\right|,"
},
{
"math_id": 4,
"text": "l(\\Gamma)=\\tfrac{1}{2}(2\\pi a)=\\pi a."
},
{
"math_id": 5,
"text": "|z|^2=\\left|z^2\\right| = \\left|z^2+1-1\\right| \\le \\left|z^2+1\\right|+1,"
},
{
"math_id": 6,
"text": "\\left|z^2+1\\right|\\ge |z|^2 - 1 = a^2 - 1>0"
},
{
"math_id": 7,
"text": "\\left|\\frac{1}{\\left(z^2+1\\right)^2}\\right| \\le \\frac{1}{\\left(a^2-1\\right)^2}."
},
{
"math_id": 8,
"text": "\\left|\\int_\\Gamma \\frac{1}{\\left(z^2+1\\right)^2}\\,dz\\right| \\le \\frac{\\pi a}{\\left(a^2-1\\right)^2}. "
}
] |
https://en.wikipedia.org/wiki?curid=5676427
|
567667
|
Lexicographic order
|
Generalization of the alphabetical order of dictionaries to sequences of elements of an ordered set
In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set.
There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements.
Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied.
A generalization defines an order on an "n"-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered.
Definition.
The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols.
The formal notion starts with a finite set "A", often called the alphabet, which is totally ordered. That is, for any two symbols "a" and "b" in "A" that are not the same symbol, either "a" < "b" or "b" < "a".
The "words" of "A" are the finite sequences of symbols from "A", including words of length 1 containing a single symbol, words of length 2 with 2 symbols, and so on, even including the empty sequence formula_0 with no symbols at all. The lexicographical order on the set of all these finite words orders the words as follows:
"a"1"a"2..."a""k" and "b"
"b"1"b"2..."b""k", the order of the two words depends on the alphabetic order of the symbols in the first place "i" where the two words differ (counting from the beginning of the words): "a" < "b" if and only if "a""i" < "b""i" in the underlying order of the alphabet "A".
However, in combinatorics, another convention is frequently used for the second case, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called shortlex order.
In lexicographical order, the word "Thomas" appears before "Thompson" because they first differ at the fifth letter ('a' and 'p'), and letter 'a' comes before the letter 'p' in the alphabet. Because it is the first difference, in this case the 5th letter is the "most significant difference" for alphabetical ordering.
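In many programming languages, the built-in comparison of strings or tuples is exactly this lexicographical order; a short Python illustration (not part of the article) also shows the shortlex variant obtained by comparing lengths first.
# Python compares strings (and tuples) lexicographically by default.
print("Thomas" < "Thompson")    # True: the words first differ at the fifth letter, and 'a' < 'p'

# Shortlex (quasi-lexicographical) order: compare lengths first, then lexicographically.
words = ["b", "ab", "aab", "ba", ""]
print(sorted(words))                                # lexicographical: ['', 'aab', 'ab', 'b', 'ba']
print(sorted(words, key=lambda w: (len(w), w)))     # shortlex: ['', 'b', 'ab', 'ba', 'aab']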
An important property of the lexicographical order is that for each "n", the set of words of length "n" is well-ordered by the lexicographical order (provided the alphabet is finite); that is, every decreasing sequence of words of length "n" is finite (or equivalently, every non-empty subset has a least element). It is not true that the set of "all" finite words is well-ordered; for example, the infinite set of words {b, ab, aab, aaab, ... } has no lexicographically earliest element.
Numeral systems and dates.
The lexicographical order is used not only in dictionaries, but also commonly for numbers and dates.
One of the drawbacks of the Roman numeral system is that it is not always immediately obvious which of two numbers is the smaller. On the other hand, with the positional notation of the Hindu–Arabic numeral system, comparing numbers is easy, because the natural order on natural numbers is the same as the variant shortlex of the lexicographic order. In fact, with positional notation, a natural number is represented by a sequence of numerical digits, and a natural number is larger than another one if either it has more digits (ignoring leading zeroes) or the number of digits is the same and the first (most significant) digit which differs is larger.
For real numbers written in decimal notation, a slightly different variant of the lexicographical order is used: the parts on the left of the decimal point are compared as before; if they are equal, the parts at the right of the decimal point are compared with the lexicographical order. The padding 'blank' in this context is a trailing "0" digit.
When negative numbers are also considered, one has to reverse the order for comparing negative numbers. This is not usually a problem for humans, but it may be for computers (testing the sign takes some time). This is one of the reasons for adopting two's complement representation for representing signed integers in computers.
Another example of a non-dictionary use of lexicographical ordering appears in the ISO 8601 standard for dates, which expresses a date as YYYY-MM-DD. This formatting scheme has the advantage that the lexicographical order on sequences of characters that represent dates coincides with the chronological order: an earlier CE date is smaller in the lexicographical order than a later date up to year 9999. This date ordering makes computerized sorting of dates easier by avoiding the need for a separate sorting algorithm.
Monoid of words.
The monoid of words over an alphabet "A" is the free monoid over "A". That is, the elements of the monoid are the finite sequences (words) of elements of "A" (including the empty sequence, of length 0), and the operation (multiplication) is the concatenation of words. A word "u" is a prefix (or 'truncation') of another word "v" if there exists a word "w" such that "v" = "uw". By this definition, the empty word (formula_0) is a prefix of every word, and every word is a prefix of itself (with "w" formula_1); care must be taken if these cases are to be excluded.
With this terminology, the above definition of the lexicographical order becomes more concise: Given a partially or totally ordered set "A", and two words "a" and "b" over "A" such that "b" is non-empty, then one has "a" < "b" under lexicographical order, if at least one of the following conditions is satisfied:
"x" < "y"
"a" = "uxv"
"b" = "uyw"
Notice that, due to the prefix condition in this definition, formula_2 where formula_0 is the empty word.
If formula_3 is a total order on formula_4 then so is the lexicographic order on the words of formula_5 However, in general this is not a well-order, even if the alphabet formula_6 is well-ordered. For instance, if "A" = {"a", "b"}, the language {"a""n""b" | "n" ≥ 0, "b" > "ε"} has no least element in the lexicographical order: ... < "aab" < "ab" < "b".
Since many applications require well orders, a variant of the lexicographical orders is often used. This well-order, sometimes called shortlex or quasi-lexicographical order, consists in considering first the lengths of the words (if length("a") < length("b"), then formula_7), and, if the lengths are equal, using the lexicographical order. If the order on "A" is a well-order, the same is true for the shortlex order.
Cartesian products.
The lexicographical order defines an order on an "n"-ary Cartesian product of ordered sets, which is a total order when all these sets are themselves totally ordered. An element of a Cartesian product formula_8 is a sequence whose formula_9th element belongs to formula_10 for every formula_11 As evaluating the lexicographical order of sequences compares only elements which have the same rank in the sequences, the lexicographical order extends to Cartesian products of ordered sets.
Specifically, given two partially ordered sets formula_6 and formula_12 the <templatestyles src="Template:Visible anchor/styles.css" />lexicographical order on the Cartesian product formula_13 is defined as
formula_14
The result is a partial order. If formula_6 and formula_15 are each totally ordered, then the result is a total order as well. The lexicographical order of two totally ordered sets is thus a linear extension of their product order.
One can define similarly the lexicographic order on the Cartesian product of an infinite family of ordered sets, if the family is indexed by the natural numbers, or more generally by a well-ordered set. This generalized lexicographical order is a total order if each factor set is totally ordered.
Unlike the finite case, an infinite product of well-orders is not necessarily well-ordered by the lexicographical order. For instance, the set of countably infinite binary sequences (by definition, the set of functions from natural numbers to formula_16 also known as the Cantor space formula_17) is not well-ordered; the subset of sequences that have precisely one formula_18 (that is, { 100000..., 010000..., 001000..., ... }) does not have a least element under the lexicographical order induced by formula_19 because 100000... > 010000... > 001000... > ... is an infinite descending chain. Similarly, the infinite lexicographic product is not Noetherian either because 011111... < 101111... < 110111 ... < ... is an infinite ascending chain.
Functions over a well-ordered set.
The functions from a well-ordered set formula_20 to a totally ordered set formula_21 may be identified with sequences indexed by formula_20 of elements of formula_22 They can thus be ordered by the lexicographical order, and for two such functions formula_23 and formula_24 the lexicographical order is thus determined by their values for the smallest formula_25 such that formula_26
If formula_21 is also well-ordered and formula_20 is finite, then the resulting order is a well-order. As shown above, if formula_20 is infinite this is not the case.
Finite subsets.
In combinatorics, one often has to enumerate, and therefore to order, the finite subsets of a given set formula_27 For this, one usually chooses an order on formula_27 Then, sorting a subset of formula_28 is equivalent to converting it into an increasing sequence. The lexicographic order on the resulting sequences thus induces an order on the subsets, which is also called the lexicographical order.
In this context, one generally prefers to sort the subsets first by cardinality, such as in the shortlex order. Therefore, in the following, we consider only orders on subsets of fixed cardinality.
For example, using the natural order of the integers, the lexicographical ordering on the subsets of three elements of formula_29 is
123 < 124 < 125 < 126 < 134 < 135 < 136 < 145 < 146 < 156 <
234 < 235 < 236 < 245 < 246 < 256 < 345 < 346 < 356 < 456.
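This listing can be reproduced with a few lines of Python (an illustration only): itertools.combinations emits the subsets of a sorted input in exactly this lexicographic order.
from itertools import combinations

# Subsets of size 3 of {1, ..., 6}, generated in lexicographic order.
subsets = ["".join(map(str, c)) for c in combinations(range(1, 7), 3)]
print(" < ".join(subsets))   # 123 < 124 < 125 < 126 < 134 < ... < 456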
For ordering finite subsets of a given cardinality of the natural numbers, the colexicographical order (see below) is often more convenient, because all initial segments are finite, and thus the colexicographical order defines an order isomorphism between the natural numbers and the set of sets of formula_30 natural numbers. This is not the case for the lexicographical order, as, with the lexicographical order, we have, for example, formula_31 for every formula_32
Group orders of Z"n".
Let formula_33 be the free Abelian group of rank formula_34 whose elements are sequences of formula_30 integers, and whose operation is addition. A group order on formula_33 is a total order which is compatible with addition, that is,
formula_35
The lexicographical ordering is a group order on formula_36
The lexicographical ordering may also be used to characterize all group orders on formula_36 In fact, formula_30 linear forms with real coefficients define a map from formula_33 into formula_37 which is injective if the forms are linearly independent (it may also be injective if the forms are dependent; see below). The lexicographic order on the image of this map induces a group order on formula_36 Robbiano's theorem is that every group order may be obtained in this way.
More precisely, given a group order on formula_38 there exist an integer formula_39 and formula_40 linear forms with real coefficients, such that the induced map formula_41 from formula_33 into formula_42 has the following properties;
Colexicographic order.
The colexicographic or colex order is a variant of the lexicographical order that is obtained by reading finite sequences from the right to the left instead of reading them from the left to the right. More precisely, whereas the lexicographical order between two sequences is defined by
"a"1"a"2..."a""k" <lex "b"1"b"2 ... "b""k" if "ai" < "bi" for the first "i" where "ai" and "bi" differ,
the colexicographical order is defined by
"a"1"a"2..."a""k" <colex "b"1"b"2..."b""k" if "ai" < "bi" for the last "i" where "ai" and "bi" differ
In general, the difference between the colexicographical order and the lexicographical order is not very significant. However, when considering increasing sequences, typically for coding subsets, the two orders differ significantly.
For example, for ordering the increasing sequences (or the sets) of two natural numbers, the lexicographical order begins by
12 < 13 < 14 < 15 < ... < 23 < 24 < 25 < ... < 34 < 35 < ... < 45 < ...,
and the colexicographic order begins by
12 < 13 < 23 < 14 < 24 < 34 < 15 < 25 < 35 < 45 < ...
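The two orderings of increasing pairs can be reproduced in Python (an illustration only): sorting tuples directly gives the lexicographic order, while sorting by the reversed tuple gives the colexicographic order.
from itertools import combinations

pairs = list(combinations(range(1, 6), 2))
lex = sorted(pairs)                                        # compare first entries first
colex = sorted(pairs, key=lambda p: tuple(reversed(p)))    # compare last entries first
print(lex)    # (1,2) (1,3) (1,4) (1,5) (2,3) (2,4) (2,5) (3,4) (3,5) (4,5)
print(colex)  # (1,2) (1,3) (2,3) (1,4) (2,4) (3,4) (1,5) (2,5) (3,5) (4,5)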
The main property of the colexicographical order for increasing sequences of a given length is that every initial segment is finite. In other words, the colexicographical order for increasing sequences of a given length induces an order isomorphism with the natural numbers, and allows enumerating these sequences. This is frequently used in combinatorics, for example in the proof of the Kruskal–Katona theorem.
Monomials.
When considering polynomials, the order of the terms does not matter in general, as the addition is commutative. However, some algorithms, such as polynomial long division, require the terms to be in a specific order. Many of the main algorithms for multivariate polynomials are related to Gröbner bases, a concept that requires the choice of a monomial order, that is, a total order which is compatible with the monoid structure of the monomials. Here "compatible" means that formula_44 if the monoid operation is denoted multiplicatively. This compatibility implies that the product of a polynomial by a monomial does not change the order of the terms. For Gröbner bases, a further condition must be satisfied, namely that every non-constant monomial is greater than the monomial 1. However, this condition is not needed for other related algorithms, such as the algorithms for the computation of the tangent cone.
As Gröbner bases are defined for polynomials in a fixed number of variables, it is common to identify monomials (for example formula_45) with their exponent vectors (here [1, 3, 0, 1, 2]). If "n" is the number of variables, every monomial order is thus the restriction to formula_46 of a monomial order of formula_33 (see above formula_38 for a classification).
One of these admissible orders is the lexicographical order. It is, historically, the first to have been used for defining Gröbner bases, and is sometimes called pure lexicographical order for distinguishing it from other orders that are also related to a lexicographical order.
Another one consists in comparing first the total degrees, and then resolving the conflicts by using the lexicographical order. This order is not widely used, as either the lexicographical order or the degree reverse lexicographical order generally has better properties.
The degree reverse lexicographical order also consists in comparing first the total degrees and, in case of equality of the total degrees, using the reverse of the colexicographical order. That is, given two exponent vectors, one has
formula_47
if either
formula_48
or
formula_49
For this ordering, the monomials of degree one have the same order as the corresponding indeterminates (this would not be the case if the reverse lexicographical order were used). For comparing monomials in two variables of the same total degree, this order is the same as the lexicographic order. This is not the case with more variables. For example, for exponent vectors of monomials of degree two in three variables, one has for the degree reverse lexicographic order:
formula_50
For the lexicographical order, the same exponent vectors are ordered as
formula_51
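The two orderings of these exponent vectors can be reproduced with a small comparison function; the Python sketch below (an illustration, not part of the article) implements the degree reverse lexicographic comparison directly, while plain tuple sorting gives the lexicographic order.
from functools import cmp_to_key

def grevlex_cmp(a, b):
    # Degree reverse lexicographic order on exponent vectors.
    if sum(a) != sum(b):
        return -1 if sum(a) < sum(b) else 1
    for ai, bi in zip(reversed(a), reversed(b)):    # last differing entry decides ...
        if ai != bi:
            return -1 if ai > bi else 1             # ... with the larger entry meaning "smaller"
    return 0

vectors = [(2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
print(sorted(vectors, key=cmp_to_key(grevlex_cmp)))   # degree reverse lexicographic order
print(sorted(vectors))                                # plain lexicographic order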
A useful property of the degree reverse lexicographical order is that a homogeneous polynomial is a multiple of the least indeterminate if and only if its leading monomial (its greatest monomial) is a multiple of this least indeterminate.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\varepsilon"
},
{
"math_id": 1,
"text": " = \\varepsilon"
},
{
"math_id": 2,
"text": "\\varepsilon < b\\,\\, \\text{ for all } b \\neq \\varepsilon,"
},
{
"math_id": 3,
"text": "\\,<\\,"
},
{
"math_id": 4,
"text": "A,"
},
{
"math_id": 5,
"text": "A."
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "a < b"
},
{
"math_id": 8,
"text": "E_1 \\times \\cdots \\times E_n"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "E_i"
},
{
"math_id": 11,
"text": "i."
},
{
"math_id": 12,
"text": "B,"
},
{
"math_id": 13,
"text": "A \\times B"
},
{
"math_id": 14,
"text": "(a, b) \\leq \\left(a^{\\prime}, b^{\\prime}\\right) \\text{ if and only if } a < a^{\\prime} \\text{ or } \\left(a = a^{\\prime} \\text{ and } b \\leq b^{\\prime}\\right),"
},
{
"math_id": 15,
"text": "B"
},
{
"math_id": 16,
"text": "\\{ 0, 1 \\},"
},
{
"math_id": 17,
"text": "\\{ 0, 1 \\}^{\\omega}"
},
{
"math_id": 18,
"text": "1"
},
{
"math_id": 19,
"text": "0 < 1,"
},
{
"math_id": 20,
"text": "X"
},
{
"math_id": 21,
"text": "Y"
},
{
"math_id": 22,
"text": "Y."
},
{
"math_id": 23,
"text": "f"
},
{
"math_id": 24,
"text": "g,"
},
{
"math_id": 25,
"text": "x"
},
{
"math_id": 26,
"text": "f(x) \\neq g(x)."
},
{
"math_id": 27,
"text": "S."
},
{
"math_id": 28,
"text": "S"
},
{
"math_id": 29,
"text": "S = \\{1, 2, 3, 4, 5, 6\\}"
},
{
"math_id": 30,
"text": "n"
},
{
"math_id": 31,
"text": "12 n < 134"
},
{
"math_id": 32,
"text": "n > 2."
},
{
"math_id": 33,
"text": "\\Z^n"
},
{
"math_id": 34,
"text": "n,"
},
{
"math_id": 35,
"text": "a < b \\quad \\text{ if and only if } \\quad a+c < b+c."
},
{
"math_id": 36,
"text": "\\Z^n."
},
{
"math_id": 37,
"text": "\\R^n,"
},
{
"math_id": 38,
"text": "\\Z^n,"
},
{
"math_id": 39,
"text": "s \\leq n"
},
{
"math_id": 40,
"text": "s"
},
{
"math_id": 41,
"text": "\\varphi"
},
{
"math_id": 42,
"text": "\\R^s"
},
{
"math_id": 43,
"text": "\\R^s."
},
{
"math_id": 44,
"text": "a < b \\text{ implies } ac < bc,"
},
{
"math_id": 45,
"text": "x_1 x_2^3 x_4 x_5^2"
},
{
"math_id": 46,
"text": "\\N^n"
},
{
"math_id": 47,
"text": "[a_1, \\ldots, a_n] < [b_1, \\ldots, b_n]"
},
{
"math_id": 48,
"text": "a_1+ \\cdots+ a_n < b_1+ \\cdots+ b_n,"
},
{
"math_id": 49,
"text": " a_1+ \\cdots+ a_n = b_1+\\cdots+ b_n \\quad \\text{ and }\\quad a_i >b_i \\text{ for the largest } i \\text{ for which } a_i \\neq b_i."
},
{
"math_id": 50,
"text": "[0, 0, 2] < [0, 1, 1] < [1, 0, 1] < [0, 2, 0] < [1, 1, 0] < [2, 0, 0]"
},
{
"math_id": 51,
"text": "[0, 0, 2] < [0, 1, 1] < [0, 2, 0] < [1, 0, 1] < [1, 1, 0] < [2, 0, 0]."
}
] |
https://en.wikipedia.org/wiki?curid=567667
|
56767513
|
Zariski's finiteness theorem
|
In algebra, Zariski's finiteness theorem gives a positive answer to Hilbert's 14th problem for the polynomial ring in two variables, as a special case. Precisely, it states:
Given a normal domain "A", finitely generated as an algebra over a field "k", if "L" is a subfield of the field of fractions of "A" containing "k" such that formula_0, then the "k"-subalgebra formula_1 is finitely generated.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "<math>\\operatorname{tr.deg}_k(L) \\le 2</Math>"
},
{
"math_id": 1,
"text": "L \\cap A"
}
] |
https://en.wikipedia.org/wiki?curid=56767513
|
56768957
|
Abundance conjecture
|
In algebraic geometry, the abundance conjecture is a conjecture in
birational geometry, more precisely in the minimal model program,
stating that for every projective variety formula_0 with Kawamata log terminal singularities over a field formula_1 if the canonical bundle formula_2 is nef, then formula_2 is semi-ample.
Important cases of the abundance conjecture have been proven by Caucher Birkar.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "K_X"
}
] |
https://en.wikipedia.org/wiki?curid=56768957
|
567743
|
Riemann–Liouville integral
|
In mathematics, the Riemann–Liouville integral associates with a real function formula_0 another function "I""α" "f" of the same kind for each value of the parameter "α" > 0. The integral is a manner of generalization of the repeated antiderivative of f in the sense that for positive integer values of α, "I""α" "f" is an iterated antiderivative of f of order α. The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville, the latter of whom was the first to consider the possibility of fractional calculus in 1832. The operator agrees with the Euler transform, after Leonhard Euler, when applied to analytic functions. It was generalized to arbitrary dimensions by Marcel Riesz, who introduced the Riesz potential.
Motivation.
The Riemann–Liouville integral is motivated by the Cauchy formula for repeated integration. For a function f continuous on the interval [a,x], the Cauchy repeated integration formula states that
formula_1
Now, this formula can be generalized to any positive real number by replacing the positive integer n with α. Therefore, we obtain the definition of the Riemann–Liouville fractional integral by
formula_2
Definition.
The Riemann–Liouville integral is defined by
formula_2
where Γ is the gamma function and a is an arbitrary but fixed base point. The integral is well-defined provided f is a locally integrable function, and α is a complex number in the half-plane Re("α") > 0. The dependence on the base-point a is often suppressed, and represents a freedom in constant of integration. Clearly "I"1 "f" is an antiderivative of f (of first order), and for positive integer values of α, "I""α" "f" is an antiderivative of order α by Cauchy formula for repeated integration. Another notation, which emphasizes the base point, is
formula_3
This also makes sense if "a" = −∞, with suitable restrictions on f.
The following fundamental relations hold:
formula_4
the latter of which is a semigroup property. These properties make possible not only the definition of fractional integration, but also of fractional differentiation, by taking enough derivatives of "I""α" "f".
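The semigroup property can be illustrated numerically; the sketch below (an illustration assuming SciPy, with base point "a" = 0 and the test function "f"("t") = "t") evaluates the Riemann–Liouville integral by quadrature, using an algebraic weight to handle the integrable singularity at "t" = "x", and checks that applying "I"1/2 twice agrees with "I"1.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def riemann_liouville(f, alpha, x, a=0.0):
    # I^alpha f(x) = 1/Gamma(alpha) * integral_a^x f(t) (x - t)^(alpha - 1) dt;
    # the factor (x - t)^(alpha - 1) is supplied via quad's 'alg' weight.
    val, _ = quad(f, a, x, weight="alg", wvar=(0.0, alpha - 1.0))
    return val / gamma(alpha)

f = lambda t: t
x = 1.5

half_twice = riemann_liouville(lambda s: riemann_liouville(f, 0.5, s), 0.5, x)
once = riemann_liouville(f, 1.0, x)     # the ordinary antiderivative, x^2 / 2
print(half_twice, once, x**2 / 2)       # all approximately 1.125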
Properties.
Fix a bounded interval ("a","b"). The operator "I""α" associates to each integrable function f on ("a","b") the function "I""α" "f" on ("a","b") which is also integrable by Fubini's theorem. Thus "I""α" defines a linear operator on "L"1("a","b"):
formula_5
Fubini's theorem also shows that this operator is continuous with respect to the Banach space structure on L1, and that the following inequality holds:
formula_6
Here ‖ · ‖1 denotes the norm on "L"1("a","b").
More generally, by Hölder's inequality, it follows that if "f" ∈ "L""p"("a", "b"), then "I""α" "f" ∈ "L""p"("a", "b") as well, and the analogous inequality holds
formula_7
where ‖ · ‖"p" is the Lp norm on the interval ("a","b"). Thus we have a bounded linear operator "I""α" : "L""p"("a", "b") → "L""p"("a", "b"). Furthermore, "I""α" "f" → "f" in the "L""p" sense as "α" → 0 along the real axis. That is
formula_8
for all "p" ≥ 1. Moreover, by estimating the maximal function of I, one can show that the limit "I""α" "f" → "f" holds pointwise almost everywhere.
The operator "I""α" is well-defined on the set of locally integrable function on the whole real line formula_9. It defines a bounded transformation on any of the Banach spaces of functions of exponential type formula_10 consisting of locally integrable functions for which the norm
formula_11
is finite. For "f" ∈ "X""σ", the Laplace transform of "I""α" "f" takes the particularly simple form
formula_12
for Re("s") > "σ". Here "F"("s") denotes the Laplace transform of f, and this property expresses that "I""α" is a Fourier multiplier.
Fractional derivatives.
One can define fractional-order derivatives of f as well by
formula_13
where ⌈ · ⌉ denotes the ceiling function. One also obtains a differintegral interpolating between differentiation and integration by defining
formula_14
An alternative fractional derivative was introduced by Caputo in 1967, and produces a derivative that has different properties: it produces zero from constant functions and, more importantly, the initial value terms of the Laplace transform are expressed by means of the values of that function and of its derivative of integer order rather than the derivatives of fractional order as in the Riemann–Liouville derivative. The Caputo fractional derivative with base point x is then:
formula_15
Another representation is:
formula_16
Fractional derivative of a basic power function.
Let us assume that "f"("x") is a monomial of the form
formula_17
The first derivative is as usual
formula_18
Repeating this gives the more general result that
formula_19
which, after replacing the factorials with the gamma function, leads to
formula_20
For "k" = 1 and "a" =, we obtain the half-derivative of the function formula_21 as
formula_22
To demonstrate that this is, in fact, the "half derivative" (that is, applying the operator twice gives the ordinary derivative "Df"("x")), we repeat the process to get:
formula_23
(because formula_24 and Γ(1) = 1) which is indeed the expected result of
formula_25
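The computation above can be reproduced mechanically from the power rule; a minimal Python sketch (an illustration only, using the gamma-function form of the rule):
import math

def half_derivative_of_power(coeff, k):
    # Power rule: d^(1/2)/dx^(1/2) [coeff * x^k] = coeff * Gamma(k+1)/Gamma(k+1/2) * x^(k-1/2)
    return coeff * math.gamma(k + 1) / math.gamma(k + 0.5), k - 0.5

x = 2.0
c1, k1 = half_derivative_of_power(1.0, 1.0)     # half-derivative of x
print(c1 * x**k1, 2 * math.sqrt(x / math.pi))   # both equal 2*sqrt(x)/sqrt(pi)
c2, k2 = half_derivative_of_power(c1, k1)       # apply the half-derivative again
print(c2 * x**k2)                               # 1.0, the ordinary first derivative of x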
For negative integer power "k", 1/formula_26 is 0, so it is convenient to use the following relation:
formula_27
This extension of the above differential operator need not be constrained only to real powers; it also applies for complex powers. For example, the (1 + "i")-th derivative of the (1 − "i")-th derivative yields the second derivative. Also setting negative values for a yields integrals.
For a general function "f"("x") and 0 < "α" < 1, the complete fractional derivative is
formula_28
For arbitrary α, since the gamma function is infinite for negative (real) integers, it is necessary to apply the fractional derivative after the integer derivative has been performed. For example,
formula_29
Laplace transform.
We can also come at the question via the Laplace transform. Knowing that
formula_30
and
formula_31
and so on, we assert
formula_32.
For example,
formula_33
as expected. Indeed, given the convolution rule
formula_34
and shorthanding "p"("x") = "x""α" − 1 for clarity, we find that
formula_35
which is what Cauchy gave us above.
Laplace transforms "work" on relatively few functions, but they "are" often useful for solving fractional differential equations.
|
[
{
"math_id": 0,
"text": "f: \\mathbb{R} \\rightarrow \\mathbb{R}"
},
{
"math_id": 1,
"text": "f^{(-n)}(x) = \\frac{1}{(n-1)!} \\int_a^x\\left(x-t\\right)^{n-1} f(t)\\,\\mathrm{d}t."
},
{
"math_id": 2,
"text": "I^\\alpha f(x) = \\frac{1}{\\Gamma(\\alpha)}\\int_a^xf(t)(x-t)^{\\alpha-1}\\,dt"
},
{
"math_id": 3,
"text": "{}_aD_x^{-\\alpha}f(x) = \\frac{1}{\\Gamma(\\alpha)}\\int_a^x f(t)(x-t)^{\\alpha-1}\\,dt."
},
{
"math_id": 4,
"text": "\\frac{d}{dx}I^{\\alpha+1} f(x) = I^\\alpha f(x),\\quad I^\\alpha(I^\\beta f) = I^{\\alpha+\\beta}f,"
},
{
"math_id": 5,
"text": "I^\\alpha : L^1(a,b) \\to L^1(a,b)."
},
{
"math_id": 6,
"text": "\\left \\|I^\\alpha f \\right \\|_1 \\le \\frac{|b-a|^{\\Re(\\alpha)}}{\\Re(\\alpha)|\\Gamma(\\alpha)|}\\|f\\|_1."
},
{
"math_id": 7,
"text": "\\left \\|I^\\alpha f \\right \\|_p \\le \\frac{|b-a|^{\\Re(\\alpha)/p}}{\\Re(\\alpha)|\\Gamma(\\alpha)|}\\|f\\|_p"
},
{
"math_id": 8,
"text": "\\lim_{\\alpha\\to 0^+} \\|I^\\alpha f - f\\|_p = 0"
},
{
"math_id": 9,
"text": "\\mathbb{R}"
},
{
"math_id": 10,
"text": "X_{\\sigma} = L^1(e^{-\\sigma|t|}dt),"
},
{
"math_id": 11,
"text": "\\|f\\| = \\int_{-\\infty}^\\infty |f(t)|e^{-\\sigma|t|}\\,dt"
},
{
"math_id": 12,
"text": "(\\mathcal{L}I^\\alpha f)(s) = s^{-\\alpha}F(s)"
},
{
"math_id": 13,
"text": "\\frac{d^\\alpha}{dx^\\alpha} f \\, \\overset{\\text{def}}{=} \\frac{d^{\\lceil\\alpha\\rceil}}{dx^{\\lceil\\alpha\\rceil}} I^{\\lceil \\alpha \\rceil-\\alpha}f"
},
{
"math_id": 14,
"text": "D^\\alpha_x f(x) = \\begin{cases} \\frac{d^{\\lceil\\alpha\\rceil}}{dx^{\\lceil\\alpha\\rceil}} I^{\\lceil\\alpha\\rceil-\\alpha}f(x)& \\alpha>0\\\\ f(x) & \\alpha=0\\\\ I^{-\\alpha}f(x) & \\alpha<0. \\end{cases}"
},
{
"math_id": 15,
"text": "D_x^{\\alpha}f(y)=\\frac{1}{\\Gamma(1-\\alpha)}\\int_x^y f'(y-u)(u-x)^{-\\alpha}du."
},
{
"math_id": 16,
"text": "{}_a\\tilde{D}^\\alpha_x f(x)=I^{\\lceil \\alpha\\rceil-\\alpha}\\left(\\frac{d^{\\lceil \\alpha\\rceil}f}{dx^{\\lceil \\alpha\\rceil}}\\right)."
},
{
"math_id": 17,
"text": "f(x) = x^k\\,."
},
{
"math_id": 18,
"text": "f'(x) = \\frac{d}{dx}f(x) = k x^{k-1}\\,."
},
{
"math_id": 19,
"text": "\\frac{d^a}{dx^a}x^k = \\dfrac{k!}{(k-a)!}x^{k-a}\\,,"
},
{
"math_id": 20,
"text": "\\frac{d^a}{dx^a}x^k = \\dfrac{\\Gamma(k+1)}{\\Gamma(k-a+1)}x^{k-a}, \\quad\\ k > 0."
},
{
"math_id": 21,
"text": "x \\mapsto x"
},
{
"math_id": 22,
"text": "\\frac{d^\\frac12}{dx^\\frac12}x=\\frac{\\Gamma(1+1)}{\\Gamma\\left(1-\\frac12 + 1\\right)} x^{1-\\frac12}=\\frac{\\Gamma(2)}{\\Gamma\\left(\\frac{3}{2}\\right)}x^\\frac12 = \\frac{1}{\\frac{\\sqrt{\\pi}}{2}}x^\\frac12."
},
{
"math_id": 23,
"text": "\\dfrac{d^\\frac12}{dx^\\frac12} \\dfrac{2x^{\\frac12}}{\\sqrt{\\pi}}\n=\\frac{2}{\\sqrt{\\pi}}\\dfrac{\\Gamma(1+\\frac12)}{\\Gamma(\\frac12-\\frac12+1)}x^{\\frac12-\\frac12}\n=\\frac{2}{\\sqrt{\\pi}} \\frac{\\Gamma\\left(\\frac{3}{2}\\right)}{\\Gamma(1)} x^0\n=\\frac{2 \\frac{\\sqrt{\\pi}}{2} x^0}{\\sqrt{\\pi}}=1\\,,"
},
{
"math_id": 24,
"text": "\\Gamma\\!\\left(\\frac{3}{2}\\right) = \\frac{\\sqrt{\\pi}}{2}"
},
{
"math_id": 25,
"text": "\\left(\\frac{d^\\frac12}{dx^\\frac12} \\frac{d^\\frac12}{dx^\\frac12}\\right)\\!x = \\frac{d}{dx} x = 1\\,."
},
{
"math_id": 26,
"text": "\\Gamma"
},
{
"math_id": 27,
"text": "\\frac{d^a}{dx^a}x^{-k} = \\left(-1\\right)^a\\dfrac{\\Gamma(k+a)}{\\Gamma(k)}x^{-(k+a)} \\quad\\text{ for } k \\ge 0."
},
{
"math_id": 28,
"text": "D^\\alpha f(x)=\\frac{1}{\\Gamma(1-\\alpha)}\\frac{d}{dx} \\int_0^x \\frac{f(t)}{\\left(x-t\\right)^\\alpha} \\, dt."
},
{
"math_id": 29,
"text": "D^\\frac32 f(x) = D^\\frac12 D^1 f(x)=D^\\frac12 \\frac d {dx} f(x)."
},
{
"math_id": 30,
"text": "\\mathcal L \\left\\{Jf\\right\\}(s) = \\mathcal L \\left\\{\\int_0^t f(\\tau)\\,d\\tau\\right\\}(s) = \\frac1 s \\bigl(\\mathcal L\\left\\{f\\right\\}\\bigr)(s)"
},
{
"math_id": 31,
"text": "\\mathcal L \\left\\{J^2f\\right\\}=\\frac1s\\bigl(\\mathcal L \\left\\{Jf\\right\\} \\bigr)(s)=\\frac1{s^2}\\bigl(\\mathcal L\\left\\{f\\right\\}\\bigr)(s)"
},
{
"math_id": 32,
"text": "J^\\alpha f=\\mathcal L^{-1}\\left\\{s^{-\\alpha}\\bigl(\\mathcal L\\{f\\}\\bigr)(s)\\right\\}"
},
{
"math_id": 33,
"text": "J^\\alpha(t^k) = \\mathcal L^{-1}\\left\\{\\frac{\\Gamma(k+1)}{s^{\\alpha+k+1}}\\right\\} = \\frac{\\Gamma(k+1)}{\\Gamma(\\alpha+k+1)} t^{\\alpha+k} "
},
{
"math_id": 34,
"text": "\\mathcal L\\{f*g\\}=\\bigl(\\mathcal L\\{f\\}\\bigr)\\bigl(\\mathcal L\\{g\\}\\bigr)"
},
{
"math_id": 35,
"text": "\\begin{align}\n\\left(J^\\alpha f\\right)(t) &= \\frac{1}{\\Gamma(\\alpha)}\\mathcal L^{-1}\\left\\{\\bigl(\\mathcal L\\{p\\}\\bigr)\\bigl(\\mathcal L\\{f\\}\\bigr)\\right\\}\\\\\n&=\\frac{1}{\\Gamma(\\alpha)}(p*f)\\\\\n&=\\frac{1}{\\Gamma(\\alpha)}\\int_0^t p(t-\\tau)f(\\tau)\\,d\\tau\\\\\n&=\\frac{1}{\\Gamma(\\alpha)}\\int_0^t\\left(t-\\tau\\right)^{\\alpha-1}f(\\tau)\\,d\\tau\\\\\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=567743
|
56775942
|
Mathematical models of social learning
|
Mathematical models of social learning aim to model opinion dynamics in social networks. Consider a social network in which people (agents) hold a belief or opinion about the state of something in the world, such as the quality of a particular product, the effectiveness of a public policy, or the reliability of a news agency. In all these settings, people learn about the state of the world via observation or communication with others. Models of social learning try to formalize these interactions to describe how agents process the information received from their friends in the social network. Some of the main questions asked in the literature include:
Bayesian learning.
Bayesian learning is a model which assumes that agents update their beliefs using Bayes' rule. Indeed, each agent's belief about different states of the world can be seen as a probability distribution over a set of opinions, and Bayesian updating assumes that this distribution is updated in a statistically optimal manner using Bayes' rule. Moreover, Bayesian models typically make certain demanding assumptions about agents, e.g., that they have a reliable model of the world and that the social learning rule of each agent is common knowledge among all members of the community.
More rigorously, let the underlying state be θ. This parameter could correspond to an opinion among people about a certain social, economic, or political issue. At first, each individual has a prior probability of θ which can be shown by P(θ). This prior could be a result of the agents' personal observations of the world. Then each person updates their belief by receiving some signal "s". According to the Bayesian approach, the updating procedure will follow this rule:
formula_0
where the term formula_1 is the conditional probability over signal space given the true state of the world.
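For a finite set of candidate states, this update is a one-line computation; the following Python sketch (an illustration with made-up numbers, not taken from the literature) applies Bayes' rule to a discrete prior.
import numpy as np

states = ["low", "medium", "high"]          # candidate states of the world (theta)
prior = np.array([1/3, 1/3, 1/3])           # P(theta)
likelihood = np.array([0.1, 0.4, 0.7])      # P(s | theta) for the observed signal s (made-up values)

posterior = likelihood * prior
posterior /= posterior.sum()                # divide by P(s), the normalising constant
print(dict(zip(states, posterior)))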
Non-Bayesian learning.
Bayesian learning is often considered the benchmark model for social learning, in which individuals use Bayes' rule to incorporate new pieces of information to their belief. However, it has been shown that such a Bayesian "update" is fairly sophisticated and imposes an unreasonable cognitive load on agents which might not be realistic for human beings.
Therefore, scientists have studied simpler non-Bayesian models, most notably the DeGroot model, introduced by DeGroot in 1974, which is one of the first models for describing how humans interact with each other in a social network. In this setting, there is a true state of the world, and each agent receives a noisy independent signal from this true value and communicates with other agents repeatedly. According to the DeGroot model, each agent takes a weighted average of their neighbors' opinions at each step to update their own belief.
The statistician George E. P. Box once said, "All models are wrong; however, some of them are useful." Along the same lines, the DeGroot model is a fairly simple model but it can provide us with useful insights about the learning process in social networks. Indeed, the simplicity of this model makes it tractable for theoretical studies. Specifically, we can analyze different network structures to see for which structures these naive agents can successfully aggregate decentralized information. Since the DeGroot model can be considered a Markov chain, provided that the network is strongly connected (so there is a directed path from any agent to any other) and satisfies a weak aperiodicity condition, beliefs will converge to a consensus. When consensus is reached, the belief of each agent is a weighted average of agents' initial beliefs. These weights provide a measure of social influence.
In the case of a converging opinion dynamic, the social network is called "wise" if the consensus belief is equal to the true state of the world. It can be shown that the necessary and sufficient condition for wisdom is that the influence of the most influential agent vanishes as the network grows. The speed of convergence is irrelevant to the wisdom of the social network.
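A small simulation makes the DeGroot dynamics concrete; the Python sketch below (an illustration with an arbitrary three-agent trust matrix) iterates the weighted averaging and recovers the consensus as a weighted average of the initial beliefs, with weights given by the leading left eigenvector of the trust matrix.
import numpy as np

# Row-stochastic trust matrix: entry (i, j) is the weight agent i places on agent j.
W = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
initial = np.array([0.1, 0.9, 0.4])          # arbitrary initial beliefs

beliefs = initial.copy()
for _ in range(200):
    beliefs = W @ beliefs                    # each agent averages its neighbours' opinions
print(beliefs)                               # all entries (nearly) equal: consensus

# The consensus equals a weighted average of the initial beliefs; the weights are the
# left eigenvector of W for eigenvalue 1, a measure of social influence.
eigvals, eigvecs = np.linalg.eig(W.T)
influence = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
influence /= influence.sum()
print(influence @ initial)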
Empirical evaluation of models.
Along with the theoretical framework for modeling the social learning phenomenon, there has been a great amount of empirical research to assess the explanatory power of these models. In one such experiment, 665 subjects in 19 villages in Karnataka, India, were studied while communicating information with each other to learn the true state of the world. This study attempted to distinguish between the two most prominent models of information aggregation in social networks, namely, Bayesian learning and DeGroot learning. The study showed that agents' aggregate behavior is statistically significantly better described by the DeGroot learning model.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P(\\theta|s) = \\frac{P(s|\\theta)}{P(s)} \\cdot P(\\theta)"
},
{
"math_id": 1,
"text": "\\textstyle P(s|\\theta)"
}
] |
https://en.wikipedia.org/wiki?curid=56775942
|
5677733
|
Total variation diminishing
|
Property of discretization schemes used to solve hyperbolic partial differential equations
In numerical methods, total variation diminishing (TVD) is a property of certain discretization schemes used to solve hyperbolic partial differential equations. The most notable application of this method is in computational fluid dynamics. The concept of TVD was introduced by Ami Harten.
Model equation.
In systems described by partial differential equations, such as the following hyperbolic advection equation,
formula_0
the total variation (TV) is given by
formula_1
and the total variation for the discrete case is,
formula_2
where formula_3.
A numerical method is said to be total variation diminishing (TVD) if,
formula_4
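As an illustration of the definition (not part of the original article), the following Python sketch advances the advection equation with a first-order upwind scheme, which is known to be monotone, and checks that the discrete total variation never increases from one time step to the next.
import numpy as np

a, dx, dt = 1.0, 0.01, 0.008          # CFL number a*dt/dx = 0.8 <= 1
x = np.arange(0.0, 1.0, dx)
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # discontinuous initial data

def total_variation(v):
    # Discrete TV on a periodic grid: sum over j of |u_{j+1} - u_j|
    return np.sum(np.abs(np.diff(np.append(v, v[0]))))

tv_history = [total_variation(u)]
for _ in range(100):
    u = u - a * dt / dx * (u - np.roll(u, 1))   # first-order upwind update for a > 0
    tv_history.append(total_variation(u))

print(all(t_next <= t_prev + 1e-12 for t_prev, t_next in zip(tv_history, tv_history[1:])))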
Characteristics.
A numerical scheme is said to be monotonicity preserving if the following property is maintained: if formula_5 is monotonically increasing (or decreasing) in space, then so is formula_6.
Harten proved the following properties for a numerical scheme: a monotone scheme is TVD, and a TVD scheme is monotonicity preserving.
Application in CFD.
In computational fluid dynamics, a TVD scheme is employed to capture sharper shock predictions without any misleading oscillations when the variation of the field variable formula_7 is discontinuous.
To capture the variation, fine grids (formula_8 very small) are needed, and the computation becomes heavy and therefore uneconomical. The use of coarse grids with the central difference scheme, upwind scheme, hybrid difference scheme, or power law scheme gives false shock predictions. A TVD scheme enables sharper shock predictions on coarse grids, saving computation time, and, as the scheme preserves monotonicity, there are no spurious oscillations in the solution.
Discretisation.
Consider the steady state one-dimensional convection diffusion equation,
formula_9,
where formula_10 is the density, formula_11 is the velocity vector, formula_12 is the property being transported, formula_13 is the coefficient of diffusion and formula_14 is the source term responsible for generation of the property formula_12.
Making the flux balance of this property about a control volume we get,
formula_15 formula_16
Here formula_17 is the normal to the surface of control volume.
Ignoring the source term, the equation further reduces to:
formula_18
Assuming
formula_19 and formula_20
The equation reduces to
formula_21
Say,
formula_22
formula_23
From the figure:
formula_24
formula_25
The equation becomes:
formula_26
The continuity equation also has to be satisfied in one of its equivalent forms for this problem:
formula_27
Assuming that the diffusivity is a homogeneous property and that the grid spacing is equal, we can say
formula_28
we get
formula_29
The equation further reduces to
formula_30
The equation above can be written as
formula_31
where formula_32 is the Péclet number
formula_33
TVD scheme.
The total variation diminishing scheme makes an assumption for the values of formula_34 and formula_35 to be substituted in the discretized equation as follows:
formula_36
formula_37
where formula_38 is the Péclet number and formula_39 is the weighting function to be determined from
formula_40
where formula_41 refers to upstream, formula_42 refers to upstream of formula_41 and formula_43 refers to downstream.
Note that formula_44 is the weighting function when the flow is in the positive direction (i.e., from left to right) and formula_45 is the weighting function when the flow is in the negative direction (from right to left). So,
formula_46
If the flow is in the positive direction, then the Péclet number formula_38 is positive and the term formula_47, so the function formula_45 will not play any role in the assumption of formula_48 and formula_49. Likewise, when the flow is in the negative direction, formula_38 is negative and the term formula_50, so the function formula_44 will not play any role in the assumption of formula_34 and formula_35.
The scheme therefore takes into account the values of the property depending on the direction of flow and, using the weighting functions, tries to achieve monotonicity in the solution, thereby producing results with no spurious shocks.
Limitations.
Monotone schemes are attractive for solving engineering and scientific problems because they do not produce non-physical solutions. Godunov's theorem proves that linear schemes which preserve monotonicity are, at most, only first order accurate. Higher order linear schemes, although more accurate for smooth solutions, are not TVD and tend to introduce spurious oscillations (wiggles) where discontinuities or shocks arise. To overcome these drawbacks, various high-resolution, non-linear techniques have been developed, often using flux/slope limiters.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\partial u}{\\partial t} + a\\frac{\\partial u}{\\partial x} = 0, "
},
{
"math_id": 1,
"text": "TV(u(\\cdot,t)) = \\int \\left| \\frac{\\partial u}{\\partial x} \\right| \\mathrm{d}x ,"
},
{
"math_id": 2,
"text": "TV(u^n) = TV(u(\\cdot,t^n)) = \\sum_j \\left| u_{j+1}^n - u_j^n \\right| ."
},
{
"math_id": 3,
"text": "u_{j}^n=u(x_{j},t^n)"
},
{
"math_id": 4,
"text": "TV \\left( u^{n+1}\\right) \\leq TV \\left( u^{n}\\right) ."
},
{
"math_id": 5,
"text": "u^{n}"
},
{
"math_id": 6,
"text": "u^{n+1}"
},
{
"math_id": 7,
"text": "\\phi "
},
{
"math_id": 8,
"text": "\\Delta x "
},
{
"math_id": 9,
"text": "\\nabla \\cdot (\\rho \\mathbf{u} \\phi)\\,= \\nabla \\cdot (\\Gamma \\nabla \\phi)+S_{\\phi}\\; "
},
{
"math_id": 10,
"text": " \\rho "
},
{
"math_id": 11,
"text": " \\mathbf{u} "
},
{
"math_id": 12,
"text": " \\phi "
},
{
"math_id": 13,
"text": " \\Gamma "
},
{
"math_id": 14,
"text": " S_{\\phi} "
},
{
"math_id": 15,
"text": "\\int_A \\mathbf {n} \\cdot (\\rho\\mathbf{u}\\phi) \\, \\mathrm{d}A = \\int_A \\mathbf{n} \\cdot (\\Gamma \\nabla \\phi) \\, \\mathrm{d}A+ \\int_{CV} S_\\phi \\, \\mathrm{d}V"
},
{
"math_id": 16,
"text": "\\;"
},
{
"math_id": 17,
"text": " \\mathbf {n} "
},
{
"math_id": 18,
"text": "(\\rho \\mathbf {u} \\phi A)_r - (\\rho \\mathbf {u} \\phi A)_l = \\left(\\Gamma A \\frac{\\partial \\phi}{\\partial x}\\right)_r-\\left(\\Gamma A \\frac{\\partial \\phi}{\\partial x}\\right)_l"
},
{
"math_id": 19,
"text": " \\frac{\\partial \\phi}{\\partial x}= \\frac{\\delta \\phi}{\\delta x}"
},
{
"math_id": 20,
"text": "A_r = A_l, "
},
{
"math_id": 21,
"text": "(\\rho \\mathbf {u} \\phi)_r - (\\rho \\mathbf {u} \\phi)_l \\,= \\left( \\frac{\\Gamma}{\\delta x} \\delta \\phi\\right)_r - \\left( \\frac{\\Gamma}{\\delta x} \\delta \\phi\\right)_l."
},
{
"math_id": 22,
"text": " F_r=(\\rho \\mathbf{u})_r ;\\qquad F_l=(\\rho \\mathbf{u})_l;"
},
{
"math_id": 23,
"text": " D_l = \\left(\\frac {\\Gamma}{\\delta x}\\right)_l ;\\qquad D_r =\\left(\\frac {\\Gamma}{\\delta x}\\right)_r;"
},
{
"math_id": 24,
"text": " \\delta \\phi _r = \\phi_R -\\phi_P ;\\qquad\\delta x_r = x_{PR}; "
},
{
"math_id": 25,
"text": " \\delta \\phi _l = \\phi_P -\\phi_L ;\\qquad\\delta x_l = x_{LP}; "
},
{
"math_id": 26,
"text": "F_r \\phi_r - F_l \\phi_l = D_r (\\phi _R -\\phi _P)-D_l(\\phi _P - \\phi _L);"
},
{
"math_id": 27,
"text": "(\\rho \\mathbf {u})_r -(\\rho \\mathbf {u})_l\\,=0\\ \\ \\Longleftrightarrow\\ \\ \nF_r-F_l=0\\ \\ \\Longleftrightarrow\\ \nF_r=F_l=F."
},
{
"math_id": 28,
"text": " \\Gamma _l=\\Gamma _r; \\qquad \\delta x_{LP}=\\delta x_{PR} = \\delta x,"
},
{
"math_id": 29,
"text": " D_l=D_r=D."
},
{
"math_id": 30,
"text": "(\\phi_r-\\phi_l)\\cdot F=D\\cdot(\\phi_R-2\\phi_P+\\phi_L)."
},
{
"math_id": 31,
"text": "(\\phi_r-\\phi_l)\\cdot P=(\\phi_R-2\\phi_P+\\phi_L) "
},
{
"math_id": 32,
"text": "P "
},
{
"math_id": 33,
"text": "P=\\frac{F}{D}=\\frac{\\rho \\mathbf{u} \\delta x}{\\Gamma}."
},
{
"math_id": 34,
"text": "\\phi_r"
},
{
"math_id": 35,
"text": "\\phi_l"
},
{
"math_id": 36,
"text": "\\phi_r\\cdot P=\\frac{1}{2}(P+|P|)[f_r^+\\phi_R+(1-f_r^+)\\phi_L]+\\frac{1}{2}(P-|P|)[f_r^-\\phi_P+(1-f_r^-)\\phi_{RR}]"
},
{
"math_id": 37,
"text": "\\phi_l\\cdot P=\\frac{1}{2}(P+|P|)[f_l^+\\phi_P+(1-f_l^+)\\phi_{LL}]+\\frac{1}{2}(P-|P|)[f_l^-\\phi_L+(1-f_l^-)\\phi_R]"
},
{
"math_id": 38,
"text": "P"
},
{
"math_id": 39,
"text": "f"
},
{
"math_id": 40,
"text": "f=f\\left(\\frac{\\phi_U-\\phi_{UU}}{\\phi_D-\\phi_{UU}}\\right)"
},
{
"math_id": 41,
"text": "U "
},
{
"math_id": 42,
"text": "UU "
},
{
"math_id": 43,
"text": "D "
},
{
"math_id": 44,
"text": "f^+"
},
{
"math_id": 45,
"text": "f^-"
},
{
"math_id": 46,
"text": "\n\\begin{align}\n& f_r^+\\text{ is a function of }\\left(\\dfrac{\\phi_P-\\phi_L}{\\phi_R-\\phi_L}\\right), \\\\[10pt]\n& f_r^-\\text{ is a function of }\\left(\\dfrac{\\phi_R-\\phi_{RR}}{\\phi_P-\\phi_{RR}}\\right), \\\\[10pt]\n& f_l^+\\text{ is a function of }\\left(\\dfrac{\\phi_L-\\phi_{LL}}{\\phi_P-\\phi_{LL}}\\right),\\text{ and} \\\\[10pt]\n& f_l^-\\text{ is a function of }\\left(\\dfrac{\\phi_P-\\phi_R}{\\phi_L-\\phi_R}\\right).\n\\end{align}\n"
},
{
"math_id": 47,
"text": "(P-|P|)= 0"
},
{
"math_id": 48,
"text": " \\phi_r"
},
{
"math_id": 49,
"text": " \\phi_l"
},
{
"math_id": 50,
"text": "(P+|P|)= 0"
}
] |
https://en.wikipedia.org/wiki?curid=5677733
|
56777468
|
Kirchhoff–Helmholtz integral
|
The Kirchhoff–Helmholtz integral combines the Helmholtz equation with the Kirchhoff integral theorem to produce a method applicable to acoustics, seismology and other disciplines involving wave propagation.
It states that the sound pressure is completely determined within a volume free of sources if the sound pressure and velocity are known at all points on its surface.
formula_0
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\boldsymbol{P}(w,z)=\\iint_{dA} \\left(G(w,z \\vert z') \\frac{\\partial}{\\partial n} P(w,z')- P(w,z') \\frac{\\partial}{\\partial n} G(w,z \\vert z') \\right)dz'"
}
] |
https://en.wikipedia.org/wiki?curid=56777468
|
56779034
|
Biomolecular condensate
|
Class of membrane-less organelles within biological cells
In biochemistry, biomolecular condensates are a class of membrane-less organelles and organelle subdomains, which carry out specialized functions within the cell. Unlike many organelles, biomolecular condensate composition is not controlled by a bounding membrane. Instead, condensates can form and maintain organization through a range of different processes, the most well-known of which is phase separation of proteins, RNA and other biopolymers into either colloidal emulsions, gels, liquid crystals, solid crystals or aggregates within cells.
History.
Micellar theory.
The micellar theory of Carl Nägeli was developed from his detailed study of starch granules in 1858. Amorphous substances such as starch and cellulose were proposed to consist of building blocks, packed in a loosely crystalline array to form what he later termed "micelles". Water could penetrate between the micelles, and new micelles could form in the interstices between old micelles. The swelling of starch grains and their growth was described by a molecular-aggregate model, which he also applied to the cellulose of the plant cell wall. The modern usage of 'micelle' refers strictly to lipids, but its original usage clearly extended to other types of biomolecule, and this legacy is reflected to this day in the description of milk as being composed of 'casein micelles'.
Colloidal phase separation theory.
The concept of intracellular colloids as an organizing principle for the compartmentalization of living cells dates back to the end of the 19th century, beginning with William Bate Hardy and Edmund Beecher Wilson who described the cytoplasm (then called 'protoplasm') as a colloid. Around the same time, Thomas Harrison Montgomery Jr. described the morphology of the nucleolus, an organelle within the nucleus, which has subsequently been shown to form through intracellular phase separation. WB Hardy linked formation of biological colloids with phase separation in his study of globulins, stating that: "The globulin is dispersed in the solvent as particles which are the colloid particles and which are so large as to form an internal phase", and further contributed to the basic physical description of oil-water phase separation.
Colloidal phase separation as a driving force in cellular organisation appealed strongly to Stephane Leduc, who wrote in his influential 1911 book "The Mechanism of Life": "Hence the study of life may be best begun by the study of those physico-chemical phenomena which result from the contact of two different liquids. Biology is thus but a branch of the physico-chemistry of liquids; it includes the study of electrolytic and colloidal solutions, and of the molecular forces brought into play by solution, osmosis, diffusion, cohesion, and crystallization."
The primordial soup theory of the origin of life, proposed by Alexander Oparin in Russian in 1924 (published in English in 1936) and by J.B.S. Haldane in 1929, suggested that life was preceded by the formation of what Haldane called a "hot dilute soup" of "colloidal organic substances", and which Oparin referred to as 'coacervates' (after de Jong) – particles composed of two or more colloids which might be protein, lipid or nucleic acid. These ideas strongly influenced the subsequent work of Sidney W. Fox on proteinoid microspheres.
Support from other disciplines.
When cell biologists largely abandoned colloidal phase separation, it was left to relative outsiders – agricultural scientists and physicists – to make further progress in the study of phase separating biomolecules in cells.
Beginning in the early 1970s, Harold M Farrell Jr. at the US Department of Agriculture developed a colloidal phase separation model for milk casein micelles that form within mammary gland cells before secretion as milk.
Also in the 1970s, physicists Tanaka & Benedek at MIT identified phase-separation behaviour of gamma-crystallin proteins from lens epithelial cells and cataracts in solution, which Benedek referred to as 'protein condensation'.
In the 1980s and 1990s, Athene Donald's polymer physics lab in Cambridge extensively characterised phase transitions / phase separation of starch granules from the cytoplasm of plant cells, which behave as liquid crystals.
In 1991, Pierre-Gilles de Gennes received the Nobel Prize in Physics for developing a generalized theory of phase transitions with particular applications to describing ordering and phase transitions in polymers. Unfortunately, de Gennes wrote in "Nature" that polymers should be distinguished from other types of colloids, even though they can display similar clustering and phase separation behaviour, a stance that has been reflected in the reduced usage of the term colloid to describe the higher-order association behaviour of biopolymers in modern cell biology and molecular self-assembly.
Phase separation revisited.
Advances in confocal microscopy at the end of the 20th century identified proteins, RNA or carbohydrates localising to many non-membrane bound cellular compartments within the cytoplasm or nucleus which were variously referred to as 'puncta/dots', 'signalosomes', 'granules', 'bodies', 'assemblies', 'paraspeckles', 'purinosomes', 'inclusions', 'aggregates' or 'factories'. During this time period (1995-2008) the concept of phase separation was re-borrowed from colloidal chemistry & polymer physics and proposed to underlie both cytoplasmic and nuclear compartmentalization.
Since 2009, further evidence for biomacromolecules undergoing intracellular phase transitions (phase separation) has been observed in many different contexts, both within cells and in reconstituted "in vitro" experiments.
The newly coined term "biomolecular condensate" refers to biological polymers (as opposed to synthetic polymers) that undergo self assembly via clustering to increase the local concentration of the assembling components, and is analogous to the physical definition of condensation.
In physics, condensation typically refers to a gas–liquid phase transition.
In biology the term 'condensation' is used much more broadly and can also refer to liquid–liquid phase separation to form colloidal emulsions or liquid crystals within cells, and liquid–solid phase separation to form gels, sols, or suspensions within cells as well as liquid-to-solid phase transitions such as DNA condensation during prophase of the cell cycle or protein condensation of crystallins in cataracts. With this in mind, the term 'biomolecular condensates' was deliberately introduced to reflect this breadth (see below). Since biomolecular condensation generally involves oligomeric or polymeric interactions between an indefinite number of components, it is generally considered distinct from formation of smaller stoichiometric protein complexes with defined numbers of subunits, such as viral capsids or the proteasome – although both are examples of spontaneous molecular self-assembly or self-organisation.
Mechanistically, it appears that the conformational landscape (in particular, whether it is enriched in extended disordered states) and multivalent interactions between intrinsically disordered proteins (including cross-beta polymerisation), and/or protein domains that induce head-to-tail oligomeric or polymeric clustering, might play a role in phase separation of proteins.
Examples.
Many examples of biomolecular condensates have been characterized in the cytoplasm and the nucleus that are thought to arise by either liquid–liquid or liquid–solid phase separation.
Nuclear condensates.
Other nuclear structures including heterochromatin form by mechanisms similar to phase separation, so can also be classified as biomolecular condensates.
Lipid-enclosed organelles and lipoproteins are not considered condensates.
Typical organelles or endosomes enclosed by a lipid bilayer are not considered biomolecular condensates. In addition, lipid droplets are surrounded by a lipid monolayer in the cytoplasm, in milk, or in tears, so appear to fall under the 'membrane bound' category. Finally, secreted LDL and HDL lipoprotein particles are also enclosed by a lipid monolayer. The formation of these structures involves phase separation to form colloidal micelles or liquid crystal bilayers, but they are not classified as biomolecular condensates, as this term is reserved for non-membrane bound organelles.
Liquid–liquid phase separation (LLPS) in biology.
Liquid biomolecular condensates.
Liquid–liquid phase separation (LLPS) generates a subtype of colloid known as an emulsion that can coalesce to form large droplets within a liquid. Ordering of molecules during liquid–liquid phase separation can generate liquid crystals rather than emulsions. In cells, LLPS produces a liquid subclass of biomolecular condensate that can behave as either an emulsion or liquid crystal.
The term biomolecular condensates was introduced in the context of intracellular assemblies as a convenient and non-exclusionary term to describe non-stoichiometric assemblies of biomolecules. The choice of language here is specific and important. It has been proposed that many biomolecular condensates form through liquid–liquid phase separation (LLPS) to form colloidal emulsions or liquid crystals in living organisms, as opposed to liquid–solid phase separation to form crystals/aggregates in gels, sols or suspensions within cells or extracellular secretions. However, unequivocally demonstrating that a cellular body forms through liquid–liquid phase separation is challenging, because different material states (liquid vs. gel vs. solid) are not always easy to distinguish in living cells. The term "biomolecular condensate" directly addresses this challenge by making no assumption regarding either the physical mechanism through which assembly is achieved, nor the material state of the resulting assembly. Consequently, cellular bodies that form through liquid–liquid phase separation are a subset of biomolecular condensates, as are those where the physical origins of assembly are unknown. Historically, many cellular non-membrane bound compartments identified microscopically fall under the broad umbrella of biomolecular condensates.
In physics, phase separation can be classified into the following types of colloid, of which biomolecular condensates are one example:
In biology, the most relevant forms of phase separation are either liquid–liquid or liquid–solid, although there have been reports of gas vesicles surrounded by a phase separated protein coat in the cytoplasm of some microorganisms.
Wnt signalling.
One of the first discovered examples of a highly dynamic intracellular liquid biomolecular condensate with a clear physiological function were the supramolecular complexes (Wnt signalosomes) formed by components of the Wnt signaling pathway. The Dishevelled (Dsh or Dvl) protein undergoes clustering in the cytoplasm via its DIX domain, which mediates protein clustering (polymerisation) and phase separation, and is important for signal transduction. The Dsh protein functions both in planar polarity and Wnt signalling, where it recruits another supramolecular complex (the Axin complex) to Wnt receptors at the plasma membrane. The formation of these Dishevelled and Axin containing droplets is conserved across metazoans, including in "Drosophila", "Xenopus", and human cells.
P granules.
Another example of liquid droplets in cells are the germline P granules in "Caenorhabditis elegans". These granules separate out from the cytoplasm and form droplets, as oil does from water. Both the granules and the surrounding cytoplasm are liquid in the sense that they flow in response to forces, and two of the granules can coalesce when they come in contact. When (some of) the molecules in the granules are studied (via fluorescence recovery after photobleaching), they are found to rapidly turn over in the droplets, meaning that molecules diffuse into and out of the granules, just as expected in a liquid droplet. The droplets can also grow to be many molecules across (micrometres). Studies of droplets of the "Caenorhabditis elegans" protein LAF-1 "in vitro" also show liquid-like behaviour, with an apparent viscosity formula_0 Pa·s. This is about ten thousand times that of water at room temperature, but it is small enough to enable the LAF-1 droplets to flow like a liquid. Generally, interaction strength (affinity) and valence (number of binding sites) of the phase-separating biomolecules influence their condensates' viscosity, as well as their overall tendency to phase separate.
Liquid–liquid phase separation in human disease.
Growing evidence suggests that anomalies in biomolecular condensates formation can lead to a number of human pathologies such as cancer and neurodegenerative diseases.
Synthetic biomolecular condensates.
Biomolecular condensates can be synthesized for a number of purposes. Synthetic biomolecular condensates are inspired by endogenous biomolecular condensates, such as nucleoli, P bodies, and stress granules, which are essential to normal cellular organization and function.
Synthetic condensates are an important tool in synthetic biology, and have a wide and growing range of applications. Engineered synthetic condensates allow for probing cellular organization, and enable the creation of novel functionalized biological materials, which have the potential to serve as drug delivery platforms and therapeutic agents.
Design and control.
Despite the dynamic nature and lack of binding specificity that govern the formation of biomolecular condensates, synthetic condensates can still be engineered to exhibit different behaviors. One popular way to conceptualize condensate interactions and aid in design is through the "sticker-spacer" framework. Multivalent interaction sites, or "stickers", are separated by "spacers", which provide the conformational flexibility and physically separate individual interaction modules from one another. Protein regions identified as 'stickers' usually consist of Intrinsically Disordered Regions (IDRs) that act as "sticky" biopolymers via short patches of interacting residues patterned along their unstructured chain, which collectively promote LLPS. By modifying the sticker-spacer framework, i.e. the polypeptide and RNA sequences as well as their mixture compositions, the material properties (viscous and elastic regimes) of condensates can be tuned to design novel condensates.
Other tools outside of tuning the sticker-spacer framework can be used to give new functionality and to allow for high temporal and spatial control over synthetic condensates. One way to gain temporal control over the formation and dissolution of biomolecular condensates is by using optogenetic tools. Several different systems have been developed which allow for control of condensate formation and dissolution which rely on chimeric protein expression, and light or small molecule activation. In one system, proteins are expressed in a cell which contain light-activated oligomerization domains fused to IDRs. Upon irradiation with a specific wavelength of light, the oligomerization domains bind each other and form a 'core', which also brings multiple IDRs close together because they are fused to the oligomerization domains. The recruitment of multiple IDRs effectively creates a new biopolymer with increased valency. This increased valency allows for the IDRs to form multivalent interactions and trigger LLPS. When the activation light is stopped, the oligomerization domains disassemble, causing the dissolution of the condensate. A similar system achieves the same temporal control of condensate formation by using light-sensitive 'caged' dimerizers. In this case, light-activation removes the dimerizer cage, allowing it to recruit IDRs to multivalent cores, which then triggers phase separation. Light-activation of a different wavelength results in the dimerizer being cleaved, which then releases the IDRs from the core and consequentially dissolves the condensate. This dimerizer system requires significantly reduced amounts of laser light to operate, which is advantageous because high intensity light can be toxic to cells.
Optogenetic systems can also be modified to gain spatial control over the formation of condensates. Multiple approaches have been developed to do so. In one approach, which localizes condensates to specific genomic regions, core proteins are fused to proteins such as TRF1 or catalytically dead Cas9, which bind specific genomic loci. When oligomerization is triggered by light activation, phase separation is preferentially induced on the specific genomic region which is recognized by the fusion protein. Because condensates of the same composition can interact and fuse with each other, if they are tethered to specific regions of the genome, condensates can be used to alter the spatial organization of the genome, which can have effects on gene expression.
As biochemical reactors.
Synthetic condensates offer a way to probe cellular function and organization with high spatial and temporal control, but can also be used to modify or add functionality to the cell. One way this is accomplished is by modifying the condensate networks to include binding sites for other proteins of interest, thus allowing the condensate to serve as a scaffold for protein release or recruitment. These binding sites can be modified to be sensitive to light activation or small molecule addition, thus giving temporal control over the recruitment of a specific protein of interest. By recruiting specific proteins to condensates, reactants can be concentrated to increase reaction rates or sequestered to inhibit reactivity. In addition to protein recruitment, condensates can also be designed which release proteins in response to certain stimuli. In this case, a protein of interest can be fused to a scaffold protein via a photocleavable linker. Upon irradiation, the linker is broken, and the protein is released from the condensate. Using these design principles, proteins can either be released to, or sequestered from, their native environment, allowing condensates to serve as a tool to alter the biochemical activity of specific proteins with a high level of control.
Methods to study condensates.
A number of experimental and computational methods have been developed to examine the physico-chemical properties and underlying molecular interactions of biomolecular condensates. Experimental approaches include phase separation assays using bright-field imaging or fluorescence microscopy, and fluorescence recovery after photobleaching (FRAP), as well as rheological analysis of phase-separated droplets. Computational approaches include coarse-grained molecular dynamics simulations and circuit topology analysis.
Coarse-grained molecular models.
Molecular dynamics and Monte Carlo simulations have been extensively used to gain insights into the formation and the material properties of biomolecular condensates. Although molecular models of different resolution have been employed, modelling efforts have mainly focused on coarse-grained models of intrinsically disordered proteins, wherein amino acid residues are represented by single interaction sites.
Compared to more detailed molecular descriptions, residue-level models provide high computational efficiency, which enables simulations to cover the long length and time scales required to study phase separation. Moreover, the resolution of these models is sufficiently detailed to capture the dependence on amino acid sequence of the properties of the system.
Several residue-level models of intrinsically disordered proteins have been developed in recent years. Their common features are (i) the absence of an explicit representation of solvent molecules and salt ions, (ii) a mean-field description of the electrostatic interactions between charged residues (see Debye–Hückel theory), and (iii) a set of "stickiness" parameters which quantify the strength of the attraction between pairs of amino acids. In the development of most residue-level models, the stickiness parameters have been derived from hydrophobicity scales or from a bioinformatic analysis of crystal structures of folded proteins. Further refinement of the parameters has been achieved through iterative procedures which maximize the agreement between model predictions and a set of experiments, or by leveraging data obtained from all-atom molecular dynamics simulations.
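As a rough illustration of the three ingredients listed above, the sketch below (not from the article; every numerical parameter is a placeholder rather than a value from any published model) evaluates a residue-pair energy as a stickiness-scaled short-range attraction plus a Debye–Hückel screened electrostatic term between charged residues, with the solvent treated implicitly.

```python
# Illustrative sketch only: pairwise energy of a residue-level coarse-grained model,
# combining a "stickiness"-scaled short-range term with Debye-Hückel screened
# electrostatics.  All parameters below are placeholders, not published values.
import math

STICKINESS = {("F", "F"): 1.0, ("F", "K"): 0.4, ("K", "K"): 0.1}   # assumed pair scales
CHARGE = {"K": +1.0, "R": +1.0, "D": -1.0, "E": -1.0}              # net charges (e)

def pair_energy(res_i, res_j, r, sigma=0.6, eps=0.8, kappa=1.0, l_b=0.7):
    """Energy (arbitrary units) of two residues a distance r apart (solvent implicit)."""
    lam = STICKINESS.get((res_i, res_j), STICKINESS.get((res_j, res_i), 0.2))
    # short-range interaction scaled by the stickiness parameter (Lennard-Jones form)
    lj = 4.0 * eps * lam * ((sigma / r) ** 12 - (sigma / r) ** 6)
    # screened Coulomb (Debye-Hückel) term between charged residues
    qi, qj = CHARGE.get(res_i, 0.0), CHARGE.get(res_j, 0.0)
    dh = l_b * qi * qj * math.exp(-kappa * r) / r
    return lj + dh

print(pair_energy("F", "K", r=0.8))
```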
Residue-level models of intrinsically disordered proteins have been validated by direct comparison with experimental data, and their predictions have been shown to be accurate across diverse amino acid sequences. Examples of experimental data used to validate the models are radii of gyration of isolated chains and saturation concentrations, which are threshold protein concentrations above which phase separation is observed.
Although intrinsically disordered proteins often play important roles in condensate formation, many biomolecular condensates contain multi-domain proteins constituted by folded domains connected by intrinsically disordered regions. Current residue-level models are only applicable to the study of condensates of intrinsically disordered proteins and nucleic acids. Including an accurate description of the folded domains in these models will considerably widen their applicability.
Mechanical analysis of biomolecular condensates.
To identify liquid-liquid phase separation and formation of condensate liquid droplets, one needs to demonstrate the liquid behaviors (viscoelasticity) of the condensates. Furthermore, mechanical processes are key to condensate related diseases, as pathological changes to condensates can lead to their solidification. Rheological methods are commonly used to demonstrate the liquid behavior of biomolecular condensates. These include active microrheological characterization by means of optical tweezers and scanning probe microscopy.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\eta \\sim 10"
}
] |
https://en.wikipedia.org/wiki?curid=56779034
|
5678057
|
Godunov's theorem
|
In numerical analysis and computational fluid dynamics, Godunov's theorem — also known as Godunov's order barrier theorem — is a mathematical theorem important in the development of the theory of high-resolution schemes for the numerical solution of partial differential equations.
The theorem states that:
<templatestyles src="Block indent/styles.css"/>"Linear numerical schemes for solving partial differential equations (PDE's), having the property of not generating new extrema (monotone scheme), can be at most first-order accurate."
Professor Sergei Godunov originally proved the theorem as a Ph.D. student at Moscow State University. It is his most influential work in the area of applied and numerical mathematics and has had a major impact on science and engineering, particularly in the development of methods used in computational fluid dynamics (CFD) and other computational fields. One of his major contributions was to prove the theorem (Godunov, 1954; Godunov, 1959), that bears his name.
The theorem.
We generally follow Wesseling (2001).
Aside
Assume a continuum problem described by a PDE is to be computed using a numerical scheme based upon a uniform computational grid and a one-step, constant step-size, "M" grid point, integration algorithm, either implicit or explicit. Then if formula_0 and formula_1, such a scheme can be described by
In other words, the solution formula_2 at time formula_3 and location formula_4 is a linear function of the solution at the previous time step formula_5. We assume that formula_6 determines formula_2 uniquely. Now, since the above equation represents a linear relationship between formula_7 and formula_8 we can perform a linear transformation to obtain the following equivalent form,
Theorem 1: "Monotonicity preserving"
The above scheme of equation (2) is monotonicity preserving if and only if
"Proof" - Godunov (1959)
Case 1: (sufficient condition)
Assume (3) applies and that formula_9 is monotonically increasing with formula_10.
Then, because formula_11, it follows that formula_12, because
This means that monotonicity is preserved for this case.
Case 2: (necessary condition)
We prove the necessary condition by contradiction. Assume that formula_13 for some formula_14 and choose the following monotonically increasing formula_15,
Then from equation (2) we get
Now choose formula_16, to give
which implies that formula_2 is NOT increasing, and we have a contradiction. Thus, monotonicity is NOT preserved for formula_17, which completes the proof.
Theorem 2: "Godunov’s Order Barrier Theorem"
Linear one-step second-order accurate numerical schemes for the convection equation
cannot be monotonicity preserving unless
where formula_18 is the signed Courant–Friedrichs–Lewy (CFL) number.
"Proof" - Godunov (1959)
Assume a numerical scheme of the form described by equation (2) and choose
The exact solution is
If we assume the scheme to be at least second-order accurate, it should produce the following solution exactly
Substituting into equation (2) gives:
Suppose that the scheme IS monotonicity preserving; then, according to Theorem 1 above, formula_19.
Now, it is clear from equation (15) that
Assume formula_20 and choose formula_10 such that formula_21. This implies that formula_22 and formula_23.
It therefore follows that,
which contradicts equation (16) and completes the proof.
The exceptional situation whereby formula_24 is only of theoretical interest, since this cannot be realised with variable coefficients. Also, integer CFL numbers greater than unity would not be feasible for practical problems.
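A small numerical illustration of the order barrier (not part of the original article): for linear advection of a step profile, the monotone first-order upwind scheme keeps the solution within its initial bounds, whereas the linear second-order Lax–Wendroff scheme generates new extrema near the discontinuity, exactly as the theorem predicts.

```python
# Illustration (not from the article): monotone first-order upwind vs. the linear
# second-order Lax-Wendroff scheme for advection of a step on a periodic grid.
import numpy as np

nx, sigma, nsteps = 200, 0.5, 60                       # cells, CFL number, time steps
u_up = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)     # step initial data
u_lw = u_up.copy()

for _ in range(nsteps):
    # first-order upwind: monotonicity preserving, first-order accurate
    u_up = u_up - sigma * (u_up - np.roll(u_up, 1))
    # Lax-Wendroff: linear, second-order accurate, not monotonicity preserving
    u_lw = (u_lw - 0.5 * sigma * (np.roll(u_lw, -1) - np.roll(u_lw, 1))
            + 0.5 * sigma**2 * (np.roll(u_lw, -1) - 2.0 * u_lw + np.roll(u_lw, 1)))

print("upwind range      :", u_up.min(), u_up.max())   # stays within [0, 1]
print("Lax-Wendroff range:", u_lw.min(), u_lw.max())   # over- and undershoots appear
```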
|
[
{
"math_id": 0,
"text": " x_{j} = j\\,\\Delta x "
},
{
"math_id": 1,
"text": "t^n = n\\,\\Delta t "
},
{
"math_id": 2,
"text": "\\varphi _j^{n + 1} "
},
{
"math_id": 3,
"text": "n + 1"
},
{
"math_id": 4,
"text": "j"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "\\beta _m "
},
{
"math_id": 7,
"text": " \\varphi _j^{n } "
},
{
"math_id": 8,
"text": " \\varphi _j^{n + 1} "
},
{
"math_id": 9,
"text": "\\varphi _j^n "
},
{
"math_id": 10,
"text": "j "
},
{
"math_id": 11,
"text": "\\varphi _j^n \\le \\varphi _{j + 1}^n \\le \\cdots \\le \\varphi _{j + m}^n "
},
{
"math_id": 12,
"text": "\\varphi _j^{n + 1} \\le \\varphi _{j + 1}^{n + 1} \\le \\cdots \\le \\varphi _{j + m}^{n + 1} "
},
{
"math_id": 13,
"text": "\\gamma _p^{} < 0 "
},
{
"math_id": 14,
"text": "p "
},
{
"math_id": 15,
"text": "\\varphi_j^n \\, "
},
{
"math_id": 16,
"text": " j = k - p "
},
{
"math_id": 17,
"text": "\\gamma _p < 0 "
},
{
"math_id": 18,
"text": " \\sigma "
},
{
"math_id": 19,
"text": "\\gamma _m \\ge 0 "
},
{
"math_id": 20,
"text": "\\sigma > 0, \\quad \\sigma \\notin \\mathbb{ N} "
},
{
"math_id": 21,
"text": " j > \\sigma > \\left( j - 1 \\right) "
},
{
"math_id": 22,
"text": "\\left( {j - \\sigma } \\right) > 0 "
},
{
"math_id": 23,
"text": "\\left( {j - \\sigma - 1} \\right) < 0 "
},
{
"math_id": 24,
"text": "\\sigma = \\left| c \\right|{{\\Delta t} \\over {\\Delta x}} \\in \\mathbb{N} "
}
] |
https://en.wikipedia.org/wiki?curid=5678057
|
5678101
|
Local time (mathematics)
|
In the mathematical theory of stochastic processes, local time is a stochastic process associated with semimartingale processes such as Brownian motion, that characterizes the amount of time a particle has spent at a given level. Local time appears in various stochastic integration formulas, such as Tanaka's formula, if the integrand is not sufficiently smooth. It is also studied in statistical mechanics in the context of random fields.
Formal definition.
For a continuous real-valued semimartingale formula_0, the local time of formula_1 at the point formula_2 is the stochastic process which is informally defined by
formula_3
where formula_4 is the Dirac delta function and formula_5 is the quadratic variation. It is a notion invented by Paul Lévy. The basic idea is that formula_6 is an (appropriately rescaled and time-parametrized) measure of how much time formula_7 has spent at formula_2 up to time formula_8. More rigorously, it may be written as the almost sure limit
formula_9
which may be shown to always exist. Note that in the special case of Brownian motion (or more generally a real-valued diffusion of the form formula_10 where formula_11 is a Brownian motion), the term formula_12 simply reduces to formula_13, which explains why it is called the local time of formula_1 at formula_2. For a discrete state-space process formula_14, the local time can be expressed more simply as
formula_15
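The ε-window limit above lends itself to a simple Monte Carlo check. The sketch below (not part of the article) simulates a discretized Brownian path, for which the quadratic-variation increment formula_12 reduces to formula_13, and approximates formula_6 by the rescaled occupation time of the window (x − ε, x + ε).

```python
# Monte Carlo sketch (illustrative only): approximate the local time of a simulated
# Brownian path at level x via the epsilon-window occupation time
#   L^x(t) ~ (1/(2*eps)) * time spent in (x - eps, x + eps) up to t.
import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 200_000
dt = t / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

def local_time(x, eps=0.01):
    """Occupation-time approximation of L^x(t) for the simulated path."""
    return np.sum(np.abs(B[:-1] - x) < eps) * dt / (2.0 * eps)

print(local_time(0.0))   # strictly positive for a level the path visits
print(local_time(2.5))   # typically near zero for a path run only up to t = 1
```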
Tanaka's formula.
Tanaka's formula also provides a definition of local time for an arbitrary continuous semimartingale formula_14 on formula_16
formula_17
A more general form was proven independently by Meyer and Wang; the formula extends Itô's lemma for twice differentiable functions to a more general class of functions. If formula_18 is absolutely continuous with derivative formula_19 which is of bounded variation, then
formula_20
where formula_21 is the left derivative.
If formula_22 is a Brownian motion, then for any formula_23 the field of local times formula_24 has a modification which is a.s. Hölder continuous in formula_25 with exponent formula_26, uniformly for bounded formula_2 and formula_8. In general, formula_27 has a modification that is a.s. continuous in formula_8 and càdlàg in formula_2.
Tanaka's formula provides the explicit Doob–Meyer decomposition for the one-dimensional reflecting Brownian motion, formula_28.
Ray–Knight theorems.
The field of local times formula_29 associated to a stochastic process on a space formula_30 is a well studied topic in the area of random fields. Ray–Knight type theorems relate the field "L""t" to an associated Gaussian process.
In general Ray–Knight type theorems of the first kind consider the field "L""t" at a hitting time of the underlying process, whilst theorems of the second kind are in terms of a stopping time at which the field of local times first exceeds a given value.
First Ray–Knight theorem.
Let ("B""t")"t" ≥ 0 be a one-dimensional Brownian motion started from "B"0 = "a" > 0, and ("W""t")"t"≥0 be a standard two-dimensional Brownian motion started from "W"0 = 0 ∈ R2. Define the stopping time at which "B" first hits the origin, formula_31. Ray and Knight (independently) showed that
where ("L""t")"t" ≥ 0 is the field of local times of ("B""t")"t" ≥ 0, and equality is in distribution on "C"[0, "a"]. The process |"W""x"|2 is known as the squared Bessel process.
Second Ray–Knight theorem.
Let ("B""t")t ≥ 0 be a standard one-dimensional Brownian motion "B"0 = 0 ∈ R, and let ("L""t")"t" ≥ 0 be the associated field of local times. Let "T""a" be the first time at which the local time at zero exceeds "a" > 0
formula_32
Let ("W""t")"t" ≥ 0 be an independent one-dimensional Brownian motion started from "W"0 = 0, then
Equivalently, the process formula_33 (which is a process in the spatial variable formula_2) is equal in distribution to the square of a 0-dimensional Bessel process started at formula_34, and as such is Markovian.
Generalized Ray–Knight theorems.
Results of Ray–Knight type for more general stochastic processes have been intensively studied, and analogue statements of both (1) and (2) are known for strongly symmetric Markov processes.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(B_s)_{s\\ge 0}"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "L^x(t) =\\int_0^t \\delta(x-B_s)\\,d[B]_s,"
},
{
"math_id": 4,
"text": "\\delta"
},
{
"math_id": 5,
"text": "[B]"
},
{
"math_id": 6,
"text": "L^x(t)"
},
{
"math_id": 7,
"text": "B_s"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": " L^x(t) =\\lim_{\\varepsilon\\downarrow 0} \\frac{1}{2\\varepsilon} \\int_0^t 1_{\\{ x- \\varepsilon < B_s < x+\\varepsilon \\}} \\, d[B]_s,"
},
{
"math_id": 10,
"text": " dB = b(t,B)\\,dt+ dW"
},
{
"math_id": 11,
"text": " W "
},
{
"math_id": 12,
"text": "d[B]_s"
},
{
"math_id": 13,
"text": "ds"
},
{
"math_id": 14,
"text": "(X_s)_{s\\ge 0}"
},
{
"math_id": 15,
"text": " L^x(t) =\\int_0^t 1_{\\{x\\}}(X_s) \\, ds."
},
{
"math_id": 16,
"text": " \\mathbb R: "
},
{
"math_id": 17,
"text": " L^x(t) = |X_t - x| - |X_0 - x| - \\int_0^t \\left( 1_{(0,\\infty)}(X_s - x) - 1_{(-\\infty, 0]}(X_s-x) \\right) \\, dX_s, \\qquad t \\geq 0. "
},
{
"math_id": 18,
"text": " F:\\mathbb R \\rightarrow \\mathbb R"
},
{
"math_id": 19,
"text": " F',"
},
{
"math_id": 20,
"text": " F(X_t) = F(X_0) + \\int_0^t F'_{-}(X_s) \\, dX_s + \\frac12 \\int_{-\\infty}^\\infty L^x(t) \\, dF'_{-}(x), "
},
{
"math_id": 21,
"text": " F'_{-}"
},
{
"math_id": 22,
"text": "X"
},
{
"math_id": 23,
"text": "\\alpha\\in(0,1/2)"
},
{
"math_id": 24,
"text": " L = (L^x(t))_{x \\in \\mathbb R, t \\geq 0}"
},
{
"math_id": 25,
"text": " x"
},
{
"math_id": 26,
"text": "\\alpha"
},
{
"math_id": 27,
"text": " L "
},
{
"math_id": 28,
"text": "(|B_s|)_{s \\geq 0}"
},
{
"math_id": 29,
"text": " L_t = (L^x_t)_{x \\in E}"
},
{
"math_id": 30,
"text": "E"
},
{
"math_id": 31,
"text": " T = \\inf\\{t \\geq 0 \\colon B_t = 0\\}"
},
{
"math_id": 32,
"text": " T_a = \\inf \\{ t \\geq 0 \\colon L^0_t > a \\}."
},
{
"math_id": 33,
"text": "(L^x_{T_a})_{x \\geq 0}"
},
{
"math_id": 34,
"text": " a "
}
] |
https://en.wikipedia.org/wiki?curid=5678101
|
5678338
|
Flux limiter
|
Flux limiters are used in high resolution schemes – numerical schemes used to solve problems in science and engineering, particularly fluid dynamics, described by partial differential equations (PDEs). They are employed in such schemes, for example the MUSCL scheme, to avoid the spurious oscillations (wiggles) that would otherwise occur with high order spatial discretization schemes due to shocks, discontinuities or sharp changes in the solution domain. Use of flux limiters, together with an appropriate high resolution scheme, makes the solutions total variation diminishing (TVD).
Note that flux limiters are also referred to as slope limiters because they both have the same mathematical form, and both have the effect of limiting the solution gradient near shocks or discontinuities. In general, the term flux limiter is used when the limiter acts on system "fluxes", and slope limiter is used when the limiter acts on system "states" (like pressure, velocity etc.).
How they work.
The main idea behind the construction of flux limiter schemes is to limit the spatial derivatives to realistic values – for scientific and engineering problems this usually means physically realisable and meaningful values. They are used in high resolution schemes for solving problems described by PDEs and only come into operation when sharp wave fronts are present. For smoothly changing waves, the flux limiters do not operate and the spatial derivatives can be represented by higher order approximations without introducing spurious oscillations. Consider the 1D semi-discrete scheme below,
formula_0
where formula_1 and formula_2 represent the edge fluxes for the "i"-th cell. If these edge fluxes can be represented by "low" and "high" resolution schemes, then a flux limiter can switch between these schemes depending upon the gradients close to the particular cell, as follows,
formula_3
formula_4
where
formula_5 represents the low-resolution flux,
formula_6 represents the high-resolution flux,
formula_7 is the flux limiter function, and
formula_8 represents the ratio of successive gradients on the solution mesh, i.e.,
formula_9
The limiter function is constrained to be greater than or equal to zero, i.e., formula_10. Therefore, when the limiter is equal to zero (sharp gradient, opposite slopes or zero gradient), the flux is represented by a "low resolution scheme". Similarly, when the limiter is equal to 1 (smooth solution), it is represented by a "high resolution scheme". The various limiters have differing switching characteristics and are selected according to the particular problem and solution scheme. No particular limiter has been found to work well for all problems, and a particular choice is usually made on a trial and error basis.
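A minimal sketch of the construction above (not part of the article), assuming 1-D linear advection with positive wave speed on a periodic grid, first-order upwind as the low-resolution flux, the Lax–Wendroff flux as the high-resolution flux, and the minmod limiter; any other admissible limiter function formula_7 could be substituted.

```python
# Minimal sketch (illustrative only) of the limited flux
#   F_{i+1/2} = f_low - phi(r_i) * (f_low - f_high)
# for 1-D linear advection with wave speed a > 0 on a periodic grid, using
# first-order upwind as the low-resolution flux, Lax-Wendroff as the
# high-resolution flux, and the minmod limiter.  These choices are assumptions
# for the demonstration, not requirements of the method.
import numpy as np

def minmod(r):
    return np.maximum(0.0, np.minimum(1.0, r))

def limited_flux(u, a, dt, dx):
    """Edge fluxes F_{i+1/2} (periodic grid, a > 0 assumed)."""
    du = np.roll(u, -1) - u                                      # u_{i+1} - u_i
    r = (u - np.roll(u, 1)) / np.where(du == 0.0, 1e-12, du)     # ratio of successive gradients
    f_low = a * u                                                # first-order upwind flux
    f_high = a * u + 0.5 * a * (1.0 - a * dt / dx) * du          # Lax-Wendroff flux
    return f_low - minmod(r) * (f_low - f_high)

def step(u, a, dt, dx):
    """One explicit Euler update of du_i/dt + (F_{i+1/2} - F_{i-1/2}) / dx = 0."""
    F = limited_flux(u, a, dt, dx)
    return u - dt / dx * (F - np.roll(F, 1))
```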
Limiter functions.
The following are common forms of flux/slope limiter function, formula_11:
Many of the above limiters exhibit the symmetry property formula_27. This is a desirable property, as it ensures that the limiting actions for forward and backward gradients operate in the same way.
Unless indicated to the contrary, the above limiter functions are second-order TVD. This means that they are designed to pass through a certain region of the solution, known as the TVD region, in order to guarantee stability of the scheme. Second-order TVD limiters satisfy at least the following criteria: formula_28, formula_29, formula_30, and formula_31.
The admissible limiter region for second-order TVD schemes is shown in the "Sweby Diagram" opposite, and plots showing limiter functions overlaid onto the TVD region are shown below. In this image, plots for the Osher and Sweby limiters have been generated using formula_32.
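For reference, a few of the limiter functions tabulated in the formula list below — superbee (formula_21), van Albada (formula_24) and van Leer (formula_26), identified here by their conventional names — can be written as one-line functions; this is an illustrative sketch rather than part of the article.

```python
# Illustrative one-line forms of three of the limiter functions given below.
def superbee(r):                       # formula_21
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

def van_albada(r):                     # formula_24
    return (r * r + r) / (r * r + 1.0)

def van_leer(r):                       # formula_26
    return (r + abs(r)) / (1.0 + abs(r))
```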
Generalised minmod limiter.
An additional limiter that has an interesting form is van Leer's one-parameter family of minmod limiters. It is defined as follows:
formula_33
Note: formula_34 is most dissipative for formula_35 when it reduces to formula_36 and is least dissipative for formula_37.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{d u_i}{d t} + \\frac{1}{\\Delta x_i} \\left[\nF \\left( u_{i + {1}/{2}} \\right) - F \\left( u_{i - {1}/{2}} \\right) \\right] = 0, "
},
{
"math_id": 1,
"text": " F \\left( u_{i + {1}/{2}} \\right) "
},
{
"math_id": 2,
"text": " F \\left( u_{i - 1/2} \\right) "
},
{
"math_id": 3,
"text": "F \\left( u_{i + 1/2} \\right) = f^\\text{low}_{i + 1/2} - \\phi\\left( r_i \\right) \\left( f^\\text{low}_{i + 1/2} - f^\\text{high}_{i + 1/2} \\right) ,"
},
{
"math_id": 4,
"text": "F \\left( u_{i - 1/2} \\right) = f^\\text{low}_{i - 1/2} - \\phi\\left( r_{i-1} \\right) \\left( f^\\text{low}_{i - 1/2} - f^\\text{high}_{i - 1/2} \\right) ,"
},
{
"math_id": 5,
"text": "f^\\text{low} "
},
{
"math_id": 6,
"text": "f^\\text{high} "
},
{
"math_id": 7,
"text": "\\phi\\ (r) "
},
{
"math_id": 8,
"text": " r "
},
{
"math_id": 9,
"text": " r_{i} = \\frac{u_{i} - u_{i-1}}{u_{i+1} - u_{i}} ."
},
{
"math_id": 10,
"text": "\\phi\\ (r) \\ge 0 "
},
{
"math_id": 11,
"text": " \\phi (r) "
},
{
"math_id": 12,
"text": "\n\\phi_{cm}(r) = \\begin{cases}\n\\frac{r\\left(3r+1\\right)}{\\left(r+1\\right)^{2}}, & r>0, & \\lim_{r\\to\\infty}\\phi_{cm}(r)=3 \\\\\n0 \\quad \\quad\\, , & r\\le 0\n\\end{cases}\n"
},
{
"math_id": 13,
"text": " \\phi_{hc}(r) = \\frac{ 1.5 \\left(r+\\left| r \\right| \\right)}{ \\left(r+2 \\right)} ; \\quad \\lim_{r \\to \\infty}\\phi_{hc}(r) = 3."
},
{
"math_id": 14,
"text": " \\phi_{hq}(r) = \\frac{2 \\left(r + \\left|r \\right| \\right)}{ \\left(r+3 \\right)} ; \\quad \\lim_{r \\to \\infty}\\phi_{hq}(r) = 4."
},
{
"math_id": 15,
"text": " \\phi_{kn}(r) = \\max \\left[ 0, \\min \\left(2 r, \\min \\left( \\dfrac{(1 + 2 r)}{3}, 2 \\right) \\right) \\right]; \\quad \\lim_{r \\to \\infty}\\phi_{kn}(r) = 2."
},
{
"math_id": 16,
"text": " \\phi_{mm} (r) = \\max \\left[ 0 , \\min \\left( 1 , r \\right) \\right] ; \\quad \\lim_{r \\to \\infty} \\phi_{mm}(r) = 1."
},
{
"math_id": 17,
"text": " \\phi_{mc} (r) = \\max \\left[ 0 , \\min \\left( 2 r, 0.5 (1+r), 2 \\right) \\right] ; \\quad \\lim_{r \\to \\infty}\\phi_{mc}(r) = 2."
},
{
"math_id": 18,
"text": " \\phi_{os} (r) = \\max \\left[ 0 , \\min \\left( r, \\beta \\right) \\right], \\quad \\left(1 \\leq \\beta \\leq 2 \\right) ; \\quad \\lim_{r \\to \\infty}\\phi_{os} (r) = \\beta."
},
{
"math_id": 19,
"text": " \\phi_{op} (r) = \\frac{1.5 \\left(r^2 + r \\right) }{\\left(r^2 + r +1 \\right)} ; \\quad \\lim_{r \\to \\infty}\\phi_{op} (r) = 1.5 \\, ."
},
{
"math_id": 20,
"text": " \\phi_{sm}(r) = \\max \\left[ 0, \\min \\left(2 r, \\left(0.25 + 0.75 r \\right), 4 \\right) \\right] ; \\quad \\lim_{r \\to \\infty}\\phi_{sm}(r) = 4."
},
{
"math_id": 21,
"text": " \\phi_{sb} (r) = \\max \\left[ 0, \\min \\left( 2 r , 1 \\right), \\min \\left( r, 2 \\right) \\right] ; \\quad \\lim_{r \\to \\infty}\\phi_{sb} (r) = 2."
},
{
"math_id": 22,
"text": " \\phi_{sw} (r) = \\max \\left[ 0 , \\min \\left( \\beta r, 1 \\right), \\min \\left( r, \\beta \\right) \\right], \\quad \\left(1 \\leq \\beta \\leq 2 \\right) ; \\quad \\lim_{r \\to \\infty}\\phi_{sw} (r) = \\beta."
},
{
"math_id": 23,
"text": " \\phi_{um}(r) = \\max \\left[ 0, \\min \\left(2 r, \\left(0.25 + 0.75 r \\right), \\left(0.75 + 0.25 r \\right), 2 \\right) \\right] ; \\quad \\lim_{r \\to \\infty}\\phi_{um}(r) = 2."
},
{
"math_id": 24,
"text": " \\phi_{va1} (r) = \\frac{r^2 + r}{r^2 + 1 } ; \\quad \\lim_{r \\to \\infty}\\phi_{va1} (r) = 1."
},
{
"math_id": 25,
"text": " \\phi_{va2} (r) = \\frac{2 r}{r^2 + 1} ; \\quad \\lim_{r \\to \\infty}\\phi_{va2} (r) = 0."
},
{
"math_id": 26,
"text": " \\phi_{vl} (r) = \\frac{r + \\left| r \\right| }{1 + \\left| r \\right| } ; \\quad \\lim_{r \\to \\infty}\\phi_{vl} (r) = 2."
},
{
"math_id": 27,
"text": "\\frac{ \\phi \\left( r \\right)}{r} = \\phi \\left( \\frac{1}{r} \\right) ."
},
{
"math_id": 28,
"text": " r \\le \\phi(r) \\le 2r, \\left( 0 \\le r \\le 1 \\right) \\ "
},
{
"math_id": 29,
"text": " 1 \\le \\phi(r) \\le r, \\left( 1 \\le r \\le 2 \\right) \\ "
},
{
"math_id": 30,
"text": " 1 \\le \\phi(r) \\le 2, \\left( r > 2 \\right) \\ "
},
{
"math_id": 31,
"text": " \\phi(1) = 1 \\ "
},
{
"math_id": 32,
"text": " \\beta = 1.5 "
},
{
"math_id": 33,
"text": " \\phi_{mg}(r,\\theta) = \\max\\left(0,\\min\\left(\\theta r,\\frac{1+r}{2},\\theta\\right)\\right),\\quad\\theta\\in\\left[1,2\\right]. "
},
{
"math_id": 34,
"text": " \\phi_{mg} "
},
{
"math_id": 35,
"text": " \\theta=1, "
},
{
"math_id": 36,
"text": " \\phi_{mm}, "
},
{
"math_id": 37,
"text": " \\theta = 2 "
}
] |
https://en.wikipedia.org/wiki?curid=5678338
|
567840
|
Solution set
|
Set of values which satisfy a given set of equations
In mathematics, the solution set of a set of equations and inequalities is the set of all its solutions, that is, the values that satisfy all the equations and inequalities.
If there is no solution, the solution set is the empty set.
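A small illustration (assuming the SymPy library is available): the solution set of a single equation, and the empty solution set of an inconsistent linear system such as the pair of equations formula_3 appearing in the formula list below.

```python
# Illustrative sketch, assuming SymPy: a finite solution set and an empty one.
from sympy import symbols, Eq, solveset, linsolve, S

x, y = symbols("x y", real=True)

print(solveset(Eq(x**2, 4), x, domain=S.Reals))               # {-2, 2}
print(linsolve([Eq(x + 2*y, 3), Eq(x + 2*y, -3)], [x, y]))    # EmptySet
```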
Remarks.
In algebraic geometry, solution sets are called algebraic sets if there are no inequalities. Over the reals, and with inequalities, they are called semialgebraic sets.
Other meanings.
More generally, the solution set to an arbitrary collection "E" of relations ("Ei") ("i" varying in some index set "I") for a collection of unknowns formula_5, supposed to take values in respective spaces formula_6, is the set "S" of all solutions to the relations "E", where a solution formula_7 is a family of values formula_8 such that substituting formula_9 by formula_7 in the collection "E" makes all relations "true".
The above meaning is a special case of this one, if the set of polynomials "fi" is interpreted as the set of equations "fi"("x")=0.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x=0"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "\\begin{cases} x + 2y = 3,&\\\\ x + 2y = -3 \\end{cases}"
},
{
"math_id": 4,
"text": "\\emptyset"
},
{
"math_id": 5,
"text": "{(x_j)}_{j\\in J}"
},
{
"math_id": 6,
"text": "{(X_j)}_{j\\in J}"
},
{
"math_id": 7,
"text": "x^{(k)}"
},
{
"math_id": 8,
"text": "{\\left( x^{(k)}_j \\right)}_{j\\in J}\\in \\prod_{j\\in J} X_j"
},
{
"math_id": 9,
"text": "{\\left(x_j\\right)}_{j\\in J}"
},
{
"math_id": 10,
"text": "(x,y)\\in \\R^2"
},
{
"math_id": 11,
"text": "x \\in \\R"
},
{
"math_id": 12,
"text": " E = \\{ \\sqrt x \\le 4 \\} "
},
{
"math_id": 13,
"text": "x\\in\\R"
},
{
"math_id": 14,
"text": "\\sqrt x"
},
{
"math_id": 15,
"text": " E = \\{ e^{i x} = 1 \\} "
},
{
"math_id": 16,
"text": "x\\in\\Complex"
}
] |
https://en.wikipedia.org/wiki?curid=567840
|
567883
|
Weakly harmonic function
|
In mathematics, a function formula_0 is weakly harmonic in a domain formula_1 if
formula_2
for all formula_3 with compact support in formula_1 and continuous second derivatives, where Δ is the Laplacian. This is the same notion as a weak derivative; however, a function can have a weak derivative and not be differentiable. In this case, we have the somewhat surprising result that a function is weakly harmonic if and only if it is harmonic. Thus weakly harmonic is actually equivalent to the seemingly stronger harmonic condition.
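A rough numerical check of the definition (illustrative only, not part of the article): for the harmonic function f(x, y) = x² − y² and a smooth bump function g supported in the unit disk, the discretized integral of f·Δg over a larger square vanishes up to rounding error.

```python
# Illustrative check: sum of f * (discrete Laplacian of g) * h^2 is ~ 0 for harmonic f
# and a compactly supported smooth test function g.
import numpy as np

n = 401
x = np.linspace(-2.0, 2.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

f = X**2 - Y**2                                                   # harmonic in the plane
r2 = X**2 + Y**2
g = np.where(r2 < 1.0, np.exp(-1.0 / np.maximum(1.0 - r2, 1e-12)), 0.0)   # bump in unit disk

lap_g = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
         np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4.0 * g) / h**2   # 5-point Laplacian

print(np.sum(f * lap_g) * h**2)   # approximately zero
```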
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "\\int_D f\\, \\Delta g = 0"
},
{
"math_id": 3,
"text": "g"
}
] |
https://en.wikipedia.org/wiki?curid=567883
|
5679329
|
Daya Bay Reactor Neutrino Experiment
|
Particle physics experiment studying neutrinos
The Daya Bay Reactor Neutrino Experiment is a China-based multinational particle physics project studying neutrinos, in particular neutrino oscillations. The multinational collaboration includes researchers from China, Chile, the United States, Taiwan (Republic of China), Russia, and the Czech Republic. The US side of the project is funded by the US Department of Energy's Office of High Energy Physics.
It is situated at Daya Bay, approximately 52 kilometers northeast of Hong Kong and 45 kilometers east of Shenzhen. There is an affiliated project in the Aberdeen Tunnel Underground Laboratory in Hong Kong. The Aberdeen lab measures the neutrons produced by cosmic muons which may affect the Daya Bay Reactor Neutrino Experiment.
The experiment consists of eight antineutrino detectors, clustered in three locations in the vicinity of six nuclear reactors. Each detector consists of 20 tons of liquid scintillator (linear alkylbenzene doped with gadolinium) surrounded by photomultiplier tubes and shielding.
A much larger follow-up is in development in the form of the Jiangmen Underground Neutrino Observatory (JUNO) in Kaiping, which will use an acrylic sphere filled with 20,000 tons of liquid scintillator to detect reactor antineutrinos. Groundbreaking began 10 January 2015, with operation expected in 2020.
Neutrino oscillations.
The experiment studies neutrino oscillations and is designed to measure the mixing angle "θ"13 using antineutrinos produced by the reactors of the Daya Bay Nuclear Power Plant and the Ling Ao Nuclear Power Plant. Scientists are also interested in whether neutrinos violate Charge-Parity conservation.
On 8 March 2012, the Daya Bay collaboration announced a 5.2σ discovery of "θ"13 ≠ 0, with
formula_0
This significant result represents a new type of oscillation and is surprisingly large. It is consistent with earlier, less significant results by T2K, MINOS and Double Chooz. With "θ"13 so large, NOνA has about a 50% probability of being sensitive to the neutrino mass hierarchy. Experiments may also be able to probe CP violation among neutrinos.
The collaboration produced an updated analysis of their results in 2014, which used the energy spectrum to improve the bounds on the mixing angle:
formula_1
An independent measurement was also published using events from neutrons captured on hydrogen:
formula_2.
Daya Bay has used its data to search for signals of a light sterile neutrino, resulting in the exclusion of some previously unexplored mass regions.
At the Moriond 2015 physics conference a new best fit for mixing angle and mass difference was presented:
formula_3
In results published on 21 April 2023, the Daya Bay collaboration reported the following precision measurements:
formula_4 for the normal mass ordering, or formula_5 for the inverted mass ordering. The collaboration suggests, "The reported formula_6 will likely remain the most precise measurement of formula_7 in the foreseeable future and be crucial to the investigation of the mass hierarchy and CP violation in neutrino oscillation."
Antineutrino spectrum.
The Daya Bay Collaboration measured the antineutrino energy spectrum and found that antineutrinos at an energy of around 5 MeV are in excess relative to theoretical expectations. This unexpected disagreement between observation and predictions suggested that the Standard Model of particle physics needs improvement.
|
[
{
"math_id": 0,
"text": " \\sin^2 (2\\ \\theta_{13}) = 0.092 \\pm 0.016 \\, \\mathrm{(stat)} \\pm 0.005\\, \\mathrm{(syst)}. "
},
{
"math_id": 1,
"text": " \\sin^2 (2\\ \\theta_{13}) = 0.090^{+0.008}_{-0.009} "
},
{
"math_id": 2,
"text": "\\sin^2 (2\\ \\theta_{13}) = 0.083 \\pm 0.018"
},
{
"math_id": 3,
"text": "\\sin^2(2\\ \\theta_{13}) = 0.084 \\pm 0.005, \\qquad |\\Delta m^2_{ee}| = 2.44^{+0.10}_{-0.11} \\times 10^{-3} {\\rm eV}^2"
},
{
"math_id": 4,
"text": "\\sin^2(2\\ \\theta_{13}) = 0.0851 \\pm 0.0024,\\qquad \\Delta m^2_{32} = (2.466 \\pm 0.060) \\times\n10^{-3} {\\rm eV}^2 "
},
{
"math_id": 5,
"text": " \\Delta m^2_{32} = -(2.571\\pm 0.060)\\times 10^{-3} {\\rm eV}^2"
},
{
"math_id": 6,
"text": "\\sin^2(2\\ \\theta_{13})"
},
{
"math_id": 7,
"text": "\\theta_{13}"
}
] |
https://en.wikipedia.org/wiki?curid=5679329
|
567946
|
Kelvin–Helmholtz instability
|
Phenomenon of fluid mechanics
The Kelvin–Helmholtz instability (after Lord Kelvin and Hermann von Helmholtz) is a fluid instability that occurs when there is velocity shear in a single continuous fluid or a velocity difference across the interface between two fluids. Kelvin-Helmholtz instabilities are visible in the atmospheres of planets and moons, such as in cloud formations on Earth or the Red Spot on Jupiter, and the atmospheres of the Sun and other stars.
Theory overview and mathematical concepts.
Fluid dynamics predicts the onset of instability and transition to turbulent flow within fluids of different densities moving at different speeds. If surface tension is ignored, two fluids in parallel motion with different velocities and densities yield an interface that is unstable to short-wavelength perturbations for all speeds. However, surface tension is able to stabilize the short wavelength instability up to a threshold velocity.
If the density and velocity vary continuously in space (with the lighter layers uppermost, so that the fluid is RT-stable), the dynamics of the Kelvin-Helmholtz instability is described by the Taylor–Goldstein equation:
formula_0 where formula_1 denotes the Brunt–Väisälä frequency, U is the horizontal parallel velocity, k is the wave number, c is the eigenvalue parameter of the problem, and formula_2 is the complex amplitude of the stream function. The onset of instability is governed by the Richardson number formula_3. Typically the layer is unstable for formula_4. These effects are common in cloud layers. The study of this instability is applicable in plasma physics, for example in inertial confinement fusion and the plasma–beryllium interface. In situations where there is static stability, evident from heavier fluid lying below lighter fluid, the Rayleigh–Taylor instability can be ignored, as the Kelvin–Helmholtz instability is sufficient given the conditions.
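A minimal sketch of the stability criterion just quoted (not from the article): the gradient Richardson number is evaluated here in its conventional form Ri = N²/(dU/dz)² with N² = −(g/ρ)·dρ/dz, which is equivalent to the scale-length form of formula_1; the numerical values are illustrative only.

```python
# Illustrative sketch: gradient Richardson number Ri = N^2 / (dU/dz)^2 with the
# instability criterion Ri < 0.25 quoted in the text.  Values below are placeholders.
import math

def brunt_vaisala(g, rho, drho_dz):
    """N = sqrt(-(g/rho) * drho/dz) for a stably stratified profile (drho/dz < 0)."""
    return math.sqrt(-g / rho * drho_dz)

def richardson(N, dU_dz):
    return N**2 / dU_dz**2

N = brunt_vaisala(g=9.81, rho=1.2, drho_dz=-1.0e-3)
Ri = richardson(N, dU_dz=0.2)
print(Ri, "unstable" if Ri < 0.25 else "stable")   # about 0.20 -> unstable
```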
Numerically, the Kelvin–Helmholtz instability is simulated in a temporal or a spatial approach. In the temporal approach, the flow is considered in a periodic (cyclic) box "moving" at mean speed (absolute instability). In the spatial approach, simulations mimic a lab experiment with natural inlet and outlet conditions (convective instability).
Discovery and history.
The existence of the Kelvin-Helmholtz instability was first discovered by German physiologist and physicist Hermann von Helmholtz in 1868. Helmholtz identified that "every perfect geometrically sharp edge by which a fluid flows must tear it asunder and establish a surface of separation". Following that work, in 1871, collaborator William Thomson (later Lord Kelvin) developed a mathematical solution of linear instability whilst attempting to model the formation of ocean wind waves.
Throughout the early 20th Century, the ideas of Kelvin-Helmholtz instabilities were applied to a range of stratified fluid applications. In the early 1920s, Lewis Fry Richardson developed the concept that such shear instability would only form where shear overcame static stability due to stratification, encapsulated in the Richardson Number.
Geophysical observations of the Kelvin-Helmholtz instability were made through the late 1960s and early 1970s, first for clouds and later for the ocean.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(U-c)^2\\left({d^2\\tilde\\phi \\over d z^2} - k^2\\tilde\\phi\\right) +\\left[N^2-(U-c){d^2 U \\over d z^2}\\right]\\tilde\\phi = 0,"
},
{
"math_id": 1,
"text": "N = \\sqrt{g / L_\\rho}"
},
{
"math_id": 2,
"text": "\\tilde\\phi"
},
{
"math_id": 3,
"text": "\\mathrm{Ri}"
},
{
"math_id": 4,
"text": "\\mathrm{Ri} < 0.25"
}
] |
https://en.wikipedia.org/wiki?curid=567946
|
56804318
|
Erdős–Tenenbaum–Ford constant
|
Mathematical constant
The Erdős–Tenenbaum–Ford constant is a mathematical constant that appears in number theory. Named after mathematicians Paul Erdős, Gérald Tenenbaum, and Kevin Ford, it is defined as
formula_0
where formula_1 is the natural logarithm.
Following up on earlier work by Tenenbaum, Ford used this constant in analyzing the number formula_2 of integers that are at most formula_3 and that have a divisor in the range formula_4.
Multiplication table problem.
For each positive integer formula_5, let formula_6 be the number of distinct integers in an formula_7 multiplication table. In 1960, Erdős studied the asymptotic behavior of formula_6 and proved that
formula_8
as formula_9.
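Both the constant and the multiplication table count can be checked numerically. The sketch below (Python) evaluates δ from its definition and computes M(N) by brute force for small N; the asymptotic exponent only becomes apparent for far larger N than are practical here:

import math

delta = 1 - (1 + math.log(math.log(2))) / math.log(2)
print(f"Erdos-Tenenbaum-Ford constant: {delta:.10f}")   # 0.0860713320...

def M(N):
    # Number of distinct entries in the N x N multiplication table.
    return len({i * j for i in range(1, N + 1) for j in range(1, N + 1)})

for N in (10, 100, 1000):
    print(N, M(N), f"M(N)/N^2 = {M(N) / N**2:.4f}")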
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\delta := 1 - \\frac{1 + \\log \\log 2}{\\log 2} = 0.0860713320\\dots"
},
{
"math_id": 1,
"text": "\\log"
},
{
"math_id": 2,
"text": "H(x,y,z)"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "[y,z]"
},
{
"math_id": 5,
"text": "N"
},
{
"math_id": 6,
"text": "M(N)"
},
{
"math_id": 7,
"text": "N \\times N"
},
{
"math_id": 8,
"text": "M(N) = \\frac{N^2}{(\\log N)^{\\delta + o(1)}},"
},
{
"math_id": 9,
"text": "N \\to +\\infty"
}
] |
https://en.wikipedia.org/wiki?curid=56804318
|
56811127
|
Cohn's theorem
|
In mathematics, Cohn's theorem states that an "n"th-degree self-inversive polynomial formula_0 has as many roots in the open unit disk formula_1 as the reciprocal polynomial of its derivative. Cohn's theorem is useful for studying the distribution of the roots of self-inversive and self-reciprocal polynomials in the complex plane.
An "n"th-degree polynomial,
formula_2
is called self-inversive if there exists a "fixed" complex number ( formula_3 ) of modulus 1 so that,
formula_4
where
formula_5
is the reciprocal polynomial associated with formula_0 and the bar denotes complex conjugation. Self-inversive polynomials have many interesting properties. For instance, their roots are all symmetric with respect to the unit circle, and a polynomial whose roots all lie on the unit circle is necessarily self-inversive. The coefficients of self-inversive polynomials satisfy the relations:
formula_6
In the case where formula_7 a "self-inversive polynomial" becomes a "complex-reciprocal polynomial" (also known as a "self-conjugate polynomial"). If its coefficients are real then it becomes a "real self-reciprocal polynomial".
The "formal derivative" of formula_0 is a ("n" − 1)th-degree polynomial given by
formula_8
Therefore, Cohn's theorem states that both formula_0 and the polynomial
formula_9
have the same number of roots in formula_10
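The statement can be verified numerically for particular polynomials. The sketch below (Python with NumPy; the example polynomial is an arbitrary real self-reciprocal choice, not taken from the literature) counts the roots of p and of the reciprocal of its derivative inside the unit disk:

import numpy as np

# Example self-inversive polynomial (real self-reciprocal, omega = 1),
# written highest degree first as NumPy expects: p(z) = z^4 + 3z^3 + z^2 + 3z + 1.
p = np.array([1, 3, 1, 3, 1], dtype=complex)
n = len(p) - 1

# q(z) = p'(z); np.polyder also uses highest-degree-first ordering.
q = np.polyder(p)

# Reciprocal of the derivative: q*(z) = z^(n-1) * conj(q)(1/conj(z)),
# i.e. the conjugated coefficients of q in reversed order.
q_star = np.conj(q)[::-1]

inside = lambda poly: np.sum(np.abs(np.roots(poly)) < 1)
print("roots of p  in |z| < 1:", inside(p))
print("roots of q* in |z| < 1:", inside(q_star))   # Cohn's theorem: the two counts agree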
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p(z)"
},
{
"math_id": 1,
"text": "D =\\{z \\in \\mathbb{C}: |z|<1\\}"
},
{
"math_id": 2,
"text": "p(z) = p_0 + p_1 z + \\cdots + p_n z^n "
},
{
"math_id": 3,
"text": "\\omega"
},
{
"math_id": 4,
"text": "p(z) = \\omega p^*(z),\\qquad \\left(|\\omega|=1\\right),"
},
{
"math_id": 5,
"text": "p^*(z)=z^n \\bar{p}\\left(1 / \\bar{z}\\right) =\\bar{p}_n + \\bar{p}_{n-1} z + \\cdots + \\bar{p}_0 z^n"
},
{
"math_id": 6,
"text": "p_k = \\omega \\bar{p}_{n-k}, \\qquad 0 \\leqslant k \\leqslant n. "
},
{
"math_id": 7,
"text": "\\omega = 1, "
},
{
"math_id": 8,
"text": "q(z) =p'(z) = p_1 + 2p_2 z + \\cdots + n p_n z^{n-1}. "
},
{
"math_id": 9,
"text": "q^*(z) =z^{n-1}\\bar{q}_{n-1}\\left(1 / \\bar{z}\\right) = z^{n-1} \\bar{p}' \\left(1 / \\bar{z}\\right) = n \\bar{p}_n + (n-1)\\bar{p}_{n-1} z + \\cdots + \\bar{p}_1 z^{n-1} "
},
{
"math_id": 10,
"text": "|z|<1."
}
] |
https://en.wikipedia.org/wiki?curid=56811127
|
5682069
|
XvYCC
|
xvYCC or extended-gamut YCbCr is a color space that can be used in the video electronics of television sets to support a gamut 1.8 times as large as that of the sRGB color space. xvYCC was proposed by Sony, specified by the IEC in October 2005 and published in January 2006 as IEC 61966-2-4. xvYCC extends the ITU-R BT.709 tone curve by defining over-ranged values.
xvYCC-encoded video retains the same color primaries and white point as BT.709, and uses either a BT.601 or BT.709 RGB-to-YCC conversion matrix and encoding. This allows it to travel through existing digital limited range YCC data paths, and any colors within the normal gamut will be compatible. It works by allowing negative RGB inputs and expanding the output chroma. These are used to encode more saturated colors by using a greater part of the RGB values that can be encoded in the YCbCr signal compared with those used in Broadcast Safe Level. The extra-gamut colors can then be displayed by a device whose underlying technology is not limited by the standard primaries.
In a paper published by the Society for Information Display in 2006, the authors mapped the 769 colors in the Munsell Color Cascade (the so-called Pointer's gamut) to the BT.709 space and to the xvYCC space. About 55% of the Munsell colors could be mapped to the sRGB gamut, but 100% of those colors map to within the xvYCC gamut. Deeper hues can be created – for example, a deeper cyan, by giving the opposing primary (red) a negative coefficient. The quantization range of the xvYCC601 and xvYCC709 colorimetry is always limited range.
Background.
Camera and display technology is evolving with more distinct primaries, spaced farther apart per the CIE chromaticity diagram. Displays with more separated primaries permit a larger gamut of displayable colors, however, color data needs to be available to make use of the larger gamut color space. xvYCC is an extended gamut color space that is backwards compatible with the existing BT.709 YCbCr broadcast signal by making use of otherwise unused data portions of the signal.
The BT.709 YCbCr signal has unused code space, a limitation imposed for broadcasting purposes. In particular only 16-240 is used for the color Cb/Cr channels out of the 0-255 digital values available for 8 bit data encoding. xvYCC makes use of this portion of the signal to store extended gamut color data by using code values 1-15 and 241-254 in the Cb/Cr channels for gamut-extension.
Definition.
xvYCC expands the chroma values to 1-254 (i.e. a raw value of -0.567–0.567) while keeping the luma (Y) value range at 16-235 (though Superwhite may be supported), the same as Rec. 709. First the OETF (Transfer Characteristics 11 per H.273 as originally specified by the first amendment to H.264) is expanded to allow negative R'G'B' inputs such that:
formula_0
Here the constant 1.099 has the value 1 + 5.5β = 1.099296826809442... and β has the value 0.018053968510807..., while 0.099 is 1.099 − 1.
The YCC encoding matrix is unchanged, and can follow either Rec. 709 or Rec. 601 (Matrix Coefficients 1 and 5).
The possible range for non-linear R'G'B'601 is between −1.0732 and 2.0835, and for R'G'B'709 between −1.1206 and 2.1305. Those extremes are reached in the B' component when the YCC values are "1, 1, any" and "254, 254, any" respectively.
xvYCC709 covers 37.19% of CIE 1976 u'v', while BT.709 only 33.24%.
The last step encodes the values to a binary number (quantization). It is basically unchanged, except that a bit-depth "n" of more than 8 bits can be selected:
formula_1
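A sketch of these two encoding steps in Python follows; the function and variable names are illustrative only, and the constants are the exact values quoted above for the rounded 1.099, 0.099 and 0.018 that appear in the transfer function:

import numpy as np

BETA = 0.018053968510807
ALPHA = 1 + 5.5 * BETA          # 1.099296826809442...

def xvycc_oetf(L):
    # Extended BT.709 transfer function allowing negative linear inputs.
    L = np.asarray(L, dtype=float)
    mid = np.abs(L) < BETA
    out = np.sign(L) * (ALPHA * np.abs(L) ** 0.45 - (ALPHA - 1))
    return np.where(mid, 4.5 * L, out)

def quantize(Y, Cb, Cr, n=8):
    # Quantize normalised Y'CbCr to n-bit limited-range code values.
    s = 2 ** (n - 8)
    return (np.rint(s * (219 * Y + 16)),
            np.rint(s * (224 * Cb + 128)),
            np.rint(s * (224 * Cr + 128)))

# Example: a slightly negative linear value maps to a negative code value,
# e.g. xvycc_oetf(-0.05) is about -0.19, instead of being clipped to 0.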
Example.
With negative primary amounts allowed, a cyan that lies outside the basic gamut of the primaries can be encoded as "green plus blue minus red". Since the 16–255 Y range is used (the value 255 is reserved in the HDMI standard for synchronization but may appear in files) and since the values of Cb and Cr are only slightly restricted, many highly saturated colors outside the 0–255 RGB space can be encoded. For example, if YCbCr is 255, 128, 128, in the case of a full-level YCbCr encoding (0–255), then the corresponding R'G'B' is 255, 255, 255, which is the maximum encodable luminance value in this color space. But if Y = 255 and Cr and/or Cb are not 128, this codes for the maximum luminance with an added color: one primary must necessarily be above 255 and cannot be converted to R'G'B'. Adapted software and hardware must be used during production to avoid clipping the video data levels that lie above the sRGB space; this is almost never the case for software working with an RGB core.
A more complex example is the YCbCr BT.709 triple 139, 151, 24 (that is, RGB −21, 182, 181). That is out of gamut for BT.709, but not for sYCC or xvYCC709; to convert those values to the display gamut, you would convert to XYZ (0.27018, 0.40327, 0.54109) and then to the display gamut.
The XYZ matrix is as specified in Nvidia docs.
Adoption.
A mechanism for signaling xvYCC support and transmitting the gamut boundary definition for xvYCC has been defined in the HDMI 1.3 Specification. No new mechanism is required for transmitting the xvYCC data itself, as it is compatible with HDMI's existing YCbCr formats, but the display needs to signal its readiness to accept the extra-gamut xvYCC values (in Colorimetry block of EDID, flags xvYCC709 and xvYCC601), and the source needs to signal the actual gamut in use in AVI InfoFrame and use gamut metadata packets to help the display to intelligently adapt extreme colors to its own gamut limitations.
This should not be confused with HDMI 1.3's other new color feature, deep color. This is a separate feature that increases the precision of brightness and color information, and is independent of xvYCC.
xvYCC is not supported by DVD-Video but is supported by the high-definition recording format AVCHD and PlayStation 3 and Blu-ray. It is also supported by some cameras, like Sony HDR-CX405, that does actually tag the video as xvYCC with BT.709 inside Sony's XAVC.
History.
On January 7, 2013, Sony announced that it would release "Mastered in 4K" Blu-ray Disc titles which are sourced at 4K and encoded at 1080p. "Mastered in 4K" 1080p Blu-ray Disc titles can be played on existing Blu-ray Disc players and will support a larger color space using xvYCC.
On May 30, 2013, Eye IO announced that their encoding technology was licensed by Sony Pictures Entertainment to deliver 4K Ultra HD video with their "Sony 4K Video Unlimited Service". Eye IO encodes their video assets at 3840 x 2160 and includes support for the xvYCC color space.
Hardware support.
The following graphics hardware support xvYCC color space when connected to a display device supporting xvYCC:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V=\\begin{cases}\n-1.099 (-L)^{0.45} + 0.099 & L \\le -0.018\\\\\n4.500L & -0.018 < L < 0.018\\\\\n1.099 L^{0.45} - 0.099 & L \\ge 0.018\n\\end{cases}\n"
},
{
"math_id": 1,
"text": "\n\\begin{align}\nY_{{\\rm xv}\\ n} &= \\left\\lfloor2^{n-8}(219\\times Y+16)\\right\\rceil\\\\\nCb_{{\\rm xv}\\ n} &= \\left\\lfloor2^{n-8}(224\\times Cb+128)\\right\\rceil\\\\\nCr_{{\\rm xv}\\ n} &= \\left\\lfloor2^{n-8}(224\\times Cr+128)\\right\\rceil\\\\\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=5682069
|
56825635
|
Low thrust relative orbital transfer
|
In orbital mechanics, low-thrust relative transfer is an orbital maneuver in which a chaser spacecraft covers a specified distance relative to a target spacecraft using a continuous low-thrust propulsion system with a specific impulse of the order of 4000–8000 s. This is in contrast to conventional impulsive orbital transfers, which use thermal rocket engines with specific impulses of the order of 300–400 s. This type of transfer relies on low-thrust propulsion systems such as electrically powered spacecraft propulsion and solar sails.
Low-thrust relative transfer uses the orbital relative motion equations, a non-linear set of equations that describes the motion of the chaser spacecraft relative to the target in terms of displacements along the axes of an accelerated frame of reference fixed on the target spacecraft. In 1960, W. H. Clohessy and R. S. Wiltshire published the Clohessy–Wiltshire equations, which present a simplified model of orbital relative motion in which the target is in a circular orbit and the chaser spacecraft is in an elliptical or circular orbit. Since the quantity of available thrust is limited, the transfer is often posed as an optimal control problem subject to the required objective and constraints.
Explanation.
Relative motion in orbit means the motion of a spacecraft orbiting a planet relative to another spacecraft orbiting the same planet. There can be one primary spacecraft, known as the target, and another spacecraft tasked with performing the required maneuver relative to the target. Depending on the mission requirements, relative orbital transfers include rendezvous and docking operations and station-keeping relative to the target. Unlike an impulsive transfer, which uses a thrust impulse to change the velocity of the spacecraft instantaneously, a non-impulsive transfer applies thrust continuously, so that the spacecraft changes its direction gradually. Non-impulsive transfers rely on low-thrust propulsion. Notable low-thrust propulsion methods include ion propulsion, Hall-effect thrusters and solar-sail systems. The electrostatic ion thruster uses high-voltage electrodes to accelerate ions with electrostatic forces, and achieves a specific impulse in the range of 4000–8000 s.
Mathematical models.
The continuous low-thrust relative transfer can be described mathematically by adding components of specific thrust, which act as the control input, to the equations of motion for relative orbital transfer. Although a number of linearized models giving simplified sets of equations have been developed since the 1960s, one popular model was developed by W. H. Clohessy and R. S. Wiltshire; modified to account for continuous thrust, it can be written as:
formula_0
formula_1
formula_2
where:
formula_3, formula_4 and formula_5 are the components of the relative position of the chaser in the target-fixed frame, formula_6 and formula_7 are the components of the specific thrust (the control input) along the corresponding axes, and formula_8 is the mean motion of the target's orbit.
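As an illustration, the sketch below (Python with SciPy; the mean motion, thrust level and initial state are arbitrary assumed values) integrates these equations for a chaser under a small constant along-track thrust:

import numpy as np
from scipy.integrate import solve_ivp

n = 0.00113                        # mean motion of the target orbit, rad/s (assumed, roughly LEO)
u = np.array([0.0, 1e-4, 0.0])     # constant specific thrust [u_x, u_y, u_z], m/s^2 (assumed)

def cw_dynamics(t, s):
    x, vx, y, vy, z, vz = s
    ax = 3 * n**2 * x + 2 * n * vy + u[0]
    ay = -2 * n * vx + u[1]
    az = -n**2 * z + u[2]
    return [vx, ax, vy, ay, vz, az]

s0 = [100.0, 0.0, -500.0, 0.0, 0.0, 0.0]   # initial relative state [x, x', y, y', z, z'] in m and m/s
sol = solve_ivp(cw_dynamics, (0.0, 3000.0), s0, max_step=10.0)
print("final relative position (m):", sol.y[0, -1], sol.y[2, -1], sol.y[4, -1])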
Optimal relative transfers.
Since the thrust available in continuous low-thrust transfers is limited, such transfers are usually subject to a performance index and final-state constraints, posing the transfer as an optimal control problem with defined boundary conditions. For the transfer to minimise the control-input expenditure, the problem can be written as:
formula_9
subjected to dynamics of the relative transfer:
formula_10
and boundary conditions:
formula_11
formula_12
where:
formula_13 is the relative state vector, formula_14; formula_15 is the control vector, formula_16; formula_17 is a positive-definite control-weighting matrix; formula_18 is the state matrix of the relative dynamics, given by formula_19; formula_20 is the input matrix, given by formula_21; formula_22 and formula_23 are the initial and final times of the transfer; and formula_24 and formula_25 are the prescribed initial and final relative states.
Sometimes it is also useful to subject the system to control constraints, because in continuous low-thrust transfer there are always bounds on the available thrust. Hence, if the maximum quantity of thrust available is formula_26, an additional inequality constraint can be imposed on the optimal control problem posed above:
formula_27
Additionally, if the relative transfer is occurring such that the chaser and the target spacecraft are very close to each other, the collision-avoidance constraints can also be employed in the optimal control problem in the form of a minimum relative distance, formula_28 as:
formula_29
and, for obvious reasons, the norm of the final state vector cannot be less than formula_28.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\ddot{x} = 3n^2x+ 2n\\dot{y} + u_x\n\n"
},
{
"math_id": 1,
"text": "\\ddot{y} = -2n\\dot{x}+u_y"
},
{
"math_id": 2,
"text": "\\ddot{z}=-n^2z+u_z"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "y"
},
{
"math_id": 5,
"text": "z"
},
{
"math_id": 6,
"text": "u_x, u_y "
},
{
"math_id": 7,
"text": "u_z"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "J = \\frac{1}{2}\\int_{t_0}^{t_f}(\\vec{u}^T \\cdot R\\cdot \\vec{u}) dt "
},
{
"math_id": 10,
"text": "\\dot{\\vec{x}} = A\\vec{x} + B \\vec{u}"
},
{
"math_id": 11,
"text": "\\vec{x}(t_0) = \\vec{x}_0"
},
{
"math_id": 12,
"text": "\\vec{x}(t_f)=\\vec{x}_f"
},
{
"math_id": 13,
"text": "\\vec{x}"
},
{
"math_id": 14,
"text": "\\vec{x} = \n\\begin{bmatrix}\nx & \\dot{x} & y & \\dot{y} & z & \\dot{z}\n\\end{bmatrix}^T"
},
{
"math_id": 15,
"text": "\\vec{u}"
},
{
"math_id": 16,
"text": "\\vec{u} = \n\\begin{bmatrix}\nu_x & u_y & u_z\n\\end{bmatrix}^T"
},
{
"math_id": 17,
"text": "R"
},
{
"math_id": 18,
"text": "A"
},
{
"math_id": 19,
"text": "A = \n\\begin{bmatrix}\n0 & 1 & 0 & 0 & 0 & 0 \\\\\n3n^2 & 0 & 0 & 2n & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 & 0 \\\\\n0 & -2n & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 1 \\\\\n0 & 0 & 0 & 0 & -n^2 & 0 \\\\\n\\end{bmatrix}"
},
{
"math_id": 20,
"text": "B"
},
{
"math_id": 21,
"text": "B = \n\\begin{bmatrix}\n0&0&0\\\\\n1&0&0\\\\\n0&0&0\\\\\n0&1&0\\\\\n0&0&0\\\\\n0&0&1\\\\\n\\end{bmatrix}"
},
{
"math_id": 22,
"text": "t_0"
},
{
"math_id": 23,
"text": "t_f"
},
{
"math_id": 24,
"text": "\\vec{x}_0"
},
{
"math_id": 25,
"text": "\\vec{x}_f"
},
{
"math_id": 26,
"text": "u_{max}"
},
{
"math_id": 27,
"text": "||\\vec{u}(t)||\\leq u_{max}"
},
{
"math_id": 28,
"text": "r_{min}"
},
{
"math_id": 29,
"text": "||\\vec{x}(t)||\\geq r_{min}"
}
] |
https://en.wikipedia.org/wiki?curid=56825635
|
56835844
|
Cryptographic multilinear map
|
A cryptographic formula_0-multilinear map is a kind of multilinear map, that is, a function formula_1 such that for any integers formula_2 and elements formula_3, formula_4, and which in addition is efficiently computable and satisfies some security properties. It has several applications in cryptography, such as key exchange protocols, identity-based encryption, and broadcast encryption. There exist constructions of cryptographic 2-multilinear maps, known as bilinear maps; however, the problem of constructing such multilinear maps for formula_5 appears to be much more difficult, and the security of the proposed candidates is still unclear.
Definition.
For "n" = 2.
In this case, multilinear maps are mostly known as bilinear maps or pairings, and they are usually defined as follows: Let formula_6 be two additive cyclic groups of prime order formula_7, and formula_8 another cyclic group of order formula_7 written multiplicatively. A pairing is a map formula_9 which satisfies the following properties:
Bilinearity: formula_10;
Non-degeneracy: if formula_11 and formula_12 are generators of formula_13 and formula_14 respectively, then formula_15 is a generator of formula_8;
Computability: there exists an efficient algorithm to compute formula_16.
In addition, for security purposes, the discrete logarithm problem is required to be hard in both formula_13 and formula_14.
General case (for any "n").
We say that a map formula_1 is a formula_0-multilinear map if it satisfies the following properties:
All the groups formula_17, for formula_18, and formula_8 are cyclic groups of the same prime order;
Multilinearity: for all formula_19 and all formula_20, it holds that formula_4;
Computability: the map formula_16 can be computed efficiently;
Non-degeneracy: if formula_21 are generators of the respective groups, then formula_23 is a generator of formula_8.
In addition, for security purposes, the discrete logarithm problem is required to be hard in formula_22.
Candidates.
All the candidate multilinear maps are actually slight generalizations of multilinear maps known as graded-encoding systems, since they allow the map formula_16 to be applied partially: instead of being applied to all the formula_0 values at once, which would produce a value in the target set formula_8, it is possible to apply formula_16 to only some of the values, which generates values in intermediate target sets. For example, for formula_24, it is possible to do formula_25 and then formula_26.
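A toy and completely insecure example may help to illustrate multilinearity and partial application: take every group, including the target group, to be the additive group of integers modulo a prime q, and let the map simply multiply its inputs modulo q. The discrete logarithm in this group is trivial, so the map has no cryptographic value, but it does satisfy the algebraic definition above. A sketch in Python, with an arbitrarily chosen modulus, generators and exponents:

from math import prod
from functools import reduce

q = 2**127 - 1                      # a prime modulus (assumed example)

def e(*xs):                         # "pairing": just multiply the inputs mod q
    return reduce(lambda a, b: a * b % q, xs)

g = [3, 7, 11]                      # generators of the additive groups Z_q (any nonzero elements)
a = [123456, 987654, 555555]        # exponents; "g_i^{a_i}" means a_i * g_i mod q here

lhs = e(*[ai * gi % q for ai, gi in zip(a, g)])
rhs = e(*g) * prod(a) % q           # e(g_1,...,g_n) "raised" to prod(a_i) in the additive target group
assert lhs == rhs                   # multilinearity

# Graded / partial application: combine two inputs first, then the third.
y = e(a[1] * g[1] % q, a[2] * g[2] % q)
assert e(a[0] * g[0] % q, y) == lhs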
The three main candidates are GGH13, which is based on ideals of polynomial rings; CLT13, which is based on the approximate GCD problem and works over the integers, and hence is supposed to be easier to understand than the GGH13 multilinear map; and GGH15, which is based on graphs.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "e:G_1\\times \\cdots \\times G_n \\rightarrow G_T"
},
{
"math_id": 2,
"text": " a_1, \\ldots, a_n "
},
{
"math_id": 3,
"text": " g_i \\in G_i "
},
{
"math_id": 4,
"text": "e(g_1^{a_1},\\ldots,g_n^{a_n})=e(g_1,\\ldots,g_n)^{\\prod_{i=1}^n a_i}"
},
{
"math_id": 5,
"text": "n > 2"
},
{
"math_id": 6,
"text": "G_1, G_2"
},
{
"math_id": 7,
"text": "q"
},
{
"math_id": 8,
"text": "G_T"
},
{
"math_id": 9,
"text": " e: G_1 \\times G_2 \\rightarrow G_T "
},
{
"math_id": 10,
"text": " \\forall a,b \\in F_q^*,\\ \\forall P\\in G_1, Q\\in G_2:\\ e(a P, b Q) = e(P,Q)^{ab}"
},
{
"math_id": 11,
"text": "g_1"
},
{
"math_id": 12,
"text": "g_2"
},
{
"math_id": 13,
"text": "G_1"
},
{
"math_id": 14,
"text": "G_2"
},
{
"math_id": 15,
"text": "e(g_1, g_2)"
},
{
"math_id": 16,
"text": "e"
},
{
"math_id": 17,
"text": "G_i"
},
{
"math_id": 18,
"text": "1 \\le i \\le n"
},
{
"math_id": 19,
"text": "a_1, \\ldots, a_n \\in \\mathbb{Z}"
},
{
"math_id": 20,
"text": "(g_1, \\ldots, g_n) \\in G_1 \\times \\cdots \\times G_n"
},
{
"math_id": 21,
"text": "g_1, \\ldots, g_n"
},
{
"math_id": 22,
"text": "G_1, \\ldots, G_n"
},
{
"math_id": 23,
"text": "e(g_1, \\ldots, g_n)"
},
{
"math_id": 24,
"text": "n = 3"
},
{
"math_id": 25,
"text": "y = e(g_2, g_3) \\in G_{T_2}"
},
{
"math_id": 26,
"text": "e(g_1, y) \\in G_T"
}
] |
https://en.wikipedia.org/wiki?curid=56835844
|
56848862
|
Dispersive flies optimisation
|
Dispersive flies optimisation (DFO) is a bare-bones swarm intelligence algorithm which is inspired by the swarming behaviour of flies hovering over food sources. DFO is a simple optimiser which works by iteratively trying to improve a candidate solution with regard to a numerical measure that is calculated by a fitness function. Each member of the population, a fly or an agent, holds a candidate solution whose suitability can be evaluated by their fitness value. Optimisation problems are often formulated as either minimisation or maximisation problems.
DFO was introduced with the intention of analysing a simplified swarm intelligence algorithm with the fewest tunable parameters and components. In the first work on DFO, the algorithm was compared against a few other existing swarm intelligence techniques using error, efficiency and diversity measures. It was shown that, despite the simplicity of the algorithm, which only uses agents' position vectors at time "t" to generate the position vectors for time "t" + 1, it exhibits competitive performance. Since its inception, DFO has been used in a variety of applications including medical imaging and image analysis as well as data mining and machine learning.
Algorithm.
DFO bears many similarities with other existing continuous, population-based optimisers (e.g. particle swarm optimization and differential evolution). In that, the swarming behaviour of the individuals consists of two tightly connected mechanisms, one is the formation of the swarm and the other is its breaking or weakening. DFO works by facilitating the information exchange between the members of the population (the swarming flies). Each fly formula_0 represents a position in a "d"-dimensional search space: formula_1, and the fitness of each fly is calculated by the fitness function formula_2, which takes into account the flies' "d" dimensions: formula_3.
The pseudocode below represents one iteration of the algorithm:
for i = 1 : N flies
formula_4
end for i
formula_5 = arg min formula_6
for i = 1 : N and formula_7
for d = 1 : D dimensions
if formula_8
formula_9
else
formula_10
end if
end for d
end for i
In the algorithm above, formula_11 represents fly formula_12 at dimension formula_13 and time formula_14; formula_15 represents the best neighbouring fly of formula_16 in a ring topology (left or right, using the flies' indices) at dimension formula_13 and time formula_17; and formula_18 is the swarm's best fly. With this update equation, the swarm's population update depends on each fly's best neighbour, which is used as the focus formula_19, while the difference between the current fly and the best fly in the swarm represents the spread of movement, formula_20.
Other than the population size formula_21, the only tunable parameter is the disturbance threshold formula_22, which controls the dimension-wise restart in each fly vector. This mechanism is proposed to control the diversity of the swarm.
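A compact Python sketch of the algorithm applied to the sphere function follows; the fitness function, bounds and parameter values are arbitrary illustrative choices rather than part of the original formulation:

import numpy as np

def dfo(fitness, lower, upper, N=20, D=10, delta=0.001, max_iter=1000, seed=None):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lower, upper, size=(N, D))           # initialise the flies
    for _ in range(max_iter):
        f = np.apply_along_axis(fitness, 1, X)
        s = int(np.argmin(f))                             # swarm's best fly
        Xnew = X.copy()
        for i in range(N):
            if i == s:
                continue
            left, right = (i - 1) % N, (i + 1) % N         # ring-topology neighbours
            nb = left if f[left] < f[right] else right     # best neighbouring fly
            for d in range(D):
                if rng.random() < delta:                   # disturbance: dimension-wise restart
                    Xnew[i, d] = rng.uniform(lower, upper)
                else:
                    Xnew[i, d] = X[nb, d] + rng.random() * (X[s, d] - X[i, d])
        X = Xnew
    best = X[np.argmin(np.apply_along_axis(fitness, 1, X))]
    return best, fitness(best)

sphere = lambda x: float(np.sum(x**2))
best, val = dfo(sphere, -5.12, 5.12)
print("best fitness:", val)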
Another notable minimalist swarm algorithm is the bare bones particle swarm (BB-PSO), which is based on particle swarm optimisation, along with bare bones differential evolution (BBDE), a hybrid of the bare bones particle swarm optimiser and differential evolution that aims to reduce the number of parameters. Alhakbani, in her PhD thesis, covers many aspects of the algorithm, including several DFO applications in feature selection as well as parameter tuning.
Applications.
Some of the recent applications of DFO are listed below:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{x}"
},
{
"math_id": 1,
"text": " \\mathbf{x} = (x_1,x_2,\\ldots,x_d)"
},
{
"math_id": 2,
"text": "f(\\mathbf{x})"
},
{
"math_id": 3,
"text": "f(\\mathbf{x}) = f(x_1,x_2,\\ldots,x_d) "
},
{
"math_id": 4,
"text": " \\mathbf{x_i}.\\text{fitness} = f(\\mathbf{x}_i) "
},
{
"math_id": 5,
"text": " \\mathbf{x}_s "
},
{
"math_id": 6,
"text": " [f(\\mathbf{x}_i)], \\; i \\in \\{1,\\ldots,N\\} "
},
{
"math_id": 7,
"text": " i \\ne s "
},
{
"math_id": 8,
"text": "U(0,1)<\\Delta "
},
{
"math_id": 9,
"text": "x_{id}^{t+1} = U(x_{\\min,d}, x_{\\max,d})"
},
{
"math_id": 10,
"text": "x_{id}^{t+1} = x_{i_{nd}}^t + U(0,1)( x_{sd}^t - x_{id}^t ) "
},
{
"math_id": 11,
"text": " x_{id}^{t+1} "
},
{
"math_id": 12,
"text": " i "
},
{
"math_id": 13,
"text": " d "
},
{
"math_id": 14,
"text": " t+1 "
},
{
"math_id": 15,
"text": " x_{i_{nd}}^t "
},
{
"math_id": 16,
"text": " x_i "
},
{
"math_id": 17,
"text": " t "
},
{
"math_id": 18,
"text": " x_{sd}^t "
},
{
"math_id": 19,
"text": " \\mu "
},
{
"math_id": 20,
"text": " \\sigma "
},
{
"math_id": 21,
"text": " N "
},
{
"math_id": 22,
"text": " \\Delta "
}
] |
https://en.wikipedia.org/wiki?curid=56848862
|
56858642
|
Slider-crank linkage
|
Mechanism for converting rotary motion into linear motion
A slider-crank linkage is a four-link mechanism with three revolute joints and one prismatic (sliding) joint. The rotation of the crank drives the linear movement of the slider, or the expansion of gases against a sliding piston in a cylinder can drive the rotation of the crank.
There are two types of slider-cranks: in-line and offset.
There are also two methods to design each type: graphical and analytical.
In-line kinematics.
The displacement of the end of the connecting rod is approximately proportional to the cosine of the angle of rotation of the crank, when it is measured from top dead center (TDC). So the reciprocating motion created by a steadily rotating crank and connecting rod is approximately simple harmonic motion:
formula_0
where "x" is the distance of the end of the connecting rod from the crank axle, "l" is the length of the connecting rod, "r" is the length of the crank, and "α" is the angle of the crank measured from top dead center (TDC). Technically, the reciprocating motion of the connecting rod departs from sinusoidal motion due to the changing angle of the connecting rod during the cycle, the correct motion, given by the Piston motion equations is:
formula_1
As long as the connecting rod is much longer than the crank formula_2 the difference is negligible. This difference becomes significant in high-speed engines, which may need balance shafts to reduce the vibration due to this "secondary imbalance".
The mechanical advantage of a crank, the ratio between the force on the connecting rod and the torque on the shaft, varies throughout the crank's cycle. The relationship between the two is approximately:
formula_3
where formula_4 is the torque and "F" is the force on the connecting rod. In reality, however, the torque is maximum at a crank angle of less than "α" = 90° from TDC for a given force on the piston. One way to calculate this angle is to find out when the connecting rod's small end (piston) speed becomes the fastest in the downward direction given a steady crank rotational velocity. Piston speed x' is expressed as:
formula_5
For example, for rod length 6" and crank radius 2", numerically solving the above equation finds the velocity minima (maximum downward speed) to be at crank angle of 73.17615° after TDC. Then, using the triangle sine law, it is found that the crank to connecting rod angle is 88.21738° and the connecting rod angle is 18.60647° from vertical (see Piston motion equations#Example).
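The numerical example above can be reproduced with a short root-finding script; the sketch below (Python with SciPy) uses the rod length and crank radius quoted in the text and searches for the crank angle at which the piston speed is most negative, i.e. the maximum downward speed:

import numpy as np
from scipy.optimize import minimize_scalar

l, r = 6.0, 2.0                                   # rod length and crank radius, inches

def piston_speed(alpha):                          # x' per unit crank rate d(alpha)/dt
    return -r * np.sin(alpha) - (r**2 * np.sin(alpha) * np.cos(alpha)
                                 / np.sqrt(l**2 - r**2 * np.sin(alpha)**2))

res = minimize_scalar(piston_speed, bounds=(0.0, np.pi), method="bounded")
print("crank angle of maximum downward speed:", np.degrees(res.x))   # about 73.18 deg after TDC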
When the crank is driven by the connecting rod, a problem arises when the crank is at top dead centre (0°) or bottom dead centre (180°). At these points in the crank's cycle, a force on the connecting rod causes no torque on the crank. Therefore, if the crank is stationary and happens to be at one of these two points, it cannot be started moving by the connecting rod. For this reason, in steam locomotives, whose wheels are driven by cranks, the connecting rods are attached to the wheels at points separated by some angle, so that regardless of the position of the wheels when the engine starts, at least one connecting rod will be able to exert torque to start the train.
Design.
An in-line crank slider is oriented in such a way that the pivot point of the crank is coincident with the axis of the linear movement. The follower arm, which is the link that connects the crank arm to the slider, connects to a pin in the center of the sliding object. This pin is considered to be on the linear movement axis. Therefore, to be considered an "in-line" crank slider, the pivot point of the crank arm must be "in-line" with this pin point. The stroke ((ΔR4)max) of an in-line crank slider is defined as the maximum linear distance the slider may travel between the two extreme points of its motion. With an in-line crank slider, the motion of the crank and follower links is symmetric about the sliding axis. This means that the crank angle required to execute a forward stroke is equivalent to the angle required to perform a reverse stroke. For this reason, the in-line slider-crank mechanism is said to produce balanced motion. This balanced motion implies other ideas as well. Assuming the crank arm is driven at a constant velocity, the time it takes to perform a forward stroke is equal to the time it takes to perform a reverse stroke.
Graphical approach.
The graphical method of designing an in-line slider-crank mechanism involves the usage of hand-drawn or computerized diagrams. These diagrams are drawn to scale in order for easy evaluation and successful design. Basic trigonometry, the practice of analyzing the relationship between triangle features in order to determine any unknown values, can be used with a graphical compass and protractor alongside these diagrams to determine the required stroke or link lengths.
When the stroke of a mechanism needs to be calculated, first identify the ground level for the specified slider-crank mechanism. This ground level is the axis on which both the crank arm pivot-point and the slider pin are positioned. Draw the crank arm pivot point anywhere on this ground level. Once the pin positions are correctly placed, set a graphical compass to the given link length of the crank arm. Positioning the compass point on the pivot point of the crank arm, rotate the compass to produce a circle with radius equal to the length of the crank arm. This newly drawn circle represents the potential motion of the crank arm. Next, draw two models of the mechanism. These models will be oriented in a way that displays both the extreme positions of the slider. Once both diagrams are drawn, the linear distance between the retracted slider and the extended slider can be easily measured to determine the slider-crank stroke.
The retracted position of the slider is determined by further graphical evaluation. Now that the crank path is found, draw the crank slider arm in the position that places it as far away as possible from the slider. Once drawn, the crank arm should be coincident with the ground level axis that was initially drawn. Next, from the free point on the crank arm, draw the follower link using its measured or given length. Draw this length coincident with the ground level axis but in the direction toward the slider. The unhinged end of the follower will now be at the fully retracted position of the slider. Next, the extended position of the slider needs to be determined. From the pivot point of the crank arm, draw a new crank arm coincident with the ground level axis but in a position closest to the slider. This position should put the new crank arm at an angle of 180 degrees away from the retracted crank arm. Then draw the follower link with its given length in the same manner as previously mentioned. The unhinged point of the new follower will now be at the fully extended position of the slider.
Both the retracted and extended positions of the slider should now be known. Using a measuring ruler, measure the distance between these two points. This distance will be the mechanism stroke, (ΔR4)max.
Analytical approach.
To analytically design an in-line slider crank and achieve the desired stroke, the appropriate lengths of the two links, the crank and follower, need to be determined. For this case, the crank arm will be referred to as "L2", and the follower link will be referred to as "L3". With all in-line slider-crank mechanisms, the stroke is twice the length of the crank arm. Therefore, given the stroke, the length of the crank arm can be determined. This relationship is represented as:
L2 = (ΔR4)max ÷ 2
Once "L2" is found, the follower length ("L3") can be determined. However, because the stroke of the mechanism only depends on the crank arm length, the follower length is somewhat insignificant. As a general rule, the length of the follower link should be at least 3 times the length of the crank arm. This is to account for an often undesired increased acceleration yield, or output, of the connecting arm.
Offset design.
The position of an offset slider-crank is derived by a similar formula to that for the inline form; using the same letters as in the previous diagram and an offset of formula_6:
formula_7
Its speed (the first derivative of its position) is representable as:
formula_8
Its acceleration (the second derivative of its position) is representable as:
formula_9
Analytical approach.
The analytical method for designing an offset crank slider mechanism is the process by which triangular geometry is evaluated in order to determine generalized relationships among certain lengths, distances, and angles. These generalized relationships are displayed in the form of 3 equations and can be used to determine unknown values for almost any offset slider-crank. These equations express the link lengths, "L1, L2, and L3", as a function of the stroke,"(ΔR4)max", the imbalance angle, "β", and the angle of an arbitrary line "M", "θM". Arbitrary line "M" is a designer-unique line that runs through the crank pivot point and the extreme retracted slider position. The 3 equations are as follows:
L1 = (ΔR4)max × ["(sin(θM)sin(θM - β)) / sin(β)"]
L2 = (ΔR4)max × ["(sin(θM) - sin(θM - β)) / 2sin(β)"]
L3 = (ΔR4)max × ["(sin(θM) + sin(θM - β)) / 2sin(β)"]
With these relationships, the 3 link lengths can be calculated and any related unknown values can be determined.
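These relationships translate directly into code. The sketch below (Python; the stroke, imbalance angle and θM are arbitrary example values) evaluates the three equations as stated above:

import math

def offset_slider_crank_lengths(stroke, beta, theta_m):
    # Link lengths L1, L2, L3 from the stroke, imbalance angle beta and angle theta_M (radians).
    s1, s2 = math.sin(theta_m), math.sin(theta_m - beta)
    L1 = stroke * (s1 * s2) / math.sin(beta)
    L2 = stroke * (s1 - s2) / (2 * math.sin(beta))
    L3 = stroke * (s1 + s2) / (2 * math.sin(beta))
    return L1, L2, L3

# Assumed example: 100 mm stroke, 20 degree imbalance angle, theta_M = 60 degrees.
print(offset_slider_crank_lengths(100.0, math.radians(20), math.radians(60)))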
Inversions.
Slider-crank chain inversion arises when the connecting rod, or coupler, of a slider-crank linkage becomes the ground link, so the slider is connected directly to the crank. This "inverted slider-crank" is the form of a slider-crank linkage that is often used to actuate a hinged joint in construction equipment like a crane or backhoe, as well as to open and close a swinging gate or door.
A slider-crank is a four-bar linkage that has a crank that rotates coupled to a slider that moves along a straight line.
This mechanism is composed of three important parts: The crank which is the rotating disc, the slider which slides inside the tube and the connecting rod which joins the parts together. As the slider moves to the right the connecting rod pushes the wheel round for the first 180 degrees of wheel rotation. When the slider begins to move back into the tube, the connecting rod pulls the wheel round to complete the rotation.
The different mechanisms obtained by fixing different links of the slider-crank chain are as follows:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x = r \\cos \\alpha + l"
},
{
"math_id": 1,
"text": "x = r \\cos \\alpha + \\sqrt{l^2 - r^2\\sin^2 \\alpha}"
},
{
"math_id": 2,
"text": "l >> r"
},
{
"math_id": 3,
"text": "\\tau = Fr \\sin (\\alpha+\\beta) \\,"
},
{
"math_id": 4,
"text": "\\tau\\,"
},
{
"math_id": 5,
"text": "x' = \\left(-r\\sin\\alpha - \\frac{r^2\\sin\\alpha \\cos\\alpha}{\\sqrt{l^2 - r^2\\sin^2\\alpha}}\\right)\\frac{d\\alpha}{dt}"
},
{
"math_id": 6,
"text": "o"
},
{
"math_id": 7,
"text": "x = r \\cos \\alpha + \\sqrt{l^2 - (r\\sin( \\alpha) - o)^2}"
},
{
"math_id": 8,
"text": "\\frac{ - r \\cos( \\alpha) (r \\sin( \\alpha) + o)}{ \\sqrt{l^2 - (r \\sin ( \\alpha ) + o ) ^2}} - r \\sin ( \\alpha )"
},
{
"math_id": 9,
"text": "\\dfrac{r\\sin\\left({\\alpha}\\right)\\left(r\\sin\\left({\\alpha}\\right)+o\\right)-r^2\\cos^2\\left({\\alpha}\\right)}{\\sqrt{l^2-\\left(r\\sin\\left({\\alpha}\\right)+o\\right)^2}}-\\dfrac{r^2\\cos^2\\left({\\alpha}\\right)\\left(r\\sin\\left({\\alpha}\\right)+o\\right)^2}{\\left(l^2-\\left(r\\sin\\left({\\alpha}\\right)+o\\right)^2\\right)^\\frac{3}{2}}-r\\cos\\left({\\alpha}\\right)"
}
] |
https://en.wikipedia.org/wiki?curid=56858642
|
56860120
|
Stephens' constant
|
Stephens' constant expresses the density of certain subsets of the prime numbers. Let formula_0 and formula_1 be two multiplicatively independent integers, that is, formula_2 except when both formula_3 and formula_4 equal zero. Consider the set formula_5 of prime numbers formula_6 such that formula_6 evenly divides formula_7 for some power formula_8. Assuming the validity of the generalized Riemann hypothesis, the density of the set formula_5 relative to the set of all primes is a rational multiple of
formula_9 (sequence in the OEIS)
Stephens' constant is closely related to the Artin constant formula_10 that arises in the study of primitive roots.
formula_11
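The Euler-type product defining the constant converges quickly and can be approximated numerically; a sketch in Python using SymPy's prime generator, truncating the product at an assumed bound:

from sympy import primerange

product = 1.0
for p in primerange(2, 10**6):
    product *= 1 - p / (p**3 - 1)
print(product)   # about 0.5759599689, matching the value quoted above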
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "a"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "a^m b^n \\neq 1"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "T(a,b)"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "a^k - b"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "C_S = \\prod_p \\left(1 - \\frac{p}{p^3-1} \\right) = 0.57595996889294543964316337549249669\\ldots "
},
{
"math_id": 10,
"text": "C_A"
},
{
"math_id": 11,
"text": "C_S= \\prod_{p} \\left( C_A + \\left( {{1-p^2}\\over{p^2(p-1)}}\\right) \\right)\n\\left({{p}\\over{(p+1+{{1}\\over{p}})}} \\right)"
}
] |
https://en.wikipedia.org/wiki?curid=56860120
|