id | title | text | formulas | url
---|---|---|---|---|
14421547
|
Orbit (control theory)
|
The notion of orbit of a control system used in mathematical control theory is a particular case of the notion of orbit in group theory.
Definition.
Let
formula_0
be a formula_1 control system, where
formula_2
belongs to a finite-dimensional manifold formula_3 and formula_4 belongs to a control set formula_5. Consider the family formula_6
and assume that every vector field in formula_7 is complete.
For every formula_8 and every real formula_9, denote by formula_10 the flow of formula_11 at time formula_9.
The orbit of the control system formula_0 through a point formula_12 is the subset formula_13 of formula_3 defined by
formula_14
The difference between orbits and attainable sets is that, whereas for attainable sets only forward-in-time motions are allowed, both forward and backward motions are permitted for orbits.
In particular, if the family formula_7 is symmetric (i.e., formula_8 if and only if formula_15), then orbits and attainable sets coincide.
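As a minimal illustration (a standard toy case, not taken from the text above): take M = ℝ and the single vector field f(q) = 1, so the family consists of f alone. The attainable set from q₀ = 0 using only forward-in-time motion is [0, ∞), while the orbit through 0 is all of ℝ, because negative times are permitted. Enlarging the family to the symmetric {f, −f} makes the attainable set equal to ℝ as well, so it coincides with the orbit.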
The hypothesis that every vector field of formula_7 is complete simplifies the notation but can be dropped; in that case, the flows of vector fields must be replaced by their local versions.
Orbit theorem (Nagano–Sussmann).
Each orbit formula_13 is an immersed submanifold of formula_3.
The tangent space to the orbit
formula_13 at a point formula_16 is the linear subspace of formula_17 spanned by
the vectors formula_18 where formula_19 denotes the pushforward of formula_11 by formula_20, formula_11 belongs to formula_7 and formula_20 is a diffeomorphism of formula_3 of the form formula_21 with formula_22 and formula_23.
If all the vector fields of the family formula_7 are analytic, then formula_24 where formula_25 is the evaluation at formula_16 of the Lie algebra generated by formula_7 with respect to the Lie bracket of vector fields.
Otherwise, the inclusion formula_26 holds true.
Corollary (Rashevsky–Chow theorem).
If formula_27 for every formula_28 and if formula_3 is connected, then each orbit is equal to the whole manifold formula_3.
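For example (the standard nonholonomic Heisenberg case, added here for illustration): on M = ℝ³, the pair of vector fields f₁ = ∂ₓ − (y/2)∂_z and f₂ = ∂_y + (x/2)∂_z satisfies [f₁, f₂] = ∂_z, so the Lie algebra generated by {f₁, f₂} evaluated at any point spans the whole tangent space; by the Rashevsky–Chow theorem every orbit is all of ℝ³.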
|
[
{
"math_id": 0,
"text": "{\\ }\\dot q=f(q,u)"
},
{
"math_id": 1,
"text": "\\ {\\mathcal C}^\\infty"
},
{
"math_id": 2,
"text": "{\\ q}"
},
{
"math_id": 3,
"text": "\\ M"
},
{
"math_id": 4,
"text": "\\ u"
},
{
"math_id": 5,
"text": "\\ U"
},
{
"math_id": 6,
"text": "{\\mathcal F}=\\{f(\\cdot,u)\\mid u\\in U\\}"
},
{
"math_id": 7,
"text": "{\\mathcal F}"
},
{
"math_id": 8,
"text": "f\\in {\\mathcal F}"
},
{
"math_id": 9,
"text": "\\ t"
},
{
"math_id": 10,
"text": "\\ e^{t f}"
},
{
"math_id": 11,
"text": "\\ f"
},
{
"math_id": 12,
"text": "q_0\\in M"
},
{
"math_id": 13,
"text": "{\\mathcal O}_{q_0}"
},
{
"math_id": 14,
"text": "{\\mathcal O}_{q_0}=\\{e^{t_k f_k}\\circ e^{t_{k-1} f_{k-1}}\\circ\\cdots\\circ e^{t_1 f_1}(q_0)\\mid k\\in\\mathbb{N},\\ t_1,\\dots,t_k\\in\\mathbb{R},\\ f_1,\\dots,f_k\\in{\\mathcal F}\\}."
},
{
"math_id": 15,
"text": "-f\\in {\\mathcal F}"
},
{
"math_id": 16,
"text": "\\ q"
},
{
"math_id": 17,
"text": "\\ T_q M"
},
{
"math_id": 18,
"text": "\\ P_* f(q)"
},
{
"math_id": 19,
"text": "\\ P_* f"
},
{
"math_id": 20,
"text": "\\ P"
},
{
"math_id": 21,
"text": "e^{t_k f_k}\\circ \\cdots\\circ e^{t_1 f_1}"
},
{
"math_id": 22,
"text": " k\\in\\mathbb{N},\\ t_1,\\dots,t_k\\in\\mathbb{R}"
},
{
"math_id": 23,
"text": "f_1,\\dots,f_k\\in{\\mathcal F}"
},
{
"math_id": 24,
"text": "\\ T_q{\\mathcal O}_{q_0}=\\mathrm{Lie}_q\\,\\mathcal{F}"
},
{
"math_id": 25,
"text": "\\mathrm{Lie}_q\\,\\mathcal{F}"
},
{
"math_id": 26,
"text": "\\mathrm{Lie}_q\\,\\mathcal{F}\\subset T_q{\\mathcal O}_{q_0}"
},
{
"math_id": 27,
"text": "\\mathrm{Lie}_q\\,\\mathcal{F}= T_q M"
},
{
"math_id": 28,
"text": "\\ q\\in M"
}
] |
https://en.wikipedia.org/wiki?curid=14421547
|
14422554
|
Isoenthalpic–isobaric ensemble
|
Statistical-mechanical ensemble
The isoenthalpic–isobaric ensemble (constant-enthalpy, constant-pressure ensemble) is a statistical-mechanical ensemble that maintains constant enthalpy formula_0 and constant applied pressure formula_1. It is also called the formula_2-ensemble, where the number of particles formula_3 is also kept constant. It was developed by physicist H. C. Andersen in 1980. The ensemble adds another degree of freedom, which represents the variable volume formula_4 of a system to which the coordinates of all particles are relative. The volume formula_4 becomes a dynamical variable, with a potential energy given by formula_5 and a kinetic energy associated with its rate of change. The enthalpy formula_6 is a conserved quantity.
Using the isoenthalpic–isobaric ensemble of a Lennard-Jones fluid, it was shown that the Joule–Thomson coefficient and the inversion curve can be computed directly from a single molecular dynamics simulation. A complete vapor-compression refrigeration cycle and a vapor–liquid coexistence curve, as well as a reasonable estimate of the supercritical point, can also be simulated with this approach.
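As a rough sketch of the idea (illustrative only: the data points below are invented placeholders, not results from the cited work), the Joule–Thomson coefficient μJT = (∂T/∂P)H can be estimated as the slope of temperatures measured in NPH runs at several pressures along one isenthalp:

```python
import numpy as np

# Hypothetical (P, T) pairs from NPH molecular-dynamics runs sharing the
# same enthalpy H; the numbers are placeholders, not real simulation data.
pressures = np.array([1.0, 2.0, 3.0, 4.0])         # reduced LJ units
temperatures = np.array([1.20, 1.17, 1.15, 1.14])  # reduced LJ units

# The Joule-Thomson coefficient is the slope of T(P) along an isenthalp:
# mu_JT = (dT/dP)_H.  A linear fit gives a finite-difference estimate.
mu_JT = np.polyfit(pressures, temperatures, 1)[0]
print(f"estimated Joule-Thomson coefficient: {mu_JT:.3f}")
```

The inversion curve would then be traced as the locus of states where this slope changes sign.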
NPH simulation can be carried out using GROMACS and LAMMPS.
|
[
{
"math_id": 0,
"text": "H \\,"
},
{
"math_id": 1,
"text": "P \\,"
},
{
"math_id": 2,
"text": "NPH"
},
{
"math_id": 3,
"text": "N \\,"
},
{
"math_id": 4,
"text": "V \\,"
},
{
"math_id": 5,
"text": "PV \\,"
},
{
"math_id": 6,
"text": "H=E+PV \\,"
}
] |
https://en.wikipedia.org/wiki?curid=14422554
|
14423377
|
Binary cyclic group
|
Algebraic structure
In mathematics, the binary cyclic group of the "n"-gon is the cyclic group of order 2"n", formula_0, thought of as an extension of the cyclic group formula_1 by a cyclic group of order 2. Coxeter writes the "binary cyclic group" with angle-brackets, ⟨"n"⟩, and the index 2 subgroup as ("n") or ["n"]+.
It is the binary polyhedral group corresponding to the cyclic group.
In terms of binary polyhedral groups, the binary cyclic group is the preimage of the cyclic group of rotations (formula_2) under the 2:1 covering homomorphism
formula_3
of the special orthogonal group by the spin group.
As a subgroup of the spin group, the binary cyclic group can be described concretely as a discrete subgroup of the unit quaternions, under the isomorphism formula_4 where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.)
Presentation.
The "binary cyclic group" can be defined as the set of formula_5th roots of unity—that is, the set formula_6, where
formula_7
using multiplication as the group operation.
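A short computational sketch of this presentation (the choice n = 3 is arbitrary), enumerating the 2n-th roots of unity and checking closure under multiplication:

```python
import cmath

# The binary cyclic group of the n-gon as the 2n-th roots of unity
# (n = 3 is an arbitrary illustrative choice, giving a group of order 6).
n = 3
omega = cmath.exp(1j * cmath.pi / n)       # omega_n = e^{i pi / n}
elements = [omega ** k for k in range(2 * n)]

# Closure: any product of two elements is again one of the 2n elements.
for a in elements:
    for b in elements:
        assert min(abs(a * b - e) for e in elements) < 1e-9
print(f"group of order {len(elements)} closed under multiplication")
```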
|
[
{
"math_id": 0,
"text": "C_{2n}"
},
{
"math_id": 1,
"text": "C_n"
},
{
"math_id": 2,
"text": "C_n < \\operatorname{SO}(3)"
},
{
"math_id": 3,
"text": "\\operatorname{Spin}(3) \\to \\operatorname{SO}(3)\\,"
},
{
"math_id": 4,
"text": "\\operatorname{Spin}(3) \\cong \\operatorname{Sp}(1)"
},
{
"math_id": 5,
"text": "2n"
},
{
"math_id": 6,
"text": "\\left\\{\\omega_n^k \\; | \\; k \\in \\{0,1,2,...,2n-1\\}\\right\\}"
},
{
"math_id": 7,
"text": "\\omega_n = e^{i\\pi/n} = \\cos\\frac{\\pi}{n} + i\\sin\\frac{\\pi}{n},"
}
] |
https://en.wikipedia.org/wiki?curid=14423377
|
1442361
|
Four-bar linkage
|
Mechanical linkage consisting of four links connected by joints in a loop
In the study of mechanisms, a four-bar linkage, also called a four-bar, is the simplest closed-chain movable linkage. It consists of four bodies, called "bars" or "links", connected in a loop by four joints. Generally, the joints are configured so the links move in parallel planes, and the assembly is called a "planar four-bar linkage". Spherical and spatial four-bar linkages also exist and are used in practice.
Planar four-bar linkage.
Planar four-bar linkages are constructed from four links connected in a loop by four one-degree-of-freedom joints. A joint may be either a revolute joint – also known as a pin joint or hinged joint – denoted by R, or a prismatic joint – also known as a sliding pair – denoted by P.
A link that is fixed in place relative to the viewer is called a "ground link."
A link connecting to the ground by a revolute joint that can perform a complete revolution is called a "crank link."
A link connecting to the ground by a revolute joint that cannot perform a complete revolution is called a "rocker link."
A link connecting to a ground line by a prismatic joint is called a "slider." Sliders are sometimes considered to be cranks that have a hinged pivot at an infinitely long distance away, perpendicular to the travel of the slider.
A link connecting to two other links is called a "floating link" or "coupler."
A coupler connecting a crank and a slider in a single slider-crank mechanism is often called a "connecting rod"; however, the term has also been used to refer to any type of coupler.
There are three basic types of planar four-bar linkage, depending on the use of revolute or prismatic joints: four revolute joints (the planar quadrilateral linkage, RRRR); three revolute joints and one prismatic joint (the slider-crank, RRRP); and two revolute joints and two prismatic joints (the double slider).
Planar four-bar linkages can be designed to guide a wide variety of movements, and are often the base mechanisms found in many machines. Because of this, the kinematics and dynamics of planar four-bar linkages are also important topics in mechanical engineering.
Planar quadrilateral linkage.
Planar quadrilateral linkage, RRRR or 4R linkages have four rotating joints. One link of the chain is usually fixed, and is called the "ground link", "fixed link", or the "frame". The two links connected to the frame are called the "grounded links" and are generally the input and output links of the system, sometimes called the "input link" and "output link". The last link is the "floating link", which is also called a "coupler" or "connecting rod" because it connects an input to the output.
Assuming the frame is horizontal, there are four possibilities for the input and output links: a crank-rocker, in which the input link fully rotates while the output link rocks; a rocker-crank, in which the input link rocks while the output link fully rotates; a double crank, in which both grounded links fully rotate; and a double rocker, in which both grounded links rock.
Some authors do not distinguish between the types of rocker.
Grashof condition.
The Grashof condition for a four-bar linkage states: "If the sum of the shortest and longest link of a planar quadrilateral linkage is less than or equal to the sum of the remaining two links, then the shortest link can rotate fully with respect to a neighboring link." In other words, the condition is satisfied if "S" + "L" ≤ "P" + "Q", where "S" is the shortest link, "L" is the longest, and "P" and "Q" are the other links.
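A minimal sketch of this check (the link lengths in the example are arbitrary illustrative values):

```python
def is_grashof(links):
    """Grashof condition: S + L <= P + Q for the four link lengths."""
    s, p, q, l = sorted(links)   # shortest, two intermediate, longest
    return s + l <= p + q

# Example: shortest 2 and longest 9 satisfy 2 + 9 <= 6 + 7
print(is_grashof([2.0, 7.0, 9.0, 6.0]))  # True
```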
Classification.
The movement of a quadrilateral linkage can be classified into eight cases based on the dimensions of its four links. Let a, b, g and h denote the lengths of the input crank, the output crank, the ground link and floating link, respectively. Then, we can construct the three terms:
formula_0;
formula_1;
formula_2.
The movement of a quadrilateral linkage can be classified into eight types based on the positive and negative values for these three terms, T1, T2, and T3.
The cases of T1 = 0, T2 = 0, and T3 = 0 are interesting because the linkages fold. If we distinguish folding quadrilateral linkage, then there are 27 different cases.
The figure shows examples of the various cases for a planar quadrilateral linkage.
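A small sketch computing the three classification terms for arbitrary illustrative lengths:

```python
def classification_terms(a, b, g, h):
    """T1, T2, T3 for input crank a, output crank b, ground link g,
    floating link h; their signs classify the linkage's movement."""
    return g + h - a - b, b + g - a - h, b + h - a - g

# Illustrative lengths; a zero value would indicate a folding linkage.
T1, T2, T3 = classification_terms(a=2.0, b=6.0, g=9.0, h=7.0)
print(T1, T2, T3)  # 8.0 6.0 2.0 -> all positive
```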
The configuration of a quadrilateral linkage may be classified into three types: convex, concave, and crossing. In the convex and concave cases no two links cross over each other. In the crossing linkage two links cross over each other. In the convex case all four internal angles are less than 180 degrees, and in the concave configuration one internal angle is greater than 180 degrees. There exists a simple geometrical relationship between the lengths of the two diagonals of the quadrilateral. For convex and crossing linkages, the length of one diagonal increases if and only if the other decreases. On the other hand, for nonconvex non-crossing linkages, the opposite is the case; one diagonal increases if and only if the other also increases.
Design of four-bar mechanisms.
The synthesis, or design, of four-bar mechanisms is important when aiming to produce a desired output motion for a specific input motion. In order to minimize cost and maximize efficiency, a designer will choose the simplest mechanism possible to accomplish the desired motion. When selecting a mechanism type to be designed, link lengths must be determined by a process called dimensional synthesis. Dimensional synthesis involves an "iterate-and-analyze" methodology which in certain circumstances can be an inefficient process; however, in unique scenarios, exact and detailed procedures to design an accurate mechanism may not exist.
Time ratio.
The time ratio ("Q") of a four-bar mechanism is a measure of its quick return and is defined as follows:
formula_3
With four-bar mechanisms there are two strokes, the forward and return, which when added together create a cycle. Each stroke may be identical or have different average speeds. The time ratio numerically defines how fast the forward stroke is compared to the quicker return stroke. The total cycle time (Δtcycle) for a mechanism is:
formula_4
Most four-bar mechanisms are driven by a rotational actuator, or crank, that requires a specific constant speed. This required speed ("ω"crank) is related to the cycle time as follows:
formula_5
Some mechanisms that produce reciprocating, or repeating, motion are designed to produce symmetrical motion. That is, the forward stroke of the machine moves at the same pace as the return stroke. These mechanisms, which are often referred to as "in-line" design, usually do work in both directions, as they exert the same force in both directions.
Examples of symmetrical motion mechanisms include:
Other applications require that the mechanism-to-be-designed has a faster average speed in one direction than the other. This category of mechanism is most desired for design when work is only required to operate in one direction. The speed at which this one stroke operates is also very important in certain machine applications. In general, the return and work-non-intensive stroke should be accomplished as fast as possible. This is so the majority of time in each cycle is allotted for the work-intensive stroke. These "quick-return" mechanisms are often referred to as "offset".
Examples of offset mechanisms include:
With offset mechanisms, it is very important to understand how and to what degree the offset affects the time ratio. To relate the geometry of a specific linkage to the timing of the stroke, an imbalance angle ("β") is used. This angle is related to the time ratio, "Q", as follows:
formula_6
Through simple algebraic rearrangement, this equation can be rewritten to solve for "β":
formula_7
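A brief numeric sketch of these two relations (the stroke times are arbitrary illustrative values):

```python
def time_ratio(t_slow, t_quick):
    """Q = (time of slower stroke) / (time of quicker stroke) >= 1."""
    return t_slow / t_quick

def imbalance_angle(Q):
    """beta in degrees, solved from Q = (180 + beta) / (180 - beta)."""
    return 180.0 * (Q - 1.0) / (Q + 1.0)

Q = time_ratio(t_slow=0.6, t_quick=0.4)  # Q = 1.5
print(imbalance_angle(Q))                # 36.0 degrees
```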
Timing charts.
Timing charts are often used to synchronize the motion between two or more mechanisms. They graphically display information showing where and when each mechanism is stationary or performing its forward and return strokes. Timing charts allow designers to qualitatively describe the required kinematic behavior of a mechanism.
These charts are also used to estimate the velocities and accelerations of certain four-bar links. "The velocity of a link is the time rate at which its position is changing, while the link's acceleration is the time rate at which its velocity is changing." Both velocity and acceleration are vector quantities, in that they have both magnitude and direction; however, only their magnitudes are used in timing charts. When used with two mechanisms, timing charts assume constant acceleration. This assumption produces polynomial equations for velocity as a function of time. Constant acceleration allows the velocity vs. time graph to appear as straight lines, thus designating a relationship between displacement ("ΔR"), maximum velocity ("v"peak), acceleration ("a"), and time ("Δt"). The following equations show this:
Δ"R" = (1/2) "v"peakΔ"t"
Δ"R" = (1/4) "a"(Δ"t")²
Given the displacement and time, both the maximum velocity and acceleration of each mechanism in a given pair can be calculated.
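A short sketch applying these relations (the displacement and time values are arbitrary; a triangular velocity profile under constant acceleration is assumed, as above):

```python
def peak_velocity(dR, dt):
    """v_peak from dR = (1/2) v_peak dt (triangular velocity profile)."""
    return 2.0 * dR / dt

def stroke_acceleration(dR, dt):
    """a from dR = (1/4) a dt**2 under the same constant-acceleration assumption."""
    return 4.0 * dR / dt ** 2

print(peak_velocity(dR=0.1, dt=0.5))        # 0.4 (length/time)
print(stroke_acceleration(dR=0.1, dt=0.5))  # 1.6 (length/time^2)
```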
Slider-crank linkage.
A slider-crank linkage is a four-bar linkage with three revolute joints and one prismatic, or sliding, joint. The rotation of the crank drives the linear movement of the slider, or the expansion of gases against a sliding piston in a cylinder can drive the rotation of the crank.
There are two types of slider-cranks: in-line and offset.
Spherical and spatial four-bar linkages.
If the linkage has four hinged joints with axes angled to intersect in a single point, then the links move on concentric spheres and the assembly is called a "spherical four-bar linkage". The input-output equations of a spherical four-bar linkage can be applied to spatial four-bar linkages when the variables are replaced by dual numbers. "Note that the cited conference paper incorrectly conflates Moore-Penrose pseudoinverses with one-sided inverses of matrices, falsely claiming that the latter are unique whenever they exist. This is contradicted by the fact that formula_8 admits the set of matrices formula_9 as all its left inverses."
"Bennett's linkage" is a spatial four-bar linkage with hinged joints that have their axes angled in a particular way that makes the system movable.
|
[
{
"math_id": 0,
"text": "T_1 = g + h - a - b"
},
{
"math_id": 1,
"text": "T_2 = b + g - a - h"
},
{
"math_id": 2,
"text": "T_3 = b + h - a - g"
},
{
"math_id": 3,
"text": "Q = \\frac{\\text{Time of slower stroke}}{\\text{Time of quicker stroke}} \\ge 1"
},
{
"math_id": 4,
"text": "\\Delta t_\\text{cycle} = \\text{Time of slower stroke} + \\text{Time of quicker stroke}"
},
{
"math_id": 5,
"text": "\\omega_\\text{crank} = (\\Delta t_\\text{cycle})^{-1}"
},
{
"math_id": 6,
"text": "Q = \\frac{180^\\circ + \\beta}{180^\\circ - \\beta}"
},
{
"math_id": 7,
"text": "\\beta = 180^\\circ \\times \\frac{Q-1}{Q+1}"
},
{
"math_id": 8,
"text": "(1,0)^T"
},
{
"math_id": 9,
"text": "\\{(1,x) \\mid x \\in \\mathbb C\\}"
}
] |
https://en.wikipedia.org/wiki?curid=1442361
|
144241
|
Molar mass
|
Mass per amount of substance
In chemistry, the molar mass (M) (sometimes called molecular weight or formula weight, but see related quantities for usage) of a chemical compound is defined as the ratio between the mass and the amount of substance (measured in moles) of any sample of the compound. The molar mass is a bulk, not molecular, property of a substance. The molar mass is an "average" of many instances of the compound, which often vary in mass due to the presence of isotopes. Most commonly, the molar mass is computed from the standard atomic weights and is thus a terrestrial average and a function of the relative abundance of the isotopes of the constituent atoms on Earth. The molar mass is appropriate for converting between the mass of a substance and the amount of a substance for bulk quantities.
The molecular mass (for molecular compounds) and formula mass (for non-molecular compounds, such as ionic salts) are commonly used as synonyms of molar mass, differing only in units (daltons vs g/mol); however, the most authoritative sources define it differently. The difference is that molecular mass is the mass of one specific particle or molecule, while the molar mass is an average over many particles or molecules.
The molar mass is an intensive property of the substance, which does not depend on the size of the sample. In the International System of Units (SI), the coherent unit of molar mass is kg/mol. However, for historical reasons, molar masses are almost always expressed in g/mol.
The mole was defined in such a way that the molar mass of a compound, in g/mol, is numerically equal to the average mass of one molecule or formula unit, in daltons. It was exactly equal before the redefinition of the mole in 2019, and is now only approximately equal, but the difference is negligible for all practical purposes. Thus, for example, the average mass of a molecule of water is about 18.0153 daltons, and the molar mass of water is about 18.0153 g/mol.
For chemical elements without isolated molecules, such as carbon and metals, the molar mass is computed by dividing by the number of moles of atoms instead. Thus, for example, the molar mass of iron is about 55.845 g/mol.
Since 1971, the SI has defined the "amount of substance" as a separate dimension of measurement. Until 2019, the mole was defined as the amount of substance that has as many constituent particles as there are atoms in 12 grams of carbon-12. During that period, the molar mass of carbon-12 was thus "exactly" 12 g/mol, by definition. Since 2019, a mole of any substance has been redefined in the SI as the amount of that substance containing an exactly defined number of particles, 6.02214076×10²³. The molar mass of a compound in g/mol thus is equal to the mass of this number of molecules of the compound in grams.
Molar masses of elements.
The molar mass of atoms of an element is given by the relative atomic mass of the element multiplied by the molar mass constant, "M"u ≈ 1.000×10⁻³ kg/mol = 1 g/mol. For normal samples from Earth with typical isotope composition, the atomic weight can be approximated by the standard atomic weight or the conventional atomic weight.
formula_0
Multiplying by the molar mass constant ensures that the calculation is dimensionally correct: standard relative atomic masses are dimensionless quantities (i.e., pure numbers) whereas molar masses have units (in this case, grams per mole).
Some elements are usually encountered as molecules, e.g. hydrogen (H₂), sulfur (S₈), chlorine (Cl₂). The molar mass of molecules of these elements is the molar mass of the atoms multiplied by the number of atoms in each molecule:
formula_1
Molar masses of compounds.
The molar mass of a compound is given by the sum of the relative atomic mass "A"r of the atoms which form the compound multiplied by the molar mass constant formula_2:
formula_3
Here, "M"r is the relative molar mass, also called formula weight. For normal samples from earth with typical isotope composition, the standard atomic weight or the conventional atomic weight can be used as an approximation of the relative atomic mass of the sample. Examples are: formula_4
An average molar mass may be defined for mixtures of compounds. This is particularly important in polymer science, where there is usually a molar mass distribution of non-uniform polymers so that different polymer molecules contain different numbers of monomer units.
Average molar mass of mixtures.
The average molar mass of mixtures formula_5 can be calculated from the mole fractions xi of the components and their molar masses Mi:
formula_6
It can also be calculated from the mass fractions wi of the components:
formula_7
As an example, the average molar mass of dry air is 28.96 g/mol.
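A short sketch of the mole-fraction average for dry air (the composition values are rounded approximations, so the result differs slightly from the quoted 28.96 g/mol):

```python
# Approximate composition of dry air: mole fraction and molar mass (g/mol)
components = {
    "N2":  (0.7808, 28.014),
    "O2":  (0.2095, 31.998),
    "Ar":  (0.0093, 39.948),
    "CO2": (0.0004, 44.009),
}

# Mole-fraction-weighted average molar mass
M_avg = sum(x * M for x, M in components.values())
print(f"{M_avg:.2f} g/mol")  # ~28.97 g/mol with these rounded inputs
```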
Related quantities.
Molar mass is closely related to the relative molar mass ("M"r) of a compound and to the standard atomic weights of its constituent elements. However, it should be distinguished from the molecular mass (which is confusingly "also" sometimes known as molecular weight), which is the mass of "one" molecule (of any "single" isotopic composition), and from the atomic mass, which is the mass of "one" atom (of any "single" isotope). The dalton, symbol Da, is also sometimes used as a unit of molar mass, especially in biochemistry, with the definition 1 Da = 1 g/mol, despite the fact that it is strictly a unit of mass (1 Da = 1 u = 1.66053906892(52)×10⁻²⁷ kg, as of the 2022 CODATA recommended values).
Obsolete terms for molar mass include gram atomic mass for the mass, in grams, of one mole of atoms of an element, and gram molecular mass for the mass, in grams, of one mole of molecules of a compound. The gram-atom is a former term for a mole of atoms, and gram-molecule for a mole of molecules.
Molecular weight (M.W.) (for molecular compounds) and formula weight (F.W.) (for non-molecular compounds), are older terms for what is now more correctly called the relative molar mass ("M"r). This is a dimensionless quantity (i.e., a pure number, without units) equal to the molar mass divided by the molar mass constant.
Molecular mass.
The molecular mass (m) is the mass of a given molecule: it is usually measured in daltons (Da or u). Different molecules of the same compound may have different molecular masses because they contain different isotopes of an element. This is distinct from, but related to, the molar mass, which is a measure of the average molecular mass of all the molecules in a sample and is usually the more appropriate measure when dealing with macroscopic (weighable) quantities of a substance.
Molecular masses are calculated from the atomic masses of each nuclide, while molar masses are calculated from the standard atomic weights of each element. The standard atomic weight takes into account the isotopic distribution of the element in a given sample (usually assumed to be "normal"). For example, water has a molar mass of about 18.0153 g/mol, but individual water molecules have molecular masses which range between about 18.0106 Da (¹H₂¹⁶O) and 22.0277 Da (²H₂¹⁸O).
The distinction between molar mass and molecular mass is important because relative molecular masses can be measured directly by mass spectrometry, often to a precision of a few parts per million. This is accurate enough to directly determine the chemical formula of a molecule.
DNA synthesis usage.
The term formula weight has a specific meaning when used in the context of DNA synthesis: whereas an individual phosphoramidite nucleobase to be added to a DNA polymer has protecting groups and has its "molecular weight" quoted including these groups, the amount of molecular weight that is ultimately added by this nucleobase to a DNA polymer is referred to as the nucleobase's "formula weight" (i.e., the molecular weight of this nucleobase within the DNA polymer, minus protecting groups).
Precision and uncertainties.
The precision to which a molar mass is known depends on the precision of the atomic masses from which it was calculated (and very slightly on the value of the molar mass constant, which depends on the measured value of the dalton). Most atomic masses are known to a precision of at least one part in ten thousand, often much better (the atomic mass of lithium is a notable, and serious, exception). This is adequate for almost all normal uses in chemistry: it is more precise than most chemical analyses, and exceeds the purity of most laboratory reagents.
The precision of atomic masses, and hence of molar masses, is limited by the knowledge of the isotopic distribution of the element. If a more accurate value of the molar mass is required, it is necessary to determine the isotopic distribution of the sample in question, which may be different from the standard distribution used to calculate the standard atomic mass. The isotopic distributions of the different elements in a sample are not necessarily independent of one another: for example, a sample which has been distilled will be enriched in the lighter isotopes of all the elements present. This complicates the calculation of the standard uncertainty in the molar mass.
A useful convention for normal laboratory work is to quote molar masses to two decimal places for all calculations. This is more accurate than is usually required, but avoids rounding errors during calculations. When the molar mass is greater than 1000 g/mol, it is rarely appropriate to use more than one decimal place. These conventions are followed in most tabulated values of molar masses.
Measurement.
Molar masses are almost never measured directly. They may be calculated from standard atomic masses, and are often listed in chemical catalogues and on safety data sheets (SDS). Molar masses typically vary between:
1–238 g/mol for atoms of naturally occurring elements;
10–1000 g/mol for simple chemical compounds;
1000–5,000,000 g/mol for polymers, proteins, DNA fragments, etc.
While molar masses are almost always, in practice, calculated from atomic weights, they can also be measured in certain cases. Such measurements are much less precise than modern mass spectrometric measurements of atomic weights and molecular masses, and are of mostly historical interest. All of the procedures rely on colligative properties, and any dissociation of the compound must be taken into account.
Vapour density.
The measurement of molar mass by vapour density relies on the principle, first enunciated by Amedeo Avogadro, that equal volumes of gases under identical conditions contain equal numbers of particles. This principle is included in the ideal gas equation:
formula_8
where n is the amount of substance. The vapour density (ρ) is given by
formula_9
Combining these two equations gives an expression for the molar mass in terms of the vapour density for conditions of known pressure and temperature:
formula_10
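A one-step numeric sketch of this relation (the density value is an illustrative round number close to that of methane at 0 °C and 1 atm):

```python
R = 8.314  # gas constant, J/(mol K)

def molar_mass_from_vapour_density(rho, T, p):
    """M = R T rho / p; rho in kg/m^3, T in K, p in Pa -> M in kg/mol."""
    return R * T * rho / p

# Illustrative numbers: rho ~ 0.716 kg/m^3 at 273.15 K and 101325 Pa
M = molar_mass_from_vapour_density(0.716, 273.15, 101325)
print(f"{M * 1000:.1f} g/mol")  # ~16.0 g/mol (a methane-like gas)
```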
Freezing-point depression.
The freezing point of a solution is lower than that of the pure solvent, and the freezing-point depression (Δ"T") is directly proportional to the amount concentration for dilute solutions. When the composition is expressed as a molality, the proportionality constant is known as the cryoscopic constant ("K"f) and is characteristic for each solvent. If w represents the mass fraction of the solute in solution, and assuming no dissociation of the solute, the molar mass is given by
formula_11
Boiling-point elevation.
The boiling point of a solution of an involatile solute is higher than that of the pure solvent, and the boiling-point elevation (Δ"T") is directly proportional to the amount concentration for dilute solutions. When the composition is expressed as a molality, the proportionality constant is known as the ebullioscopic constant ("K"b) and is characteristic for each solvent. If w represents the mass fraction of the solute in solution, and assuming no dissociation of the solute, the molar mass is given by
formula_12
|
[
{
"math_id": 0,
"text": "\\begin{array}{lll}\nM(\\ce{H}) &= 1.00797(7) \\times M_\\mathrm{u} &= 1.00797(7) \\text{ g/mol} \\\\\nM(\\ce{S}) &= 32.065(5) \\times M_\\mathrm{u} &= 32.065(5) \\text{ g/mol} \\\\\nM(\\ce{Cl}) &= 35.453(2) \\times M_\\mathrm{u} &= 35.453(2) \\text{ g/mol} \\\\\nM(\\ce{Fe}) &= 55.845(2) \\times M_\\mathrm{u} &= 55.845(2) \\text{ g/mol}\n\\end{array}"
},
{
"math_id": 1,
"text": "\\begin{array}{lll}\nM(\\ce{H2}) &= 2\\times 1.00797(7) \\times M_\\mathrm{u} &= 2.01595(4) \\text{ g/mol} \\\\\nM(\\ce{S8}) &= 8\\times 32.065(5) \\times M_\\mathrm{u} &= 256.52(4) \\text{ g/mol} \\\\\nM(\\ce{Cl2}) &= 2\\times 35.453(2) \\times M_\\mathrm{u} &= 70.906(4) \\text{ g/mol}\n\\end{array}"
},
{
"math_id": 2,
"text": "M_u \\approx 1 \\text{ g/mol}"
},
{
"math_id": 3,
"text": "M = M_{\\rm u} M_{\\rm r} = M_{\\rm u} \\sum_i {A_{\\rm r}}_i."
},
{
"math_id": 4,
"text": "\\begin{array}{ll}\nM(\\ce{NaCl}) &= \\bigl[22.98976928(2) + 35.453(2)\\bigr] \\times 1 \\text{ g/mol} \\\\\n &= 58.443(2) \\text{ g/mol} \\\\[4pt]\nM(\\ce{C12H22O11}) &= \\bigl[12 \\times 12.0107(8) + 22 \\times 1.00794(7) + 11 \\times 15.9994(3)\\bigr] \\times 1 \\text{ g/mol} \\\\ \n &= 342.297(14) \\text{ g/mol}\n\\end{array}"
},
{
"math_id": 5,
"text": "\\overline{M}"
},
{
"math_id": 6,
"text": "\\overline{M} = \\sum_i x_i M_i."
},
{
"math_id": 7,
"text": "\\frac{1}{\\overline{M}} = \\sum_i\\frac{w_i}{M_i}."
},
{
"math_id": 8,
"text": "pV = nRT ,"
},
{
"math_id": 9,
"text": "\\rho = {{nM}\\over{V}} ."
},
{
"math_id": 10,
"text": "M = {{RT\\rho}\\over{p}} ."
},
{
"math_id": 11,
"text": "M = {{wK_\\text{f}}\\over{\\Delta T}}.\\ "
},
{
"math_id": 12,
"text": "M = {{wK_\\text{b}}\\over{\\Delta T}}.\\ "
}
] |
https://en.wikipedia.org/wiki?curid=144241
|
14424249
|
Multiple trace theory
|
Theory for how the brain handles memory recall
In psychology, multiple trace theory is a memory consolidation model advanced as an alternative model to strength theory. It posits that each time some information is presented to a person, it is neurally encoded in a unique memory trace composed of a combination of its attributes. Further support for this theory came in the 1960s from empirical findings that people could remember specific attributes about an object without remembering the object itself. The mode in which the information is presented and subsequently encoded can be flexibly incorporated into the model. This memory trace is unique from all others resembling it due to differences in some aspects of the item's attributes, and all memory traces incorporated since birth are combined into a multiple-trace representation in the brain. In memory research, a mathematical formulation of this theory can successfully explain empirical phenomena observed in recognition and recall tasks.
Attributes.
The attributes an item possesses form its trace and can fall into many categories. When an item is committed to memory, information from each of these attributional categories is encoded into the item's trace. There may be a kind of semantic categorization at play, whereby an individual trace is incorporated into overarching concepts of an object. For example, when a person sees a pigeon, a trace is added to the "pigeon" cluster of traces within his or her mind. This new "pigeon" trace, while distinguishable and divisible from other instances of pigeons that the person may have seen within his or her life, serves to support the more general and overarching concept of a pigeon.
Physical.
Physical attributes of an item encode information about physical properties of a presented item. For a word, this could include color, font, spelling, and size, while for a picture, the equivalent aspects could be shapes and colors of objects. It has been shown experimentally that people who are unable to recall an individual word can sometimes recall the first or last letter or even rhyming words, all aspects encoded in the physical orthography of a word's trace. Even when an item is not presented visually, when encoded, it may have some physical aspects based on a visual representation of the item.
Contextual.
Contextual attributes are a broad class of attributes that define the internal and external features that are simultaneous with presentation of the item. Internal context is a sense of the internal network that a trace evokes. This may range from aspects of an individual's mood to other semantic associations the presentation of the word evokes. On the other hand, external context encodes information about the spatial and temporal aspects as information is being presented. This may reflect time of day or weather, for example. Spatial attributes can refer both to a physical environment and to an imagined one. The method of loci, a mnemonic strategy incorporating an imagined spatial position, works by assigning relative spatial positions to different memorized items and then "walking through" these assigned positions to remember the items.
Modal.
Modality attributes possess information as to the method by which an item was presented. The most frequent types of modalities in an experimental setting are auditory and visual. In practice, any sensory modality may be used.
Classifying.
These attributes refer to the categorization of items presented. Items that fit into the same categories will have the same class attributes. For example, if the item "touchdown" were presented, it would evoke the overarching concept of "football" or perhaps, more generally, "sports", and it would likely share class attributes with "endzone" and other elements that fit into the same concept. A single item may fit into different concepts at the time it is presented depending on other attributes of the item, like context. For example, the word "star" might fall into the class of astronomy after visiting a space museum or a class with words like "celebrity" or "famous" after seeing a movie.
Mathematical formulation.
The mathematical formulation of traces allows for a model of memory as an ever-growing matrix that is continuously receiving and incorporating information in the form of vectors of attributes. Multiple trace theory states that every item ever encoded, from birth to death, will exist in this matrix as multiple traces. This is done by giving every possible attribute some numerical value to classify it as it is encoded, so each encoded memory will have a unique set of numerical attributes.
Matrix definition of traces.
By assigning numerical values to all possible attributes, it is convenient to construct a column vector representation of each encoded item. This vector representation can also be fed into computational models of the brain like neural networks, which take as inputs vectorial "memories" and simulate their biological encoding through neurons.
Formally, one can denote an encoded memory by numerical assignments to all of its possible attributes. If two items are perceived to have the same color or experienced in the same context, the numbers denoting their color and contextual attributes, respectively, will be relatively close. Suppose we encode a total of "L" attributes anytime we see an object. Then, when a memory is encoded, it can be written as m1 with "L" total numerical entries in a column vector:
formula_0.
A subset of the "L" attributes will be devoted to contextual attributes, a subset to physical attributes, and so on. One underlying assumption of multiple trace theory is that, when we construct multiple memories, we organize the attributes in the same order. Thus, we can similarly define vectors m2, m3, ..., mn to account for "n" total encoded memories. Multiple trace theory states that these memories come together in our brain to form a memory matrix from the simple concatenation of the individual memories:
formula_1.
For "L" total attributes and "n" total memories, M will have "L" rows and "n" columns. Note that, although the "n" traces are combined into a large memory matrix, each trace is individually accessible as a column in this matrix.
In this formulation, the "n" different memories are made to be more or less independent of each other. However, items presented together in some setting will become tangentially associated through the similarity of their context vectors. If multiple items are associated with each other and intentionally encoded in that manner, say an item a and an item b, then the memory for these two can be constructed, with each having "k" attributes, as follows:
formula_2.
Context as a stochastic vector.
When items are learned one after another, it is tempting to say that they are learned in the same temporal context. However, in reality, there are subtle variations in context. Hence, contextual attributes are often considered to be changing over time as modeled by a stochastic process. Considering a vector of only "r" total context attributes ti that represents the context of memory mi, the context of the next-encoded memory is given by ti+1:
formula_3
so,
formula_4
Here, ε(j) is a random number sampled from a Gaussian distribution.
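A minimal simulation sketch of this drift (the dimension and noise scale are assumed values, not taken from the theory):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 5        # number of context attributes (arbitrary choice)
sigma = 0.1  # assumed standard deviation of the Gaussian noise

# t_{i+1}(j) = t_i(j) + eps(j): the context vector drifts across encodings
t = np.zeros(r)
contexts = []
for _ in range(10):  # contexts of ten successively encoded memories
    t = t + rng.normal(0.0, sigma, size=r)
    contexts.append(t.copy())
print(np.round(contexts[-1], 3))
```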
Summed similarity.
As explained in the subsequent section, the hallmark of multiple trace theory is an ability to compare some probe item to the pre-existing matrix of encoded memories. This simulates the memory search process, whereby we can determine whether we have ever seen the probe before as in recognition tasks or whether the probe gives rise to another previously encoded memory as in cued recall.
First, the probe p is encoded as an attribute vector. Continuing with the preceding example of the memory matrix M, the probe will have "L" entries:
formula_5.
This p is then compared one by one to all pre-existing memories (traces) in M by determining the Euclidean distance between p and each mi:
formula_6.
Due to the stochastic nature of context, it is almost never the case in multiple trace theory that a probe item exactly matches an encoded memory. Still, high similarity between p and mi is indicated by a small Euclidean distance. Hence, another operation must be performed on the distance that leads to very low similarity for great distance and very high similarity for small distance. A linear operation does not eliminate low-similarity items harshly enough. Intuitively, an exponential decay model seems most suitable:
formula_7
where τ is a decay parameter that can be experimentally assigned. We can go on to then define similarity to the entire memory matrix by a summed similarity SS(p,M) between the probe p and the memory matrix M:
formula_8.
If the probe item is very similar to even one of the encoded memories, SS receives a large boost. For example, given m1 as a probe item, we will get a near 0 distance (not exactly due to context) for i=1, which will add nearly the maximal boost possible to SS. To differentiate from background similarity (there will always be some low similarity to context or a few attributes for example), SS is often compared to some arbitrary criterion. If it is higher than the criterion, then the probe is considered among those encoded. The criterion can be varied based on the nature of the task and the desire to prevent false alarms. Thus, multiple trace theory predicts that, given some cue, the brain can compare that cue to a criterion to answer questions like "has this cue been experienced before?" (recognition) or "what memory does this cue elicit?" (cued recall), which are applications of summed similarity described below.
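A compact computational sketch of summed similarity and a recognition decision (the decay parameter, criterion, and synthetic traces are all assumptions made for illustration):

```python
import numpy as np

def summed_similarity(p, M, tau):
    """SS(p, M): sum over traces (columns of M) of exp(-tau * ||p - m_i||)."""
    distances = np.linalg.norm(M - p[:, None], axis=0)
    return np.exp(-tau * distances).sum()

rng = np.random.default_rng(1)
M = rng.normal(size=(8, 100))  # memory matrix: 100 traces, 8 attributes each

# A probe that is a noisy copy of trace 42 (simulating contextual drift)
probe = M[:, 42] + rng.normal(0.0, 0.01, size=8)

criterion = 0.5  # assumed decision criterion
print(summed_similarity(probe, M, tau=2.0) > criterion)  # True: recognized
```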
Applications to memory phenomena.
Recognition.
Multiple trace theory fits well into the conceptual framework for recognition. Recognition requires an individual to determine whether or not they have seen an item before. For example, facial recognition is determining whether one has seen a face before. When asked this for a successfully encoded item (something that has indeed been seen before), recognition should occur with high probability. In the mathematical framework of this theory, we can model recognition of an individual probe item p by summed similarity with a criterion. We translate the test item into an attribute vector, as done for the encoded memories, and compare it to every trace ever encountered. If summed similarity passes the criterion, we say we have seen the item before. Summed similarity is expected to be very low if the item has never been seen but relatively higher if it has, due to the similarity of the probe's attributes to some memory of the memory matrix.
formula_9
This can be applied both to individual item recognition and associative recognition for two or more items together.
Cued recall.
The theory can also account for cued recall. Here, some cue is given that is meant to elicit an item out of memory. For example, a factual question like "Who was the first President of the United States?" is a cue to elicit the answer of "George Washington". In the "ab" framework described above, we can take all the attributes present in a cue and consider these the a item in an encoded association as we try to recall the b portion of the mab memory. In this example, attributes like "first", "President", and "United States" will be combined to form the a vector, which will have already been formulated into the mab memory whose b values encode "George Washington". Given a, there are two popular models for how we can successfully recall b:
1) We can go through and determine similarity (not summed similarity, see above for the distinction) to every item in memory for the a attributes, then pick whichever memory has the highest similarity for the a. Whatever b-type attributes we are linked to gives what we recall. The mab memory gives the best chance of recall since its a elements will have high similarity to the cue a. Still, since recall does not always occur, we can say that the similarity must pass a criterion for recall to occur at all. This is similar to how the IBM machine Watson operates. Here, the similarity compares only the a-type attributes of a to mab.
formula_10
2) We can use a probabilistic choice rule to determine probability of recalling an item as proportional to its similarity. This is akin to throwing a dart at a dartboard with bigger areas represented by larger similarities to the cue item. Mathematically speaking, given the cue a, the probability of recalling the desired memory mab is:
formula_11
In computing both similarity and summed similarity, we only consider relations among a-type attributes. We add the error term because without it, the probability of recalling any memory in M will be 1, but there are certainly times when recall does not occur at all.
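A brief sketch of this probabilistic choice rule (the error term, decay parameter, and synthetic data are assumptions made for illustration):

```python
import numpy as np

def recall_probability(cue, M_a, target, tau=2.0, error=0.01):
    """P(recall m_target) = similarity(cue, m_target) / (SS(cue, M_a) + error),
    comparing only the a-type attributes (the rows of M_a)."""
    distances = np.linalg.norm(M_a - cue[:, None], axis=0)
    similarities = np.exp(-tau * distances)
    return similarities[target] / (similarities.sum() + error)

rng = np.random.default_rng(2)
M_a = rng.normal(size=(4, 50))  # a-type attributes of 50 associations
cue = M_a[:, 7] + rng.normal(0.0, 0.01, size=4)  # near-match to item 7
print(recall_probability(cue, M_a, target=7))  # the match dominates
```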
Other common results explained.
Phenomena in memory associated with repetition, word frequency, recency, forgetting, and contiguity, among others, can be easily explained in the realm of multiple trace theory. Memory is known to improve with repeated exposure to items. For example, hearing a word several times in a list will improve recognition and recall of that word later on. This is because repeated exposure simply adds the memory into the ever-growing memory matrix, so summed similarity for this memory will be larger and thus more likely to pass the criterion.
In recognition, very common words are harder to recognize as part of a memorized list, when tested, than rare words. This is known as the word frequency effect and can be explained by multiple trace theory as well. For common words, summed similarity will be relatively high, whether the word was seen in the list or not, because it is likely that the word has been encountered and encoded in the memory matrix several times throughout life. Thus, the brain typically selects a higher criterion in determining whether common words are part of a list, making them harder to successfully select. However, rarer words are typically encountered less throughout life and so their presence in the memory matrix is limited. Hence, low overall summed similarity will lead to a more lax criterion. If the word was present in the list, high context similarity at time of test and other attribute similarity will lead to enough boost in summed similarity to excel past criterion and thus recognize the rare word successfully.
Recency in the serial position effect can be explained because more recent memories encoded will share a temporal context most similar to the present context, as the stochastic nature of time will not have had as pronounced an effect. Thus, context similarity will be high for recently encoded items, so overall similarity will be relatively higher for these items as well. The stochastic contextual drift is also thought to account for forgetting because the context in which a memory was encoded is lost over time, so summed similarity for an item only presented in that context will decrease over time.
Finally, empirical data have shown a contiguity effect, whereby items that are presented together temporally, even though they may not be encoded as a single memory as in the "ab" paradigm described above, are more likely to be remembered together. This can be considered a result of low contextual drift between items remembered together, so the contextual similarity between two items presented together is high.
Shortcomings.
One of the biggest shortcomings of multiple trace theory is the requirement of some item with which to compare the memory matrix when determining successful encoding. As mentioned above, this works quite well in recognition and cued recall, but there is a glaring inability to incorporate free recall into the model. Free recall requires an individual to freely remember some list of items. Although the very act of asking to recall may act as a cue that can then elicit cued recall techniques, it is unlikely that the cue is unique enough to reach a summed similarity criterion or to otherwise achieve a high probability of recall.
Another major issue lies in translating the model to biological relevance. It is hard to imagine that the brain has unlimited capacity to keep track of such a large matrix of memories and continue expanding it with every item with which it has ever been presented. Furthermore, searching through this matrix is an exhaustive process that would not be relevant on biological time scales.
|
[
{
"math_id": 0,
"text": "\\mathbf{m_1} = \\begin{bmatrix} m_{1}(1) \\\\ m_{1}(2) \\\\ m_{1}(3) \\\\ \\vdots \\\\m_{1}(L) \\end{bmatrix} "
},
{
"math_id": 1,
"text": "\\mathbf{M} = \\begin{bmatrix} \\mathbf{m_1} & \\mathbf{m_2} & \\mathbf{m_3} & \\cdots & \\mathbf{m_n} \\end{bmatrix} \n= \\begin{bmatrix} m_{1}(1) & m_{2}(1) & m_{3}(1) & \\cdots & m_{n}(1) \\\\ m_{1}(2) & m_{2}(2) & m_{3}(2) & \\cdots & m_{n}(2) \\\\ \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\\\m_{1}(L) & m_{2}(L) & m_{3}(L) & \\cdots & m_{n}(L)\\end{bmatrix} "
},
{
"math_id": 2,
"text": "\\mathbf{m_{ab}} = \\begin{bmatrix} a(1) \\\\a(2) \\\\ \\vdots \\\\a(k) \\\\b(1) \\\\b(2) \\\\ \\vdots \\\\b(k) \\end{bmatrix} = \\begin{bmatrix} \\mathbf{a} \\\\ \\mathbf{b} \\end{bmatrix} "
},
{
"math_id": 3,
"text": "\\mathbf{t_{i+1}(j)} = \\mathbf{t_{i}(j) + \\epsilon(j)}"
},
{
"math_id": 4,
"text": "\\mathbf{t_{i+1}} = \\begin{bmatrix} t_i(1)+\\epsilon(1) \\\\t_i(2)+\\epsilon(2) \\\\ \\vdots \\\\ t_i(r)+\\epsilon(r) \\end{bmatrix}"
},
{
"math_id": 5,
"text": "\\mathbf{p} = \\begin{bmatrix} p(1) \\\\ p(2) \\\\ \\vdots \\\\p(L) \\end{bmatrix}"
},
{
"math_id": 6,
"text": "\\left \\Vert \\mathbf{p-m_i} \\right \\| = \\sqrt{\\sum_{j=1}^L (p(j)-m_i(j))^2}"
},
{
"math_id": 7,
"text": "similarity(\\mathbf{p,m_i}) = e^{-\\tau \\left \\Vert \\mathbf{p-m_i} \\right \\|} "
},
{
"math_id": 8,
"text": "\\mathbf{SS(p,M)} = \\sum_{i=1}^n e^{-\\tau \\left \\Vert \\mathbf{p-m_i} \\right \\|} = \\sum_{i=1}^n e^{-\\tau \\sqrt{\\sum_{j=1}^L (p(j)-m_i(j))^2}}"
},
{
"math_id": 9,
"text": "P(recognizing~p)~=~P(\\mathbf{SS(p,M)}>criterion)"
},
{
"math_id": 10,
"text": "P(recalling~m_{ab})~=~P(similarity(a,m_{ab})>criterion)"
},
{
"math_id": 11,
"text": "P(recalling~m_{ab})~=~\\frac{similarity(a,m_{ab})}{SS(a,M)+error}"
}
] |
https://en.wikipedia.org/wiki?curid=14424249
|
1442470
|
Transpose of a linear map
|
Induced map between the dual spaces of the two vector spaces
In linear algebra, the transpose of a linear map between two vector spaces, defined over the same field, is an induced map between the dual spaces of the two vector spaces.
The transpose or algebraic adjoint of a linear map is often used to study the original linear map. This concept is generalised by adjoint functors.
Definition.
Let formula_0 denote the algebraic dual space of a vector space formula_1
Let formula_2 and formula_3 be vector spaces over the same field formula_4
If formula_5 is a linear map, then its algebraic adjoint or dual, is the map formula_6 defined by formula_7
The resulting functional formula_8 is called the pullback of formula_9 by formula_10
The continuous dual space of a topological vector space (TVS) formula_2 is denoted by formula_11
If formula_2 and formula_3 are TVSs then a linear map formula_5 is weakly continuous if and only if formula_12 in which case we let formula_13 denote the restriction of formula_14 to formula_15
The map formula_16 is called the transpose or algebraic adjoint of formula_10
The following identity characterizes the transpose of formula_17:
formula_18
where formula_19 is the natural pairing defined by formula_20
Properties.
The assignment formula_21 produces an injective linear map between the space of linear operators from formula_2 to formula_3 and the space of linear operators from formula_22 to formula_23
If formula_24 then the space of linear maps is an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that formula_25
In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over formula_26 to itself.
One can identify formula_27 with formula_17 using the natural injection into the double dual.
If formula_2 and formula_3 are normed spaces then
formula_31
and if the linear operator formula_5 is bounded then the operator norm of formula_16 is equal to the norm of formula_17; that is
formula_32
and moreover,
formula_33
Polars.
Suppose now that formula_5 is a weakly continuous linear operator between topological vector spaces formula_2 and formula_3 with continuous dual spaces formula_34 and formula_35 respectively.
Let formula_36 denote the canonical dual system, defined by formula_37; the elements formula_38 and formula_39 are said to be orthogonal if formula_40
For any subsets formula_41 and formula_42 let
formula_43
denote the (absolute) polar of formula_44 in formula_34 (resp. of formula_45 in formula_2).
Then
formula_49
and, for any subset formula_46,
formula_50
In particular, the kernel of formula_16 is the polar of the range of formula_17:
formula_51
Annihilators.
Suppose formula_2 and formula_3 are topological vector spaces and formula_5 is a weakly continuous linear operator (so formula_52). Given subsets formula_53 and formula_54 define their annihilators (with respect to the canonical dual system) by
formula_55
and
formula_56
formula_58
Duals of quotient spaces.
Let formula_60 be a closed vector subspace of a Hausdorff locally convex space formula_2 and denote the canonical quotient map by
formula_61
Assume formula_62 is endowed with the quotient topology induced by the quotient map formula_63
Then the transpose of the quotient map is valued in formula_64 and
formula_65
is a TVS-isomorphism onto formula_66
If formula_2 is a Banach space then formula_67 is also an isometry.
Using this transpose, every continuous linear functional on the quotient space formula_62 is canonically identified with a continuous linear functional in the annihilator formula_64 of formula_68
Duals of vector subspaces.
Let formula_60 be a closed vector subspace of a Hausdorff locally convex space formula_1
If formula_69 and if formula_70 is a continuous linear extension of formula_71 to formula_2 then the assignment formula_72 induces a vector space isomorphism
formula_73
which is an isometry if formula_2 is a Banach space.
Denote the inclusion map by
formula_74
The transpose of the inclusion map is
formula_75
whose kernel is the annihilator formula_76 and which is surjective by the Hahn–Banach theorem. This map induces an isomorphism of vector spaces
formula_77
Representation as a matrix.
If the linear map formula_17 is represented by the matrix formula_44 with respect to two bases of formula_2 and formula_78 then formula_16 is represented by the transpose matrix formula_79 with respect to the dual bases of formula_57 and formula_80 hence the name.
Alternatively, as formula_17 is represented by formula_44 acting to the right on column vectors, formula_16 is represented by the same matrix acting to the left on row vectors.
These points of view are related by the canonical inner product on formula_81 which identifies the space of column vectors with the dual space of row vectors.
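A small numeric sketch of these two points of view (the matrix and vectors are random illustrative data), checking the characterizing identity ⟨tu(y), x⟩ = ⟨y, u(x)⟩:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))  # matrix of u : X -> Y in chosen bases
x = rng.normal(size=2)       # a vector in X (column-vector view)
y = rng.normal(size=3)       # a functional on Y (row-vector view)

# Characterizing identity <t u(y), x> = <y, u(x)>:
lhs = (A.T @ y) @ x   # the transpose matrix acting on the functional y
rhs = y @ (A @ x)     # y evaluated on u(x)
assert np.isclose(lhs, rhs)

# Equivalently, t u is the same matrix A acting to the left on row vectors:
assert np.isclose((y @ A) @ x, rhs)
```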
Relation to the Hermitian adjoint.
The identity that characterizes the transpose, that is, formula_82, is formally similar to the definition of the Hermitian adjoint; however, the transpose and the Hermitian adjoint are not the same map.
The transpose is a map formula_83 and is defined for linear maps between any vector spaces formula_2 and formula_78 without requiring any additional structure.
The Hermitian adjoint maps formula_84 and is only defined for linear maps between Hilbert spaces, as it is defined in terms of the inner product on the Hilbert space.
The Hermitian adjoint therefore requires more mathematical structure than the transpose.
However, the transpose is often used in contexts where the vector spaces are both equipped with a nondegenerate bilinear form such as the Euclidean dot product or another real inner product.
In this case, the nondegenerate bilinear form is often used implicitly to map between the vector spaces and their duals, to express the transposed map as a map formula_85
For a complex Hilbert space, the inner product is sesquilinear and not bilinear, and these conversions change the transpose into the adjoint map.
More precisely: if formula_2 and formula_3 are Hilbert spaces and formula_5 is a linear map then the transpose of formula_17 and the Hermitian adjoint of formula_86 which we will denote respectively by formula_16 and formula_87 are related.
Denote by formula_88 and formula_89 the canonical antilinear isometries of the Hilbert spaces formula_2 and formula_3 onto their duals.
Then formula_90 is the following composition of maps:
formula_91
Applications to functional analysis.
Suppose that formula_2 and formula_3 are topological vector spaces and that formula_5 is a linear map, then many of formula_17's properties are reflected in formula_92
|
[
{
"math_id": 0,
"text": "X^{\\#}"
},
{
"math_id": 1,
"text": "X."
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "\\mathcal{K}."
},
{
"math_id": 5,
"text": "u : X \\to Y"
},
{
"math_id": 6,
"text": "{}^{\\#} u : Y^{\\#} \\to X^{\\#}"
},
{
"math_id": 7,
"text": "f \\mapsto f \\circ u."
},
{
"math_id": 8,
"text": "{}^{\\#} u(f) := f \\circ u"
},
{
"math_id": 9,
"text": "f"
},
{
"math_id": 10,
"text": "u."
},
{
"math_id": 11,
"text": "X^{\\prime}."
},
{
"math_id": 12,
"text": "{}^{\\#} u\\left(Y^{\\prime}\\right) \\subseteq X^{\\prime},"
},
{
"math_id": 13,
"text": "{}^t u : Y^{\\prime} \\to X^{\\prime}"
},
{
"math_id": 14,
"text": "{}^{\\#} u"
},
{
"math_id": 15,
"text": "Y^{\\prime}."
},
{
"math_id": 16,
"text": "{}^t u"
},
{
"math_id": 17,
"text": "u"
},
{
"math_id": 18,
"text": "\\left\\langle {}^t u(f), x \\right\\rangle = \\left\\langle f, u(x) \\right\\rangle \\quad \\text{ for all } f \\in Y ^{\\prime} \\text{ and } x \\in X,"
},
{
"math_id": 19,
"text": "\\left\\langle \\cdot, \\cdot \\right\\rangle"
},
{
"math_id": 20,
"text": "\\left\\langle z, h \\right\\rangle := z(h)."
},
{
"math_id": 21,
"text": "u \\mapsto {}^t u"
},
{
"math_id": 22,
"text": "Y^{\\#}"
},
{
"math_id": 23,
"text": "X^{\\#}."
},
{
"math_id": 24,
"text": "X = Y"
},
{
"math_id": 25,
"text": "{}^t (u v) = {}^t v {}^t u."
},
{
"math_id": 26,
"text": "\\mathcal{K}"
},
{
"math_id": 27,
"text": "{}^t \\left({}^t u\\right)"
},
{
"math_id": 28,
"text": "v : Y \\to Z"
},
{
"math_id": 29,
"text": "{}^t (v \\circ u) = {}^t u \\circ {}^t v"
},
{
"math_id": 30,
"text": "{}^t u : Y^{\\prime} \\to X^{\\prime}."
},
{
"math_id": 31,
"text": "\\|x\\| = \\sup_{\\|x^{\\prime}\\| \\leq 1} \\left|x^{\\prime}(x) \\right| \\quad \\text{ for each } x \\in X"
},
{
"math_id": 32,
"text": "\\|u\\| = \\left\\|{}^t u\\right\\|,"
},
{
"math_id": 33,
"text": "\\|u\\| = \\sup \\left\\{\\left| y^{\\prime}(u x) \\right| : \\|x\\| \\leq 1, \\left\\|y^*\\right\\| \\leq 1 \\text{ where } x \\in X, y^{\\prime} \\in Y^{\\prime} \\right\\}."
},
{
"math_id": 34,
"text": "X^{\\prime}"
},
{
"math_id": 35,
"text": "Y^{\\prime},"
},
{
"math_id": 36,
"text": "\\langle \\cdot, \\cdot \\rangle : X \\times X^{\\prime} \\to \\Complex"
},
{
"math_id": 37,
"text": "\\left\\langle x, x^{\\prime} \\right\\rangle = x^{\\prime} x"
},
{
"math_id": 38,
"text": "x"
},
{
"math_id": 39,
"text": "x^{\\prime}"
},
{
"math_id": 40,
"text": "\\left\\langle x, x^{\\prime} \\right\\rangle = x^{\\prime} x = 0."
},
{
"math_id": 41,
"text": "A \\subseteq X"
},
{
"math_id": 42,
"text": "S^{\\prime} \\subseteq X^{\\prime},"
},
{
"math_id": 43,
"text": "A^{\\circ} = \\left\\{ x^{\\prime} \\in X^{\\prime} : \\sup_{a \\in A} \\left|x^{\\prime}(a)\\right| \\leq 1 \\right\\} \\qquad \\text{ and } \\qquad S^{\\circ} = \\left\\{ x \\in X : \\sup_{s^{\\prime} \\in S^{\\prime}} \\left|s^{\\prime}(x)\\right| \\leq 1 \\right\\}"
},
{
"math_id": 44,
"text": "A"
},
{
"math_id": 45,
"text": "S^{\\prime}"
},
{
"math_id": 46,
"text": "B \\subseteq Y"
},
{
"math_id": 47,
"text": "{}^t u\\left(B^{\\circ}\\right) \\subseteq A^{\\circ}"
},
{
"math_id": 48,
"text": "u(A) \\subseteq B."
},
{
"math_id": 49,
"text": "[u(A)]^{\\circ} = \\left({}^t u\\right)^{-1}\\left(A^{\\circ}\\right)"
},
{
"math_id": 50,
"text": "u(A) \\subseteq B \\quad \\text{ implies } \\quad {}^t u\\left(B^{\\circ}\\right) \\subseteq A^{\\circ}."
},
{
"math_id": 51,
"text": "\\operatorname{ker} {}^t u = \\left(\\operatorname{Im} u\\right)^{\\circ}."
},
{
"math_id": 52,
"text": "\\left({}^t u\\right)\\left(Y^{\\prime}\\right) \\subseteq X^{\\prime}"
},
{
"math_id": 53,
"text": "M \\subseteq X"
},
{
"math_id": 54,
"text": "N \\subseteq X^{\\prime},"
},
{
"math_id": 55,
"text": "\\begin{alignat}{4}\nM^{\\bot} :&= \\left\\{ x^{\\prime} \\in X^{\\prime} : \\left\\langle m, x^{\\prime} \\right\\rangle = 0 \\text{ for all } m \\in M \\right\\} \\\\\n&= \\left\\{ x^{\\prime} \\in X^{\\prime} : x^{\\prime}(M) = \\{0\\} \\right\\} \\qquad \\text{ where } x^{\\prime}(M) := \\left\\{ x^{\\prime}(m) : m \\in M \\right\\}\n\\end{alignat}"
},
{
"math_id": 56,
"text": "\\begin{alignat}{4}\n{}^{\\bot} N :&= \\left\\{ x \\in X : \\left\\langle x, n^{\\prime} \\right\\rangle = 0 \\text{ for all } n^{\\prime} \\in N \\right\\} \\\\\n&= \\left\\{ x \\in X : N(x) = \\{ 0 \\} \\right\\} \\qquad \\text{ where } N(x) := \\left\\{ n^{\\prime}(x) : n^{\\prime} \\in N \\right\\} \\\\\n\\end{alignat}"
},
{
"math_id": 57,
"text": "Y^{\\prime}"
},
{
"math_id": 58,
"text": "\\ker {}^t u = (\\operatorname{Im} u)^{\\bot}"
},
{
"math_id": 59,
"text": "\\operatorname{ker} {}^t u"
},
{
"math_id": 60,
"text": "M"
},
{
"math_id": 61,
"text": "\\pi : X \\to X / M \\quad \\text{ where } \\quad \\pi(x) := x + M."
},
{
"math_id": 62,
"text": "X / M"
},
{
"math_id": 63,
"text": "\\pi : X \\to X / M."
},
{
"math_id": 64,
"text": "M^{\\bot}"
},
{
"math_id": 65,
"text": "{}^t \\pi : (X / M)^{\\prime} \\to M^{\\bot} \\subseteq X^{\\prime}"
},
{
"math_id": 66,
"text": "M^{\\bot}."
},
{
"math_id": 67,
"text": "{}^t \\pi : (X / M)^{\\prime} \\to M^{\\bot}"
},
{
"math_id": 68,
"text": "M."
},
{
"math_id": 69,
"text": "m^{\\prime} \\in M^{\\prime}"
},
{
"math_id": 70,
"text": "x^{\\prime} \\in X^{\\prime}"
},
{
"math_id": 71,
"text": "m^{\\prime}"
},
{
"math_id": 72,
"text": "m^{\\prime} \\mapsto x^{\\prime} + M^{\\bot}"
},
{
"math_id": 73,
"text": "M^{\\prime} \\to X^{\\prime} / \\left(M^{\\bot}\\right),"
},
{
"math_id": 74,
"text": "\\operatorname{In} : M \\to X \\quad \\text{ where } \\quad \\operatorname{In}(m) := m \\quad \\text{ for all } m \\in M."
},
{
"math_id": 75,
"text": "{}^t \\operatorname{In} : X^{\\prime} \\to M^{\\prime}"
},
{
"math_id": 76,
"text": "M^{\\bot} = \\left\\{ x^{\\prime} \\in X^{\\prime} : \\left\\langle m, x^{\\prime} \\right\\rangle = 0 \\text{ for all } m \\in M \\right\\}"
},
{
"math_id": 77,
"text": "X^{\\prime} / \\left(M^{\\bot}\\right) \\to M^{\\prime}."
},
{
"math_id": 78,
"text": "Y,"
},
{
"math_id": 79,
"text": "A^T"
},
{
"math_id": 80,
"text": "X^{\\prime},"
},
{
"math_id": 81,
"text": "\\R^n,"
},
{
"math_id": 82,
"text": "\\left[u^{*}(f), x\\right] = [f, u(x)],"
},
{
"math_id": 83,
"text": "Y^{\\prime} \\to X^{\\prime}"
},
{
"math_id": 84,
"text": "Y \\to X"
},
{
"math_id": 85,
"text": "Y \\to X."
},
{
"math_id": 86,
"text": "u,"
},
{
"math_id": 87,
"text": "u^{*},"
},
{
"math_id": 88,
"text": "I : X \\to X^{*}"
},
{
"math_id": 89,
"text": "J : Y \\to Y^{*}"
},
{
"math_id": 90,
"text": "u^{*}"
},
{
"math_id": 91,
"text": "Y \\overset{J}{\\longrightarrow} Y^* \\overset{{}^{\\text{t}}u}{\\longrightarrow} X^* \\overset{I^{-1}}{\\longrightarrow} X"
},
{
"math_id": 92,
"text": "{}^t u."
},
{
"math_id": 93,
"text": "u(X)"
}
] |
https://en.wikipedia.org/wiki?curid=1442470
|
1442471
|
Dual representation
|
Group representation
In mathematics, if "G" is a group and ρ is a linear representation of it on the vector space "V", then the dual representation ρ* is defined over the dual vector space "V"* as follows:
ρ*("g") is the transpose of ρ("g"−1), that is, ρ*("g") = ρ("g"−1)T for all "g" ∈ "G".
The dual representation is also known as the contragredient representation.
If g is a Lie algebra and π is a representation of it on the vector space "V", then the dual representation π* is defined over the dual vector space "V"* as follows:
π*("X") = −π("X")"T" for all "X" ∈ g.
The motivation for this definition is that the Lie algebra representation associated to the dual of a Lie group representation is computed by the above formula. However, the definition of the dual of a Lie algebra representation makes sense even if it does not come from a Lie group representation.
In both cases, the dual representation is a representation in the usual sense.
Properties.
Irreducibility and second dual.
If a (finite-dimensional) representation is irreducible, then the dual representation is also irreducible—but not necessarily isomorphic to the original representation. On the other hand, the dual of the dual of any representation is isomorphic to the original representation.
Unitary representations.
Consider a "unitary" representation formula_0 of a group formula_1, and let us work in an orthonormal basis. Thus, formula_0 maps formula_1 into the group of unitary matrices. Then the abstract transpose in the definition of the dual representation may be identified with the ordinary matrix transpose. Since the adjoint of a matrix is the complex conjugate of the transpose, the transpose is the conjugate of the adjoint. Thus, formula_2 is the complex conjugate of the adjoint of the inverse of formula_3. But since formula_3 is assumed to be unitary, the adjoint of the inverse of formula_3 is just formula_3.
The upshot of this discussion is that when working with unitary representations in an orthonormal basis, formula_4 is just the complex conjugate of formula_3.
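A quick numerical check of this statement (a Python/NumPy sketch; the unitary matrix is generated for illustration via a QR decomposition):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, _ = np.linalg.qr(M)               # a random unitary matrix

dual = np.linalg.inv(U).T            # rho*(g) = rho(g^{-1})^T
assert np.allclose(dual, U.conj())   # equals the complex conjugate of U
```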
The SU(2) and SU(3) cases.
In the representation theory of SU(2), the dual of each irreducible representation does turn out to be isomorphic to the representation. But for the representations of SU(3), the dual of the irreducible representation with label formula_5 is the irreducible representation with label formula_6. In particular, the standard three-dimensional representation of SU(3) (with highest weight formula_7) is not isomorphic to its dual. In the theory of quarks in the physics literature, the standard representation and its dual are called "formula_8" and "formula_9."
General semisimple Lie algebras.
More generally, in the representation theory of semisimple Lie algebras (or the closely related representation theory of compact Lie groups), the weights of the dual representation are the "negatives" of the weights of the original representation. Now, for a given Lie algebra, if it happens that the operator formula_10 is an element of the Weyl group, then the weights of every representation are automatically invariant under the map formula_11. For such Lie algebras, "every" irreducible representation will be isomorphic to its dual. (This is the situation for SU(2), where the Weyl group is formula_12.) Lie algebras with this property include the odd orthogonal Lie algebras formula_13 (type formula_14) and the symplectic Lie algebras formula_15 (type formula_16).
If, for a given Lie algebra, formula_10 is "not" in the Weyl group, then the dual of an irreducible representation will generically not be isomorphic to the original representation. To understand how this works, we note that there is always a unique Weyl group element formula_17 mapping the negative of the fundamental Weyl chamber to the fundamental Weyl chamber. Then if we have an irreducible representation with highest weight formula_18, the "lowest" weight of the dual representation will be formula_19. It then follows that the "highest" weight of the dual representation will be formula_20. Since we are assuming formula_10 is not in the Weyl group, formula_17 cannot be formula_10, which means that the map formula_21 is not the identity. Of course, it may still happen that for certain special choices of formula_18, we might have formula_22. The adjoint representation, for example, is always isomorphic to its dual.
In the case of SU(3) (or its complexified Lie algebra, formula_23), we may choose a base consisting of two roots formula_24 at an angle of 120 degrees, so that the third positive root is formula_25. In this case, the element formula_17 is the reflection about the line perpendicular to formula_26. Then the map formula_21 is the reflection about the line "through" formula_26. The self-dual representations are then the ones that lie along the line through formula_26. These are the representations with labels of the form formula_27, which are the representations whose weight diagrams are "regular" hexagons.
Motivation.
In representation theory, both vectors in "V" and linear functionals in "V"* are considered as "column vectors" so that the representation can act (by matrix multiplication) from the "left". Given a basis for "V" and the dual basis for "V"*, the action of a linear functional φ on "v", φ(v) can be expressed by matrix multiplication,
formula_28,
where the superscript "T" is matrix transpose. Consistency requires
formula_29
With the definition given,
formula_30
For the Lie algebra representation one chooses consistency with a possible group representation. Generally, if Π is a representation of a Lie group, then π given by
formula_31
is a representation of its Lie algebra. If Π* is dual to Π, then its corresponding Lie algebra representation π* is given by
formula_32
Example.
Consider the group formula_33 of complex numbers of absolute value 1. The irreducible representations are all one dimensional, as a consequence of Schur's lemma. The irreducible representations are parameterized by integers formula_34 and given explicitly as
formula_35
The dual representation to formula_36 is then the inverse of the transpose of this one-by-one matrix, that is,
formula_37
That is to say, the dual of the representation formula_36 is formula_38.
Generalization.
A general ring module does not admit a dual representation. Modules of Hopf algebras do, however.
|
[
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "\\rho^\\ast(g)"
},
{
"math_id": 3,
"text": "\\rho(g)"
},
{
"math_id": 4,
"text": "\\rho^*(g)"
},
{
"math_id": 5,
"text": "(m_1,m_2)"
},
{
"math_id": 6,
"text": "(m_2,m_1)"
},
{
"math_id": 7,
"text": "(1,0)"
},
{
"math_id": 8,
"text": "3"
},
{
"math_id": 9,
"text": "\\bar 3"
},
{
"math_id": 10,
"text": "-I"
},
{
"math_id": 11,
"text": "\\mu\\mapsto -\\mu"
},
{
"math_id": 12,
"text": "\\{I,-I\\}"
},
{
"math_id": 13,
"text": "\\operatorname{so}(2n+1;\\mathbb C)"
},
{
"math_id": 14,
"text": "B_n"
},
{
"math_id": 15,
"text": "\\operatorname{sp}(n;\\mathbb C)"
},
{
"math_id": 16,
"text": "C_n"
},
{
"math_id": 17,
"text": "w_0"
},
{
"math_id": 18,
"text": "\\mu"
},
{
"math_id": 19,
"text": "-\\mu"
},
{
"math_id": 20,
"text": "w_0\\cdot(-\\mu)\\,"
},
{
"math_id": 21,
"text": "\\mu\\mapsto w_0\\cdot(-\\mu)"
},
{
"math_id": 22,
"text": "\\mu=w_0\\cdot(-\\mu)"
},
{
"math_id": 23,
"text": "\\operatorname{sl}(3;\\mathbb C)"
},
{
"math_id": 24,
"text": "\\{\\alpha_1,\\alpha_2\\}"
},
{
"math_id": 25,
"text": "\\alpha_3=\\alpha_1+\\alpha_2"
},
{
"math_id": 26,
"text": "\\alpha_3"
},
{
"math_id": 27,
"text": "(m,m)"
},
{
"math_id": 28,
"text": "\\langle\\varphi, v\\rangle \\equiv \\varphi(v) = \\varphi^Tv"
},
{
"math_id": 29,
"text": "\\langle{\\rho}^*(g)\\varphi, \\rho(g)v\\rangle = \\langle\\varphi, v\\rangle."
},
{
"math_id": 30,
"text": "\\langle{\\rho}^*(g)\\varphi, \\rho(g)v\\rangle = \\langle\\rho(g^{-1})^T\\varphi, \\rho(g)v\\rangle = (\\rho(g^{-1})^T\\varphi)^T \\rho(g)v = \\varphi^T\\rho(g^{-1})\\rho(g)v = \\varphi^Tv = \\langle\\varphi, v\\rangle."
},
{
"math_id": 31,
"text": "\\pi(X) = \\frac{d}{dt}\\Pi(e^{tX})|_{t = 0}."
},
{
"math_id": 32,
"text": "\\pi^*(X) = \\frac{d}{dt}\\Pi^*(e^{tX})|_{t = 0} = \\frac{d}{dt}\\Pi(e^{-tX})^T|_{t = 0} = -\\pi(X)^T."
},
{
"math_id": 33,
"text": "G=U(1)"
},
{
"math_id": 34,
"text": "n"
},
{
"math_id": 35,
"text": "\\rho_n(e^{i\\theta})=[e^{in\\theta}]."
},
{
"math_id": 36,
"text": "\\rho_n"
},
{
"math_id": 37,
"text": "\\rho_n^*(e^{i\\theta})=[e^{-in\\theta}]=\\rho_{-n}(e^{i\\theta})."
},
{
"math_id": 38,
"text": "\\rho_{-n}"
}
] |
https://en.wikipedia.org/wiki?curid=1442471
|
1442505
|
Complex conjugate of a vector space
|
Mathematics concept
In mathematics, the complex conjugate of a complex vector space formula_0 is a complex vector space formula_1 that has the same elements and additive group structure as formula_2 but whose scalar multiplication involves conjugation of the scalars. In other words, the scalar multiplication of formula_1 satisfies
formula_3
where formula_4 is the scalar multiplication of formula_5 and formula_6 is the scalar multiplication of formula_7
The letter formula_8 stands for a vector in formula_2 formula_9 is a complex number, and formula_10 denotes the complex conjugate of formula_11
More concretely, the complex conjugate vector space is the same underlying real vector space (same set of points, same vector addition and real scalar multiplication) with the conjugate linear complex structure formula_12 (different multiplication by formula_13).
Motivation.
If formula_14 and formula_15 are complex vector spaces, a function formula_16 is antilinear if
formula_17
With the use of the conjugate vector space formula_1, an antilinear map formula_16 can be regarded as an ordinary linear map of type formula_18 The linearity is checked by noting:
formula_19
Conversely, any linear map defined on formula_5 gives rise to an antilinear map on formula_7
This is the same underlying principle as in defining the opposite ring so that a right formula_20-module can be regarded as a left formula_21-module, or that of an opposite category so that a contravariant functor formula_22 can be regarded as an ordinary functor of type formula_23
Complex conjugation functor.
A linear map formula_24 gives rise to a corresponding linear map formula_25 that has the same action as formula_26 Note that formula_27 preserves scalar multiplication because
formula_28
Thus, complex conjugation formula_29 and formula_30 define a functor from the category of complex vector spaces to itself.
If formula_14 and formula_15 are finite-dimensional and the map formula_31 is described by the complex matrix formula_32 with respect to the bases formula_33 of formula_14 and formula_34 of formula_35 then the map formula_36 is described by the complex conjugate of formula_32 with respect to the bases formula_37 of formula_5 and formula_38 of formula_39
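In coordinates, this amounts to the fact that conjugating the coordinates of a vector intertwines formula_31 and its conjugate map; a minimal NumPy check (random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
v = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# The conjugate map has matrix conj(A) with respect to the conjugate
# bases: applying f and then conjugating coordinates is the same as
# conjugating coordinates and then applying conj(A).
assert np.allclose(np.conj(A @ v), np.conj(A) @ np.conj(v))
```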
Structure of the conjugate.
The vector spaces formula_14 and formula_5 have the same dimension over the complex numbers and are therefore isomorphic as complex vector spaces. However, there is no natural isomorphism from formula_14 to formula_40
The double conjugate formula_41 is identical to formula_7
Complex conjugate of a Hilbert space.
Given a Hilbert space formula_42 (either finite or infinite dimensional), its complex conjugate formula_43 is the same vector space as its continuous dual space formula_44
There is a one-to-one antilinear correspondence between continuous linear functionals and vectors.
In other words, any continuous linear functional on formula_42 is an inner multiplication to some fixed vector, and vice versa.
Thus, the complex conjugate to a vector formula_45 particularly in the finite-dimensional case, may be denoted as formula_46 (v-dagger, a row vector that is the conjugate transpose of a column vector formula_8).
In quantum mechanics, the conjugate to a "ket vector" formula_47 is denoted as formula_48 – a "bra vector" (see bra–ket notation).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V\\,"
},
{
"math_id": 1,
"text": "\\overline V"
},
{
"math_id": 2,
"text": "V,"
},
{
"math_id": 3,
"text": "\\alpha\\,*\\, v = {\\,\\overline{\\alpha} \\cdot \\,v\\,}"
},
{
"math_id": 4,
"text": "*"
},
{
"math_id": 5,
"text": "\\overline{V}"
},
{
"math_id": 6,
"text": "\\cdot"
},
{
"math_id": 7,
"text": "V."
},
{
"math_id": 8,
"text": "v"
},
{
"math_id": 9,
"text": "\\alpha"
},
{
"math_id": 10,
"text": "\\overline{\\alpha}"
},
{
"math_id": 11,
"text": "\\alpha."
},
{
"math_id": 12,
"text": "J"
},
{
"math_id": 13,
"text": "i"
},
{
"math_id": 14,
"text": "V"
},
{
"math_id": 15,
"text": "W"
},
{
"math_id": 16,
"text": "f : V \\to W"
},
{
"math_id": 17,
"text": "f(v + w) = f(v) + f(w) \\quad \\text{ and } \\quad f(\\alpha v) = \\overline{\\alpha} \\, f(v)"
},
{
"math_id": 18,
"text": "\\overline{V} \\to W."
},
{
"math_id": 19,
"text": "f(\\alpha * v) = f(\\overline{\\alpha} \\cdot v) = \\overline{\\overline{\\alpha}} \\cdot f(v) = \\alpha \\cdot f(v)"
},
{
"math_id": 20,
"text": "R"
},
{
"math_id": 21,
"text": "R^{op}"
},
{
"math_id": 22,
"text": "C \\to D"
},
{
"math_id": 23,
"text": "C^{op} \\to D."
},
{
"math_id": 24,
"text": "f : V \\to W\\,"
},
{
"math_id": 25,
"text": "\\overline{f} : \\overline{V} \\to \\overline{W}"
},
{
"math_id": 26,
"text": "f."
},
{
"math_id": 27,
"text": "\\overline f"
},
{
"math_id": 28,
"text": "\\overline{f}(\\alpha * v) = f(\\overline{\\alpha} \\cdot v) = \\overline{\\alpha} \\cdot f(v) = \\alpha * \\overline{f}(v)"
},
{
"math_id": 29,
"text": "V \\mapsto \\overline{V}"
},
{
"math_id": 30,
"text": "f \\mapsto\\overline f"
},
{
"math_id": 31,
"text": "f"
},
{
"math_id": 32,
"text": "A"
},
{
"math_id": 33,
"text": "\\mathcal{B}"
},
{
"math_id": 34,
"text": "\\mathcal{C}"
},
{
"math_id": 35,
"text": "W,"
},
{
"math_id": 36,
"text": "\\overline{f}"
},
{
"math_id": 37,
"text": "\\overline{\\mathcal{B}}"
},
{
"math_id": 38,
"text": "\\overline{\\mathcal{C}}"
},
{
"math_id": 39,
"text": "\\overline{W}."
},
{
"math_id": 40,
"text": "\\overline{V}."
},
{
"math_id": 41,
"text": "\\overline{\\overline{V}}"
},
{
"math_id": 42,
"text": "\\mathcal{H}"
},
{
"math_id": 43,
"text": "\\overline{\\mathcal{H}}"
},
{
"math_id": 44,
"text": "\\mathcal{H}^{\\prime}."
},
{
"math_id": 45,
"text": "v,"
},
{
"math_id": 46,
"text": "v^\\dagger"
},
{
"math_id": 47,
"text": "\\,|\\psi\\rangle"
},
{
"math_id": 48,
"text": "\\langle\\psi|\\,"
}
] |
https://en.wikipedia.org/wiki?curid=1442505
|
14425296
|
Hume-Rothery rules
|
Rules for elements dissolving in a solid metal
Hume-Rothery rules, named after William Hume-Rothery, are a set of basic rules that describe the conditions under which an element could dissolve in a metal, forming a solid solution. There are two sets of rules; one refers to substitutional solid solutions, and the other refers to interstitial solid solutions.
Substitutional solid solution rules.
For substitutional solid solutions, the Hume-Rothery rules are as follows:
1. The atomic radii of the solute and solvent atoms must not differ by more than about 15%:
formula_0
2. The crystal structures of the solute and solvent must be similar.
3. Complete solubility occurs when the solvent and solute have the same valency.
4. The solute and solvent should have similar electronegativity; if the difference is too great, the metals tend to form intermetallic compounds instead of solid solutions.
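As an illustration of the size-factor rule, the percentage difference can be computed directly (a minimal Python sketch; the atomic radii below are approximate empirical values chosen for illustration):

```python
def size_factor_ok(r_solute, r_solvent, limit=15.0):
    """Hume-Rothery size rule: relative radius difference of at most 15%."""
    return abs(r_solute - r_solvent) / r_solvent * 100 <= limit

# Approximate atomic radii in picometres (illustrative values).
r_Cu, r_Ni = 128, 124
print(size_factor_ok(r_Ni, r_Cu))  # True: Cu and Ni form a substitutional solid solution
```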
Interstitial solid solution rules.
For interstitial solid solutions, the Hume-Rothery rules are:
1. Solute atoms should be smaller than the interstitial sites in the solvent lattice; in practice, the solute atomic radius should be less than roughly 59% of the solvent atomic radius.
2. The solute and solvent should have similar electronegativity.
Solid solution rules for multicomponent systems.
Fundamentally, the Hume-Rothery rules are restricted to binary systems that form either substitutional or interstitial solid solutions. However, this approach limits the assessment of advanced alloys, which are commonly multicomponent systems. Free energy diagrams (or phase diagrams) offer in-depth knowledge of equilibrium restraints in complex systems. In essence, the Hume-Rothery rules (and Pauling's rules) are based on geometrical restraints. Advancements to the Hume-Rothery rules follow the same approach, recasting them as a critical contact criterion describable with Voronoi diagrams, which could ease the theoretical generation of phase diagrams for multicomponent systems.
For alloys containing transition metal (TM) elements, there is a difficulty in interpreting the Hume-Rothery electron concentration rule, as the e/a values for transition metals have long been controversial and no satisfactory solution has yet emerged.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\% \\text{ difference} = \\left ( \\frac{r_\\text{solute} - r_\\text{solvent}}{r_\\text{solvent}} \\right ) \\times 100\\% \\le 15\\%."
}
] |
https://en.wikipedia.org/wiki?curid=14425296
|
14425627
|
AD–AS model
|
Macroeconomic model relating aggregate demand and supply
The AD–AS or aggregate demand–aggregate supply model (also known as the aggregate supply–aggregate demand or AS–AD model) is a widely used macroeconomic model that explains short-run and long-run economic changes through the relationship of aggregate demand (AD) and aggregate supply (AS) in a diagram. It coexists in an older and static version depicting the two variables output and price level, and in a newer dynamic version showing output and inflation (i.e. the change in the price level over time, which is usually of more direct interest).
The AD–AS model was invented around 1950 and became one of the primary simplified representations of macroeconomic issues toward the end of the 1970s when inflation became an important political issue. From around 2000 the modified version of a dynamic AD–AS model, incorporating contemporary monetary policy strategies focusing on inflation targeting and using the interest rate as a primary policy instrument, was developed, gradually superseding the traditional static model version in university-level economics textbooks.
The dynamic AD–AS model can be viewed as a simplified version of the more advanced and complex dynamic stochastic general equilibrium (DSGE) models which are state-of-the-art models used by central banks and other organizations to analyze economic fluctuations. Unlike DSGE models, the dynamic AD–AS model does not provide a microeconomic foundation in the form of optimizing firms and households, but the macroeconomic relationships ultimately posited by the optimizing models are similar to those emerging from the modern-version AD–AS model. At the same time, the latter is much simpler and consequently more easily accessible for students, making it a widespread tool for teaching purposes.
History.
Origins.
According to economic historian A.K. Dutt, the AD–AS diagram first made its appearance in 1948 in a contribution by O.H. Brownlee to a textbook on applied economics. A textbook by Kenneth E. Boulding published in the same year also presented a diagram in output–price space, but unlike Brownlee's version it did not try to solve the model; Boulding rather used the diagram to warn about the dangers of aggregative thinking. Brownlee, on the contrary, went on working on the diagram and in 1950 published an article in the Journal of Political Economy, which is allegedly the first published version of a full AD–AS model in Y–P space. In 1951, Jacob Marschak published lecture notes providing the first full textbook treatment of the AD–AS model, presenting the same model as Brownlee's 1948 version, though citing neither Brownlee nor anyone else.
Growing popularity in 1970s.
In the course of time, the model spread to several textbooks, becoming a standard modelling tool in principles and intermediate economics textbooks. In particular, after inflation became important in the late 1960s and 1970s, there was a need to complement the IS–LM model, which had been a dominant model for teaching purposes until that time, but assumed a constant price level, with a model that incorporated aggregate supply and consequently could provide an explanation of changes in the price level. Thus, the "IS–LM–AS model", graphically depicted as an aggregate supply curve together with a curve combining the IS and LM curves and called an aggregate demand curve, became a standard teaching model only after the inflationary supply shocks of the 1970s. In particular, two intermediate textbooks appearing in 1978 and later to be widely used, one by Rudi Dornbusch and Stanley Fischer, and one by Robert J. Gordon, together with William Hoban Branson's textbook from its second edition in 1979, all presented an AD–AS model.
Rise of the dynamic AD–AS version.
From around the turn of the century, the traditional AD–AS diagram, as well as the traditional version of the IS–LM diagram, upon which the derivation of the AD curve rests, has been criticized for being obsolete. One reason is that the traditional IS–LM diagram and, consequently, AD curve rested upon the assumption of the central bank targeting money supply as its central policy variable. In contrast, central banks since around 1990 have largely abandoned controlling money supply, instead attempting to target inflation, using the policy interest rate as their main policy instrument, possibly via a Taylor rule-like strategy. Another reason is that for real-world policy purposes, it is generally not interesting to analyze the interaction between output and the price level per se, which is what the traditional AD–AS diagram illustrates, but rather between output and the "change" in the price level, i.e. inflation.
Because of that, the original AD–AS model has increasingly been supplanted in textbooks by a dynamic version which directly analyzes equilibria in output and inflation levels, showing these variables along the axes of the diagram. In some textbooks, the dynamic AD–AS version is referred to as the "three-equation New Keynesian model", the three equations being an IS relation, often augmented with a term that allows for expectations influencing demand, a monetary policy (interest) rule and a short-run Phillips curve. Olivier Blanchard in his widely-used intermediate-level textbook uses the term IS–LM–PC model (PC standing for Phillips curve) for the same basic construction.
A stepping stone towards DSGE models.
The dynamic AD–AS model can be viewed as a simplified version of the more advanced and complex dynamic stochastic general equilibrium (DSGE) models which are state-of-the-art models used by central banks and other organizations to analyze economic fluctuations. Unlike DSGE models, the dynamic AD–AS model does not provide a microeconomic foundation in the form of optimizing firms and households, but the macroeconomic relationships ultimately posited by the optimizing models are similar to those emerging from the modern-version AD–AS model.
Static AD–AS model.
The traditional or static AD/AS model illustrates the relationship between output and the price level of the economy under the assumptions of the model, containing both a short-run and a long-run aggregate supply curve (abbreviated SRAS and LRAS, respectively). In the short run, wages and other resource prices are sticky and slow to adjust to new price levels. This gives rise to an upward-sloping SRAS or, in the extreme case of completely fixed prices, a horizontal one. In the long run, resource prices adjust to the price level, bringing the economy back to its structural output level along a vertical LRAS. Movements of the two curves can be used to predict the effects that various exogenous events will have on two variables: real GDP and the price level.
Aggregate demand curve.
The AD (aggregate demand) curve in the static AD–AS model is downward sloping, reflecting a negative correlation between output and the price level on the demand side. It shows the combinations of the price level and level of the output at which the goods and assets markets are simultaneously in equilibrium.
The equation for the AD curve in general terms can be written as:
formula_0,
where Y is real GDP, M is the nominal money supply, P is the price level, G is real government spending, T is real taxes levied, and Z1 represents any other variables that affect aggregate demand.
Aggregate supply curve.
The aggregate supply curve in the static AD–AS model illustrates the relationship between the supply of goods and services on the one hand and the price level on the other hand. Under the premise that the price level is flexible in the long run, but sticky or even completely fixed under shorter time horizons, it is usual to distinguish between a long-run and a short-run aggregate supply curve. Whereas the long-run aggregate supply curve (LRAS) is vertical, the short-run aggregate supply curve will have a positive slope or, in the extreme case of a completely constant price level, be horizontal.
The equation for the aggregate supply curve in general terms may be written as
formula_1,
where W is the nominal wage rate (exogenous due to stickiness in the short run), Pe is the anticipated (expected) price level, and Z2 is a vector of exogenous variables that can affect the position of the labor demand curve.
A horizontal aggregate supply curve (sometimes called a "Keynesian" aggregate supply curve) implies that the firm will supply whatever amount of goods is demanded at a particular price level. One possible justification for this is that when there is unemployment, firms can readily obtain as much labour as they want at that current wage, and production can increase without any additional costs (e.g. machines are idle which can simply be turned on). Firms' average costs of production therefore are assumed not to change as their output level changes.
The long-run aggregate supply curve refers not to a time frame in which the capital stock is free to be set optimally (as would be the terminology in the micro-economic theory of the firm), but rather to a time frame in which wages are free to adjust in order to equilibrate the labor market and in which price anticipations are accurate. A vertical long-run aggregate supply curve (sometimes called a "classical" aggregate supply curve) illustrates a situation where the level of output does not depend on the price level, but is exclusively determined by the supply of production factors like the capital and the labour force, employment being at its structural ("natural") level.
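For illustration, a stylized linear specification of the static model can be solved in a few lines (a Python sketch; the functional forms and parameter values are assumptions made for the example, not part of the general model):

```python
# Stylized static AD-AS: AD: Y = a - b*P, SRAS: Y = c + d*P.
a, b = 120.0, 2.0      # aggregate demand intercept and slope
c, d = 80.0, 3.0       # short-run aggregate supply intercept and slope

P = (a - c) / (b + d)  # price level equating AD and AS
Y = a - b * P          # equilibrium output
print(f"P* = {P:.2f}, Y* = {Y:.2f}")

# A positive demand shock (a higher intercept a) raises both P* and Y*
# in the short run -- a rightward shift of the AD curve.
a = 130.0
P = (a - c) / (b + d)
print(f"after the shock: P* = {P:.2f}, Y* = {a - b * P:.2f}")
```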
Dynamic AD–AS model.
The modern or dynamic AD/AS model illustrates the connection between output and inflation, combining an IS relation (i.e., a relation describing aggregate demand as a function of various demand components, some of which are negatively related to the interest rate), a monetary policy rule determining the policy interest rate (which together form the AD curve), and a Phillips curve relationship from which the aggregate supply curve is derived.
Aggregate demand curve.
The AD curve slopes downward, illustrating a negative correlation between output and inflation. When the central bank observes increased inflation, it will raise its policy interest rate sufficiently to increase the real interest rate of the economy, dampening aggregate demand and consequently the overall activity level of the economy.
Aggregate supply curve.
The dynamic AS curve slopes upward, reflecting the mechanisms of the Phillips curve: Other things equal, higher levels of activity reflect higher increases in wages and other marginal costs of production, causing higher inflation through the firms' price-setting mechanisms as they induce firms to raise their prices at a higher rate. There will be a vertical long-run aggregate supply curve at the level of structural (natural) output.
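The joint logic of the three equations can be simulated directly. The following Python sketch (with stylized, assumed parameter values) shows inflation returning to target after a temporary demand shock:

```python
# Three-equation sketch: IS: Y = Ybar - alpha*(r - rbar) + eps,
# policy rule: r = rbar + theta*(pi - pistar),
# Phillips curve: pi = pi_prev + phi*(Y - Ybar).
Ybar, rbar, pistar = 100.0, 2.0, 2.0
alpha, theta, phi = 1.0, 1.5, 0.5

pi = pistar
for t in range(8):
    eps = 5.0 if t == 0 else 0.0   # one-period positive demand shock
    # Substituting the rule into the IS relation and then into the
    # Phillips curve gives pi_t in closed form:
    pi = (pi + phi * alpha * theta * pistar + phi * eps) / (1 + phi * alpha * theta)
    Y = Ybar - alpha * theta * (pi - pistar) + eps
    print(f"t={t}: inflation={pi:.2f}, output={Y:.2f}")
# Inflation jumps above target on impact; the central bank's rate
# response then keeps output below potential until inflation converges back.
```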
Shifts of aggregate demand and aggregate supply.
The following summarizes the exogenous events that could shift the aggregate supply or aggregate demand curve to the right. Exogenous events happening in the opposite direction would shift the relevant curve in the opposite direction.
Shifts of aggregate demand.
The dynamic aggregate demand curve shifts when either fiscal policy or monetary policy is changed or any other kind of shock to aggregate demand occurs. Changes in the level of potential output also shift the AD curve, so that this type of shock affects both the supply and the demand side of the model.
Rightward aggregate demand shifts can be caused by any shock to one of the autonomous components of aggregate demand, e.g.:
an increase in government spending or a cut in taxes (expansionary fiscal policy),
a loosening of monetary policy, i.e. a lower policy interest rate for any given rate of inflation,
increased consumer or business confidence, raising private consumption or investment,
an increase in foreign demand for the country's exports.
Shifts of aggregate supply.
The dynamic aggregate supply curve is drawn for a given value of inflation expectations and level of potential output. Changes in either of these variables as well as a number of possible supply shocks will shift the dynamic aggregate supply curve.
The long-run aggregate supply curve is affected by events that affect the potential output of the economy. These include the following shocks which would shift the long-run aggregate supply curve to the right:
an increase in the labour force,
an increase in the physical capital stock,
improvements in technology or in total factor productivity,
the discovery of new natural resources.
Applications.
Functional finance theory.
AD–AS analysis is applied in functional finance theory and/or Modern Monetary Theory (MMT) to study the relationship between the inflation rate and the economic growth rate. When a country's economy grows, the country needs deficit spending to maintain full employment without inflation. Inflation starts to occur when the interest rate on its government bonds becomes larger than the growth rate, provided that the country maintains full employment. Also, when the country recovers from a recession, it needs to increase government expenditure to achieve full employment.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Y=Y^{d}\\left(\\tfrac{M}{P}, G, T, Z_1\\right)"
},
{
"math_id": 1,
"text": "Y=Y^{s}(W/P, \\ \\ P/P^{e}, \\ \\ Z_2)"
}
] |
https://en.wikipedia.org/wiki?curid=14425627
|
1442618
|
Complex conjugate representation
|
In mathematics, if "G" is a group and Π is a representation of it over the complex vector space "V", then the complex conjugate representation is defined over the complex conjugate vector space as follows:
The complex conjugate representation maps "g" to the conjugate of Π("g") for all "g" in "G". It is also a representation, as one may check explicitly.
If g is a real Lie algebra and π is a representation of it over the vector space "V", then the conjugate representation is defined over the conjugate vector space as follows:
The conjugate representation maps "X" to the conjugate of π("X") for all "X" in g. It is also a representation, as one may check explicitly.
If two real Lie algebras have the same complexification, and we have a complex representation of the complexified Lie algebra, their conjugate representations are still going to be different. See spinor for some examples associated with spinor representations of the spin groups Spin("p" + "q") and Spin("p", "q").
If formula_0 is a *-Lie algebra (a complex Lie algebra with a * operation which is compatible with the Lie bracket),
the conjugate representation maps "X" to the conjugate of −π("X"*) for all "X" in g.
For a finite-dimensional unitary representation, the dual representation and the conjugate representation coincide. This also holds for pseudounitary representations.
|
[
{
"math_id": 0,
"text": "\\mathfrak{g}"
}
] |
https://en.wikipedia.org/wiki?curid=1442618
|
14427248
|
Sargan–Hansen test
|
Statistical test
The Sargan–Hansen test or Sargan's formula_0 test is a statistical test used for testing over-identifying restrictions in a statistical model. It was proposed by John Denis Sargan in 1958, and several variants were derived by him in 1975. Lars Peter Hansen re-worked through the derivations and showed that it can be extended to general non-linear GMM in a time series context.
The Sargan test is based on the assumption that model parameters are identified via a priori restrictions on the coefficients, and tests the validity of over-identifying restrictions. The test statistic can be computed from residuals from instrumental variables regression by constructing a quadratic form based on the cross-product of the residuals and exogenous variables. Under the null hypothesis that the over-identifying restrictions are valid, the statistic is asymptotically distributed as a chi-square variable with formula_1 degrees of freedom (where formula_2 is the number of instruments and formula_3 is the number of endogenous variables).
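A sketch of the computation in Python/NumPy (simulated data; under homoskedasticity the Sargan statistic is the quadratic form of the 2SLS residuals in the instruments, normalized by the residual variance):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 500, 3, 1                  # observations, instruments, regressors
Z = rng.standard_normal((n, m))      # instruments
x = Z @ np.array([1.0, 0.5, -0.5]) + rng.standard_normal(n)
y = 2.0 * x + rng.standard_normal(n)

# Two-stage least squares: project x on Z, regress y on the projection.
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
beta = np.linalg.lstsq(x_hat.reshape(-1, 1), y, rcond=None)[0]
u = y - x * beta                     # 2SLS residuals (using the original x)

# Sargan statistic: u'Z (Z'Z)^{-1} Z'u / (u'u / n) ~ chi2(m - k).
Zu = Z.T @ u
J = Zu @ np.linalg.solve(Z.T @ Z, Zu) / (u @ u / n)
print(f"J = {J:.2f}, degrees of freedom = {m - k}")
```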
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "J"
},
{
"math_id": 1,
"text": "(m - k)"
},
{
"math_id": 2,
"text": "m"
},
{
"math_id": 3,
"text": "k"
}
] |
https://en.wikipedia.org/wiki?curid=14427248
|
14429995
|
Balance puzzle
|
Logic puzzle
A balance puzzle or weighing puzzle is a logic puzzle about balancing items—often coins—to determine which holds a different value, by using balance scales a limited number of times. These differ from puzzles that assign weights to items, in that only the relative mass of these items is relevant.
For example, in detecting a dissimilar coin in three weighings (n = 3), the maximum number of coins that can be analyzed is (3³ − 1)/2 = 13. Note that with 3 weighings and 13 coins, it is not always possible to determine the identity of the last coin (whether it is heavier or lighter than the rest), but merely that the coin is different. In general, with "n" weighings, you can determine the identity of a coin if you have (3ⁿ − 1)/2 − 1 or fewer coins. In the case n = 3, you can truly discover the identity of the different coin out of 12 coins.
Nine-coin problem.
A well-known example has up to nine items, say coins (or balls), that are identical in weight except one, which is lighter than the others—a counterfeit (an oddball). The difference is perceptible only by weighing them on a scale—but only the coins themselves can be weighed. How can one isolate the counterfeit coin with only two weighings?
Solution.
To find a solution, we first consider the maximum number of items from which one can find the lighter one in just one weighing. The maximum number possible is three. To find the lighter one, we can compare any two coins, leaving the third out. If the two coins weigh the same, then the lighter coin must be one of those not on the balance. Otherwise, it is the one indicated as lighter by the balance.
Now, imagine the nine coins in three stacks of three coins each. In one move we can find which of the three stacks is lighter (i.e. the one containing the lighter coin). It then takes only one more move to identify the light coin from within that lighter stack. So in two weighings, we can find a single light coin from a set of 3 × 3 = 9.
By extension, it would take only three weighings to find the odd light coin among 27 coins, and four weighings to find it from 81 coins.
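This strategy is easy to express in code. A Python sketch (assuming exactly one of 3^k coins is lighter; each loop iteration corresponds to one weighing):

```python
def find_light_coin(weights):
    """Index of the unique lighter coin among 3**k coins, one weighing per level."""
    coins = list(range(len(weights)))
    while len(coins) > 1:
        third = len(coins) // 3
        a, b, rest = coins[:third], coins[third:2 * third], coins[2 * third:]
        wa = sum(weights[i] for i in a)   # one balance comparison:
        wb = sum(weights[i] for i in b)   # stack a against stack b
        if wa < wb:
            coins = a
        elif wb < wa:
            coins = b
        else:
            coins = rest                  # lighter coin was left off the scale
    return coins[0]

weights = [10] * 9
weights[6] = 9                   # coin 6 is the light counterfeit
print(find_light_coin(weights))  # -> 6
```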
Twelve-coin problem.
A more complex version has twelve coins, eleven of which are identical. If one is different, we don't know whether it is heavier or lighter than the others. This time the balance may be used three times to determine if there is a unique coin—and if there is, to isolate it and determine its weight relative to the others. (This puzzle and its solution first appeared in an article in 1945.) The problem has a simpler variant with three coins in two weighings, and a more complex variant with 39 coins in four weighings.
Solution.
This problem has more than one solution. One is easily scalable to a higher number of coins by using base-three numbering: label each coin with a different three-digit number in base three, and at the "n"-th weighing position all the coins whose "n"-th digit matches the label of the plate (with three plates: one on each side of the scale, labelled 0 and 2, and one off the scale, labelled 1). A worked sketch of this scheme is given after the step-by-step solution below. Other step-by-step procedures are similar to the following. It is less straightforward for this problem, and the second and third weighings depend on what has happened previously, although that need not be the case (see below).
1. One side is heavier than the other. If this is the case, remove three coins from the heavier side, move three coins from the lighter side to the heavier side, and place three coins that were not weighed the first time on the lighter side. (Remember which coins are which.) There are three possibilities:
1.a) The same side that was heavier the first time is still heavier. This means that either the coin that stayed there is heavier or that the coin that stayed on the lighter side is lighter. Balancing one of these against one of the other ten coins reveals which of these is true, thus solving the puzzle.
1.b) The side that was heavier the first time is lighter the second time. This means that one of the three coins that went from the lighter side to the heavier side is the light coin. For the third attempt, weigh two of these coins against each other: if one is lighter, it is the unique coin; if they balance, the third coin is the light one.
1.c) Both sides are even. This means that one of the three coins that was removed from the heavier side is the heavy coin. For the third attempt, weigh two of these coins against each other: if one is heavier, it is the unique coin; if they balance, the third coin is the heavy one.
2. Both sides are even. If this is the case, all eight coins are identical and can be set aside. Take the four remaining coins and place three on one side of the balance. Place 3 of the 8 identical coins on the other side. There are three possibilities:
2.a) The three remaining coins are lighter. In this case you now know that one of those three coins is the odd one out and that it is lighter. Take two of those three coins and weigh them against each other. If the balance tips then the lighter coin is the odd one out. If the two coins balance then the third coin not on the balance is the odd one out and it is lighter.
2.b) The three remaining coins are heavier. In this case you now know that one of those three coins is the odd one out and that it is heavier. Take two of those three coins and weigh them against each other. If the balance tips then the heavier coin is the odd one out. If the two coins balance then the third coin not on the balance is the odd one out and it is heavier.
2.c) The three remaining coins balance. In this case you just need to weigh the remaining coin against any of the other 11 coins and this tells you whether it is heavier, lighter, or the same.
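The base-three labelling scheme mentioned above can be made fully explicit and verified exhaustively. The following Python sketch uses the classic construction (keep the 12 non-constant three-digit ternary labels whose first change of digit is +1 mod 3; this puts four coins on each pan in every weighing, and every one of the 24 possible situations produces a distinct, decodable outcome):

```python
from itertools import product

def first_step(label):
    for a, b in zip(label, label[1:]):
        if a != b:
            return (b - a) % 3
    return 0  # constant labels 000, 111, 222 have no change

# The 12 labels whose first digit change is +1 mod 3.
labels = [l for l in product(range(3), repeat=3) if first_step(l) == 1]
comp = lambda l: tuple(2 - d for d in l)   # label of the opposite weight

def weigh(weights, i):
    """Weighing i: coins whose i-th digit is 0 go on the left pan and
    digit 2 on the right (four each); 0/1/2 = left heavy/balance/right heavy."""
    wl = sum(weights[c] for c in range(12) if labels[c][i] == 0)
    wr = sum(weights[c] for c in range(12) if labels[c][i] == 2)
    return 0 if wl > wr else (2 if wr > wl else 1)

for coin in range(12):
    for delta in (1, -1):                  # counterfeit heavy or light
        weights = [10] * 12
        weights[coin] += delta
        sig = tuple(weigh(weights, i) for i in range(3))
        # A heavy coin produces its own label, a light coin its complement.
        if sig in labels:
            found = (labels.index(sig), 1)
        else:
            found = (labels.index(comp(sig)), -1)
        assert found == (coin, delta)
print("all 24 possibilities identified by 3 predetermined weighings")
```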
Variations.
Given a population of 13 coins in which it is known that 1 of the 13 is different (mass) from the rest, it is simple to determine which coin it is with a balance and 3 tests as follows:
1) Subdivide the coins in to 2 groups of 4 coins and a third group with the remaining 5 coins.
2) Test 1, Test the 2 groups of 4 coins against each other:
a. If the coins balance, the odd coin is in the population of 5 and proceed to test 2a.
b. The odd coin is among the population of 8 coins, proceed in the same way as in the 12 coins problem.
3) Test 2a, Test 3 of the coins from the group of 5 coins against any 3 coins from the population of 8 coins:
a. If the 3 coins balance, then the odd coin is among the remaining population of 2 coins. Test one of the 2 coins against any other coin; if they balance, the odd coin is the last untested coin, if they do not balance, the odd coin is the current test coin.
b. If the 3 coins do not balance, then the odd coin is from this population of 3 coins. Pay attention to the direction of the balance swing (up means the odd coin is light, down means it is heavy). Remove one of the 3 coins, move another to the other side of the balance (remove all other coins from balance). If the balance evens out, the odd coin is the coin that was removed. If the balance switches direction, the odd coin is the one that was moved to the other side, otherwise, the odd coin is the coin that remained in place.
With a reference coin.
If there is one authentic coin for reference then the suspect coins can be thirteen. Number the coins from 1 to 13 and the authentic coin number 0 and perform these weighings in any order:
If the scales are only off balance once, then it must be one of the coins 1, 2, 3—which only appear in one weighing.
If there is never balance then it must be one of the coins 10–13 that appear in all weighings. Picking out the one counterfeit coin corresponding to each of the 27 outcomes is always possible (13 coins, each either too heavy or too light, gives 26 possibilities) except when all weighings are balanced, in which case there is no counterfeit coin (or its weight is correct). If coins 0 and 13 are deleted from these weighings they give one generic solution to the 12-coin problem.
If two coins are counterfeit, this procedure, in general, does not pick either of these, but rather some authentic coin. For instance, if both coins 1 and 2 are counterfeit, either coin 4 or 5 is wrongly picked.
Without a reference coin.
In a relaxed variation of this puzzle, one only needs to find the counterfeit coin without necessarily being able to tell its weight relative to the others. In this case, clearly any solution that previously weighed every coin at some point can be adapted to handle one extra coin. This coin is never put on the scales, but if all weighings are balanced it is picked as the counterfeit coin. It is not possible to do any better, since any coin that is put on the scales at some point and picked as the counterfeit coin can then always be assigned weight relative to the others.
A method which weighs the same sets of coins regardless of outcomes lets one either conclude that all coins are genuine or identify the counterfeit coin and tell whether it is heavy or light.
The three possible outcomes of each weighing can be denoted by "\" for the left side being lighter, "/" for the right side being lighter, and "–" for both sides having the same weight. The symbols for the weighings are listed in sequence. For example, "//–" means that the right side is lighter in the first and second weighings, and both sides weigh the same in the third weighing. Three weighings give the following 3³ = 27 outcomes. Except for "–––", the sets are divided such that each set on the right has a "/" where the set on the left has a "\", and vice versa:
As each weighing gives a meaningful result only when the number of coins on the left side is equal to the number on the right side, we disregard the first row, so that each column has the same number of "\" and "/" symbols (four of each). The rows are labelled, the order of the coins being irrelevant:
Using the pattern of outcomes above, the composition of coins for each weighing can be determined; for example the set "\/– D light" implies that coin D must be on the left side in the first weighing (to cause that side to be lighter), on the right side in the second, and unused in the third:
The outcomes are then read off the table. For example, if the right side is lighter in the first two weighings and both sides weigh the same in the third, the corresponding code "//– G heavy" implies that coin G is the odd one, and it is heavier than the others.
Generalizations.
The generalization of this problem is described in Chudnov.
Let formula_0 be the formula_1-dimensional Euclidean space and formula_2 be the inner product of vectors
formula_3 and formula_4 from formula_5 For vectors formula_6 and subsets formula_7 the operations formula_8 and formula_9 are defined, respectively, as formula_10 ; formula_11, formula_12, formula_13 By formula_14 we shall denote the discrete [−1; 1]-cube in formula_0; i.e., the set of all sequences of length formula_15 over the alphabet formula_16 The set formula_17 is the discrete ball of radius formula_18 (in the Hamming metric formula_19 ) with centre at the point formula_20 Relative weights of formula_21 objects are given by a vector formula_22 which defines the configurations of weights of the objects: the formula_23th object has standard weight if formula_24 the weight of the formula_23th object is greater (smaller) by a constant (unknown) value if formula_25 (respectively, formula_26). The vector formula_27 characterizes the types of objects: the standard type, the non-standard type (i.e., configurations of types), and it does not contain information about relative weights of non-standard objects.
A weighing (a check) is given by a vector formula_28 the result of a weighing for a situation formula_29 is formula_30 The weighing given by a vector formula_31 has the following interpretation: for a given check the formula_23th object participates in the weighing if formula_32; it is put on the left balance pan if formula_33 and is put on the right pan if formula_34 For each weighing formula_35, both pans should contain the same number of objects: if on some pan the number of objects is smaller than it should be, then it receives formula_36 reference objects. The result of a weighing formula_37 describes the following cases: the balance if formula_38, the left pan outweighs the right one if formula_39, and the right pan outweighs the left one if formula_40 The incompleteness of initial information about the distribution of weights of a group of objects is characterized by the set of admissible distributions of weights of objects formula_41 which is also called the set of admissible situations; the elements of formula_42 are called admissible situations.
Each weighing formula_43 induces the partition of the set formula_14 by the plane (hyperplane ) formula_44 into three parts formula_45, formula_46 and defines the corresponding partition of the set formula_47 where formula_48
Definition 1. A weighing algorithm (WA) formula_49 of length formula_50 is a sequence formula_51 where formula_52 is the function determining the check formula_53 at each formula_54th step, formula_55 of the algorithm from the results of formula_56 weighings at the previous steps ( formula_57 is a given initial check).
Let formula_58 be the set of all formula_59-syndromes and formula_60 be the set of situations with the same syndrome formula_61; i.e., formula_62; formula_63
Definition 2. A WA formula_64 is said to:
a) identify the situations in a set formula_65 if the condition formula_66 is satisfied for all formula_67
b) identify the types of objects in a set formula_65 if the condition formula_68 is satisfied for all formula_69
It is proved that for so-called suitable sets formula_70 an algorithm identifying the types of objects also identifies the situations in formula_71
As an example, perfect dynamic (two-cascade) algorithms with parameters formula_72 are constructed, which correspond to the parameters of the perfect ternary Golay code (Virtakallio–Golay code). At the same time, it is established that a static WA (i.e. a weighing code) with the same parameters does not exist.
Each of these algorithms using 5 weighings finds among 11 coins up to two counterfeit coins which could be heavier or lighter than real coins by the same value. In this case the uncertainty domain (the set of admissible situations) contains formula_73 situations, i.e. the constructed WA lies on the Hamming bound for formula_74 and in this sense is perfect.
To date it is not known whether there are other perfect WA that identify the situations in formula_75 for some values of formula_76. Moreover, it is not known whether for some formula_77 there exist solutions for the equation
formula_78
(corresponding to the Hamming bound for ternary codes) which is, obviously, necessary for the existence of a perfect WA. It is only known that for formula_79 there are no perfect WA, and for formula_80 this equation has the unique nontrivial solution formula_81 which determines the parameters of the constructed perfect WA.
Original parallel weighings puzzle.
Eliran Sabag invented this puzzle. There are "N" indistinguishable coins, one of which is fake (it is not known whether it is heavier or lighter than the genuine coins, which all weigh the same). There are two balance scales that can be used in parallel. Each weighing lasts three minutes. What is the largest number of coins "N" for which it is possible to find the fake coin in ten minutes?
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathbb{R}^n "
},
{
"math_id": 1,
"text": " n"
},
{
"math_id": 2,
"text": " [\\mathrm{e}^1, \\mathrm{e}^2] "
},
{
"math_id": 3,
"text": " \\mathrm{e}^1 "
},
{
"math_id": 4,
"text": " \\mathrm{e}^2 "
},
{
"math_id": 5,
"text": " \\mathbb{R}^n. "
},
{
"math_id": 6,
"text": " \\mathrm{e} = (e_1, \\dots,e_n) \\in \\mathbb{R}^n "
},
{
"math_id": 7,
"text": " E = \\{\\mathrm{e}^j\\} \\subseteq \\mathbb{R}^n, "
},
{
"math_id": 8,
"text": " (\\cdot)^{*} "
},
{
"math_id": 9,
"text": " (\\cdot)^{+} "
},
{
"math_id": 10,
"text": " \\mathrm{e}^{*} = (sign(e_i))_i"
},
{
"math_id": 11,
"text": " E^{*} = \\{(\\mathrm{e}^j) ^{*}\\} "
},
{
"math_id": 12,
"text": " \\mathrm{e}^{+} = (|sign(e_i)|)_i"
},
{
"math_id": 13,
"text": " E^{+} = \\{(\\mathrm{e}^j)^{+}\\}. "
},
{
"math_id": 14,
"text": " I^n "
},
{
"math_id": 15,
"text": " n "
},
{
"math_id": 16,
"text": " I = \\{ -1,0, 1 \\} "
},
{
"math_id": 17,
"text": " I^n_t = \\{\\mathrm{x} \\in I^n |w(\\mathrm{x}) \\le t \\} \\subseteq I^n "
},
{
"math_id": 18,
"text": " t "
},
{
"math_id": 19,
"text": " w()"
},
{
"math_id": 20,
"text": " \\mathrm{0}. "
},
{
"math_id": 21,
"text": " n "
},
{
"math_id": 22,
"text": " \\mathrm{x} = (x_1, \\dots, x_n) \\in I^n , "
},
{
"math_id": 23,
"text": " i"
},
{
"math_id": 24,
"text": " x_i = 0; "
},
{
"math_id": 25,
"text": " x_i = 1"
},
{
"math_id": 26,
"text": " x_i = -1"
},
{
"math_id": 27,
"text": " \\mathrm{x}^{+} "
},
{
"math_id": 28,
"text": " \\mathrm{h} \\in I^n ; "
},
{
"math_id": 29,
"text": " \\mathrm{x} \\in I^n "
},
{
"math_id": 30,
"text": " s(\\mathrm{x}; \\mathrm{h}) = sign([\\mathrm{x}; \\mathrm{h}]). "
},
{
"math_id": 31,
"text": " \\mathrm{h} = (h_1, \\dots, h_n) "
},
{
"math_id": 32,
"text": " h_i \\ne 0"
},
{
"math_id": 33,
"text": " h_i < 0 "
},
{
"math_id": 34,
"text": " h_i > 0. "
},
{
"math_id": 35,
"text": " \\mathrm{h}"
},
{
"math_id": 36,
"text": " r(\\mathrm{h}) = [\\mathrm{h}; 1 ,\\dots, 1] "
},
{
"math_id": 37,
"text": " s(\\mathrm{x}; \\mathrm{h}) "
},
{
"math_id": 38,
"text": " s(\\mathrm{x}; \\mathrm{h}) = 0"
},
{
"math_id": 39,
"text": " s(\\mathrm{x}; \\mathrm{h}) = -1"
},
{
"math_id": 40,
"text": " s(\\mathrm{x}; \\mathrm{h}) = 1. "
},
{
"math_id": 41,
"text": " Z \\subseteq I^n, "
},
{
"math_id": 42,
"text": " z \\in Z"
},
{
"math_id": 43,
"text": " \\mathrm{h} "
},
{
"math_id": 44,
"text": " [\\mathrm{x}; \\mathrm{h}] = 0 "
},
{
"math_id": 45,
"text": " W(s|I^n ; \\mathrm{h}) = \\{\\mathrm{x} \\in I^n |s(\\mathrm{x}; \\mathrm{h}) = s\\} "
},
{
"math_id": 46,
"text": " s \\in I, "
},
{
"math_id": 47,
"text": " Z = W(0|Z, \\mathrm{h}) + W(1|Z, \\mathrm{h}) + W(-1|Z, \\mathrm{h}), "
},
{
"math_id": 48,
"text": " W(s|Z, \\mathrm{h}) = W(s|I^n , \\mathrm{h}) \\cap Z. "
},
{
"math_id": 49,
"text": " \\mathcal{A} "
},
{
"math_id": 50,
"text": " m "
},
{
"math_id": 51,
"text": "\\mathcal{A} = < \\mathrm {A}_1, \\dots, \\mathrm {A}_m >,"
},
{
"math_id": 52,
"text": " \\mathrm {A}_j : I^{j-1} \\to I^n "
},
{
"math_id": 53,
"text": " \\mathrm{h}^j = \\mathrm{A}_j(s^{j-1}); \\mathrm{h}^j \\in I^n, "
},
{
"math_id": 54,
"text": " j"
},
{
"math_id": 55,
"text": " j = 1, 2, \\dots, m, "
},
{
"math_id": 56,
"text": " \\mathrm{s}^{j-1} = (s_1, \\dots, s_{j-1}) \\in I^{j-1} "
},
{
"math_id": 57,
"text": " \\mathrm{h}^1 = \\mathrm {A}_1() "
},
{
"math_id": 58,
"text": " S(Z, \\mathcal{A}) "
},
{
"math_id": 59,
"text": " (Z, \\mathcal{A}) "
},
{
"math_id": 60,
"text": " W(s|\\mathcal{A}) \\subseteq I "
},
{
"math_id": 61,
"text": " s"
},
{
"math_id": 62,
"text": " W(s|\\mathcal{A}) = \\{\\mathrm{z} \\in I^m |s(z|\\mathcal{A}) = s \\}"
},
{
"math_id": 63,
"text": " W(s|Z; \\mathcal{A}) =W(s|\\mathcal{A}) \\cap Z. "
},
{
"math_id": 64,
"text": " \\mathcal{A} "
},
{
"math_id": 65,
"text": " Z"
},
{
"math_id": 66,
"text": " |W(s|Z, \\mathcal{A}) | = 1 "
},
{
"math_id": 67,
"text": " s \\in S(Z\\mathcal{A}); "
},
{
"math_id": 68,
"text": " |W^{+}(s|Z\\mathcal{A})| = 1 "
},
{
"math_id": 69,
"text": " s \\in S(Z \\mathcal{A}). "
},
{
"math_id": 70,
"text": " Z "
},
{
"math_id": 71,
"text": " Z. "
},
{
"math_id": 72,
"text": " n = 11, m = 5, t = 2 "
},
{
"math_id": 73,
"text": " 1 + 2 C_{11} ^ 1 + 2 ^ 2 C_{11} ^ 2=3^5 "
},
{
"math_id": 74,
"text": " t=2 "
},
{
"math_id": 75,
"text": " I_t^n "
},
{
"math_id": 76,
"text": "n, t "
},
{
"math_id": 77,
"text": " t> 2 "
},
{
"math_id": 78,
"text": "<Math> \\sum_ {i = 0}^t 2^iC_n^i = 3^m </math>"
},
{
"math_id": 79,
"text": " t = 1 "
},
{
"math_id": 80,
"text": " t = 2 "
},
{
"math_id": 81,
"text": " n=11, m=5 "
}
] |
https://en.wikipedia.org/wiki?curid=14429995
|
14430019
|
Landau–Zener formula
|
Formula for the probability that a system will change between two energy states.
The Landau–Zener formula is an analytic solution to the equations of motion governing the transition dynamics of a two-state quantum system, with a time-dependent Hamiltonian varying such that the energy separation of the two states is a linear function of time. The formula, giving the probability of a diabatic (not adiabatic) transition between the two energy states, was published separately by Lev Landau, Clarence Zener, Ernst Stueckelberg, and Ettore Majorana, in 1932.
If the system starts, in the infinite past, in the lower energy eigenstate, we wish to calculate the probability of finding the system in the upper energy eigenstate in the infinite future (a so-called Landau–Zener transition). For infinitely slow variation of the energy difference (that is, a Landau–Zener velocity of zero), the adiabatic theorem tells us that no such transition will take place, as the system will always be in an instantaneous eigenstate of the Hamiltonian at that moment in time. At non-zero velocities, transitions occur with probability as described by the Landau–Zener formula.
Conditions and approximation.
Such transitions occur between states of the entire system; hence any description of the system must include all external influences, including collisions and external electric and magnetic fields. So that the equations of motion for the system can be solved analytically, a set of simplifications is made, known collectively as the Landau–Zener approximation. The simplifications are as follows:
1. The perturbation parameter in the Hamiltonian is a known, linear function of time.
2. The energy separation of the diabatic states varies linearly with time.
3. The coupling in the diabatic Hamiltonian matrix is independent of time.
The first simplification makes this a semi-classical treatment. In the case of an atom in a magnetic field, the field strength becomes a classical variable which can be precisely measured during the transition. This requirement is quite restrictive as a linear change will not, in general, be the optimal profile to achieve the desired transition probability.
The second simplification allows us to make the substitution
formula_0
where formula_1 and formula_2 are the energies of the two states at time t, given by the diagonal elements of the Hamiltonian matrix, and formula_3 is a constant. For the case of an atom in a magnetic field this corresponds to a linear change in magnetic field. For a linear Zeeman shift this follows directly from point 1.
The final simplification requires that the time–dependent perturbation does not couple the diabatic states; rather, the coupling must be due to a static deviation from a formula_4 Coulomb potential, commonly described by a quantum defect.
Formula.
The details of Zener's solution are somewhat opaque, relying on a set of substitutions to put the equation of motion into the form of the Weber equation and using the known solution. A more transparent solution is provided by Curt Wittig using contour integration.
The key figure of merit in this approach is the Landau–Zener velocity:
formula_5
where q is the perturbation variable (electric or magnetic field, molecular bond-length, or any other perturbation to the system), and formula_6 and formula_7 are the energies of the two diabatic (crossing) states. A large formula_8 results in a large diabatic transition probability and vice versa.
Using the Landau–Zener formula the probability, formula_9, of a diabatic transition is given by
formula_10
The quantity formula_11 is the off-diagonal element of the two-level system's Hamiltonian coupling the bases, and as such it is half the distance between the two unperturbed eigenenergies at the avoided crossing, when formula_12.
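As a numerical illustration of the formula above (a sketch, not from the source; units with ħ = 1 are assumed), the diabatic transition probability can be evaluated directly:

```python
import numpy as np

def landau_zener_probability(a, alpha):
    """Diabatic transition probability P_D = exp(-2*pi*Gamma) with
    Gamma = a**2 / (hbar * |alpha|); here hbar = 1, `a` is the
    off-diagonal coupling and `alpha` the linear sweep rate of the
    energy separation."""
    gamma = a**2 / abs(alpha)
    return np.exp(-2.0 * np.pi * gamma)

# Slow sweep (small alpha) -> P_D near 0 (adiabatic evolution);
# fast sweep (large alpha) -> P_D near 1 (diabatic passage).
print(landau_zener_probability(a=1.0, alpha=0.1))    # ~0 (adiabatic)
print(landau_zener_probability(a=1.0, alpha=100.0))  # ~0.939
```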
Multistate problem.
The simplest generalization of the two-state Landau–Zener model is a multistate system with a Hamiltonian of the form
formula_13,
where "A" and "B" are Hermitian "N"x"N" matrices with time-independent elements. The goal of the multistate Landau–Zener theory is to determine elements of the scattering matrix and the transition probabilities between states of this model after evolution with such a Hamiltonian from negative infinite to positive infinite time. The transition probabilities are the absolute value squared of scattering matrix elements.
There are exact formulas, called hierarchy constraints, that provide analytical expressions for special elements of the scattering matrix in any multi-state Landau–Zener model. Special cases of these relations are known as the Brundobler–Elser (BE) formula and the no-go theorem. Discrete symmetries often lead to constraints that reduce the number of independent elements of the scattering matrix.
There are also integrability conditions that, when satisfied, lead to exact expressions for the entire scattering matrices in multistate Landau–Zener models. Numerous such completely solvable models have been identified.
Study of noise.
Applications of the Landau–Zener solution to the problems of quantum state preparation and manipulation with discrete degrees of freedom stimulated the study of noise and decoherence effects on the transition probability in a driven two-state system. Several compact analytical results have been derived to describe these effects, including the Kayanuma formula for a strong diagonal noise and the Pokrovsky–Sinitsyn formula for the coupling to a fast colored noise with off-diagonal components.
Using the Schwinger–Keldysh Green's function, a rather complete and comprehensive study of the effect of quantum noise in all parameter regimes was performed by Ao and Rammer in the late 1980s, from weak to strong coupling, low to high temperature, slow to fast passage, etc. Concise analytical expressions were obtained in various limits, showing the rich behavior of this problem.
The effects of nuclear spin bath and heat bath coupling on the Landau–Zener process were explored by Sinitsyn and Prokof'ev and by Pokrovsky and Sun, respectively.
Exact results in multistate Landau–Zener theory (the no-go theorem and the BE formula) can be applied to Landau–Zener systems that are coupled to baths composed of infinitely many oscillators and/or spin baths (dissipative Landau–Zener transitions). They provide exact expressions for transition probabilities averaged over final bath states if the evolution begins from the ground state at zero temperature; see Ref. for oscillator baths and Ref. for universal results including spin baths.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta E = E_2(t) - E_1(t) \\equiv \\alpha t, \\, "
},
{
"math_id": 1,
"text": "E_1(t)"
},
{
"math_id": 2,
"text": "E_2(t)"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "1/r"
},
{
"math_id": 5,
"text": "v_{\\rm LZ} = {\\frac{\\partial}{\\partial t}|E_2 - E_1| \\over \\frac{\\partial}{\\partial q}|E_2 - E_1|} \\approx \\frac{dq}{dt}, "
},
{
"math_id": 6,
"text": "E_1"
},
{
"math_id": 7,
"text": "E_2"
},
{
"math_id": 8,
"text": "v_{\\rm LZ}"
},
{
"math_id": 9,
"text": "P_{\\rm D}"
},
{
"math_id": 10,
"text": "\\begin{align}\n P_{\\rm D} &= e^{-2\\pi\\Gamma}\\\\\n\\Gamma &= {a^2/\\hbar \\over \\left|\\frac{\\partial}{\\partial t}(E_2 - E_1)\\right|} = {a^2/\\hbar \\over \\left|\\frac{dq}{dt}\\frac{\\partial}{\\partial q}(E_2 - E_1)\\right|}\\\\\n &= {a^2 \\over \\hbar|\\alpha|}\n\\end{align}"
},
{
"math_id": 11,
"text": "a"
},
{
"math_id": 12,
"text": "E_1 = E_2"
},
{
"math_id": 13,
"text": "H(t)=A+Bt"
},
{
"math_id": 14,
"text": "H=gS_x+btS_z"
}
] |
https://en.wikipedia.org/wiki?curid=14430019
|
14430199
|
Hilbert–Huang transform
|
Signal analysis tool
The Hilbert–Huang transform (HHT) is a way to decompose a signal into so-called intrinsic mode functions (IMF) along with a trend, and obtain instantaneous frequency data. It is designed to work well for data that is nonstationary and nonlinear. In contrast to other common transforms like the Fourier transform, the HHT is an algorithm that can be applied to a data set, rather than a theoretical tool.
The Hilbert–Huang transform (HHT), a NASA-designated name, was proposed by Norden E. Huang et al. (1996, 1998, 1999, 2003, 2012). It is the result of the empirical mode decomposition (EMD) and the Hilbert spectral analysis (HSA). The HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMF) with a trend, and applies the HSA method to the IMFs to obtain instantaneous frequency data. Since the signal is decomposed in the time domain and the length of the IMFs is the same as that of the original signal, HHT preserves the characteristics of the varying frequency. This is an important advantage of HHT since a real-world signal usually has multiple causes happening in different time intervals. The HHT provides a new method of analyzing nonstationary and nonlinear time series data.
Definition.
Empirical mode decomposition.
The fundamental part of the HHT is the empirical mode decomposition (EMD) method. Breaking down signals into various components, EMD can be compared with other analysis methods such as the Fourier transform and the wavelet transform. Using the EMD method, any complicated data set can be decomposed into a finite and often small number of components. These components form a complete and nearly orthogonal basis for the original signal. In addition, they can be described as intrinsic mode functions (IMF).
Because the first IMF usually carries the most oscillating (high-frequency) components, it can be rejected to remove high-frequency components (e.g., random noise). EMD-based smoothing algorithms have been widely used in seismic data processing, where high-quality seismic records are in high demand.
Without leaving the time domain, EMD is adaptive and highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it can be applied to nonlinear and nonstationary processes.
Intrinsic mode functions.
An intrinsic mode function (IMF) is defined as a function that satisfies the following requirements:
1. In the whole data set, the number of extrema and the number of zero-crossings must either be equal or differ at most by one.
2. At any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero.
It represents a generally simple oscillatory mode as a counterpart to the simple harmonic function. By definition, an IMF is any function with the same number of extrema and zero crossings, whose envelopes are symmetric with respect to zero. This definition guarantees a well-behaved Hilbert transform of the IMF.
Hilbert spectral analysis.
Hilbert spectral analysis (HSA) is a method for examining each IMF's instantaneous frequency as functions of time. The final result is a frequency-time distribution of signal amplitude (or energy), designated as the Hilbert spectrum, which permits the identification of localized features.
Techniques.
The intrinsic mode function (IMF) amplitude and frequency can vary with time, and the function must satisfy the two requirements given in the IMF definition above.
Empirical mode decomposition.
The empirical mode decomposition (EMD) method is a necessary step to reduce any given data into a collection of intrinsic mode functions (IMF) to which the Hilbert spectral analysis can be applied.
IMF represents a simple oscillatory mode as a counterpart to the simple harmonic function, but it is much more general: instead of constant amplitude and frequency in a simple harmonic component, an IMF can have variable amplitude and frequency along the time axis.
The procedure of extracting an IMF is called sifting. The sifting process is as follows: first, identify all the local extrema of the data; then connect all the local maxima by a cubic spline line to form the upper envelope, and repeat the procedure for the local minima to form the lower envelope.
The upper and lower envelopes should cover all the data between them. Their mean is "m"1. The difference between the data and "m"1 is the first component "h"1:
formula_0
Ideally, "h"1 should satisfy the definition of an IMF, since the construction of h1 described above should have made it symmetric and having all maxima positive and all minima negative. After the first round of sifting, a crest may become a local maximum. New extrema generated in this way actually reveal the proper modes lost in the initial examination. In the subsequent sifting process, h1 can only be treated as a proto-IMF. In the next step, "h"1 is treated as data:
formula_1
After repeated sifting, up to "k" times, "h"1"k" becomes an IMF, that is
formula_2
Then, "h"1k is designated as the first IMF component of the data:
formula_3
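A minimal sketch of one sifting pass (an illustration assuming SciPy is available; a practical implementation must also handle the end effect discussed later and iterate until a stoppage criterion is met):

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One sifting pass: subtract the mean of the upper and lower
    cubic-spline envelopes; returns None when there are too few
    extrema to build envelopes (e.g. the input is monotonic)."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    if len(maxima) < 4 or len(minima) < 4:
        return None
    upper = CubicSpline(t[maxima], x[maxima])(t)
    lower = CubicSpline(t[minima], x[minima])(t)
    return x - (upper + lower) / 2.0
```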
Stoppage criteria of the sifting process.
The stoppage criterion determines the number of sifting steps to produce an IMF. Following are the four existing stoppage criterion:
Standard deviation.
This criterion is proposed by Huang et al. (1998). It is similar to the Cauchy convergence test, and we define a sum of the difference, SD, as
formula_4
Then the sifting process stops when SD is smaller than a pre-given value.
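A direct translation of this criterion (a sketch; the small constant guarding against division by zero is an added assumption):

```python
import numpy as np

def sd_value(h_prev, h_curr, eps=1e-12):
    """Sum of squared differences between consecutive sifting
    results, normalized pointwise as in Huang et al. (1998)."""
    return np.sum((h_prev - h_curr)**2 / (h_prev**2 + eps))

# Sifting stops once sd_value(...) falls below a pre-given
# threshold, typically chosen between 0.2 and 0.3.
```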
S Number criterion.
This criterion is based on the so-called S-number, which is defined as the number of consecutive siftings for which the numbers of zero-crossings and extrema are equal or differ at most by one. Specifically, an S-number is pre-selected. The sifting process stops only if, for S consecutive siftings, the numbers of zero-crossings and extrema stay the same and are equal or differ at most by one.
Threshold method.
Proposed by Rilling, Flandrin and Gonçalvés, the threshold method sets two threshold values to guarantee globally small fluctuations while taking into account locally large excursions.
Energy difference tracking.
Proposed by Cheng, Yu and Yang, the energy difference tracking method uses the assumption that the original signal is a composition of orthogonal signals, and calculates the energy based on this assumption. If the result of EMD is not an orthogonal basis of the original signal, the amount of energy will differ from the original energy.
Once a stoppage criterion is selected, the first IMF, c1, can be obtained. Overall, c1 should contain the finest scale, or the shortest-period component, of the signal. We can then separate c1 from the rest of the data by formula_5 Since the residue, r1, still contains longer-period variations in the data, it is treated as the new data and subjected to the same sifting process as described above.
This procedure can be repeated for all the subsequent rj's, and the result is
formula_6
The sifting process finally stops when the residue, rn, becomes a monotonic function from which no more IMFs can be extracted. From the above equations, we can deduce that
formula_7
Thus, a decomposition of the data into n-empirical modes is achieved. The components of the EMD are usually physically meaningful, for the characteristic scales are defined by the physical data. Flandrin et al. (2003) and Wu and Huang (2004) have shown that the EMD is equivalent to a dyadic filter bank.
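Putting the pieces together, a minimal EMD driver built on the sift_once sketch above (an illustration; a fixed sift count stands in for the stoppage criteria discussed earlier):

```python
import numpy as np

def emd(x, t, max_sift=10):
    """Decompose x into IMFs c_1, c_2, ... and a monotonic residue
    r_n, so that x = sum of the IMFs + residue."""
    imfs = []
    residue = np.asarray(x, dtype=float).copy()
    while sift_once(residue, t) is not None:  # residue still has extrema
        h = residue.copy()
        for _ in range(max_sift):
            nxt = sift_once(h, t)
            if nxt is None:
                break
            h = nxt
        imfs.append(h)
        residue = residue - h
    return imfs, residue
```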
Hilbert spectral analysis.
Having obtained the intrinsic mode function components, the instantaneous frequency can be computed using the Hilbert transform. After performing the Hilbert transform on each IMF component, the original data can be expressed as the real part, Real, in the following form:
formula_8
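A sketch of this step for a single IMF, using the analytic signal (SciPy's `hilbert` returns the analytic signal, not the transform alone; the function name is illustrative):

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_spectrum_1d(imf, fs):
    """Instantaneous amplitude a_j(t) and frequency (in Hz) of one
    IMF sampled at rate fs, via the analytic signal."""
    analytic = hilbert(imf)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
    return amplitude, inst_freq
```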
Current applications.
Two-Dimensional EMD.
In the above examples, all signals are one-dimensional. In the case of two-dimensional signals such as images and video, extending the Hilbert–Huang transform raises the following questions:
*How to determine the local maxima: should the edges of the image be considered, or should another method be used to define the maxima?
*How to choose the interpolation method after identifying the maxima. While Bezier curves may be effective for one-dimensional signals, they may not be directly applicable to two-dimensional signals.
Therefore, Nunes et al. used radial basis functions and the Riesz transform to handle genuine two-dimensional EMD. The Riesz transform of a complex function "f" on formula_9 has components indexed by "j" = 1, 2, ..., "d".
The constant formula_10 is a dimension-normalizing constant:
formula_11
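For intuition, the Riesz transform is conveniently applied in the frequency domain, where component "j" acts as the Fourier multiplier −iξⱼ/|ξ| (a standard fact not stated in the source); a 2-D sketch:

```python
import numpy as np

def riesz_transform_2d(f):
    """Two components of the 2-D Riesz transform of an image f,
    computed as the Fourier multipliers -i * xi_j / |xi|."""
    F = np.fft.fft2(f)
    xi1 = np.fft.fftfreq(f.shape[0])[:, None]
    xi2 = np.fft.fftfreq(f.shape[1])[None, :]
    norm = np.hypot(xi1, xi2)
    norm[0, 0] = 1.0  # avoid 0/0 at DC (the multiplier is 0 there anyway)
    r1 = np.fft.ifft2(-1j * xi1 / norm * F)
    r2 = np.fft.ifft2(-1j * xi2 / norm * F)
    return r1, r2
```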
Linderhed used genuine two-dimensional EMD for image compression. Compared to other compression methods, this approach provides a lower distortion rate. Song and Zhang [2001], Damerval et al. [2005], and Yuan et al. [2008] used Delaunay triangulation to find the upper and lower envelopes of the image. Depending on how the maxima are defined and which interpolation method is selected, different effects can be obtained.
Limitations.
Chen and Feng [2003] proposed a technique to improve the HHT procedure. The authors noted that the EMD is limited in distinguishing different components in narrow-band signals. The narrow band may contain either (a) components that have adjacent frequencies or (b) components that are not adjacent in frequency but for which one of the components has a much higher energy intensity than the other components. The improved technique is based on beating-phenomenon waves.
Datig and Schlurmann [2004] conducted a comprehensive study on the performance and limitations of HHT with particular applications to irregular water waves. The authors did extensive investigation into the spline interpolation. The authors discussed using additional points, both forward and backward, to determine better envelopes. They also performed a parametric study on the proposed improvement and showed significant improvement in the overall EMD computations. The authors noted that HHT is capable of differentiating between time-variant components from any given data. Their study also showed that HHT was able to distinguish between riding and carrier waves.
Huang and Wu [2008] reviewed applications of the Hilbert–Huang transformation, emphasizing that the HHT theoretical basis is purely empirical and noting that "one of the main drawbacks of EMD is mode mixing". They also outline outstanding open problems with HHT, which include end effects of the EMD, spline problems, best IMF selection, and uniqueness; the ensemble EMD (EEMD) may help mitigate the latter.
End effect.
End effect occurs at the beginning and end of the signal because there is no point before the first data point and after the last data point to be considered together. However, in most cases, these endpoints are not the extreme value of the signal. Therefore, when doing the EMD process of the HHT, the extreme envelope will diverge at the endpoints and cause significant errors.
This error distorts the IMF waveform at its endpoints. Furthermore, the error in the decomposition result accumulates through each repetition of the sifting process. When computing the instantaneous frequency and amplitude of the IMFs, the Fast Fourier Transform (FFT) may introduce the Gibbs phenomenon and frequency leakage, leading to information loss.
Several methods have been proposed to solve the end effect in HHT:
1. Characteristic wave extending method.
This method leverages the inherent variation trend of the signal to extend itself, resulting in extensions that closely resemble the characteristics of the original data.
This extension is based on the assumption that similar waveforms repeat themselves within the signal. Therefore, a triangular waveform best matching the signal's boundary is identified within the signal's waveform. Local values within the signal's boundary can then be predicted based on the corresponding local values of the triangular waveform.
Many signals exhibit internal repetition patterns. Leveraging this characteristic, the mirror extension method appends mirrored copies of the original signal to its ends. This simple and efficient approach significantly improves the accuracy of intrinsic mode functions (IMFs) for periodic signals. However, it is not suitable for non-periodic signals and can introduce side effects. Several alternative strategies have been proposed to address these limitations.
2. Data extending method.
This method designs and computes the needed parameters from the original signal to build a particular mathematical model. The model then predicts the trend at the two endpoints.
This method utilizes machine learning techniques, such as the support vector regression machine (SVRM), to tackle the end effect in HHT. It is adaptive, flexible, highly accurate, and effective for both periodic and non-periodic signals. Although its computational complexity can be a concern, SVRM is otherwise a robust and effective solution for mitigating the end effect in HHT.
By formulating the input-output relationship as linear equations with time-varying coefficients, AR modeling enables statistical prediction of the missing values at the signal's endpoints. This method requires minimal computational resources and proves particularly effective for analyzing stationary signals. However, its accuracy diminishes for non-stationary signals, and the selection of an appropriate model order can significantly impact its effectiveness.
Leveraging the power of neural network learning, these methods offer a versatile and robust approach to mitigating the end effect in HHT. Various network architectures, including RBF-NN and GRNN, have emerged, demonstrating their ability to capture complex relationships within the signal and learn from large datasets.
Mode mixing problem.
Mode mixing problem happens during the EMD process. A straightforward implementation of the sifting procedure produces mode mixing due to IMF mode rectification. Specific signals may not be separated into the same IMFs every time. This problem makes it hard to implement feature extraction, model training, and pattern recognition since the feature is no longer fixed in one labeling index. Mode mixing problem can be avoided by including an intermittence test during the HHT process.
Masking Method.
The masking method improves EMD by allowing for the separation of similar frequency components through the following steps:
1. Construct a masking signal, a pure sinusoid whose frequency is chosen based on the frequency content of the components to be separated.
2. Perform EMD on the sum of the original signal and the masking signal, and on their difference.
3. Average the corresponding IMFs of the two decompositions, so that the contribution of the masking signal cancels.
The optimal choice of the masking signal's amplitude depends on the frequencies of the components involved.
Overall, the masking method enhances EMD by providing a means to prevent mode mixing, improving the accuracy and applicability of EMD in signal analysis.
Ensemble empirical mode decomposition (EEMD).
EEMD adds finite-amplitude white noise to the original signal and then decomposes the result into IMFs using EMD. The processing steps of EEMD are as follows:
1. Add a white noise series to the targeted data.
2. Decompose the data with added white noise into IMFs.
3. Repeat steps 1 and 2 with a different white noise series each time.
4. Obtain the (ensemble) means of the corresponding IMFs of the decompositions as the final result.
The effect of the decomposition using the EEMD is that the added white noise series cancel each other (or fill the whole scale space uniformly). The noise also enables the EMD method to act as a truly dyadic filter bank for any data, which means that a signal of a similar scale in a noisy data set is likely to be contained in one IMF component, significantly reducing the chance of mode mixing. This approach preserves the physical uniqueness of the decomposition and represents a major improvement over the EMD method.
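A compact sketch of these steps on top of the emd driver above (an illustration; scaling the noise by the signal's standard deviation is an assumed convention, and IMF counts may differ between realizations):

```python
import numpy as np

def eemd(x, t, n_ensemble=100, noise_std=0.2, seed=0):
    """Ensemble EMD: average corresponding IMFs over many
    white-noise-perturbed decompositions."""
    rng = np.random.default_rng(seed)
    trials = []
    for _ in range(n_ensemble):
        noisy = x + noise_std * np.std(x) * rng.standard_normal(len(x))
        imfs, _ = emd(noisy, t)
        trials.append(imfs)
    n_common = min(len(imfs) for imfs in trials)  # keep the common IMF count
    return [np.mean([tr[k] for tr in trials], axis=0) for k in range(n_common)]
```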
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X(t)-m_1 = h_1.\\,"
},
{
"math_id": 1,
"text": "h_{1} - m_{11} = h_{11}.\\,"
},
{
"math_id": 2,
"text": "h_{1(k-1)} - m_{1k} = h_{1k}.\\,"
},
{
"math_id": 3,
"text": "c_1 = h_{1k}.\\,"
},
{
"math_id": 4,
"text": "SD_k=\\sum_{t=0}^{T}\\frac{|h_{k-1}(t)-h_k(t)|^2}{h_{k-1}^2 (t)}.\\,"
},
{
"math_id": 5,
"text": "X(t)-c_1=r_1.\\,"
},
{
"math_id": 6,
"text": "r_{n-1}-c_n=r_n.\\,"
},
{
"math_id": 7,
"text": "X(t)=\\sum_{j=1}^n c_j+r_n.\\,"
},
{
"math_id": 8,
"text": "X(t)=\\text{Real}{\\sum_{j=1}^n a_j(t)e^{i\\int\\omega_j(t)dt}}.\\,"
},
{
"math_id": 9,
"text": "R^d"
},
{
"math_id": 10,
"text": "C_d"
},
{
"math_id": 11,
"text": "c_d = \\frac{1}{\\pi\\omega_{d-1}} = \\frac{\\Gamma[(d+1)/2]}{\\pi^{(d+1)/2}}."
},
{
"math_id": 12,
"text": "s(n)"
},
{
"math_id": 13,
"text": "x(n)"
}
] |
https://en.wikipedia.org/wiki?curid=14430199
|
14431176
|
Photochemical Reflectance Index
|
The Photochemical Reflectance Index (PRI) is a reflectance measurement developed by John Gamon during his tenure as a postdoctoral fellow supervised by Christopher Field at the Carnegie Institution for Science at Stanford University. The PRI is sensitive to changes in carotenoid pigments (e.g. xanthophyll pigments) in live foliage. Carotenoid pigments are indicative of photosynthetic light use efficiency, or the rate of carbon dioxide uptake by foliage per unit energy absorbed. As such, it is used in studies of vegetation productivity and stress. Because the PRI measures plant responses to stress, it can be used to assess general ecosystem health using satellite data or other forms of remote sensing. Applications include vegetation health in evergreen shrublands, forests, and agricultural crops prior to senescence. PRI is defined by the following equation using reflectance (ρ) at 531 and 570 nm wavelength:
formula_0
Some authors use
formula_1
The values range from –1 to 1.
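A one-line implementation of the first convention (a sketch; swap the sign of the numerator for the second convention):

```python
def pri(rho_531, rho_570):
    """Photochemical Reflectance Index from reflectance at 531 nm
    and 570 nm; returns a value in [-1, 1]."""
    return (rho_531 - rho_570) / (rho_531 + rho_570)
```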
|
[
{
"math_id": 0,
"text": "PRI=\\frac{(\\rho 531- \\rho 570)}{( \\rho 531+ \\rho 570)}"
},
{
"math_id": 1,
"text": "PRI=\\frac{( \\rho 570- \\rho 531)}{( \\rho 570+ \\rho 531)}"
}
] |
https://en.wikipedia.org/wiki?curid=14431176
|
14433598
|
Quadratic residue code
|
A quadratic residue code is a type of cyclic code.
Examples.
Examples of quadratic
residue codes include the formula_0 Hamming code
over formula_1, the formula_2 binary Golay code
over formula_1 and the formula_3 ternary Golay code
over formula_4.
Constructions.
There is a quadratic residue code of length formula_5
over the finite field formula_6 whenever formula_5
and formula_7 are primes, formula_5 is odd, and
formula_7 is a quadratic residue modulo formula_5.
Its generator polynomial as a cyclic code is given by
formula_8
where formula_9 is the set of quadratic residues of
formula_5 in the set formula_10 and
formula_11 is a primitive formula_5th root of
unity in some finite extension field of formula_6.
The condition that formula_7 is a quadratic residue
of formula_5 ensures that the coefficients of formula_12
lie in formula_6. The dimension of the code is
formula_13.
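For concreteness, a small Python sketch computes the set of quadratic residues and checks the condition on formula_7 via Euler's criterion (a standard test not stated in the source):

```python
def quadratic_residues(p):
    """Set Q of nonzero quadratic residues modulo an odd prime p."""
    return sorted({pow(x, 2, p) for x in range(1, p)})

def is_quadratic_residue(l, p):
    """Euler's criterion: l is a QR mod p iff l^((p-1)/2) == 1 (mod p)."""
    return pow(l, (p - 1) // 2, p) == 1

# p = 23, l = 2 satisfies the conditions, giving a binary code of
# dimension (23 + 1) / 2 = 12: the (23,12) binary Golay code.
assert is_quadratic_residue(2, 23)
assert len(quadratic_residues(23)) == 11
```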
Replacing formula_14 by another primitive formula_5-th
root of unity formula_15 either results in the same code
or an equivalent code, according to whether or not formula_16
is a quadratic residue of formula_5.
An alternative construction avoids roots of unity. Define
formula_17
for a suitable formula_18. When formula_19
choose formula_20 to ensure that formula_21.
If formula_7 is odd, choose formula_22,
where formula_23 or formula_24 according to whether
formula_5 is congruent to formula_25 or formula_26
modulo formula_27. Then formula_28 also generates
a quadratic residue code; more precisely the ideal of
formula_29 generated by formula_28
corresponds to the quadratic residue code.
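For the binary case formula_19, a sketch of this construction (the coefficient list is in increasing degree, and the constant is chosen so that formula_21 holds over GF(2)):

```python
def qr_code_generator_gf2(p):
    """Coefficients (degrees 0..p-1) of g(x) = c + sum_{j in Q} x^j
    over GF(2), with c set so that g(1) = 1."""
    Q = {pow(x, 2, p) for x in range(1, p)}
    c = (1 + len(Q)) % 2          # g(1) = c + |Q| must equal 1 mod 2
    return [c] + [1 if j in Q else 0 for j in range(1, p)]

# Example: p = 7 gives Q = {1, 2, 4} and g(x) = x + x^2 + x^4
# (c = 0 since |Q| = 3 is odd); the ideal it generates corresponds
# to the (7,4) Hamming code.
```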
Weight.
The minimum weight of a quadratic residue code of length formula_5
is greater than formula_30; this is the square root bound.
Extended code.
Adding an overall parity-check digit to a quadratic residue code
gives an extended quadratic residue code. When
formula_31 (mod formula_27) an extended quadratic
residue code is self-dual; otherwise it is equivalent but not
equal to its dual. By the Gleason–Prange theorem (named for Andrew Gleason and Eugene Prange), the automorphism group of an extended quadratic residue
code has a subgroup which is isomorphic to
either formula_32 or formula_33.
Decoding Method.
Since the late 1980s, many algebraic decoding algorithms have been developed for correcting errors in quadratic residue codes. These algorithms can achieve the (true) error-correcting capacity formula_34 of quadratic residue codes with code lengths up to 113. However, decoding of long binary quadratic residue codes and of non-binary quadratic residue codes continues to be a challenge. Decoding quadratic residue codes remains an active research area in the theory of error-correcting codes.
|
[
{
"math_id": 0,
"text": "(7,4)"
},
{
"math_id": 1,
"text": "GF(2)"
},
{
"math_id": 2,
"text": "(23,12)"
},
{
"math_id": 3,
"text": "(11,6)"
},
{
"math_id": 4,
"text": "GF(3)"
},
{
"math_id": 5,
"text": "p"
},
{
"math_id": 6,
"text": "GF(l)"
},
{
"math_id": 7,
"text": "l"
},
{
"math_id": 8,
"text": "f(x)=\\prod_{j\\in Q}(x-\\zeta^j)"
},
{
"math_id": 9,
"text": "Q"
},
{
"math_id": 10,
"text": "\\{1,2,\\ldots,p-1\\}"
},
{
"math_id": 11,
"text": " \\zeta"
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "(p+1)/2"
},
{
"math_id": 14,
"text": "\\zeta"
},
{
"math_id": 15,
"text": "\\zeta^r"
},
{
"math_id": 16,
"text": "r"
},
{
"math_id": 17,
"text": "g(x)=c+\\sum_{j\\in Q}x^j"
},
{
"math_id": 18,
"text": "c\\in GF(l)"
},
{
"math_id": 19,
"text": "l=2"
},
{
"math_id": 20,
"text": "c"
},
{
"math_id": 21,
"text": "g(1)=1"
},
{
"math_id": 22,
"text": "c=(1+\\sqrt{p^*})/2"
},
{
"math_id": 23,
"text": "p^*=p"
},
{
"math_id": 24,
"text": "-p"
},
{
"math_id": 25,
"text": "1"
},
{
"math_id": 26,
"text": "3"
},
{
"math_id": 27,
"text": "4"
},
{
"math_id": 28,
"text": "g(x)"
},
{
"math_id": 29,
"text": "F_l[X]/\\langle X^p-1\\rangle"
},
{
"math_id": 30,
"text": "\\sqrt{p}"
},
{
"math_id": 31,
"text": "p\\equiv 3"
},
{
"math_id": 32,
"text": "PSL_2(p)"
},
{
"math_id": 33,
"text": "SL_2(p)"
},
{
"math_id": 34,
"text": "\\lfloor(d-1)/2\\rfloor"
}
] |
https://en.wikipedia.org/wiki?curid=14433598
|
14436313
|
Adiabatic quantum computation
|
Type of quantum information processing
Adiabatic quantum computation (AQC) is a form of quantum computing which relies on the adiabatic theorem to perform calculations and is closely related to quantum annealing.
Description.
First, a (potentially complicated) Hamiltonian is found whose ground state describes the solution to the problem of interest. Next, a system with a simple Hamiltonian is prepared and initialized to the ground state. Finally, the simple Hamiltonian is adiabatically evolved to the desired complicated Hamiltonian. By the adiabatic theorem, the system remains in the ground state, so at the end, the state of the system describes the solution to the problem. Adiabatic quantum computing has been shown to be polynomially equivalent to conventional quantum computing in the circuit model.
The time complexity for an adiabatic algorithm is the time taken to complete the adiabatic evolution which is dependent on the gap in the energy eigenvalues (spectral gap) of the Hamiltonian. Specifically, if the system is to be kept in the ground state, the energy gap between the ground state and the first excited state of formula_0 provides an upper bound on the rate at which the Hamiltonian can be evolved at time formula_1. When the spectral gap is small, the Hamiltonian has to be evolved slowly. The runtime for the entire algorithm can be bounded by:
formula_2
where formula_3 is the minimum spectral gap for formula_0.
AQC is a possible method to get around the problem of energy relaxation. Since the quantum system is in the ground state, interference with the outside world cannot make it move to a lower state. If the energy of the outside world (that is, the "temperature of the bath") is kept lower than the energy gap between the ground state and the next higher energy state, the system has a proportionally lower probability of going to a higher energy state. Thus the system can stay in a single system eigenstate as long as needed.
Universality results in the adiabatic model are tied to quantum complexity and QMA-hard problems. The k-local Hamiltonian is QMA-complete for k ≥ 2. QMA-hardness results are known for physically realistic lattice models of qubits such as
formula_4
where formula_5 represent the Pauli matrices formula_6. Such models are used for universal adiabatic quantum computation. The Hamiltonians for the QMA-complete problem can also be restricted to act on a two dimensional grid of qubits or a line of quantum particles with 12 states per particle. If such models were found to be physically realizable, they too could be used to form the building blocks of a universal adiabatic quantum computer.
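As an illustration (a sketch, not from the source), such a Hamiltonian can be assembled as a dense matrix for a few qubits using Kronecker products, with the coefficients h, J, K supplied as NumPy arrays:

```python
import numpy as np
from functools import reduce

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_at(op, i, n):
    """Single-site operator op acting on qubit i of an n-qubit register."""
    return reduce(np.kron, [op if k == i else I2 for k in range(n)])

def zz_xx_hamiltonian(h, J, K):
    """H = sum_i h_i Z_i + sum_{i<j} (J_ij Z_i Z_j + K_ij X_i X_j)."""
    n = len(h)
    H = sum(h[i] * op_at(Z, i, n) for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            H = H + J[i, j] * op_at(Z, i, n) @ op_at(Z, j, n)
            H = H + K[i, j] * op_at(X, i, n) @ op_at(X, j, n)
    return H
```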
In practice, there are problems during a computation. As the Hamiltonian is gradually changed, the interesting parts (quantum behavior as opposed to classical) occur when multiple qubits are close to a tipping point. It is exactly at this point when the ground state (one set of qubit orientations) gets very close to a first energy state (a different arrangement of orientations). Adding a slight amount of energy (from the external bath, or as a result of slowly changing the Hamiltonian) could take the system out of the ground state, and ruin the calculation. Trying to perform the calculation more quickly increases the external energy; scaling the number of qubits makes the energy gap at the tipping points smaller.
Adiabatic quantum computation in satisfiability problems.
Adiabatic quantum computation solves satisfiability problems and other combinatorial search problems. Specifically, these kinds of problems seek a state that satisfies
formula_7. This expression expresses the satisfiability of M clauses, where each clause formula_8 has the value True or False and can involve n bits. Each bit is a variable formula_9 such that formula_8 is a Boolean function of formula_10. QAA solves this kind of problem using quantum adiabatic evolution. It starts with an initial Hamiltonian formula_11:
formula_12
where formula_13 denotes the Hamiltonian corresponding to clause formula_8. Usually, the choice of formula_13 does not depend on the particular clause, so only the total number of times each bit is involved in all clauses matters. Next, it goes through an adiabatic evolution, ending in the problem Hamiltonian formula_14:
formula_15
where formula_16 is the satisfying Hamiltonian of clause C.
It has eigenvalues:
formula_17
For a simple path of adiabatic evolution with run time T, consider:
formula_18
and let formula_19. This results in:
formula_20, which is the adiabatic evolution Hamiltonian of the algorithm.
In accordance with the adiabatic theorem, start from the ground state of Hamiltonian formula_11 at the beginning, proceed through an adiabatic process, and end in the ground state of problem Hamiltonian formula_14.
Then measure the z-component of each of the n spins in the final state. This will produce a string formula_21 which is highly likely to be the result of the satisfiability problem. The run time T must be sufficiently long to assure correctness of the result. According to the adiabatic theorem, T is about formula_22, where
formula_23 is the minimum energy gap between ground state and first excited state.
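For small instances, this minimum gap can be estimated by direct diagonalization on a grid of "s" values (a sketch; any Hermitian matrices, such as those built with the helper above, can be used):

```python
import numpy as np

def minimum_gap(H_B, H_P, n_steps=201):
    """Scan s in [0, 1] and return min over s of E_1(s) - E_0(s)
    for H(s) = (1 - s) H_B + s H_P."""
    g_min = np.inf
    for s in np.linspace(0.0, 1.0, n_steps):
        energies = np.linalg.eigvalsh((1.0 - s) * H_B + s * H_P)
        g_min = min(g_min, energies[1] - energies[0])
    return g_min

# The required adiabatic run time then scales like 1 / g_min**2.
```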
Comparison to gate-based quantum computing.
Adiabatic quantum computing is equivalent in power to standard gate-based quantum computing that implements arbitrary unitary operations. However, the mapping challenge on gate-based quantum devices differs substantially from that on quantum annealers, as logical variables are mapped only to single qubits and not to chains.
D-Wave quantum processors.
The D-Wave One is a device made by Canadian company D-Wave Systems, which claims that it uses quantum annealing to solve optimization problems. On 25 May 2011, Lockheed-Martin purchased a D-Wave One for about US$10 million. In May 2013, Google purchased a 512 qubit D-Wave Two.
The question of whether the D-Wave processors offer a speedup over a classical processor is still unanswered. Tests performed by researchers at Quantum Artificial Intelligence Lab (NASA), USC, ETH Zurich, and Google show that as of 2015, there is no evidence of a quantum advantage.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H(t)"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "T = O\\left(\\frac{1}{g_{min}^2}\\right)"
},
{
"math_id": 3,
"text": "g_{min}"
},
{
"math_id": 4,
"text": "\nH = \\sum_{i}h_i Z_i + \\sum_{i<j}J^{ij}Z_iZ_j + \\sum_{i<j}K^{ij}X_iX_j\n"
},
{
"math_id": 5,
"text": "Z, X"
},
{
"math_id": 6,
"text": "\\sigma_z, \\sigma_x"
},
{
"math_id": 7,
"text": "\n\tC_1 \\wedge C_2 \\wedge \\cdots \\wedge C_M\n"
},
{
"math_id": 8,
"text": "C_i"
},
{
"math_id": 9,
"text": "x_j\\in \\{ 0,1\\}"
},
{
"math_id": 10,
"text": "x_1, x_2, \\dots , x_n"
},
{
"math_id": 11,
"text": "H_B"
},
{
"math_id": 12,
"text": "\n\tH_B=H_{B_1}+H_{B_2}+\\dots+H_{B_M}\n"
},
{
"math_id": 13,
"text": "H_{B_i}"
},
{
"math_id": 14,
"text": "H_P"
},
{
"math_id": 15,
"text": "\n\tH_P=\\sum\\limits_{C}^{} H_{P,C}\n"
},
{
"math_id": 16,
"text": "H_{P,C}"
},
{
"math_id": 17,
"text": "\n\th_C(z_{1C},z_{2C}\\dots z_{nC})=\n\t\\begin{cases} \n\t0 & \\mbox{clause } C \\mbox{ satisfied} \\\\ \n\t1 & \\mbox{clause } C \\mbox{ violated}\n\t\\end{cases}\n"
},
{
"math_id": 18,
"text": "\n\tH(t)=(1-t/T)H_{B}+(t/T)H_{P}\n"
},
{
"math_id": 19,
"text": "s=t/T"
},
{
"math_id": 20,
"text": "\n\t\\tilde{H}(s)=(1-s)H_{B}+sH_{P}\n"
},
{
"math_id": 21,
"text": "z_1,z_2,\\dots,z_n"
},
{
"math_id": 22,
"text": "\\varepsilon/g_\\mathrm{min}^{2}"
},
{
"math_id": 23,
"text": "\n\tg_\\mathrm{min}=\\min_{0\\le s\\le 1}(E_1(s)-E_0(s))\n"
}
] |
https://en.wikipedia.org/wiki?curid=14436313
|
14439377
|
OXGR1
|
Protein-coding gene in the species Homo sapiens
OXGR1, i.e., 2-oxoglutarate receptor 1 (also known as GPR99, cysteinyl leukotriene receptor E, i.e., CysLTE, and cysteinyl leukotriene receptor 3, i.e., CysLT3) is a G protein-coupled receptor located on the surface membranes of certain cells. It functions by binding one of its ligands and thereby becoming active in triggering pre-programmed responses in its parent cells. OXGR1 has been shown to be activated by α-ketoglutarate, itaconate, and three cysteinyl-containing leukotrienes (abbreviated as CysLTs), leukotriene E4 (i.e., LTE4), LTC4, and LTD4. α-Ketoglutarate and itaconate are the dianionic forms of α-ketoglutaric acid and itaconic acid, respectively. α-Ketoglutaric and itaconic acids are short-chain dicarboxylic acids that have two carboxyl groups (notated as −COOH), both of which are bound to hydrogen. However, at the basic pH levels (i.e., pH > 7) in virtually all animal tissues, α-ketoglutaric acid and itaconic acid exist almost exclusively as α-ketoglutarate and itaconate, i.e., with their carboxy residues being negatively charged (notated as −COOformula_0), because they are not bound to hydrogen (see Conjugate acid-base theory). It is α-ketoglutarate and itaconate, not α-ketoglutaric or itaconic acids, which activate OXGR1.
History.
In 2001, a human gene projected to code for a G protein-coupled receptor (i.e., a receptor that stimulates cells by activating G proteins) was identified. Its protein product was classified as an orphan receptor, i.e., a receptor whose activating ligand and function are unknown. The projected amino acid sequence of the protein encoded by this gene bore similarities to the purinergic receptor P2Y1, suggesting that it might, like P2Y1, be a receptor for purines. This study named the new receptor and its gene GPR80 and "GPR80", respectively. Shortly thereafter, a second study found this same gene, indicated that it coded for a G protein-coupled receptor with an amino acid sequence similar to those of two purinergic receptors, P2Y1 and GPR91, and determined that a large series of purine nucleotides, other nucleotides, and derivatives of these compounds did not activate this receptor. The study named this receptor GPR99. A third study published in 2004 reported that an orphan G protein-coupled receptor with an amino acid sequence similar to the P2Y family of nucleotide receptors was activated by two purines, adenosine and adenosine monophosphate. The study nominated this receptor to be a purinergic receptor and named it the P2Y15 receptor. However, a review in 2004 of these three studies by members of the International Union of Pharmacology Subcommittee for P2Y Receptor Nomenclature and Classification decided that GPR80/GPR99 is not a receptor for adenosine, adenosine monophosphate, or any other nucleotide. A fourth study, also published in 2004, found that GPR80/GPR99-bearing cells responded to α-ketoglutarate. In 2013, IUPHAR accepted this report and the names OXGR1 and "OXGR1" for the α-ketoglutarate-responsive receptor and its gene, respectively. In 2013, a fifth study found that LTE4, LTC4, and LTD4 activated OXGR1. Finally, a 2023 study provided evidence that itaconate also activated OXGR1.
"OXGR1" gene.
The human "OXGR1" gene is located on chromosome 13 at position 13q32.2; that is, it resides at position 32.2 (i.e., region 3, band 2, sub-band 2) on the "q" arm (i.e., long arm) of chromosome 13. "OXGR1" codes for a G protein coupled-receptor that is primarily linked to and activates heterotrimeric G proteins containing the Gq alpha subunit. When bound to one of its ligands, OXGR1 activates Gq alpha subunit-regulated cellular pathways (see Functions of the Gq alpha pathways) that stimulate the cellular responses that these pathways are programmed to elicit.
OXGR1 activating and inhibiting ligands.
Activating ligands.
OXGR1 is the receptor for α-ketoglutarate, LTE4, LTC4, LTD4, and itaconate. These ligands have the following relative potencies in stimulating responses in cultures of cells expressing human OXGR1:
LTE4 » LTC4 = LTD4 > α-ketoglutarate = itaconate
LTE4 is able to stimulate responses in at least some of its target cells at concentrations as low as a few picomoles/liter whereas LTC4, LTD4, α-ketoglutarate, and itaconate require far higher levels to do so.
The relative potencies that LTC4, LTD4, and LTE4 have in activating their target receptors, i.e., cysteinyl leukotriene receptor 1 (CysLTR1), cysteinyl leukotriene receptor 2 (CysLTR2), and OXGR1 are:
CysLTR1: LTD4 > LTC4 » LTE4
CysLTR2: LTC4 = LTD4 » LTE4
OXGR1: LTE4 > LTC4 > LTD4
These relationships suggest that CysLTR1 and CysLTR2 are physiological receptors for LTD4 and LTC4 but, due to LTE4's relative weakness in stimulating these two receptors, perhaps not, or to a far lesser extent, for LTE4. Indeed, the LTE4 concentrations needed to activate CysLTR1 and CysLTR2 may be higher than those that normally occur "in vivo" (see Functions of OXGR1 in mediating the actions of LTE4, LTD4, and LTC4). These potency relationships suggest that LTE4's actions are mediated primarily by OXGR1. The following findings support this suggestion. First, pretreatment of guinea pig trachea and human bronchial smooth muscle with LTE4 but not with LTC4 or LTD4 enhanced their smooth muscle contraction responses to histamine. This suggests LTE4's target receptor differs from the receptors targeted by LTC4 and LTD4. Second, LTE4 was as potent as LTC4 and LTD4 in eliciting vascular leakage when injected into the skin of guinea pigs and humans; the inhalation of LTE4 by asthmatic individuals caused the accumulation of eosinophils and basophils in their bronchial mucosa whereas the inhalation of LTD4 did not have this effect; and mice engineered to lack CysLTR1 and CysLTR2 receptors exhibited edema responses to the intradermal injection of LTC4, LTD4, and LTE4, but LTE4 was 64-fold more potent in triggering this response in these mice than in wild type mice. Since LTE4 should have been far less active than LTC4 or LTD4 in triggering vascular leakage, recruiting the cited cells into the lung, and causing vascular edema responses in mice lacking CysLT1 and CysLT2 receptors, these findings imply that the latter two receptors are not the primary receptors mediating LTE4's actions. And third, mice engineered to lack all three CysLTR1, CysLTR2, and OXGR1 receptors did not exhibit dermal edema responses to the injection of LTC4, LTD4, or LTE4, thereby indicating that at least one of these receptors was responsible for each of their actions. Overall, these findings suggest that LTE4 commonly acts through a different receptor than LTC4 and LTD4 and that this receptor is OXGR1. Indeed, studies have defined OXGR1 as the high affinity receptor for LTE4. Nonetheless, several studies have reported that cultures of certain types of inflammatory cells, e.g., the human LAD2 (but not LUVA) mast cell lines, T helper cell lymphocytes that have differentiated into Th2 cells, and mouse ILC2 lymphocytes (also termed type 2 innate lymphoid cells), respond to LTE4. The levels of LTE4 used in some of these studies, however, may not develop in animals or humans. In all events, dysfunctions caused by deleting the "OXGR1" gene in cells, tissues or animals, and dysfunctions in humans that are associated with a lack of a viable "OXGR1" gene, implicate the lack of OXGR1 protein in the development of these dysfunctions.
Inhibiting ligand.
OXGR1 is inhibited by Montelukast, a well-known and clinically useful receptor antagonist, i.e., inhibitor, of CysLTR1 but not CysLTR2 activation. (Inhibitors of CysLTR2 have not been identified.) In consequence, Montelukast blocks the binding and thereby the actions of LTE4, LTC4, and LTD4 that are mediated by OXGR1. It is presumed to act similarly to block the actions of α-ketoglutarate and itaconate on OXGR1. It is not yet known if other CysLTR1 inhibitors can mimic Montelukast in blocking OXGR1's responses to α-ketoglutarate and itaconate. Montelukast is used to treat various disorders including asthma, exercise-induced bronchoconstriction, allergic rhinitis, primary dysmenorrhea (i.e. menstrual cramps not associated with known causes, see causes of dysmenorrhea), and urticaria (see Functions of CysLTR1). While it is likely that its inhibition of CysLTR1 accounts for its effects in these diseases, the ability of these leukotrienes, particularly LTE4, to stimulate OXGR1 allows that Montelukast's effects on these conditions may be due at least in part to its ability to block OXGR1.
Expression.
Based on their content of the OXGR1 protein or mRNA that directs its synthesis, OXGR1 is expressed in human: a) kidney, placenta, and fetal brain; b) cells that promote allergic and other hypersensitivity reactions, i.e., eosinophils and mast cells; c) tissues involved in allergic and other hypersensitivity reactions such as the lung trachea, salivary glands, and nasal mucosa; and d) fibroblasts, i.e., cells that synthesize the extracellular matrix and collagen (when pathologically activated, these cells produce tissue fibrosis). In mice, Oxgr1 mRNA is highly expressed in kidneys, testes, smooth muscle tissues, nasal epithelial cells, and lung epithelial cells.
Functions.
Associated with OXGR1 gene defects or deficiencies.
The following studies have defined OXGR1 functions based on the presence of disorders in mice or humans that do not have a viable OXGR1 protein. It is not been determined which of OXGR1's ligands, if any, are responsible for stimulating OXGR1 to prevent these disorders.
Otitis media.
Mice lacking OXGR1 protein due to the knockout of their "OXGR1" gene developed (with 82% penetrance) otitis media (i.e., inflammation in their middle ears), mucus effusions in their middle ears, and hearing losses, all of which had many characteristics of human otitis media. The study did not find evidence that these mice had a middle ear bacterial infection. (Infection with "Streptococcus pneumoniae, Moraxella catarrhalis", or other bacteria is one of the most common causes of otitis media.) While the underlying mechanism for the development of this otitis has not been well-defined, the study suggests that OXGR1 functions to prevent middle ear inflammations and that "Oxgr1" gene knockout mice may be a good model to study and relate to human ear pathophysiology.
Goblet cells.
Mice lacking OXGR1 protein due to the knockout of their OXGR1 gene had significantly fewer mucin-containing goblet cells in their nasal mucosa than control mice. "Cysltr1" gene knockout mice and "Cysltr2" gene knockout mice had normal numbers of these nasal goblet cells. This finding implicates OXGR1 in maintaining higher numbers of airway goblet cells.
Kidney stones and nephrocalcinosis.
Majmunda et al. identified 6 individuals from different families with members that had histories of developing calcium-containing kidney stones (also termed nephrolithiasis) and/or nephrocalcinosis (i.e., the deposition of calcium-containing material in multiple sites throughout the kidney). Each of these 6 individuals had dominant variants in their "OXGR1" gene. These variant genes appeared (based on their "OXGR1" gene's DNA structure as defined by exome sequencing) to be unable to form an active OXGR1 protein. The study proposed that the "OXGR1" gene is a candidate for functioning to suppress the development of calcium-containing nephrolithiasis and nephrocalcinosis in humans.
Associated with α-ketoglutarate-regulated functions.
Studies in rodents have found that the ability of α-ketoglutarate to regulate various functions is dependent on its activation of OXGR1 (see OXGR1 receptor-dependent bioactions of α-ketoglutarate). These functions include: promoting normal kidney functions such as the absorption of key urinary ions and maintenance of acid base balance; regulating the development of glucose tolerance as defined by glucose tolerance tests; suppressing the development of diet-induced obesity; and suppressing the muscle atrophy response to excessive exercise.
Associated with LTE4-induced functions.
A study showed that LTE4, LTC4, and LTD4 produce similar levels of vascular leakage and localized tissue swelling when injected into the skin of guinea pigs or humans. Studies that examined the effects of using various doses of these LTs after injection into the earlobes of mice found that, in comparison to control mice, OXGR1 gene knockout mice showed virtually no response to injection of a low dose of LTE4, a greatly reduced response to injection of an intermediate dose of LTE4, and a somewhat delayed but otherwise similar response to a high dose of LTE4 (these doses were 0.008, 0.0625, and 0.5 nmols, respectively). The study concluded that lower levels of LTE4 act primarily through OXGR1 to cause vascular permeability and, since it is the major cysteinyl leukotriene that accumulates in inflamed tissues, suggested that OXGR1 may be a therapeutic target for treating inflammatory disorders. Another study found that the application of an extract of "Alternaria alternata" (a fungus that infects plants and causes allergic diseases, infections, and toxic reactions in animals and humans) into the noses of mice caused their nasal epithelial cells to release mucin and their nasal submucosa to swell. (The nasal as well as lung epithelial cells of these mice expressed OXGR1.) "OXGR1" gene knockout mice did not show these responses to the fungal toxin. The study also showed that a) "Cysltr1" and "Cysltr2" double gene knockout mice had a full mucin release response to the toxin and b) "Cysltr2" gene knockout mice had full submucosal swelling responses to the toxin but "Cysltr1" gene knockout mice did not. The study concluded that LTE4's activation of OXGR1 controls key airway epithelial cell functions in mice and suggested that the inhibition of LTE4-induced OXGR1 activation may prove useful for treating asthma and other allergic and inflammatory disorders. A subsequent study examined the effects of LTE4-OXGR1 signaling on a certain type of tuft cell. When located in the intestinal mucosa, these cells are termed tuft cells, but when located in the nasal respiratory mucosa they are termed solitary chemosensory cells, and when located in the trachea they are termed brush cells. Control mice that inhaled the mold "Alternaria alternata", the American house dust mite "Dermatophagoides farinae", or LTE4 developed increases in the number of their tracheal brush cells, release of the inflammation-promoting cytokine interleukin 25, and lung inflammation, whereas "OXGR1" gene knockout mice did not show these responses. These findings indicate that the activation of OXGR1 regulates airway brush cell numbers, interleukin 25 release, and inflammation.
Associated with itaconate-regulated functions.
A study reported in 2023 was the first and to date (2024) only study indicating that itaconate's actions are mediated by activating OXGR1. This study showed that itaconate stimulated the nasal secretion of mucus when applied to the noses of mice, reduced the number of "Pseudomonas aeruginosa" bacteria in the lung tissue and bronchoalveolar lavage fluid (i.e., airway washing) of mice injected intranasally with these bacteria, and stimulated cultured mouse respiratory epithelial cells to raise their cytosolic Ca2+ levels (an indicator of cell activation). Itaconate was unable to induce these responses in "OXGR1" gene knockout mice or in respiratory epithelial cells isolated from these mice. The study concluded that the activation of OXGR1 by itaconate contributes to regulating the pulmonary innate immune response to "Pseudomonas aeruginosa" and might also do so in other bacterial infections.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
"This article incorporates text from the United States National Library of Medicine, which is in the public domain."
|
[
{
"math_id": 0,
"text": "^{-}"
}
] |
https://en.wikipedia.org/wiki?curid=14439377
|
14440329
|
Channel-conductance-controlling ATPase
|
Class of enzymes
In enzymology, a channel-conductance-controlling ATPase (EC 3.6.3.49) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (channel-conductance-controlling).
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1XMI and 1XMJ.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440329
|
14440350
|
Chloroplast protein-transporting ATPase
|
Class of enzymes
In enzymology, a chloroplast protein-transporting ATPase (EC 3.6.3.52) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (chloroplast protein-importing).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440350
|
14440385
|
DCTP diphosphatase
|
Class of enzymes
In enzymology, a dCTP diphosphatase (EC 3.6.1.12) is an enzyme that catalyzes the chemical reaction
dCTP + H2O formula_0 dCMP + diphosphate
Thus, the two substrates of this enzyme are dCTP and H2O, whereas its two products are dCMP and diphosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is dCTP nucleotidohydrolase. Other names in common use include deoxycytidine-triphosphatase, dCTPase, dCTP pyrophosphatase, deoxycytidine triphosphatase, deoxy-CTPase, and dCTPase. This enzyme participates in pyrimidine metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440385
|
14440399
|
Diphosphoinositol-polyphosphate diphosphatase
|
Class of enzymes
In enzymology, a diphosphoinositol-polyphosphate diphosphatase (EC 3.6.1.52) is an enzyme that catalyzes the chemical reaction
diphospho-myo-inositol polyphosphate + H2O formula_0 myo-inositol polyphosphate + phosphate
Thus, the two substrates of this enzyme are diphospho-myo-inositol polyphosphate and H2O, whereas its two products are myo-inositol polyphosphate and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is diphospho-myo-inositol-polyphosphate diphosphohydrolase. Other names in common use include diphosphoinositol-polyphosphate phosphohydrolase, and DIPP.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 2DUK, 2FVV, and 2Q9P.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440399
|
14440413
|
Cl-transporting ATPase
|
Class of enzymes
In enzymology, a Cl-transporting ATPase (EC 3.6.3.11) is an enzyme that catalyzes the chemical reaction
ATP + H2O + Cl−out formula_0 ADP + phosphate + Cl−in
The 3 substrates of this enzyme are ATP, H2O, and Cl−, whereas its 3 products are ADP, phosphate, and Cl−.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (Cl−-importing). Other names in common use include Cl−-translocating ATPase, and Cl−-motive ATPase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440413
|
14440427
|
Dolichyldiphosphatase
|
Class of enzymes
In enzymology, a dolichyldiphosphatase (EC 3.6.1.43) is an enzyme that catalyzes the chemical reaction
dolichyl diphosphate + H2O formula_0 dolichyl phosphate + phosphate
Thus, the two substrates of this enzyme are dolichyl diphosphate and H2O, whereas its two products are dolichyl phosphate and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is dolichyl-diphosphate phosphohydrolase. Other names in common use include dolichol diphosphatase, dolichyl pyrophosphatase, dolichyl pyrophosphate phosphatase, dolichyl diphosphate phosphohydrolase, and Dol-P-P phosphohydrolase. This enzyme participates in N-glycan biosynthesis.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440427
|
14440452
|
DUTP diphosphatase
|
Enzyme
In enzymology, a dUTP diphosphatase (EC 3.6.1.23) is an enzyme that catalyzes the chemical reaction
dUTP + H2O formula_0 dUMP + diphosphate
Thus, the two substrates of this enzyme are dUTP and H2O, whereas its two products are dUMP and diphosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is dUTP nucleotidohydrolase. Other names in common use include deoxyuridine-triphosphatase, dUTPase, dUTP pyrophosphatase, desoxyuridine 5'-triphosphate nucleotidohydrolase, and desoxyuridine 5'-triphosphatase. This enzyme participates in pyrimidine metabolism.
This enzyme has a dual function: it removes dUTP from the deoxynucleotide pool, which reduces the probability of this base being incorporated into DNA by DNA polymerases, and it produces dUMP, the precursor of dTTP. Lack or inhibition of dUTPase activity leads to harmful perturbations in the nucleotide pool, increasing the uracil content of DNA and activating a futile cycle of DNA repair.
Structural studies.
As of late 2007, 48 structures have been solved for this class of enzymes, with PDB accession codes 1DUC, 1DUD, 1DUN, 1DUP, 1DUT, 1EU5, 1EUW, 1F7D, 1F7K, 1F7N, 1F7O, 1F7P, 1F7Q, 1F7R, 1MQ7, 1OGH, 1OGK, 1OGL, 1PKH, 1PKJ, 1PKK, 1RN8, 1RNJ, 1SEH, 1SIX, 1SJN, 1SLH, 1SM8, 1SMC, 1SNF, 1SYL, 1VYQ, 1W2Y, 2BSY, 2BT1, 2CJE, 2D4L, 2D4M, 2D4N, 2HQU, 2HR6, 2HRM, 2OKB, 2OKD, 2OKE, 2OL0, 2OL1, and 2PY4.
There are at least two structurally distinct families of dUTPases. The crystal structure of human dUTPase reveals that each subunit of the dUTPase trimer folds into an eight-stranded jelly-roll beta barrel, with the C-terminal beta strands interchanged among the subunits. The structure is similar to that of the "Escherichia coli" enzyme, despite low sequence homology between the two enzymes.
The second family has a novel all-alpha fold; members of this family are unrelated to the all-beta fold found in the dUTPases of the majority of organisms.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440452
|
14440477
|
Endopolyphosphatase
|
Class of enzymes
In enzymology, an endopolyphosphatase (EC 3.6.1.10) is an enzyme that catalyzes the chemical reaction
polyphosphate + n H2O formula_0 (n+1) oligophosphate
Thus, the two substrates of this enzyme are polyphosphate and H2O, whereas its product is oligophosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is polyphosphate polyphosphohydrolase. Other names in common use include polyphosphate depolymerase, metaphosphatase, polyphosphatase, and polymetaphosphatase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440477
|
14440529
|
FAD diphosphatase
|
Class of enzymes
In enzymology, a FAD diphosphatase (EC 3.6.1.18) is an enzyme that catalyzes the chemical reaction
FAD + H2O formula_0 AMP + FMN
Thus, the two substrates of this enzyme are FAD and H2O, whereas its two products are AMP and FMN.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is FAD nucleotidohydrolase. Other names in common use include FAD pyrophosphatase, riboflavin adenine dinucleotide pyrophosphatase, flavin adenine dinucleotide pyrophosphatase, riboflavine adenine dinucleotide pyrophosphatase, and flavine adenine dinucleotide pyrophosphatase. This enzyme participates in riboflavin metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440529
|
14440543
|
Fatty-acyl-CoA-transporting ATPase
|
Class of enzymes
In enzymology, a fatty-acyl-CoA-transporting ATPase (EC 7.6.2.4) is an enzyme that catalyzes the chemical reaction
ATP + H2O + fatty acyl CoAcis formula_0 ADP + phosphate + fatty acyl CoAtrans
The 3 substrates of this enzyme are ATP, H2O, and fatty acyl CoAcis, whereas its 3 products are ADP, phosphate, and fatty acyl CoAtrans.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (fatty-acyl-CoA-transporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440543
|
14440561
|
Fe3+-transporting ATPase
|
Enzyme
In enzymology, a Fe3+-transporting ATPase (EC 7.2.2.7) is an enzyme that catalyzes the chemical reaction
ATP + H2O + Fe3+out formula_0 ADP + phosphate + Fe3+in
The 3 substrates of this enzyme are ATP, H2O, and Fe3+, whereas its 3 products are ADP, phosphate, and Fe3+.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (ferric-ion-transporting). This enzyme participates in the general ABC transporter pathway.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440561
|
14440583
|
Glycerol-3-phosphate-transporting ATPase
|
Class of enzymes
In enzymology, a glycerol-3-phosphate-transporting ATPase (EC 3.6.3.20) is an enzyme that catalyzes the chemical reaction
ATP + H2O + glycerol-3-phosphateout formula_0 ADP + phosphate + glycerol-3-phosphatein
The 3 substrates of this enzyme are ATP, H2O, and glycerol-3-phosphate, whereas its 3 products are ADP, phosphate, and glycerol-3-phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (glycerol-3-phosphate-importing). This enzyme participates in the general ABC transporter pathway.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440583
|
14440607
|
Guanine-transporting ATPase
|
Class of enzymes
In enzymology, a guanine-transporting ATPase (EC 3.6.3.37) is an enzyme that catalyzes the chemical reaction
ATP + H2O + guanineout formula_0 ADP + phosphate + guaninein
The 3 substrates of this enzyme are ATP, H2O, and guanine, whereas its 3 products are ADP, phosphate, and guanine. This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (guanine-importing).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440607
|
14440625
|
Guanosine-5'-triphosphate,3'-diphosphate diphosphatase
|
Enzyme
In enzymology, a guanosine-5'-triphosphate,3'-diphosphate diphosphatase (EC 3.6.1.40) is an enzyme that catalyzes the chemical reaction
guanosine 5'-triphosphate,3'-diphosphate + H2O formula_0 guanosine 5'-diphosphate,3'-diphosphate + phosphate
Thus, the two substrates of this enzyme are guanosine 5'-triphosphate,3'-diphosphate and H2O, whereas its two products are guanosine 5'-diphosphate,3'-diphosphate and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is guanosine-5'-triphosphate,3'-diphosphate 5'-phosphohydrolase. Other names in common use include pppGpp 5'-phosphohydrolase, guanosine-5'-triphosphate,3'-diphosphate pyrophosphatase, guanosine 5'-triphosphate-3'-diphosphate 5'-phosphohydrolase, guanosine pentaphosphatase, guanosine pentaphosphate phosphatase, guanosine 5'-triphosphate 3'-diphosphate 5'-phosphatase, and guanosine pentaphosphate phosphohydrolase. This enzyme participates in purine metabolism.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1T6C and 1T6D.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440625
|
14440644
|
Guanosine-diphosphatase
|
Class of enzymes
In enzymology, a guanosine-diphosphatase (EC 3.6.1.42) is an enzyme that catalyzes the chemical reaction
GDP + H2O formula_0 GMP + phosphate
Thus, the two substrates of this enzyme are GDP and H2O, whereas its two products are GMP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is GDP phosphohydrolase. This enzyme is also called GDPase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440644
|
14440666
|
Heme-transporting ATPase
|
Class of enzymes
In enzymology, a heme-transporting ATPase (EC 3.6.3.41) is an enzyme that catalyzes the chemical reaction
ATP + H2O + hemein formula_0 ADP + phosphate + hemeout
The 3 substrates of this enzyme are ATP, H2O, and heme, whereas its 3 products are ADP, phosphate, and heme.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (heme-exporting). This enzyme participates in the general ABC transporter pathway.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440666
|
14440741
|
Iron-chelate-transporting ATPase
|
Class of transport proteins
In enzymology, an iron-chelate-transporting ATPase (EC 3.6.3.34) is an enzyme that catalyzes the chemical reaction
ATP + H2O + iron chelateout formula_0 ADP + phosphate + iron chelatein
The 3 substrates of this enzyme are ATP, H2O, and iron chelate, whereas its 3 products are ADP, phosphate, and iron chelate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (iron-chelate-importing). This enzyme participates in the general ABC transporter pathway.
Structural studies.
As of late 2007, 3 structures have been solved for this class of enzymes, with PDB accession codes 1L2P, 1L6T, and 2IHY.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440741
|
14440790
|
Lipopolysaccharide-transporting ATPase
|
Class of enzymes
In enzymology, a lipopolysaccharide-transporting ATPase (EC 3.6.3.39) is an enzyme that catalyzes the chemical reaction
ATP + H2O + lipopolysaccharidein formula_0 ADP + phosphate + lipopolysaccharideout
The 3 substrates of this enzyme are ATP, H2O, and lipopolysaccharide, whereas its 3 products are ADP, phosphate, and lipopolysaccharide.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (lipopolysaccharide-exporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440790
|
14440815
|
M7G(5')pppN diphosphatase
|
Class of enzymes
In enzymology, a m7G(5')pppN diphosphatase (EC 3.6.1.30) is an enzyme that catalyzes the chemical reaction
7-methylguanosine 5'-triphospho-5'-polynucleotide + H2O formula_0 7-methylguanosine 5'-phosphate + polynucleotide
Thus, the two substrates of this enzyme are 7-methylguanosine 5'-triphospho-5'-polynucleotide and H2O, whereas its two products are 7-methylguanosine 5'-phosphate and polynucleotide.
This enzyme removes the 7-methylguanosine cap from capped polynucleotides such as mRNA, releasing 7-methylguanosine 5'-phosphate, and is therefore classed as a decapping enzyme.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is 7-methylguanosine-5'-triphospho-5'-polynucleotide 7-methylguanosine-5'-phosphohydrolase. Other names in common use include decapase, and m7G(5')pppN pyrophosphatase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440815
|
14440839
|
Maltose-transporting ATPase
|
Class of enzymes
In enzymology, a maltose-transporting ATPase (EC 3.6.3.19) is an enzyme that catalyzes the chemical reaction
ATP + H2O + maltoseout formula_0 ADP + phosphate + maltosein
The 3 substrates of this enzyme are ATP, H2O, and maltose, whereas its 3 products are ADP, phosphate, and maltose.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (maltose-importing). This enzyme is a member of the ABC transporter family.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 2AWN and 2AWO.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440839
|
14440868
|
Manganese-transporting ATPase
|
Class of enzymes
In enzymology, a manganese-transporting ATPase (EC 3.6.3.35) is an enzyme that catalyzes the chemical reaction
ATP + H2O + Mn2+out formula_0 ADP + phosphate + Mn2+in
The 3 substrates of this enzyme are ATP, H2O, and Mn2+, whereas its 3 products are ADP, phosphate, and Mn2+.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (manganese-importing). This enzyme is also known as the ABC-type manganese permease complex.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440868
|
14440894
|
Mg2+-importing ATPase
|
Class of enzymes
In enzymology, a Mg2+-importing ATPase (EC 3.6.3.2) is an enzyme that catalyzes the chemical reaction
ATP + H2O + Mg2+out formula_0 ADP + phosphate + Mg2+in
The 3 substrates of this enzyme are ATP, H2O, and Mg2+, whereas its 3 products are ADP, phosphate, and Mg2+.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (Mg2+-importing).
The "mgtA" gene which encodes this enzyme is thought to be regulated by a magnesium responsive RNA element. A human enzyme was found in erythrocytes but the observation could not be confirmed.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
*
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440894
|
14440921
|
Microtubule-severing ATPase
|
Class of enzymes
In enzymology, a microtubule-severing ATPase (EC 3.6.4.3) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to facilitate cellular and subcellular movement. The systematic name of this enzyme class is ATP phosphohydrolase (tubulin-dimerizing).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440921
|
14440959
|
Molybdate-transporting ATPase
|
Class of enzymes
In enzymology, a molybdate-transporting ATPase (EC 3.6.3.29) is an enzyme that catalyzes the chemical reaction
ATP + H2O + molybdateout formula_0 ADP + phosphate + molybdatein
The 3 substrates of this enzyme are ATP, H2O, and molybdate, whereas its 3 products are ADP, phosphate, and molybdate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (molybdate-importing). This enzyme participates in the general ABC transporter pathway.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440959
|
14440981
|
Monosaccharide-transporting ATPase
|
Class of enzymes
In enzymology, a monosaccharide-transporting ATPase (EC 3.6.3.17) is an enzyme that catalyzes the chemical reaction
ATP + H2O + monosaccharideout formula_0 ADP + phosphate + monosaccharidein
The 3 substrates of this enzyme are ATP, H2O, and monosaccharide, whereas its 3 products are ADP, phosphate, and monosaccharide.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (monosaccharide-importing). This enzyme participates in the general ABC transporter pathway.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2GX6.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14440981
|
14441002
|
NAD+ diphosphatase
|
Class of enzymes
In enzymology, a NAD+ diphosphatase (EC 3.6.1.22) is an enzyme that catalyzes the chemical reaction
NAD+ + H2O formula_0 AMP + NMN
Thus, the two substrates of this enzyme are NAD+ and H2O, whereas its two products are AMP and NMN.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is NAD+ phosphohydrolase. Other names in common use include nicotinamide adenine dinucleotide pyrophosphatase, NADP+ pyrophosphatase, and NADH pyrophosphatase. This enzyme participates in nicotinate and nicotinamide metabolism.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2GB5.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441002
|
14441023
|
Na+-exporting ATPase
|
Class of enzymes
In enzymology, a Na+-exporting ATPase (EC 3.6.3.7) is an enzyme that catalyzes the chemical reaction
ATP + H2O + Na+in formula_0 ADP + phosphate + Na+out
The 3 substrates of this enzyme are ATP, H2O, and Na+, whereas its 3 products are ADP, phosphate, and Na+.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (Na+-exporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441023
|
14441052
|
Na+-transporting two-sector ATPase
|
Class of enzymes
In enzymology, a Na+-transporting two-sector ATPase (EC 3.6.3.15) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (Na+-transporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441052
|
14441073
|
Nickel-transporting ATPase
|
Class of enzymes
In enzymology, a nickel-transporting ATPase (EC 3.6.3.24) is an enzyme that catalyzes the chemical reaction
ATP + H2O + Ni2+out formula_0 ADP + phosphate + Ni2+in
The 3 substrates of this enzyme are ATP, H2O, and Ni2+, whereas its 3 products are ADP, phosphate, and Ni2+.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (nickel-importing).
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1ZLQ and 2NOO.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441073
|
14441089
|
Nitrate-transporting ATPase
|
Class of enzymes
In enzymology, a nitrate-transporting ATPase (EC 3.6.3.26) is an enzyme that catalyzes the chemical reaction
ATP + H2O + nitrateout formula_0 ADP + phosphate + nitratein
The 3 substrates of this enzyme are ATP, H2O, and nitrate, whereas its 3 products are ADP, phosphate, and nitrate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (nitrate-importing).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441089
|
14441108
|
Nonpolar-amino-acid-transporting ATPase
|
Class of enzymes
In enzymology, a nonpolar-amino-acid-transporting ATPase (EC 3.6.3.22) is an enzyme that catalyzes the chemical reaction
ATP + H2O + nonpolar amino acidout formula_0 ADP + phosphate + nonpolar amino acidin
The 3 substrates of this enzyme are ATP, H2O, and nonpolar amino acid, whereas its 3 products are ADP, phosphate, and nonpolar amino acid.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (nonpolar-amino-acid-transporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441108
|
14441126
|
Nucleoplasmin ATPase
|
Obsolete enzyme family
In enzymology, a nucleoplasmin ATPase (EC 3.6.4.11) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to facilitate cellular and subcellular movement. The systematic name of this enzyme class is ATP phosphohydrolase (nucleosome-assembling).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441126
|
14441153
|
Nucleoside-diphosphatase
|
Group of proteins having nucleoside-diphosphatase activity
In enzymology, a nucleoside-diphosphatase (EC 3.6.1.6) is an enzyme that catalyzes the chemical reaction
a nucleoside diphosphate + H2O formula_0 a nucleotide + phosphate
Thus, the two substrates of this enzyme are nucleoside diphosphate and H2O, whereas its two products are nucleotide and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is nucleoside-diphosphate phosphohydrolase. Other names in common use include thiamine pyrophosphatase, UDPase, inosine diphosphatase, adenosine diphosphatase, IDPase, ADPase, adenosinepyrophosphatase, guanosine diphosphatase, guanosine 5'-diphosphatase, inosine 5'-diphosphatase, uridine diphosphatase, uridine 5'-diphosphatase, nucleoside diphosphate phosphatase, type B nucleoside diphosphatase, GDPase, CDPase, nucleoside 5'-diphosphatase, type L nucleoside diphosphatase, NDPase, and nucleoside diphosphate phosphohydrolase. This enzyme participates in purine metabolism and pyrimidine metabolism.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 2H2N and 2H2U.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441153
|
14441189
|
Nucleoside-triphosphatase
|
Class of enzymes
In enzymology, a nucleoside-triphosphatase (NTPase) (EC 3.6.1.15) is an enzyme that catalyzes the chemical reaction
NTP + H2O formula_0 NDP + phosphate
Thus, the two substrates of this enzyme are NTP and H2O, whereas its two products are NDP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is unspecific diphosphate phosphohydrolase. Other names in common use include nucleoside triphosphate phosphohydrolase, nucleoside-5'-triphosphate phosphohydrolase, and nucleoside 5'-triphosphatase. This enzyme participates in purine metabolism and thiamine metabolism.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2I3B.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441189
|
14441206
|
Nucleoside-triphosphate diphosphatase
|
Class of enzymes
In enzymology, a nucleoside-triphosphate diphosphatase (EC 3.6.1.19) is an enzyme that catalyzes the chemical reaction
a nucleoside triphosphate + H2O formula_0 a nucleotide + diphosphate
Thus, the two substrates of this enzyme are nucleoside triphosphate and H2O, whereas its two products are nucleotide and diphosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is nucleoside-triphosphate diphosphohydrolase. This enzyme is also called nucleoside-triphosphate pyrophosphatase. This enzyme participates in purine metabolism and pyrimidine metabolism.
For example, the enzyme deoxyribonucleoside triphosphate pyrophosphatase, encoded by "YJR069C" in "S. cerevisiae" and exhibiting (d)ITPase and (d)XTPase activities, hydrolyses ITP, dITP, XTP and dXTP, releasing pyrophosphate and IMP, dIMP, XMP and dXMP, respectively.
Structural studies.
As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1V7R, 2CAR, 2E5X, 2I5D, and 2J4E.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14441206
|
14441572
|
Kadir–Brady saliency detector
|
The Kadir–Brady saliency detector extracts features of objects in images that are distinct and representative. It was introduced by Timor Kadir and J. Michael Brady in 2001; an affine-invariant version was introduced by Kadir and Brady in 2004, and a robust version was designed by Shao et al. in 2007.
The detector uses algorithms to more efficiently remove background noise and so more easily identify features that can be used in a 3D model. As the detector scans images, it uses the three broad criteria of global transformation, local perturbations and intra-class variations to define the areas of search, and identifies unique regions of those images rather than relying on the more traditional corner or blob searches. It attempts to be invariant to affine transformations and illumination changes.
This leads to a more object-oriented search than previous methods, and it outperforms other detectors thanks to non-blurring of the images, an ability to ignore slowly changing regions and a broader definition of surface-geometry properties. As a result, the Kadir–Brady saliency detector is more capable at object recognition than detectors whose main focus is whole-image correspondence.
Introduction.
Many computer vision and image processing applications work directly with the features extracted from an image, rather than the raw image; for example, for computing image correspondences, or for learning object categories. Depending on the applications, different characteristics are preferred. However, there are three broad classes of image change under which good performance may be required:
"Global transformation": Features should be repeatable across the expected class of global image transformations. These include both geometric and photometric transformations that arise due to changes in the imaging conditions. For example, region detection should be covariant with viewpoint as illustrated in Figure 1. In short, we require the segmentation to commute with viewpoint change. This property will be evaluated on the repeatability and accuracy of localization and region estimation.
"Local perturbations": Features should be insensitive to classes of semi-local image disturbances. For example, a feature responding to the eye of a human face should be unaffected by any motion of the mouth. A second class of disturbance is where a region neighbours a foreground/background boundary. The detector can be required to detect the foreground region despite changes in the background.
"Intra-class variations": Features should capture corresponding object parts under intra-class variations in objects. For example, the headlight of a car for different brands of car (imaged from the same viewpoint).
All feature detection algorithms attempt to detect regions which are stable under the three types of image change described above. Instead of finding a corner, blob, or any other specific shape of region, the Kadir–Brady saliency detector looks for regions which are locally complex and globally discriminative. Such regions usually correspond to regions that are more stable under these types of image change.
Information-theoretic saliency.
In the field of information theory, Shannon entropy is defined to quantify the complexity of a distribution "p" as formula_0; higher entropy means that "p" is more complex and hence more unpredictable.
To measure the complexity of an image region formula_1 around point formula_2 with shape formula_3, a descriptor formula_4 taking values formula_5 (e.g., in an 8-bit grey-level image, D would range from 0 to 255 for each pixel) is defined so that formula_6, the probability that descriptor value formula_7 occurs in region formula_1, can be computed. The entropy of image region formula_8 can then be computed as
formula_9
Using this entropy equation, formula_10 can be calculated for every point formula_2 and region shape formula_3. A more complex region, such as an eye region, has a more complex intensity distribution and hence a higher entropy.
formula_11 is a good measure of local complexity, but entropy captures only the statistics of the local attribute, not its spatial arrangement, and regions of equal entropy at one scale need not be equally discriminative under scale change. This observation motivates the discriminability measures defined below.
The following subsections discuss different methods of selecting regions with high local complexity and high discriminability between regions.
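For illustration, the entropy of a local region can be computed as in the following minimal sketch (assuming an 8-bit greyscale image stored as a NumPy array; the circular region shape and 256-bin histogram are illustrative choices, not prescribed by the original formulation):
```python
import numpy as np

def region_entropy(image, x, y, radius, bins=256):
    """Shannon entropy H_D(x, R) of the grey-level distribution inside
    a circular region of the given radius centred on (x, y)."""
    h, w = image.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    # Estimate P_D(d_i, x, R) from the empirical histogram of the region.
    counts, _ = np.histogram(image[mask], bins=bins, range=(0, 256))
    p = counts / max(counts.sum(), 1)
    p = p[p > 0]                    # drop empty bins (0 log 0 = 0)
    return -np.sum(p * np.log2(p))  # entropy in bits
```
A textured region such as an eye yields a flatter histogram and hence a higher entropy than a homogeneous patch of sky.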
Similarity-invariant saliency.
The first version of the Kadir–Brady saliency detector finds only salient regions that are invariant under similarity transformations. The algorithm finds circular regions at different scales. In other words, given formula_12, where s is the scale parameter of a circular region formula_3, the algorithm selects a set of circular regions formula_13.
The method consists of three steps:
1. Calculation of the entropy of the local descriptor over a range of scales, formula_14.
2. Selection of the scales formula_15 at which the entropy is at a peak.
3. Calculation of the inter-scale saliency weight formula_16 at each such scale.
The final saliency formula_17 is the product of formula_18 and formula_19.
For each point formula_2 the method picks a scale formula_15 and calculates the saliency score formula_17.
By comparing formula_17 across different points formula_2, the detector can rank the saliency of points and pick the most representative ones.
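A sketch of these three steps for a single point follows (the scale range, histogram binning, and the discrete approximation of the derivative in formula_16 are simplifying assumptions, not the exact formulation of the original paper):
```python
import numpy as np

def region_histogram(image, x, y, radius, bins=256):
    """Normalised grey-level histogram of a circular region."""
    h, w = image.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    counts, _ = np.histogram(image[mask], bins=bins, range=(0, 256))
    return counts / max(counts.sum(), 1)

def scale_saliency(image, x, y, scales):
    """Return (s_p, Y_D) for one point: the scale at which the entropy
    peaks and the saliency score Y_D = H_D * W_D, or None if the
    entropy has no interior peak over the scale range."""
    hists = [region_histogram(image, x, y, s) for s in scales]
    H = np.array([-np.sum(p[p > 0] * np.log2(p[p > 0])) for p in hists])
    # Inter-scale weight: magnitude of histogram change between
    # neighbouring scales, a discrete stand-in for |dP_D/ds|.
    W = np.zeros(len(scales))
    for i in range(1, len(scales)):
        W[i] = scales[i] * np.abs(hists[i] - hists[i - 1]).sum()
    best = None
    for i in range(1, len(scales) - 1):
        if H[i] > H[i - 1] and H[i] > H[i + 1]:   # entropy peak
            score = H[i] * W[i]
            if best is None or score > best[1]:
                best = (scales[i], score)
    return best
```
Ranking points by the returned score and keeping the highest-scoring ones yields the similarity-invariant salient regions.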
Affine-invariant saliency.
The previous method is invariant to the similarity group of geometric transformations and to photometric shifts. However, as mentioned in the opening remarks, the ideal detector should detect regions that are invariant up to viewpoint change, and several detectors can detect affine-invariant regions, which are a better approximation of viewpoint change than similarity transformations.
To detect affine-invariant regions, the detector must detect ellipses rather than circles. formula_3 is now parameterized by three parameters (s, "ρ", "θ"), where "ρ" is the axis ratio and "θ" the orientation of the ellipse.
This modification increases the search space of the previous algorithm from one scale parameter to a set of three parameters, and the complexity of the affine-invariant saliency detector increases accordingly. In practice the affine-invariant saliency detector starts with the set of points and scales generated by the similarity-invariant saliency detector, then iteratively approximates the suboptimal parameters.
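The extension to ellipses can be sketched as follows (the parameterisation of the semi-axes by s and ρ, the perturbation steps, and the greedy local search are illustrative assumptions; for brevity the search maximises entropy only, whereas the full detector also applies the inter-scale weight):
```python
import numpy as np

def elliptical_histogram(image, x, y, s, rho, theta, bins=256):
    """Normalised histogram inside an ellipse centred on (x, y) with
    scale s, axis ratio rho and orientation theta."""
    h, w = image.shape
    ys, xs = np.ogrid[:h, :w]
    dx, dy = xs - x, ys - y
    c, sn = np.cos(theta), np.sin(theta)
    u = dx * c + dy * sn                  # ellipse-aligned coordinates
    v = -dx * sn + dy * c
    a, b = s * np.sqrt(rho), s / np.sqrt(rho)   # semi-axes
    mask = (u / a) ** 2 + (v / b) ** 2 <= 1.0
    counts, _ = np.histogram(image[mask], bins=bins, range=(0, 256))
    return counts / max(counts.sum(), 1)

def entropy(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

def refine_affine(image, x, y, s0, steps=20):
    """Greedy local search over (s, rho, theta), seeded with the
    circle (rho = 1) found by the similarity-invariant detector."""
    best = (s0, 1.0, 0.0)
    best_h = entropy(elliptical_histogram(image, x, y, *best))
    for _ in range(steps):
        improved = False
        for ds, dr, dt in [(1, 0, 0), (-1, 0, 0), (0, 0.1, 0),
                           (0, -0.1, 0), (0, 0, 0.2), (0, 0, -0.2)]:
            cand = (max(best[0] + ds, 2), max(best[1] + dr, 0.2),
                    best[2] + dt)
            h = entropy(elliptical_histogram(image, x, y, *cand))
            if h > best_h:
                best, best_h, improved = cand, h, True
        if not improved:
            break
    return best
```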
Comparison.
Although the similarity-invariant saliency detector is faster than the affine-invariant one, it has the drawback of favouring isotropic structure, since the discriminative measure formula_20 is measured over an isotropic scale.
To summarize: the affine-invariant saliency detector is invariant to affine transformations and able to detect more general salient regions.
Salient volume.
It is intuitive to pick points with a higher saliency score directly and stop when a certain threshold on the number of points or on the saliency score is reached. However, natural images contain noise and motion blur, both of which act as randomisers and generally increase entropy, affecting previously low-entropy values more than high-entropy values.
A more robust method would be to pick regions rather than points in entropy space. Although the individual pixels within a salient region may be affected by noise at any given instant, it is unlikely to affect all of them in such a way that the region as a whole becomes non-salient.
It is also necessary to analyze the whole saliency space such that each salient feature is represented. A global threshold approach would result in highly salient features in one part of the image dominating the rest. A local threshold approach would require the setting of another scale parameter.
A simple clustering algorithm that meets these two requirements is used at the end of the algorithm. It works by selecting highly salient points that have local support, i.e. nearby points with similar saliency and scale. Each region must be sufficiently distant from all others (in R3) to qualify as a separate entity. For robustness, a representation that includes all of the points in a selected region is used.
The algorithm is implemented as GreedyCluster1.m in MATLAB by Timor Kadir.
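A minimal sketch of such a greedy clustering in (x, y, s) space follows (the support radius, support count and separation thresholds are illustrative assumptions; the reference implementation is the GreedyCluster1.m mentioned above):
```python
import numpy as np

def greedy_cluster(points, scales, saliency,
                   support_radius=5.0, min_support=4, separation=10.0):
    """points: (N, 2) array of positions; scales, saliency: (N,) arrays.
    Returns selected regions as (x, y, s) triples, most salient first."""
    feats = np.column_stack([points, scales])    # detections in R^3
    selected = []
    for i in np.argsort(saliency)[::-1]:         # most salient first
        p = feats[i]
        # Require local support: nearby detections in (x, y, s) space.
        if np.sum(np.linalg.norm(feats - p, axis=1) < support_radius) < min_support:
            continue
        # Require sufficient distance (in R^3) from regions already kept.
        if all(np.linalg.norm(p - q) > separation for q in selected):
            selected.append(p)
    return selected
```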
Performance evaluation.
In the field of computer vision, different feature detectors have been evaluated by several tests. One influential evaluation was published in the International Journal of Computer Vision in 2006.
The following subsections discuss the performance of the Kadir–Brady saliency detector on a subset of the tests in that paper.
Performance under global transformation.
In order to measure the consistency of regions detected on the same object or scene across images under global transformation, the repeatability score, first proposed by Krystian Mikolajczyk and Cordelia Schmid, is calculated as follows:
Firstly, the overlap error formula_21 of a pair of corresponding ellipses formula_22 and formula_23, each in a different image, is defined:
formula_24
where A is the locally linearized affine transformation of the homography between the two images,
and formula_25 and formula_26
represent the area of intersection and union of the ellipses respectively.
Note that formula_22 is rescaled to a fixed scale to account for the size variation of different detected regions. Only if formula_21 is smaller than a certain threshold formula_27 is the pair of ellipses deemed to correspond.
Then the repeatability score for a given pair of images is computed as the ratio between the number of region-to-region correspondences and the smaller of the number of regions in the pair of images, where only the regions located in the part of the scene present in both images are counted. In general we would like a detector to have a high repeatability score and a large number of correspondences.
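Since the intersection and union areas of two ellipses have no simple closed form, the overlap error can be estimated numerically, as in the following sketch (the grid extent and resolution are illustrative assumptions; each ellipse is represented by a 2x2 matrix with interior x^T μ x <= 1, and the homography term A^T μ_b A is assumed to have been applied to the second ellipse already):
```python
import numpy as np

def overlap_error(mu_a, mu_b, extent=3.0, n=500):
    """Estimate 1 - area(a ∩ b) / area(a ∪ b) for two ellipses given
    as 2x2 matrices mu with interiors {x : x^T mu x <= 1}."""
    t = np.linspace(-extent, extent, n)
    xs, ys = np.meshgrid(t, t)
    pts = np.stack([xs.ravel(), ys.ravel()])          # 2 x n^2 samples
    in_a = np.einsum('ij,ik,kj->j', pts, mu_a, pts) <= 1.0
    in_b = np.einsum('ij,ik,kj->j', pts, mu_b, pts) <= 1.0
    inter = np.logical_and(in_a, in_b).sum()
    union = np.logical_or(in_a, in_b).sum()
    return 1.0 - inter / union

# Example: a unit circle against a circle of radius 1.2 (error ~0.31).
print(overlap_error(np.eye(2), np.eye(2) / 1.2 ** 2))
```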
The specific global transformations tested in the test dataset are viewpoint change, zoom and rotation, image blur, JPEG compression, and light change.
The performance of the Kadir–Brady saliency detector is inferior to that of most other detectors, mainly because the number of points it detects is usually lower than that of other detectors.
The precise procedure is given in the MATLAB code from the detector evaluation package.
Performance under intra-class variation and image perturbations.
In the task of object class categorization, the ability to detect similar regions across object instances, despite intra-class variation and image perturbations, is critical. Repeatability measures over intra-class variation and image perturbations have been proposed. The following subsections introduce the definitions and discuss the performance.
Intra-class variation test.
Suppose there is a set of images of the same object class, e.g., motorbikes. A region detection operator that is unaffected by intra-class variation will reliably select regions on corresponding parts of all the objects (say, the wheels, engine or seat for motorbikes).
Repeatability over intra-class variation measures the (average) number of correct correspondences over the set of images, where the correct correspondences are established by manual selection.
A region is matched if it fulfills three requirements:
In detail, the average correspondence score S is measured as follows.
N regions are detected on each of the M images in the dataset. Then, for a particular reference image i, the correspondence score formula_28 is given by the ratio of the total number of matches to the total number of regions detected across the other images of the dataset, i.e.:
formula_29
The score formula_28 is computed for M/2 different selections of the reference image and averaged to give S. The score is evaluated as a function of the number of detected regions N.
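A small worked example of this computation (the match counts, N and M below are hypothetical values):
```python
def correspondence_score(n_matches, N, M):
    """S_i = N_M^i / (N * (M - 1)) for one reference image."""
    return n_matches / (N * (M - 1))

# Hypothetical dataset: M = 10 images, N = 20 regions per image, and
# manually verified match counts for M/2 = 5 reference images.
scores = [correspondence_score(m, N=20, M=10)
          for m in (95, 110, 88, 102, 99)]
S = sum(scores) / len(scores)
print(f"average correspondence score S = {S:.2f}")
```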
The Kadir–Brady saliency detector gives the highest score across the three test classes (motorbike, car and face).
The saliency detector indicates that most detections are near the object. In contrast, the maps of other detectors show a much more diffuse pattern over the entire area, caused by poor localization and false responses to background clutter.
Image perturbations test.
In order to test insensitivity to image perturbation the data set is split into two parts: the first contains images with a uniform background and the second images with varying degrees of background clutter. If the detector is robust to background clutter then the average correspondence score S should be similar for both subsets of images.
In this test the saliency detector also outperforms other detectors.
The saliency detector is most useful in the task of object recognition, whereas several other detectors are more useful in the task of computing image correspondences. However, in the task of 3D object recognition, where all three types of image change are combined, the saliency detector may still be powerful.
|
[
{
"math_id": 0,
"text": "p \\log p \\,"
},
{
"math_id": 1,
"text": "\\{x,R\\}"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "R"
},
{
"math_id": 4,
"text": "D"
},
{
"math_id": 5,
"text": "{d_1 ,\\dots , d_r }"
},
{
"math_id": 6,
"text": "P_{D}(d_i,x,R)"
},
{
"math_id": 7,
"text": "d_i"
},
{
"math_id": 8,
"text": "R_x"
},
{
"math_id": 9,
"text": " H_{D}(x,R) = -\\sum_{i \\in (1\\dots r)} P_{D}(d_i,x,R) \\log P_{D}(d_i,x,R)."
},
{
"math_id": 10,
"text": "H_{D}(x,R)"
},
{
"math_id": 11,
"text": " H_{D}(x,R)"
},
{
"math_id": 12,
"text": "H_{D}(x,s)"
},
{
"math_id": 13,
"text": "\\{x_i,s_i;i=1\\dots N\\}"
},
{
"math_id": 14,
"text": "H_{D}(x,s) = -\\sum_{i \\in (1\\dots r)} P_{D}(d_i,x,s) \\log P_{D}(d_i,x,s)/10"
},
{
"math_id": 15,
"text": "s_p"
},
{
"math_id": 16,
"text": "W_D(x,s) = \\sum_{i \\in (1\\dots r)} |\\frac{\\partial}{\\partial s}P_{D,}(d_i,x,s)| "
},
{
"math_id": 17,
"text": "Y_D(x,s_p)"
},
{
"math_id": 18,
"text": "H_D(x,s_p)"
},
{
"math_id": 19,
"text": "W_D(x,s_p)"
},
{
"math_id": 20,
"text": "W_D"
},
{
"math_id": 21,
"text": "\\epsilon"
},
{
"math_id": 22,
"text": "\\mu_a"
},
{
"math_id": 23,
"text": "\\mu_b"
},
{
"math_id": 24,
"text": "\\epsilon = 1 - \\frac{\\mu_a \\cap (A^T \\mu_b A)}{\\mu_a \\cup (A^T \\mu_b A)}"
},
{
"math_id": 25,
"text": "\\mu_a \\cap (A^T \\mu_b A)"
},
{
"math_id": 26,
"text": "\\mu_a \\cup (A^T \\mu_b A)"
},
{
"math_id": 27,
"text": "\\epsilon_0"
},
{
"math_id": 28,
"text": "S_i"
},
{
"math_id": 29,
"text": "Si = \\frac{\\text{Total number of matches}}{\\text{Total number of detected regions}}=\\frac{N_{M}^{i}}{N (M-1)}"
}
] |
https://en.wikipedia.org/wiki?curid=14441572
|
144417
|
Comoving and proper distances
|
Measurement of distance
In standard cosmology, comoving distance and proper distance (or physical distance) are two closely related distance measures used by cosmologists to define distances between objects. "Comoving distance" factors out the expansion of the universe, giving a distance that does not change in time due to the expansion of space (though this may change due to other, local factors, such as the motion of a galaxy within a cluster). "Proper distance" roughly corresponds to where a distant object would be at a specific moment of cosmological time, which can change over time due to the expansion of the universe. Comoving distance and proper distance are defined to be equal at the present time. At other times, the Universe's expansion results in the proper distance changing, while the comoving distance remains constant.
Comoving coordinates.
Although general relativity allows the formulation of the laws of physics using arbitrary coordinates, some coordinate choices are more natural or easier to work with. Comoving coordinates are an example of such a natural coordinate choice. They assign constant spatial coordinate values to observers who perceive the universe as isotropic. Such observers are called "comoving" observers because they move along with the Hubble flow.
A comoving observer is the only observer who will perceive the universe, including the cosmic microwave background radiation, to be isotropic. Non-comoving observers will see regions of the sky systematically blue-shifted or red-shifted. Thus isotropy, particularly isotropy of the cosmic microwave background radiation, defines a special local frame of reference called the comoving frame. The velocity of an observer relative to the local comoving frame is called the peculiar velocity of the observer.
Most large lumps of matter, such as galaxies, are nearly comoving, so that their peculiar velocities (owing to gravitational attraction) are small compared to their Hubble-flow velocity seen by observers in moderately nearby galaxies (i.e. as seen from galaxies just outside the group local to the observed "lump of matter").
The comoving time coordinate is the elapsed time since the Big Bang according to a clock of a comoving observer and is a measure of cosmological time. The comoving spatial coordinates tell where an event occurs while cosmological time tells when an event occurs. Together, they form a complete coordinate system, giving both the location and time of an event.
Space in comoving coordinates is usually referred to as being "static", as most bodies on the scale of galaxies or larger are approximately comoving, and comoving bodies have static, unchanging comoving coordinates. So for a given pair of comoving galaxies, while the proper distance between them would have been smaller in the past and will become larger in the future due to the expansion of space, the comoving distance between them remains "constant" at all times.
The expanding Universe has an increasing scale factor which explains how constant comoving distances are reconciled with proper distances that increase with time.
Comoving distance and proper distance.
Comoving distance is the distance between two points measured along a path defined at the present cosmological time. For objects moving with the Hubble flow, it is deemed to remain constant in time. The comoving distance from an observer to a distant object (e.g. galaxy) can be computed by the following formula (derived using the Friedmann–Lemaître–Robertson–Walker metric):
formula_0
where "a"("t"′) is the scale factor, "t"e is the time of emission of the photons detected by the observer, "t" is the present time, and "c" is the speed of light in vacuum.
Despite being an integral over time, this expression gives the correct distance that would be measured by a hypothetical tape measure at fixed time "t", i.e. the "proper distance" (as defined below) after accounting for the time-dependent "comoving speed of light" via the inverse scale factor term formula_1 in the integrand. By "comoving speed of light", we mean the velocity of light "through" comoving coordinates [formula_2] which is time-dependent even though "locally", at any point along the null geodesic of the light particles, an observer in an inertial frame always measures the speed of light as formula_3 in accordance with special relativity. For a derivation see "Appendix A: Standard general relativistic definitions of expansion and horizons" from Davis & Lineweaver 2004. In particular, see "eqs". 16–22 in the referenced 2004 paper [note: in that paper the scale factor formula_4 is defined as a quantity with the dimension of distance while the radial coordinate formula_5 is dimensionless.]
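In practice the integral is evaluated after changing variables from time to redshift, using d"t"′/"a"("t"′) = d"z"′/"H"("z"′). The following is a minimal numerical sketch for a spatially flat ΛCDM model (the parameter values are illustrative assumptions):
```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # speed of light, km/s

def comoving_distance(z, H0=70.0, omega_m=0.3, omega_l=0.7):
    """Comoving distance (Mpc) to redshift z for flat LambdaCDM,
    using chi = c * integral_0^z dz' / H(z')."""
    def inv_H(zp):
        return 1.0 / (H0 * np.sqrt(omega_m * (1 + zp) ** 3 + omega_l))
    integral, _ = quad(inv_H, 0.0, z)
    return C_KM_S * integral

print(comoving_distance(1.0))   # roughly 3300 Mpc for these parameters
```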
Definitions.
Many textbooks use the symbol formula_6 for the comoving distance. However, this formula_6 must be distinguished from the coordinate distance formula_7 in the commonly used comoving coordinate system for a FLRW universe where the metric takes the form (in reduced-circumference polar coordinates, which only works half-way around a spherical universe):
formula_8
In this case the comoving coordinate distance formula_7 is related to formula_6 by:
formula_9
Most textbooks and research papers define the comoving distance between comoving observers to be a fixed unchanging quantity independent of time, while calling the dynamic, changing distance between them "proper distance". On this usage, comoving and proper distances are numerically equal at the current age of the universe, but will differ in the past and in the future; if the comoving distance to a galaxy is denoted formula_6, the proper distance formula_10 at an arbitrary time formula_11 is simply given by
formula_12
where formula_13 is the scale factor (e.g. Davis & Lineweaver 2004). The proper distance formula_10 between two galaxies at time "t" is just the distance that would be measured by rulers between them at that time.
Uses of the proper distance.
Cosmological time is identical to locally measured time for an observer at a fixed comoving spatial position, that is, in the local comoving frame. Proper distance is also equal to the locally measured distance in the comoving frame for nearby objects. To measure the proper distance between two distant objects, one imagines that one has many comoving observers in a straight line between the two objects, so that all of the observers are close to each other, and form a chain between the two distant objects. All of these observers must have the same cosmological time. Each observer measures their distance to the nearest observer in the chain, and the length of the chain, the sum of distances between nearby observers, is the total proper distance.
It is important to the definition of both comoving distance and proper distance in the cosmological sense (as opposed to proper length in special relativity) that all observers have the same cosmological age. For instance, if one measured the distance along a straight line or spacelike geodesic between the two points, observers situated between the two points would have different cosmological ages when the geodesic path crossed their own world lines, so in calculating the distance along this geodesic one would not be correctly measuring comoving distance or cosmological proper distance. Comoving and proper distances are not the same concept of distance as the concept of distance in special relativity. This can be seen by considering the hypothetical case of a universe empty of mass, where both sorts of distance can be measured. When the density of mass in the FLRW metric is set to zero (an empty 'Milne universe'), then the cosmological coordinate system used to write this metric becomes a non-inertial coordinate system in the Minkowski spacetime of special relativity where surfaces of constant Minkowski proper-time τ appear as hyperbolas in the Minkowski diagram from the perspective of an inertial frame of reference. In this case, for two events which are simultaneous according to the cosmological time coordinate, the value of the cosmological proper distance is not equal to the value of the proper length between these same events, which would just be the distance along a straight line between the events in a Minkowski diagram (and a straight line is a geodesic in flat Minkowski spacetime), or the coordinate distance between the events in the inertial frame where they are simultaneous.
If one divides a change in proper distance by the interval of cosmological time where the change was measured (or takes the derivative of proper distance with respect to cosmological time) and calls this a "velocity", then the resulting "velocities" of galaxies or quasars can be above the speed of light, "c". Such superluminal expansion is not in conflict with special or general relativity nor the definitions used in physical cosmology. Even light itself does not have a "velocity" of "c" in this sense; the total velocity of any object can be expressed as the sum formula_14 where formula_15 is the recession velocity due to the expansion of the universe (the velocity given by Hubble's law) and formula_16 is the "peculiar velocity" measured by local observers (with formula_17 and formula_18, the dots indicating a first derivative), so for light formula_16 is equal to "c" (−"c" if the light is emitted towards our position at the origin and +"c" if emitted away from us) but the total velocity formula_19 is generally different from "c". Even in special relativity the coordinate speed of light is only guaranteed to be "c" in an inertial frame; in a non-inertial frame the coordinate speed may be different from "c". In general relativity no coordinate system on a large region of curved spacetime is "inertial", but in the local neighborhood of any point in curved spacetime we can define a "local inertial frame" in which the local speed of light is "c" and in which massive objects such as stars and galaxies always have a local speed smaller than "c". The cosmological definitions used to define the velocities of distant objects are coordinate-dependent – there is no general coordinate-independent definition of velocity between distant objects in general relativity. How best to describe and popularize that expansion of the universe is (or at least was) very likely proceeding – at the greatest scale – at above the speed of light, has caused a minor amount of controversy. One viewpoint is presented in Davis and Lineweaver, 2004.
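As a simple illustration of this decomposition, the present-day recession velocity of a comoving object is formula_15 evaluated today, i.e. "H"0 times the comoving distance, which exceeds "c" beyond the Hubble distance "c"/"H"0 (the parameter value below is an assumption):
```python
H0 = 70.0                 # Hubble constant, km/s/Mpc (assumed value)
C = 299792.458            # speed of light, km/s
hubble_distance = C / H0  # ~4280 Mpc

for chi in (1000.0, hubble_distance, 10000.0):  # comoving distance, Mpc
    v_rec = H0 * chi      # recession velocity today (a = 1, da/dt = H0)
    print(f"chi = {chi:7.0f} Mpc -> v_rec = {v_rec / C:.2f} c")
# Beyond ~4280 Mpc the recession velocity exceeds c, with no conflict
# with relativity: it is a coordinate-dependent rate, not a local speed.
```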
Short distances vs. long distances.
Within small distances and short trips, the expansion of the universe during the trip can be ignored. This is because the travel time between any two points for a non-relativistic moving particle will just be the proper distance (that is, the comoving distance measured using the scale factor of the universe at the time of the trip rather than the scale factor "now") between those points divided by the velocity of the particle. If the particle is moving at a relativistic velocity, the usual relativistic corrections for time dilation must be made.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\chi = \\int_{t_e}^t c \\; \\frac{\\mathrm{d} t'}{a(t')} "
},
{
"math_id": 1,
"text": "1 / a(t')"
},
{
"math_id": 2,
"text": "c / a(t')"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "R(t')"
},
{
"math_id": 5,
"text": "\\chi "
},
{
"math_id": 6,
"text": "\\chi"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "ds^2 = -c^2 \\, d\\tau^2 = -c^2 \\, dt^2 + a(t)^2 \\left( \\frac{dr^2}{1 - \\kappa r^2} + r^2 \\left(d\\theta^2 + \\sin^2 \\theta \\, d\\phi^2 \\right)\\right)."
},
{
"math_id": 9,
"text": "\\chi = \\begin{cases}\n|\\kappa|^{-1/2}\\sinh^{-1} \\sqrt{|\\kappa|} r , & \\text{if } \\kappa<0 \\ \\text{(a negatively curved ‘hyperbolic’ universe)} \\\\\nr, & \\text{if } \\kappa=0 \\ \\text{(a spatially flat universe)} \\\\\n|\\kappa|^{-1/2}\\sin^{-1} \\sqrt{|\\kappa|} r , & \\text{if } \\kappa>0 \\ \\text{(a positively curved ‘spherical’ universe)}\n\\end{cases}"
},
{
"math_id": 10,
"text": "d(t)"
},
{
"math_id": 11,
"text": "t"
},
{
"math_id": 12,
"text": "d(t) = a(t) \\chi"
},
{
"math_id": 13,
"text": "a(t)"
},
{
"math_id": 14,
"text": "v_\\text{tot} = v_\\text{rec} + v_\\text{pec}"
},
{
"math_id": 15,
"text": "v_\\text{rec}"
},
{
"math_id": 16,
"text": "v_\\text{pec}"
},
{
"math_id": 17,
"text": "v_\\text{rec} = \\dot{a}(t) \\chi(t)"
},
{
"math_id": 18,
"text": "v_\\text{pec} = a(t) \\dot{\\chi}(t)"
},
{
"math_id": 19,
"text": "v_\\text{tot}"
}
] |
https://en.wikipedia.org/wiki?curid=144417
|
144428
|
Degenerate matter
|
Type of dense exotic matter in physics
Degenerate matter occurs when the Pauli exclusion principle significantly alters a state of matter at low temperature. The term is used in astrophysics to refer to dense stellar objects such as white dwarfs and neutron stars, where thermal pressure alone is not enough to avoid gravitational collapse. The term also applies to metals in the Fermi gas approximation.
Degenerate matter is usually modelled as an ideal Fermi gas, an ensemble of non-interacting fermions. In a quantum mechanical description, particles limited to a finite volume may take only a discrete set of energies, called quantum states. The Pauli exclusion principle prevents identical fermions from occupying the same quantum state. At lowest total energy (when the thermal energy of the particles is negligible), all the lowest energy quantum states are filled; this state is referred to as full degeneracy, and the associated degeneracy pressure remains non-zero even at absolute zero temperature. Adding particles or reducing the volume forces the particles into higher-energy quantum states. In this situation, a compression force is required, and is made manifest as a resisting pressure. The key feature is that this degeneracy pressure does not depend on the temperature but only on the density of the fermions. Degeneracy pressure keeps dense stars in equilibrium, independent of the thermal structure of the star.
A degenerate mass whose fermions have velocities close to the speed of light (particle kinetic energy larger than its rest mass energy) is called relativistic degenerate matter.
The concept of degenerate stars, stellar objects composed of degenerate matter, was originally developed in a joint effort among Arthur Eddington, Ralph Fowler and Arthur Milne. Eddington had suggested that the atoms in Sirius B were almost completely ionised and closely packed. Fowler described white dwarfs as composed of a gas of particles that became degenerate at low temperature; he also pointed out that ordinary atoms are broadly similar with regard to the filling of energy levels by fermions. Milne proposed that degenerate matter is found in most of the nuclei of stars, not only in compact stars.
Concept.
Degenerate matter exhibits quantum mechanical properties when a fermion system temperature approaches absolute zero. These properties result from a combination of the Pauli exclusion principle and quantum confinement. The Pauli principle allows only one fermion in each quantum state and the confinement ensures that energy of these states increases as they are filled. The lowest states fill up and fermions are forced to occupy high energy states even at low temperature.
While the Pauli principle and the Fermi-Dirac distribution apply to all matter, the interesting cases for degenerate matter involve systems of many fermions. These cases can be understood with the help of the Fermi gas model. Examples include electrons in metals and in white dwarf stars, and neutrons in neutron stars. The electrons are confined by Coulomb attraction to positive ion cores; the neutrons are confined by gravitational attraction. The fermions, forced into higher levels by the Pauli principle, exert pressure preventing further compression.
The allocation or distribution of fermions into quantum states ranked by energy is called the Fermi-Dirac distribution. Degenerate matter exhibits the results of Fermi-Dirac distribution.
Degeneracy pressure.
Unlike a classical ideal gas, whose pressure is proportional to its temperature
formula_0
where "P" is pressure, "k"B is the Boltzmann constant, "N" is the number of particles (typically atoms or molecules), "T" is temperature, and "V" is the volume, the pressure exerted by degenerate matter depends only weakly on its temperature. In particular, the pressure remains nonzero even at absolute zero temperature. At relatively low densities, the pressure of a fully degenerate gas can be derived by treating the system as an ideal Fermi gas, in this way
formula_1
where "m" is the mass of the individual particles making up the gas. At very high densities, where most of the particles are forced into quantum states with relativistic energies, the pressure is given by
formula_2
where "K" is another proportionality constant depending on the properties of the particles making up the gas.
All matter experiences both normal thermal pressure and degeneracy pressure, but in commonly encountered gases, thermal pressure dominates so much that degeneracy pressure can be ignored. Likewise, degenerate matter still has normal thermal pressure; the degeneracy pressure dominates to the point that temperature has a negligible effect on the total pressure. The adjacent figure shows the thermal pressure (red line) and total pressure (blue line) in a Fermi gas, with the difference between the two being the degeneracy pressure. As the temperature falls, the density and the degeneracy pressure increase, until the degeneracy pressure contributes most of the total pressure.
While degeneracy pressure usually dominates at extremely high densities, it is the ratio between degenerate pressure and thermal pressure which determines degeneracy. Given a sufficiently drastic increase in temperature (such as during a red giant star's helium flash), matter can become non-degenerate without reducing its density.
Degeneracy pressure contributes to the pressure of conventional solids, but these are not usually considered to be degenerate matter because a significant contribution to their pressure is provided by electrical repulsion of atomic nuclei and the screening of nuclei from each other by electrons. The free electron model of metals derives their physical properties by considering the conduction electrons alone as a degenerate gas, while the majority of the electrons are regarded as occupying bound quantum states. This solid state contrasts with degenerate matter that forms the body of a white dwarf, where most of the electrons would be treated as occupying free particle momentum states.
Exotic examples of degenerate matter include neutron degenerate matter, strange matter, metallic hydrogen and white dwarf matter.
Degenerate gases.
Degenerate gases are gases composed of fermions such as electrons, protons, and neutrons rather than molecules of ordinary matter. The electron gases in ordinary metals and in the interiors of white dwarfs are two examples. Following the Pauli exclusion principle, there can be only one fermion occupying each quantum state. In a degenerate gas, all quantum states are filled up to the Fermi energy. Most stars are supported against their own gravitation by normal thermal gas pressure, while in white dwarf stars the supporting force comes from the degeneracy pressure of the electron gas in their interior. In neutron stars, the degenerate particles are neutrons.
A fermion gas in which all quantum states below a given energy level are filled is called a fully degenerate fermion gas. The difference between this energy level and the lowest energy level is known as the Fermi energy.
Electron degeneracy.
In an ordinary fermion gas in which thermal effects dominate, most of the available electron energy levels are unfilled and the electrons are free to move to these states. As particle density is increased, electrons progressively fill the lower energy states and additional electrons are forced to occupy states of higher energy even at low temperatures. Degenerate gases strongly resist further compression because the electrons cannot move to already filled lower energy levels due to the Pauli exclusion principle. Since electrons cannot give up energy by moving to lower energy states, no thermal energy can be extracted. The momentum of the fermions in the fermion gas nevertheless generates pressure, termed "degeneracy pressure".
Under high densities, matter becomes a degenerate gas when all electrons are stripped from their parent atoms. The core of a star, once hydrogen-burning nuclear fusion reactions stop, becomes a collection of positively charged ions, largely helium and carbon nuclei, floating in a sea of electrons, which have been stripped from the nuclei. Degenerate gas is an almost perfect conductor of heat and does not obey ordinary gas laws. White dwarfs are luminous not because they are generating energy but rather because they have trapped a large amount of heat which is gradually radiated away. Normal gas exerts higher pressure when it is heated and expands, but the pressure in a degenerate gas does not depend on the temperature. When gas becomes super-compressed, particles pack right up against each other to produce degenerate gas that behaves more like a solid. In degenerate gases the kinetic energies of electrons are quite high and the rate of collision between electrons and other particles is quite low; therefore degenerate electrons can travel great distances at velocities that approach the speed of light. Instead of temperature, the pressure in a degenerate gas depends only on the speed of the degenerate particles; however, adding heat does not increase the speed of most of the electrons, because they are stuck in fully occupied quantum states. Pressure is increased only by the mass of the particles, which increases the gravitational force pulling the particles closer together. Therefore, the phenomenon is the opposite of that normally found in matter, where if the mass of the matter is increased, the object becomes bigger. In degenerate gas, when the mass is increased, the particles become spaced closer together due to gravity (and the pressure is increased), so the object becomes smaller. Degenerate gas can be compressed to very high densities, typical values being in the range of 10,000 kilograms per cubic centimeter.
There is an upper limit to the mass of an electron-degenerate object, the Chandrasekhar limit, beyond which electron degeneracy pressure cannot support the object against collapse. The limit is approximately 1.44 solar masses for objects with typical compositions expected for white dwarf stars (carbon and oxygen with two baryons per electron). This mass cut-off is appropriate only for a star supported by ideal electron degeneracy pressure under Newtonian gravity; in general relativity and with realistic Coulomb corrections, the corresponding mass limit is around 1.38 solar masses. The limit may also change with the chemical composition of the object, as it affects the ratio of mass to number of electrons present. The object's rotation, which counteracts the gravitational force, also changes the limit for any particular object. Celestial objects below this limit are white dwarf stars, formed by the gradual shrinking of the cores of stars that run out of fuel. During this shrinking, an electron-degenerate gas forms in the core, providing sufficient degeneracy pressure as it is compressed to resist further collapse. Above this mass limit, a neutron star (primarily supported by neutron degeneracy pressure) or a black hole may be formed instead.
Neutron degeneracy.
Neutron degeneracy is analogous to electron degeneracy and exists in neutron stars, which are partially supported by the pressure from a degenerate neutron gas. Neutron stars are formed either directly from the supernova of stars with masses between 10 and 25 "M"☉ (solar masses), or by white dwarfs acquiring a mass in excess of the Chandrasekhar limit of 1.44 "M"☉, usually either as a result of a merger or by feeding off of a close binary partner. Above the Chandrasekhar limit, the gravitational pressure at the core exceeds the electron degeneracy pressure, and electrons begin to combine with protons to produce neutrons (via inverse beta decay, also termed electron capture). The result is an extremely compact star composed of "nuclear matter", which is predominantly a degenerate neutron gas with a small admixture of degenerate proton and electron gases.
Neutrons in a degenerate neutron gas are spaced much more closely than electrons in an electron-degenerate gas because the more massive neutron has a much shorter wavelength at a given energy. This phenomenon is compounded by the fact that the pressures within neutron stars are much higher than those in white dwarfs. The pressure increase is caused by the fact that the compactness of a neutron star causes gravitational forces to be much higher than in a less compact body with similar mass. The result is a star with a diameter on the order of a thousandth that of a white dwarf.
The properties of neutron matter set an upper limit to the mass of a neutron star, the Tolman–Oppenheimer–Volkoff limit, which is analogous to the Chandrasekhar limit for white dwarf stars.
Proton degeneracy.
Sufficiently dense matter containing protons experiences proton degeneracy pressure, in a manner similar to the electron degeneracy pressure in electron-degenerate matter: protons confined to a sufficiently small volume have a large uncertainty in their momentum due to the Heisenberg uncertainty principle. However, because protons are much more massive than electrons, the same momentum represents a much smaller velocity for protons than for electrons. As a result, in matter with approximately equal numbers of protons and electrons, proton degeneracy pressure is much smaller than electron degeneracy pressure, and proton degeneracy is usually modelled as a correction to the equations of state of electron-degenerate matter.
Quark degeneracy.
At densities greater than those supported by neutron degeneracy, quark matter is expected to occur. Several variations of this hypothesis have been proposed that represent quark-degenerate states. Strange matter is a degenerate gas of quarks that is often assumed to contain strange quarks in addition to the usual up and down quarks. Color superconductor materials are degenerate gases of quarks in which quarks pair up in a manner similar to Cooper pairing in electrical superconductors. The equations of state for the various proposed forms of quark-degenerate matter vary widely, and are usually also poorly defined, due to the difficulty of modelling strong force interactions.
Quark-degenerate matter may occur in the cores of neutron stars, depending on the equations of state of neutron-degenerate matter. It may also occur in hypothetical quark stars, formed by the collapse of objects above the Tolman–Oppenheimer–Volkoff mass limit for neutron-degenerate objects. Whether quark-degenerate matter forms at all in these situations depends on the equations of state of both neutron-degenerate matter and quark-degenerate matter, both of which are poorly known. Quark stars are considered to be an intermediate category between neutron stars and black holes.
History.
Quantum mechanics uses the word 'degenerate' in two ways: for degenerate energy levels and for the low-temperature ground-state limit of states of matter. Electron degeneracy pressure occurs in ground-state systems which are non-degenerate in energy levels. The term "degeneracy" derives from work on the specific heat of gases that pre-dates the use of the term in quantum mechanics.
In 1914 Walther Nernst described the reduction of the specific heat of gases at very low temperature as "degeneration"; he attributed this to quantum effects. In subsequent work in various papers on quantum thermodynamics by Albert Einstein, by Max Planck, and by Erwin Schrödinger, the effect at low temperatures came to be called "gas degeneracy". A fully degenerate gas has no volume dependence on pressure when temperature approaches absolute zero.
Early in 1927 Enrico Fermi and separately Llewellyn Thomas developed a semi-classical model for electrons in a metal. The model treated the electrons as a gas. Later in 1927, Arnold Sommerfeld applied the Pauli principle via Fermi-Dirac statistics to this electron gas model, computing the specific heat of metals; the result became the Fermi gas model for metals. Sommerfeld called the low temperature region with quantum effects a "wholly degenerate gas".
Also in 1927 Ralph H. Fowler applied Fermi's model to the puzzle of the stability of white dwarf stars. This approach was extended to relativistic models by later studies and, with the work of Subrahmanyan Chandrasekhar, became the accepted model for star stability.
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P=k_{\\rm B}\\frac{NT}{V}, "
},
{
"math_id": 1,
"text": "P=\\frac{(3\\pi^2)^{2/3}\\hbar^2}{5m} \\left(\\frac{N}{V}\\right)^{5/3},"
},
{
"math_id": 2,
"text": "P=K\\left(\\frac{N}{V}\\right)^{4/3},"
}
] |
https://en.wikipedia.org/wiki?curid=144428
|
14446614
|
Frank–Read source
|
Model for the generation of specific dislocations in crystals under deformation
In materials science, a Frank–Read source is a mechanism explaining the generation of multiple dislocations in specific well-spaced slip planes in crystals when they are deformed. When a crystal is deformed, in order for slip to occur, dislocations must be generated in the material. This implies that, during deformation, dislocations must be primarily generated in these planes. Cold working of metal increases the number of dislocations by the Frank–Read mechanism. Higher dislocation density increases yield strength and causes work hardening of metals.
The mechanism of dislocation generation was proposed by and named after British physicist Charles Frank and Thornton Read.
History.
Charles Frank detailed the history of the discovery from his perspective in "Proceedings of the Royal Society" in 1980.
In 1950 Charles Frank, who was then a research fellow in the physics department at the University of Bristol, visited the United States to participate in a conference on crystal plasticity in Pittsburgh. Frank arrived in the United States well in advance of the conference to spend time at a naval laboratory and to give a lecture at Cornell University. When, during his travels in Pennsylvania, Frank visited Pittsburgh, he received a letter from fellow scientist Jock Eshelby suggesting that he read a recent paper by Gunther Leibfried. Frank was supposed to board a train to Cornell to give his lecture, but before departing he went to the library at Carnegie Institute of Technology to obtain a copy of the paper. The library did not yet have the journal with Leibfried's paper, but the staff at the library believed that the journal could be in a recently arrived package from Germany. Frank decided to wait for the library to open the package, which did indeed contain the journal. Upon reading the paper he took a train to Cornell, where he was told to pass the time until 5:00, as the faculty was in a meeting. Frank decided to take a walk between 3:00 and 5:00. During those two hours, while considering the Leibfried paper, he formulated the theory for what was later named the Frank–Read source.
A couple of days later, he traveled to the conference on crystal plasticity in Pittsburgh where he ran into Thornton Read in the hotel lobby. Upon encountering each other, the two scientists immediately discovered that they had come up with the same idea for dislocation generation almost simultaneously (Frank during his walk at Cornell, and Thornton Read during tea the previous Wednesday) and decided to write a joint paper on the topic. The mechanism for dislocation generation described in that paper is now known as the Frank–Read source.
Mechanism.
The Frank–Read source is a mechanism based on dislocation multiplication in a slip plane under shear stress.
Consider a straight dislocation in a crystal slip plane with its two ends, A and B, pinned. If a shear stress formula_0 is exerted on the slip plane then a force formula_1, where "b" is the Burgers vector of the dislocation and "x" is the distance between the pinning sites A and B, is exerted on the dislocation line as a result of the shear stress. This force acts perpendicularly to the line, inducing the dislocation to lengthen and curve into an arc.
The bending force caused by the shear stress is opposed by the line tension of the dislocation, which acts on each end of the dislocation along the direction of the dislocation line away from A and B with a magnitude of formula_2, where G is the shear modulus. If the dislocation bends, its ends make an angle with the horizontal between A and B, so the line tensions acquire a vertical component acting directly against the force induced by the shear stress. As more shear stress is applied and the dislocation approaches a semicircular shape, this opposing vertical component grows.
When the dislocation becomes a semicircle, all of the line tension is acting against the bending force induced by the shear stress, because the line tension is perpendicular to the horizontal between A and B. For the dislocation to reach this point, it is thus evident that the equation:
formula_3
must be satisfied, and from this we can solve for the shear stress:
formula_4
This is the stress required to generate dislocation from a Frank–Read source. If the shear stress increases any further and the dislocation passes the semicircular equilibrium state, it will spontaneously continue to bend and grow, spiraling around the A and B pinning points, until the segments spiraling around the A and B pinning points collide and cancel. The process results in a dislocation loop around A and B in the slip plane which expands under continued shear stress, and also in a new dislocation line between A and B which, under renewed or continued shear, can continue to generate dislocation loops in the manner just described.
A Frank–Read source can thus generate many dislocations in a plane in a crystal under applied stress. The Frank–Read source mechanism explains why dislocations are primarily generated on certain slip planes: dislocations are generated in just those planes that contain Frank–Read sources. It is important to note that if the shear stress does not exceed:
formula_4
and the dislocation does not bend past the semicircular equilibrium state, it will not form a dislocation loop and instead revert to its original state.
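As a quick numerical sketch of this threshold (the material values are illustrative assumptions, roughly appropriate for copper):
G = 48e9      # assumed shear modulus, Pa
b = 0.256e-9  # assumed Burgers vector, m
x = 1.0e-6    # assumed distance between pinning points A and B, m

tau = 2 * G * b / x            # critical shear stress to activate the source
print(f"{tau / 1e6:.1f} MPa")  # 24.6 MPa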
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "F=\\tau \\cdot bx "
},
{
"math_id": 2,
"text": "Gb^2"
},
{
"math_id": 3,
"text": "F=\\tau \\cdot bx =2Gb^2"
},
{
"math_id": 4,
"text": "\\tau=\\frac{2Gb} x "
}
] |
https://en.wikipedia.org/wiki?curid=14446614
|
14447713
|
Valentinus Otho
|
German mathematician and astronomer
Valentinus Otho (also Valentin Otto; born around 1545–46 possibly in Magdeburg – 8 April 1603 in Heidelberg) was a German mathematician and astronomer.
Life.
In 1573 he came to Wittenberg, proposing to Johannes Praetorius an approximation of pi as formula_0 (now known as "milü", first discovered by the Chinese mathematician Zu Chongzhi).
In 1575 he began assisting Georg Joachim Rheticus with his trigonometric tables. The next year they went to Kaschau in Hungary, where Rheticus died. Thus Otho inherited the De revolutionibus manuscript of Nicolaus Copernicus, which Rheticus had published in 1543 in Nuremberg.
Otho became professor of mathematics in Wittenberg, but when the rulers of Saxony did not support the tables, he moved to Heidelberg, where Elector Friedrich IV sponsored the "Opus Palatinum de Triangulis" in 1596.
|
[
{
"math_id": 0,
"text": " \\pi \\approx \\tfrac {355} {113} "
}
] |
https://en.wikipedia.org/wiki?curid=14447713
|
14448807
|
FLAME clustering
|
Type of algorithm for data clustering
Fuzzy clustering by Local Approximation of MEmberships (FLAME) is a data clustering algorithm that defines clusters in the dense parts of a dataset and performs cluster assignment solely based on the neighborhood relationships among objects. The key feature of this algorithm is that the neighborhood relationships among neighboring objects in the feature space are used to constrain the memberships of neighboring objects in the fuzzy membership space.
Description of the FLAME algorithm.
The FLAME algorithm is mainly divided into three steps. First, local structure information is extracted: the "k"-nearest neighbors of each object are determined, a density is estimated for each object from its neighborhood, and each object is classified as a cluster supporting object (CSO; an object with density higher than all of its neighbors), a cluster outlier (an object with density lower than all of its neighbors and below a predefined threshold), or one of the remaining "type 3" objects. Second, fuzzy memberships are approximated locally: each CSO is assigned full membership to itself as one cluster, each outlier is assigned full membership to the outlier group, the remaining objects are assigned equal memberships to all clusters and the outlier group, and the memberships of the remaining objects are then updated by the converging iterative procedure described below. Third, clusters are constructed from the converged fuzzy memberships, either by assigning each object to the cluster in which it has the highest membership, or by assigning each object to all clusters in which its membership exceeds a threshold.
The optimization problem in FLAME.
The Local/Neighborhood Approximation of Fuzzy Memberships is a procedure to minimize the Local/Neighborhood Approximation Error (LAE/NAE), defined as follows:
formula_0
where formula_1 is the set of all type 3 objects, formula_2 is the fuzzy membership vector of object formula_3, formula_4 is the set of nearest neighbors of formula_3, and formula_5 with formula_6 are the coefficients reflecting the relative proximities of the nearest neighbors.
The NAE can be minimized by solving the following linear equations, whose unique solution is the unique global minimum of the NAE, with value zero:
formula_7
where formula_8 is the number of CSOs plus one (for the outlier group). The following iterative procedure can be used to solve these linear equations:
formula_9
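A minimal sketch of this iteration (assuming the neighbor weights have already been assembled into a matrix W whose rows sum to one, with zeros for non-neighbors; dense storage is used only for brevity):
import numpy as np

def approximate_memberships(W, p0, update_mask, n_iter=200):
    # W[x, y] holds the weight w_xy (zero for non-neighbors), with each row
    # summing to 1; p0 holds the initial fuzzy membership vectors p(x).
    # Rows for CSOs and outliers stay fixed; only the rows flagged in
    # update_mask (the "type 3" objects) are updated.
    p = p0.copy()
    for _ in range(n_iter):
        p[update_mask] = W[update_mask] @ p
    return p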
|
[
{
"math_id": 0,
"text": "\nE(\\{\\boldsymbol{p}\\})=\\sum_{\\boldsymbol{x}\\in\\boldsymbol{X}} \\bigg\\| \\boldsymbol{p(x)}-\\sum_{ \\boldsymbol{y \\in \\mathcal{N}(x)} } w_{\\boldsymbol{xy}} \\boldsymbol{p(y)} \\bigg\\|^2\n"
},
{
"math_id": 1,
"text": "\\boldsymbol{X}"
},
{
"math_id": 2,
"text": "\\boldsymbol{p(x)}"
},
{
"math_id": 3,
"text": "\\boldsymbol{x}"
},
{
"math_id": 4,
"text": "\\mathcal{N}(x)"
},
{
"math_id": 5,
"text": "w_{\\boldsymbol{xy}}"
},
{
"math_id": 6,
"text": "\\sum_{\\boldsymbol{y\\in \\mathcal{N}(x)}}w_{\\boldsymbol{xy}}=1"
},
{
"math_id": 7,
"text": "\np_k(\\boldsymbol{x})-\\sum_{\\boldsymbol{y\\in \\mathcal{N}(x)}} w_{ \\boldsymbol{xy} } p_k(\\boldsymbol{y}) = 0, \\quad\\forall{\\boldsymbol{x}\\in \\boldsymbol{X} },\\quad k=1,...,M\n"
},
{
"math_id": 8,
"text": "M"
},
{
"math_id": 9,
"text": "\n{\\boldsymbol{p}^{t+1}(\\boldsymbol{x})} = \\sum_{ \\boldsymbol{y\\in \\mathcal{N}(x)} }\nw_{\\boldsymbol{xy}} {\\boldsymbol{p}^t (\\boldsymbol{y})}\n"
}
] |
https://en.wikipedia.org/wiki?curid=14448807
|
1444970
|
Directional statistics
|
Subdiscipline of statistics
Directional statistics (also circular statistics or spherical statistics) is the subdiscipline of statistics that deals with directions (unit vectors in Euclidean space, R"n"), axes (lines through the origin in R"n") or rotations in R"n". More generally, directional statistics deals with observations on compact Riemannian manifolds including the Stiefel manifold.
The fact that 0 degrees and 360 degrees are identical angles, so that for example 180 degrees is not a sensible mean of 2 degrees and 358 degrees, provides one illustration that special statistical methods are required for the analysis of some types of data (in this case, angular data). Other examples of data that may be regarded as directional include statistics involving temporal periods (e.g. time of day, week, month, year, etc.), compass directions, dihedral angles in molecules, orientations, rotations and so on.
Circular distributions.
Any probability density function (pdf) formula_0 on the line can be "wrapped" around the circumference of a circle of unit radius. That is, the pdf of the wrapped variable
formula_1
is
formula_2
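Numerically, the infinite sum can be truncated after a few terms; the sketch below wraps a normal density (the truncation range of ±50 periods is an assumption, generous for any moderate σ):
import numpy as np
from scipy.stats import norm

def wrapped_pdf(theta, pdf, k_max=50):
    # approximate the wrapped pdf sum_k pdf(theta + 2*pi*k) by truncation
    ks = np.arange(-k_max, k_max + 1)
    return sum(pdf(theta + 2 * np.pi * k) for k in ks)

theta = np.linspace(-np.pi, np.pi, 7)
print(wrapped_pdf(theta, norm(loc=0.0, scale=1.0).pdf))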
This concept can be extended to the multivariate context by extending the simple sum to formula_3 sums that cover all dimensions in the feature space:
formula_4
where formula_5 is the formula_6-th Euclidean basis vector.
The following sections show some relevant circular distributions.
von Mises circular distribution.
The "von Mises distribution" is a circular distribution which, like any other circular distribution, may be thought of as a wrapping of a certain linear probability distribution around the circle. The underlying linear probability distribution for the von Mises distribution is mathematically intractable; however, for statistical purposes, there is no need to deal with the underlying linear distribution. The usefulness of the von Mises distribution is twofold: it is the most mathematically tractable of all circular distributions, allowing simpler statistical analysis, and it is a close approximation to the wrapped normal distribution, which, analogously to the linear normal distribution, is important because it is the limiting case for the sum of a large number of small angular deviations. In fact, the von Mises distribution is often known as the "circular normal" distribution because of its ease of use and its close relationship to the wrapped normal distribution.
The pdf of the von Mises distribution is: formula_7 where formula_8 is the modified Bessel function of order 0.
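The distribution is implemented in SciPy; as a brief check (the parameter values below are arbitrary), the pdf integrates to one over an interval of length 2π:
import numpy as np
from scipy.stats import vonmises
from scipy.integrate import quad

mu, kappa = 0.5, 2.0  # arbitrary mean direction and concentration
total, _ = quad(lambda t: vonmises.pdf(t, kappa, loc=mu), -np.pi, np.pi)
print(total)          # ~1.0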
Circular uniform distribution.
The probability density function (pdf) of the "circular uniform distribution" is given by
formula_9
It can also be thought of as the special case formula_10 of the von Mises distribution above.
Wrapped normal distribution.
The pdf of the "wrapped normal distribution" (WN) is:
formula_11
where μ and σ are the mean and standard deviation of the unwrapped distribution, respectively, and formula_12 is the Jacobi theta function:
formula_13 where formula_14 and formula_15
Wrapped Cauchy distribution.
The pdf of the "wrapped Cauchy distribution" (WC) is:
formula_16
where formula_17 is the scale factor and formula_18 is the peak position.
Wrapped Lévy distribution.
The pdf of the "wrapped Lévy distribution" (WL) is:
formula_19
where the value of the summand is taken to be zero when formula_20, formula_21 is the scale factor and formula_22 is the location parameter.
Projected normal distribution.
The projected normal distribution is a circular distribution representing the direction of a random variable with multivariate normal distribution, obtained by radial projection of the variable over the unit (n-1)-sphere. Because of this, and unlike other commonly used circular distributions, it is in general neither symmetric nor unimodal.
Distributions on higher-dimensional manifolds.
There also exist distributions on the two-dimensional sphere (such as the Kent distribution), the "N"-dimensional sphere (the von Mises–Fisher distribution) or the torus (the bivariate von Mises distribution).
The matrix von Mises–Fisher distribution is a distribution on the Stiefel manifold, and can be used to construct probability distributions over rotation matrices.
The Bingham distribution is a distribution over axes in "N" dimensions, or equivalently, over points on the ("N" − 1)-dimensional sphere with the antipodes identified. For example, if "N" = 2, the axes are undirected lines through the origin in the plane. In this case, each axis cuts the unit circle in the plane (which is the one-dimensional sphere) at two points that are each other's antipodes. For "N" = 4, the Bingham distribution is a distribution over the space of unit quaternions (versors). Since a versor corresponds to a rotation matrix, the Bingham distribution for "N" = 4 can be used to construct probability distributions over the space of rotations, just like the Matrix-von Mises–Fisher distribution.
These distributions are for example used in geology, crystallography and bioinformatics.
Moments.
The raw vector (or trigonometric) moments of a circular distribution are defined as
formula_23
where formula_24 is any interval of length formula_25, formula_26 is the PDF of the circular distribution, and formula_27. Since the integral of formula_26 over formula_24 is unity, and the integration interval is finite, it follows that the moments of any circular distribution are always finite and well defined.
Sample moments are analogously defined:
formula_28
The population resultant vector, length, and mean angle are defined in analogy with the corresponding sample parameters.
formula_29
formula_30
formula_31
In addition, the lengths of the higher moments are defined as:
formula_32
while the angular parts of the higher moments are just formula_33. The lengths of all moments will lie between 0 and 1.
Measures of location and spread.
Various measures of central tendency and statistical dispersion may be defined for both the population and a sample drawn from that population.
Central tendency.
The most common measure of location is the circular mean. The population circular mean is simply the first moment of the distribution while the sample mean is the first moment of the sample. The sample mean will serve as an unbiased estimator of the population mean.
When data is concentrated, the median and mode may be defined by analogy to the linear case, but for more dispersed or multi-modal data, these concepts are not useful.
Dispersion.
The most common measures of circular spread are:
the "circular variance", defined for a sample as formula_34 and for the population as formula_35, both taking values between 0 and 1;
the "circular standard deviation", defined for the population as formula_36 and for a sample as formula_37, taking values between 0 and infinity and related to the variance by formula_39 for small formula_38;
and the "circular dispersion", defined for the population as formula_40 and for a sample as formula_41.
Distribution of the mean.
Given a set of "N" measurements formula_42 the mean value of "z" is defined as:
formula_43
which may be expressed as
formula_44
where
formula_45
or, alternatively as:
formula_46
where
formula_47
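A minimal numerical sketch of these estimators (the sample angles are arbitrary; note that the naive arithmetic mean of the values below is near π, the opposite of the correct circular mean near 0):
import numpy as np

theta = np.array([0.1, 6.2, 0.3, 5.9])  # angles clustered near 0 (mod 2*pi)
z = np.exp(1j * theta)                  # z_n = exp(i * theta_n)
z_bar = z.mean()                        # mean resultant vector
R_bar = np.abs(z_bar)                   # mean resultant length
theta_bar = np.angle(z_bar)             # mean angle; handles quadrants like arctan2(S, C)
print(R_bar, theta_bar)                 # R_bar close to 1, theta_bar close to 0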
The distribution of the mean angle (formula_48) for a circular pdf "P"("θ") will be given by:
formula_49
where formula_24 is over any interval of length formula_25 and the integral is subject to the constraint that formula_50 and formula_51 are constant, or, alternatively, that formula_52 and formula_48 are constant.
The calculation of the distribution of the mean for most circular distributions is not analytically possible, and in order to carry out an analysis of variance, numerical or mathematical approximations are needed.
The central limit theorem may be applied to the distribution of the sample means. (main article: Central limit theorem for directional statistics). It can be shown that the distribution of formula_53 approaches a bivariate normal distribution in the limit of large sample size.
Goodness of fit and significance testing.
For cyclic data (e.g., to test whether it is uniformly distributed), common tests include the Rayleigh test for a unimodal alternative and Kuiper's test for general departures from uniformity.
|
[
{
"math_id": 0,
"text": "\\ p(x)"
},
{
"math_id": 1,
"text": "\\theta = x_w=x \\bmod 2\\pi\\ \\ \\in (-\\pi,\\pi]"
},
{
"math_id": 2,
"text": "p_w(\\theta) = \\sum_{k=-\\infty}^{\\infty}{p(\\theta+2\\pi k)}."
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "p_w(\\boldsymbol\\theta) = \\sum_{k_1=-\\infty}^{\\infty} \\cdots \\sum_{k_F=-\\infty}^\\infty {p(\\boldsymbol\\theta + 2\\pi k_1\\mathbf{e}_1 + \\dots + 2\\pi k_F\\mathbf{e}_F)}"
},
{
"math_id": 5,
"text": "\\mathbf{e}_k = (0, \\dots, 0, 1, 0, \\dots, 0)^{\\mathsf{T}}"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "f(\\theta;\\mu,\\kappa) = \\frac{e^{\\kappa\\cos(\\theta-\\mu)}}{2\\pi I_0(\\kappa)}"
},
{
"math_id": 8,
"text": "I_0"
},
{
"math_id": 9,
"text": "U(\\theta) = \\frac 1 {2\\pi}."
},
{
"math_id": 10,
"text": "\\kappa = 0"
},
{
"math_id": 11,
"text": "\nWN(\\theta;\\mu,\\sigma) = \\frac{1}{\\sigma \\sqrt{2\\pi}} \\sum^{\\infty}_{k=-\\infty} \\exp \\left[\\frac{-(\\theta - \\mu - 2\\pi k)^2}{2 \\sigma^2} \\right] = \\frac{1}{2\\pi}\\vartheta\\left(\\frac{\\theta-\\mu}{2\\pi},\\frac{i\\sigma^2}{2\\pi}\\right)\n"
},
{
"math_id": 12,
"text": "\\vartheta(\\theta,\\tau)"
},
{
"math_id": 13,
"text": "\n\\vartheta(\\theta,\\tau) = \\sum_{n=-\\infty}^\\infty (w^2)^n q^{n^2} \n"
},
{
"math_id": 14,
"text": "w \\equiv e^{i\\pi \\theta}"
},
{
"math_id": 15,
"text": "q \\equiv e^{i\\pi\\tau}."
},
{
"math_id": 16,
"text": "WC(\\theta;\\theta_0,\\gamma) = \\sum_{n=-\\infty}^\\infty \\frac{\\gamma}{\\pi(\\gamma^2+(\\theta+2\\pi n-\\theta_0)^2)}\n= \\frac{1}{2\\pi}\\,\\,\\frac{\\sinh\\gamma}{\\cosh\\gamma-\\cos(\\theta-\\theta_0)}"
},
{
"math_id": 17,
"text": "\\gamma"
},
{
"math_id": 18,
"text": "\\theta_0"
},
{
"math_id": 19,
"text": "f_{WL}(\\theta;\\mu,c) = \\sum_{n=-\\infty}^\\infty \\sqrt{\\frac{c}{2\\pi}}\\,\\frac{e^{-c/2(\\theta+2\\pi n-\\mu)}}{(\\theta+2\\pi n-\\mu)^{3/2}}"
},
{
"math_id": 20,
"text": "\\theta+2\\pi n-\\mu \\le 0"
},
{
"math_id": 21,
"text": "c"
},
{
"math_id": 22,
"text": "\\mu"
},
{
"math_id": 23,
"text": "\nm_n=\\operatorname E(z^n)=\\int_\\Gamma P(\\theta) z^n \\, d\\theta\n"
},
{
"math_id": 24,
"text": "\\Gamma"
},
{
"math_id": 25,
"text": "2\\pi"
},
{
"math_id": 26,
"text": "P(\\theta)"
},
{
"math_id": 27,
"text": "z=e^{i \\theta}"
},
{
"math_id": 28,
"text": "\n\\overline{m}_n=\\frac{1}{N}\\sum_{i=1}^N z_i^n.\n"
},
{
"math_id": 29,
"text": "\n\\rho=m_1\n"
},
{
"math_id": 30,
"text": "\nR=|m_1|\n"
},
{
"math_id": 31,
"text": "\n\\theta_n=\\operatorname{Arg}(m_n).\n"
},
{
"math_id": 32,
"text": "\nR_n=|m_n|\n"
},
{
"math_id": 33,
"text": "(n \\theta_n) \\bmod 2\\pi"
},
{
"math_id": 34,
"text": "\n\\overline{\\operatorname{Var}(z)} = 1 - \\overline{R}\n"
},
{
"math_id": 35,
"text": "\n\\operatorname{Var}(z) = 1 - R\n"
},
{
"math_id": 36,
"text": "\nS(z) = \\sqrt{\\ln(1/R^2)} = \\sqrt{-2\\ln(R)}\n"
},
{
"math_id": 37,
"text": "\n\\overline{S}(z) = \\sqrt{\\ln(1/{\\overline{R}}^2)} = \\sqrt{-2\\ln({\\overline{R}})}\n"
},
{
"math_id": 38,
"text": "S(z)"
},
{
"math_id": 39,
"text": "S(z)^2 = 2 \\operatorname{Var}(z)"
},
{
"math_id": 40,
"text": "\\delta = \\frac{1-R_2}{2R^2}"
},
{
"math_id": 41,
"text": "\n\\overline{\\delta}=\\frac{1-{\\overline{R}_2}}{2{\\overline{R}}^2}\n"
},
{
"math_id": 42,
"text": "z_n=e^{i\\theta_n}"
},
{
"math_id": 43,
"text": "\n\\overline{z}=\\frac{1}{N}\\sum_{n=1}^N z_n\n"
},
{
"math_id": 44,
"text": "\n\\overline{z} = \\overline{C}+i\\overline{S}\n"
},
{
"math_id": 45,
"text": "\n\\overline{C} = \\frac{1}{N}\\sum_{n=1}^N \\cos(\\theta_n) \\text{ and } \\overline{S} = \\frac{1}{N}\\sum_{n=1}^N \\sin(\\theta_n)\n"
},
{
"math_id": 46,
"text": "\n\\overline{z} = \\overline{R}e^{i\\overline{\\theta}}\n"
},
{
"math_id": 47,
"text": "\n\\overline{R} = \\sqrt{{\\overline{C}}^2+{\\overline{S}}^2} \\text{ and } \\overline{\\theta} = \\arctan (\\overline{S} / \\overline{C}).\n"
},
{
"math_id": 48,
"text": "\\overline{\\theta}"
},
{
"math_id": 49,
"text": "\nP(\\overline{C},\\overline{S}) \\, d\\overline{C} \\, d\\overline{S} =\nP(\\overline{R},\\overline{\\theta}) \\, d\\overline{R} \\, d\\overline{\\theta} = \n\\int_\\Gamma \\cdots \\int_\\Gamma \\prod_{n=1}^N \\left[ P(\\theta_n) \\, d\\theta_n \\right]\n"
},
{
"math_id": 50,
"text": "\\overline{S}"
},
{
"math_id": 51,
"text": "\\overline{C}"
},
{
"math_id": 52,
"text": "\\overline{R}"
},
{
"math_id": 53,
"text": "[\\overline{C},\\overline{S}]"
}
] |
https://en.wikipedia.org/wiki?curid=1444970
|
14451712
|
Halstead complexity measures
|
Software maintainability index
Halstead complexity measures are software metrics introduced by Maurice Howard Halstead in 1977 as part of his treatise on establishing an empirical science of software development.
Halstead made the observation that metrics of the software should reflect the implementation or expression of algorithms in different languages, but be independent of their execution on a specific platform.
These metrics are therefore computed statically from the code.
Halstead's goal was to identify measurable properties of software, and the relations between them.
This is similar to the identification of measurable properties of matter (like the volume, mass, and pressure of a gas) and the relationships between them (analogous to the gas equation).
Thus his metrics are actually not just complexity metrics.
Calculation.
For a given problem, let formula_0 be the number of distinct operators, formula_1 the number of distinct operands, formula_2 the total number of operators, and formula_3 the total number of operands.
From these numbers, several measures can be calculated: the program vocabulary formula_4, the program length formula_5, the calculated estimated program length formula_6, the volume formula_7, the difficulty formula_8, and the effort formula_9.
The difficulty measure is related to how difficult the program is to write or understand, e.g. when doing a code review.
The effort measure translates into actual coding time (in seconds) using the relation formula_10.
Halstead's delivered bugs ("B") is an estimate of the number of errors in the implementation, given by formula_11 or, in a simpler variant, formula_12.
Example.
Consider the following C program:
main()
{
    int a, b, c, avg;
    scanf("%d %d %d", &a, &b, &c);
    avg = (a+b+c)/3;
    printf("avg = %d", avg);
}
The distinct operators (formula_0) are:
codice_0, codice_1, codice_2, codice_3, codice_4,
codice_5, codice_6, codice_7, codice_8, codice_9, codice_10, codice_11
The distinct operands (formula_1) are:
codice_12, codice_13, codice_14, codice_15, codice_16, codice_17, codice_18
Thus formula_13 and formula_14, and the program vocabulary is formula_15. Counting total occurrences gives formula_16 and formula_17, so the program length is formula_18. The calculated estimated program length is formula_19, the volume is formula_20, the difficulty is formula_21, and the effort is formula_22. The time required to program is formula_23 seconds, and the estimated number of delivered bugs is formula_24.
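A minimal sketch of the same computation in Python (the counts are those of the C example above):
import math

def halstead(eta1, eta2, N1, N2):
    # derive the Halstead measures from the four basic counts
    vocabulary = eta1 + eta2                 # eta
    length = N1 + N2                         # N
    est_length = eta1 * math.log2(eta1) + eta2 * math.log2(eta2)
    volume = length * math.log2(vocabulary)  # V
    difficulty = (eta1 / 2) * (N2 / eta2)    # D
    effort = difficulty * volume             # E
    time_s = effort / 18                     # T, in seconds
    bugs = effort ** (2 / 3) / 3000          # B
    return est_length, volume, difficulty, effort, time_s, bugs

print(halstead(12, 7, 27, 15))  # ~(62.67, 178.4, 12.86, 2294, 127.4, 0.058)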
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\,\\eta_1"
},
{
"math_id": 1,
"text": "\\,\\eta_2"
},
{
"math_id": 2,
"text": "\\,N_1"
},
{
"math_id": 3,
"text": "\\,N_2"
},
{
"math_id": 4,
"text": "\\eta = \\eta_1 + \\eta_2 \\,"
},
{
"math_id": 5,
"text": "N = N_1 + N_2 \\,"
},
{
"math_id": 6,
"text": "\\hat{N} = \\eta_1 \\log_2 \\eta_1 + \\eta_2 \\log_2 \\eta_2 "
},
{
"math_id": 7,
"text": "V = N \\times \\log_2 \\eta "
},
{
"math_id": 8,
"text": "D = { \\eta_1 \\over 2 } \\times { N_2 \\over \\eta_2 } "
},
{
"math_id": 9,
"text": "E = D \\times V "
},
{
"math_id": 10,
"text": "T = {E \\over 18}"
},
{
"math_id": 11,
"text": "B = {E^{2 \\over 3} \\over 3000}"
},
{
"math_id": 12,
"text": "B = {V \\over 3000}"
},
{
"math_id": 13,
"text": "\\eta_1 = 12"
},
{
"math_id": 14,
"text": "\\eta_2 = 7"
},
{
"math_id": 15,
"text": "\\eta = 19"
},
{
"math_id": 16,
"text": "N_1 = 27"
},
{
"math_id": 17,
"text": "N_2 = 15"
},
{
"math_id": 18,
"text": "N = 42"
},
{
"math_id": 19,
"text": "\\hat{N} = 12 \\times log_2 12 + 7 \\times log_2 7 = 62.67"
},
{
"math_id": 20,
"text": "V = 42 \\times log_2 19 = 178.4"
},
{
"math_id": 21,
"text": "D = { 12 \\over 2 } \\times { 15 \\over 7 } = 12.85"
},
{
"math_id": 22,
"text": "E = 12.85 \\times 178.4 = 2292.44"
},
{
"math_id": 23,
"text": "T = { 2292.44 \\over 18 } = 127.357"
},
{
"math_id": 24,
"text": "B = { 2292.44 ^ { 2 \\over 3 } \\over 3000 } = 0.05"
}
] |
https://en.wikipedia.org/wiki?curid=14451712
|
14451755
|
Nucleotide diphosphatase
|
Class of enzymes
In enzymology, a nucleotide diphosphatase (EC 3.6.1.9) is an enzyme that catalyzes the chemical reaction
a dinucleotide + H2O formula_0 2 mononucleotides
Thus, the two substrates of this enzyme are dinucleotide and H2O, whereas its product is mononucleotide.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is dinucleotide nucleotidohydrolase. Other names in common use include nucleotide pyrophosphatase, and nucleotide-sugar pyrophosphatase. This enzyme participates in 5 metabolic pathways: purine metabolism, starch and sucrose metabolism, riboflavin metabolism, nicotinate and nicotinamide metabolism, and pantothenate and CoA biosynthesis.
Structural studies.
As of late 2007, five structures have been solved for this class of enzymes, with PDB accession codes 1NQY, 1NQZ, 2GSN, 2GSO, and 2GSU.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14451755
|
1445176
|
Inverse distance weighting
|
Type of deterministic method for multivariate interpolation
Inverse distance weighting (IDW) is a type of deterministic method for multivariate interpolation with a known scattered set of points. The assigned values to unknown points are calculated with a weighted average of the values available at the known points. This method can also be used to create spatial weights matrices in spatial autocorrelation analyses (e.g. Moran's "I").
The name given to this type of method was motivated by the weighted average applied, since it resorts to the inverse of the distance to each known point ("amount of proximity") when assigning weights.
Definition of the problem.
The expected result is a discrete assignment of the unknown function formula_0 in a study region:
formula_1
where formula_2 is the study region.
The set of formula_3 known data points can be described as a list of tuples:
formula_4
The function is to be "smooth" (continuous and once differentiable), to be exact (formula_5) and to meet the user's intuitive expectations about the phenomenon under investigation. Furthermore, the function should be suitable for a computer application at a reasonable cost (nowadays, a basic implementation will probably make use of parallel resources).
Shepard's method.
Historical reference.
At the Harvard Laboratory for Computer Graphics and Spatial Analysis, beginning in 1965, a varied collection of scientists converged to rethink, among other things, what are now called geographic information systems.
The motive force behind the Laboratory, Howard Fisher, conceived an improved computer mapping program that he called SYMAP, whose interpolation Fisher wanted to improve from the start. He showed Harvard College freshmen his work on SYMAP, and many of them participated in Laboratory events. One freshman, Donald Shepard, decided to overhaul the interpolation in SYMAP, resulting in his famous article from 1968.
Shepard's algorithm was also influenced by the theoretical approach of William Warntz and others at the Lab who worked with spatial analysis. He conducted a number of experiments with the exponent of distance, deciding on something closer to the gravity model (exponent of -2). Shepard implemented not just basic inverse distance weighting, but also allowed barriers (permeable and absolute) to interpolation.
Other research centers were working on interpolation at this time, particularly University of Kansas and their SURFACE II program. Still, the features of SYMAP were state-of-the-art, even though programmed by an undergraduate.
Basic form.
Given a set of sample points formula_6, the IDW interpolation function formula_7 is defined as:
formula_8
where
formula_9
is a simple IDW weighting function, as defined by Shepard, x denotes an interpolated (arbitrary) point, x"i" is an interpolating (known) point, formula_10 is a given distance (metric operator) from the known point x"i" to the unknown point x, "N" is the total number of known points used in interpolation and formula_11 is a positive real number, called the power parameter.
Here the weight decreases as the distance from the interpolated point increases. Greater values of formula_11 assign greater influence to values closest to the interpolated point, with the result turning into a mosaic of tiles (a Voronoi diagram) with nearly constant interpolated value for large values of "p". For two dimensions, power parameters formula_12 cause the interpolated values to be dominated by points far away, since with a density formula_13 of data points and neighboring points between distances formula_14 to formula_15, the summed weight is approximately
formula_16
which diverges for formula_17 and formula_18. For "M" dimensions, the same argument holds for formula_19. For the choice of value for "p", one can consider the degree of smoothing desired in the interpolation, the density and distribution of samples being interpolated, and the maximum distance over which an individual sample is allowed to influence the surrounding ones.
"Shepard's method" is a consequence of minimization of a functional related to a measure of deviations between tuples of interpolating points {x, "u"} and "i" tuples of interpolated points {x"i", "ui"}, defined as:
formula_20
derived from the minimizing condition:
formula_21
The method can easily be extended to other dimensional spaces and it is in fact a generalization of Lagrange approximation to multidimensional spaces. A modified version of the algorithm designed for trivariate interpolation was developed by Robert J. Renka and is available in Netlib as algorithm 661 in the TOMS Library.
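A minimal sketch of the basic form (the sample data are arbitrary; an exact hit on a known point returns its value, matching the case split in the definition):
import numpy as np

def idw(x, xs, us, p=2.0):
    # Shepard interpolation of u at point x from known samples (xs, us)
    d = np.linalg.norm(xs - x, axis=1)  # distances d(x, x_i)
    if np.any(d == 0):
        return us[d == 0][0]            # exact interpolation at data points
    w = d ** -p                         # weights w_i = 1 / d(x, x_i)^p
    return np.sum(w * us) / np.sum(w)

xs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
us = np.array([1.0, 2.0, 4.0])
print(idw(np.array([0.2, 0.2]), xs, us))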
Modified Shepard's method.
Another modification of Shepard's method calculates interpolated value using only nearest neighbors within "R"-sphere (instead of full sample). Weights are slightly modified in this case:
formula_22
When combined with a fast spatial search structure (such as a k-d tree), it becomes an efficient O("N" log "N") interpolation method suitable for large-scale problems.
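The change relative to the basic form is confined to the weight function; a sketch (with a plain radius filter standing in for the fast spatial search structure):
import numpy as np

def idw_modified(x, xs, us, R):
    # use only the samples within an R-sphere around x
    d = np.linalg.norm(xs - x, axis=1)
    near = d < R
    d, u = d[near], us[near]
    if np.any(d == 0):
        return u[d == 0][0]
    w = (np.maximum(0.0, R - d) / (R * d)) ** 2
    return np.sum(w * u) / np.sum(w)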
|
[
{
"math_id": 0,
"text": "u"
},
{
"math_id": 1,
"text": "u(x): x \\to \\mathbb{R}, \\quad x \\in \\mathbf{D} \\sub \\mathbb{R}^n,"
},
{
"math_id": 2,
"text": "\\mathbf{D}"
},
{
"math_id": 3,
"text": "N"
},
{
"math_id": 4,
"text": "[(x_1, u_1), (x_2, u_2), ..., (x_N, u_N)]."
},
{
"math_id": 5,
"text": "u(x_i) = u_i"
},
{
"math_id": 6,
"text": "\\{ \\mathbf{x}_i, u_i | \\text{for } \\mathbf{x}_i \\in \\mathbb{R}^n, u_i \\in \\mathbb{R}\\}_{i=1}^N"
},
{
"math_id": 7,
"text": "u(\\mathbf{x}): \\mathbb{R}^n \\to \\mathbb{R}"
},
{
"math_id": 8,
"text": "u(\\mathbf{x}) = \\begin{cases}\n \\dfrac{\\sum_{i = 1}^{N}{ w_i(\\mathbf{x}) u_i } }{ \\sum_{i = 1}^{N}{ w_i(\\mathbf{x}) } }, & \\text{if } d(\\mathbf{x},\\mathbf{x}_i) \\neq 0 \\text{ for all } i, \\\\\n u_i, & \\text{if } d(\\mathbf{x},\\mathbf{x}_i) = 0 \\text{ for some } i,\n\\end{cases} "
},
{
"math_id": 9,
"text": "w_i(\\mathbf{x}) = \\frac{1}{d(\\mathbf{x},\\mathbf{x}_i)^p}"
},
{
"math_id": 10,
"text": "d"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "p \\leq 2"
},
{
"math_id": 13,
"text": "\\rho"
},
{
"math_id": 14,
"text": "r_0"
},
{
"math_id": 15,
"text": "R"
},
{
"math_id": 16,
"text": "\\sum_j w_j \\approx \\int_{r_0}^R \\frac{2\\pi r\\rho \\,dr}{r^p} = 2\\pi\\rho\\int_{r_0}^R r^{1-p} \\,dr,"
},
{
"math_id": 17,
"text": "R\\rightarrow\\infty"
},
{
"math_id": 18,
"text": "p\\leq2"
},
{
"math_id": 19,
"text": "p\\leq M"
},
{
"math_id": 20,
"text": "\\phi(\\mathbf{x}, u) = \\left( \\sum_{i = 0}^{N}{\\frac{(u-u_i)^2}{d(\\mathbf{x},\\mathbf{x}_i)^p}} \\right)^{\\frac{1}{p}} ,"
},
{
"math_id": 21,
"text": "\\frac{\\partial \\phi(\\mathbf{x}, u)}{\\partial u} = 0."
},
{
"math_id": 22,
"text": "w_k(\\mathbf{x}) = \\left( \\frac{\\max(0,R-d(\\mathbf{x},\\mathbf{x}_k))}{R d(\\mathbf{x},\\mathbf{x}_k)} \\right)^2."
}
] |
https://en.wikipedia.org/wiki?curid=1445176
|
14451780
|
ABC-type oligopeptide transporter
|
Class of enzymes
The ABC-type oligopeptide transporter (EC 7.4.2.6) or oligopeptide permease is an enzyme that catalyzes the chemical reaction
ATP + H2O + oligopeptide(out) formula_0 ADP + phosphate + oligopeptide(in)
The 3 substrates of this enzyme are ATP, H2O, and oligopeptide, whereas its 3 products are ADP, phosphate, and oligopeptide.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (ABC-type, oligopeptide-importing).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14451780
|
14451822
|
Oligosaccharide-diphosphodolichol diphosphatase
|
Class of enzymes
In enzymology, an oligosaccharide-diphosphodolichol diphosphatase (EC 3.6.1.44) is an enzyme that catalyzes the chemical reaction
oligosaccharide-diphosphodolichol + H2O formula_0 oligosaccharide phosphate + dolichyl phosphate
Thus, the two substrates of this enzyme are oligosaccharide-diphosphodolichol and H2O, whereas its two products are oligosaccharide phosphate and dolichyl phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is oligosaccharide-diphosphodolichol phosphodolichohydrolase. This enzyme is also called oligosaccharide-diphosphodolichol pyrophosphatase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14451822
|
14451861
|
Oligosaccharide-transporting ATPase
|
Class of enzymes
In enzymology, an oligosaccharide-transporting ATPase (EC 3.6.3.18) is an enzyme that catalyzes the chemical reaction
ATP + H2O + oligosaccharide(out) formula_0 ADP + phosphate + oligosaccharide(in)
The 3 substrates of this enzyme are ATP, H2O, and oligosaccharide, whereas its 3 products are ADP, phosphate, and oligosaccharide.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (disaccharide-importing).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14451861
|
14451894
|
Peptide-transporting ATPase
|
Class of enzymes
In enzymology, a peptide-transporting ATPase (EC 3.6.3.43) is an enzyme that catalyzes the chemical reaction
ATP + H2O + peptide(in) formula_0 ADP + phosphate + peptide(out)
The 3 substrates of this enzyme are ATP, H2O, and peptide, whereas its 3 products are ADP, phosphate, and peptide.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (peptide-exporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14451894
|
14451930
|
Peroxisome-assembly ATPase
|
Class of enzymes
In enzymology, a peroxisome-assembly ATPase (EC 3.6.4.7) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate. Its function is to transport components of the peroxisome in and out of the organelle.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to facilitate cellular and subcellular movement. The systematic name of this enzyme class is ATP phosphohydrolase (peroxisome-assembling). This enzyme is also called peroxisome assembly factor-2.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14451930
|
14451974
|
Phosphate-transporting ATPase
|
Class of enzymes
In enzymology, a phosphate-transporting ATPase (EC 3.6.3.27) is an enzyme that catalyzes the chemical reaction
ATP + H2O + phosphate(out) formula_0 ADP + phosphate + phosphate(in)
The 3 substrates of this enzyme are ATP, H2O, and phosphate, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (phosphate-importing). This enzyme is also called ABC phosphate transporter. This enzyme participates in ABC transporters – general.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14451974
|
14452008
|
Phosphoadenylylsulfatase
|
Class of enzymes
In enzymology, a phosphoadenylylsulfatase (EC 3.6.2.2) is an enzyme that catalyzes the chemical reaction
3'-phosphoadenylyl sulfate + H2O formula_0 adenosine 3',5'-bisphosphate + sulfate
Thus, the two substrates of this enzyme are 3'-phosphoadenylyl sulfate and H2O, whereas its two products are adenosine 3',5'-bisphosphate and sulfate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in sulfonyl-containing anhydrides. The systematic name of this enzyme class is 3'-phosphoadenylyl-sulfate sulfohydrolase. Other names in common use include 3-phosphoadenylyl sulfatase, 3-phosphoadenosine 5-phosphosulfate sulfatase, PAPS sulfatase, and 3'-phosphoadenylylsulfate sulfohydrolase. This enzyme participates in sulfur metabolism. It employs one cofactor, manganese.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452008
|
14452047
|
Phospholipid-translocating ATPase
|
Class of enzymes
In enzymology, a phospholipid-translocating ATPase (EC 3.6.3.1) is an enzyme that catalyzes the chemical reaction
ATP + H2O + phospholipid(in) formula_0 ADP + phosphate + phospholipid(out)
The 3 substrates of this enzyme are ATP, H2O, and phospholipid, whereas its 3 products are ADP, phosphate, and phospholipid.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (phospholipid-flipping). Other names in common use include Mg2+-ATPase, flippase, and aminophospholipid-transporting ATPase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452047
|
14452082
|
Phosphonate-transporting ATPase
|
Enzyme
In enzymology, a phosphonate-transporting ATPase (EC 3.6.3.28) is an enzyme that catalyzes the chemical reaction
ATP + H2O + phosphonate(out) formula_0 ADP + phosphate + phosphonate(in)
The 3 substrates of this enzyme are ATP, H2O, and phosphonate, whereas its 3 products are ADP, phosphate, and phosphonate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (phosphonate-transporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452082
|
14452097
|
Phosphoribosyl-ATP diphosphatase
|
Class of enzymes
In enzymology, a phosphoribosyl-ATP diphosphatase (EC 3.6.1.31) is an enzyme that catalyzes the chemical reaction
1-(5-phosphoribosyl)-ATP + H2O formula_0 1-(5-phosphoribosyl)-AMP + diphosphate
Thus, the two substrates of this enzyme are 1-(5-phosphoribosyl)-ATP and H2O, whereas its two products are 1-(5-phosphoribosyl)-AMP and diphosphate.
This enzyme participates in histidine metabolism. It employs one cofactor, H+.
Nomenclature.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is 1-(5-phosphoribosyl)-ATP diphosphohydrolase. Other names in common use include phosphoribosyl-ATP pyrophosphatase and phosphoribosyladenosine triphosphate pyrophosphatase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452097
|
14452137
|
Polyamine-transporting ATPase
|
In enzymology, a polyamine-transporting ATPase (EC 3.6.3.31) is an enzyme that catalyzes the chemical reaction
ATP + H2O + polyamineout formula_0 ADP + phosphate + polyaminein
The 3 substrates of this enzyme are ATP, H2O, and polyamine, whereas its 3 products are ADP, phosphate, and polyamine.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (polyamine-importing). This enzyme participates in ABC transporters - general.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452137
|
14452159
|
Proteasome ATPase
|
Class of enzymes
In enzymology, a proteasome ATPase (EC 3.6.4.8) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to facilitate cellular and subcellular movement. The systematic name of this enzyme class is ATP phosphohydrolase (polypeptide-degrading).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452159
|
14452171
|
Protein-secreting ATPase
|
In enzymology, a protein-secreting ATPase (EC 3.6.3.50) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (protein-secreting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452171
|
14452195
|
Quaternary-amine-transporting ATPase
|
In enzymology, a quaternary-amine-transporting ATPase (EC 3.6.3.32) is an enzyme that catalyzes the chemical reaction
ATP + H2O + quaternary amineout formula_0 ADP + phosphate + quaternary aminein
The 3 substrates of this enzyme are ATP, H2O, and quaternary amine, whereas its 3 products are ADP, phosphate, and quaternary amine.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (quaternary-amine-importing). This enzyme participates in ABC transporters - general.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452195
|
14452224
|
Sulfate-transporting ATPase
|
In enzymology, a sulfate-transporting ATPase (EC 3.6.3.25) is an enzyme that catalyzes the chemical reaction
ATP + H2O + sulfateout formula_0 ADP + phosphate + sulfatein
The 3 substrates of this enzyme are ATP, H2O, and sulfate, whereas its 3 products are ADP, phosphate, and sulfate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (sulfate-importing). This enzyme participates in ABC transporters - general.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452224
|
14452252
|
Taurine-transporting ATPase
|
In enzymology, a taurine-transporting ATPase (EC 3.6.3.36) is an enzyme that catalyzes the chemical reaction
ATP + H2O + taurineout formula_0 ADP + phosphate + taurinein
The 3 substrates of this enzyme are ATP, H2O, and taurine, whereas its 3 products are ADP, phosphate, and taurine.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (taurine-importing).
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452252
|
14452276
|
Teichoic-acid-transporting ATPase
|
In enzymology, a teichoic-acid-transporting ATPase (EC 3.6.3.40) is an enzyme that catalyzes the chemical reaction
ATP + H2O + teichoic acidin formula_0 ADP + phosphate + teichoic acidout
The 3 substrates of this enzyme are ATP, H2O, and teichoic acid, whereas its 3 products are ADP, phosphate, and teichoic acid.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (teichoic-acid-exporting). This enzyme participates in ABC transporters - general.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452276
|
14452306
|
Thiamine-triphosphatase
|
Thiamine-triphosphatase is an enzyme involved in thiamine metabolism. It catalyzes the chemical reaction
thiamine triphosphate + H2O formula_0 thiamine diphosphate + phosphate
This enzyme belongs to the family of acid anhydride hydrolases, specifically those acting on phosphorus-containing anhydrides. Its systematic name is thiamine triphosphate phosphohydrolase.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 2JMU.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452306
|
14452321
|
Thymidine-triphosphatase
|
Enzyme
In enzymology, a thymidine-triphosphatase (EC 3.6.1.39) is an enzyme that catalyzes the chemical reaction
dTTP + H2O formula_0 dTDP + phosphate
Thus, the two substrates of this enzyme are dTTP and H2O, whereas its two products are dTDP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is dTTP nucleotidohydrolase. Other names in common use include thymidine triphosphate nucleotidohydrolase, dTTPase, and deoxythymidine-5'-triphosphatase. This enzyme participates in pyrimidine metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452321
|
14452340
|
Trimetaphosphatase
|
In enzymology, a trimetaphosphatase (EC 3.6.1.2) is an enzyme that catalyzes the chemical reaction
trimetaphosphate + H2O formula_0 triphosphate
Thus, the two substrates of this enzyme are trimetaphosphate and H2O, whereas its product is triphosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is trimetaphosphate hydrolase. This enzyme is also called inorganic trimetaphosphatase. This enzyme participates in pyrimidine metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452340
|
14452352
|
Triphosphatase
|
In enzymology, a triphosphatase (EC 3.6.1.25) is an enzyme that catalyzes the chemical reaction
triphosphate + H2O formula_0 diphosphate + phosphate
Thus, the two substrates of this enzyme are triphosphate and H2O, whereas its two products are diphosphate and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is triphosphate phosphohydrolase. This enzyme is also called inorganic triphosphatase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452352
|
14452366
|
UDP-sugar diphosphatase
|
Class of enzymes
In enzymology, a UDP-sugar diphosphatase (EC 3.6.1.45) is an enzyme that catalyzes the chemical reaction
UDP-sugar + H2O formula_0 UMP + alpha-D-aldose 1-phosphate
Thus, the two substrates of this enzyme are UDP-sugar and H2O, whereas its two products are UMP and alpha-D-aldose 1-phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is UDP-sugar sugarphosphohydrolase. Other names in common use include nucleosidediphosphate-sugar pyrophosphatase, nucleosidediphosphate-sugar diphosphatase, UDP-sugar hydrolase, and UDP-sugar pyrophosphatase.
Structural studies.
As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 1HO5, 1HP1, 1HPU, 1OI8, 1OID, and 1OIE.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452366
|
14452385
|
Undecaprenyl-diphosphatase
|
Class of enzymes
In enzymology, an undecaprenyl-diphosphatase (EC 3.6.1.27) is an enzyme that catalyzes the chemical reaction
undecaprenyl diphosphate + H2O formula_0 undecaprenyl phosphate + phosphate
Thus, the two substrates of this enzyme are undecaprenyl diphosphate and H2O, whereas its two products are undecaprenyl phosphate and phosphate. The enzymatic activity is enhanced by divalent cations, particularly Ca2+.
In many bacteria, this enzyme is a membrane protein that participates in peptidoglycan biosynthesis. The enzyme has been implicated in conferring resistance to the antibiotic bacitracin.
Nomenclature.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides in phosphorus-containing anhydrides. The systematic name of this enzyme class is undecaprenyl-diphosphate phosphohydrolase. Other names in common use include undecaprenyl-pyrophosphate phosphatase (Uppp), UPP phosphatase, BacA, C55-isoprenyl diphosphatase, C55-isoprenyl pyrophosphatase, and isoprenyl pyrophosphatase.
Note: The enzyme Uppp/BacA (EC 3.6.1.27) has occasionally been incorrectly termed an "undecaprenol kinase". However, that name should be reserved for a distinct enzyme (EC 2.7.1.66), which catalyses the addition of a phosphate group from ATP to undecaprenol (C55-isoprenyl alcohol).
Structure.
X-ray crystal structures of the membrane form of the enzyme from "E. coli" are available (PDB IDs: 5OON and 6CB2).
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452385
|
14452404
|
Vesicle-fusing ATPase
|
Protein family
In enzymology, a vesicle-fusing ATPase (EC 3.6.4.6) is an enzyme that catalyzes the chemical reaction
ATP + H2O formula_0 ADP + phosphate
Thus, the two substrates of this enzyme are ATP and H2O, whereas its two products are ADP and phosphate.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to facilitate cellular and subcellular movement. The systematic name of this enzyme class is ATP phosphohydrolase (vesicle-fusing).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452404
|
14452417
|
Vitamin B12-transporting ATPase
|
In enzymology, a vitamin B12-transporting ATPase (EC 3.6.3.33) is an enzyme that catalyzes the chemical reaction
ATP + H2O + vitamin B12out formula_0 ADP + phosphate + vitamin B12in
The 3 substrates of this enzyme are ATP, H2O, and vitamin B12, whereas its 3 products are ADP, phosphate, and vitamin B12.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (vitamin B12-importing). This enzyme participates in ABC transporters - general.
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1L7V and 2QI9.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452417
|
14452441
|
Xenobiotic-transporting ATPase
|
Enzyme
In enzymology, a xenobiotic-transporting ATPase (EC 3.6.3.44) is an enzyme that catalyzes the chemical reaction
ATP + H2O + xenobioticin formula_0 ADP + phosphate + xenobioticout
The 3 substrates of this enzyme are ATP, H2O, and xenobiotic, whereas its 3 products are ADP, phosphate, and xenobiotic.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (xenobiotic-exporting). Other names in common use include multidrug-resistance protein, MDR protein, P-glycoprotein, pleiotropic-drug-resistance protein, PDR protein, steroid-transporting ATPase, and ATP phosphohydrolase (steroid-exporting).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452441
|
14452459
|
Zn2+-exporting ATPase
|
In enzymology, a Zn2+-exporting ATPase (EC 3.6.3.5) is an enzyme that catalyzes the chemical reaction
ATP + H2O + Zn2+in formula_0 ADP + phosphate + Zn2+out
The 3 substrates of this enzyme are ATP, H2O, and Zn2+, whereas its 3 products are ADP, phosphate, and Zn2+.
This enzyme belongs to the family of hydrolases, specifically those acting on acid anhydrides to catalyse transmembrane movement of substances. The systematic name of this enzyme class is ATP phosphohydrolase (Zn2+-exporting). Other names in common use include Zn(II)-translocating P-type ATPase, P1B-type ATPase, and AtHMA4 (the "A. thaliana" protein).
Structural studies.
As of late 2007, two structures have been solved for this class of enzymes, with PDB accession codes 1MWY and 1MWZ. Moreover, nanobodies that bind and inhibit the ATPase activity have recently been raised against a zinc-transporting ATPase (ZntA), showing potential for further structural studies.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14452459
|
14453066
|
11-cis-retinyl-palmitate hydrolase
|
Class of enzymes
The enzyme 11-"cis"-retinyl-palmitate hydrolase (EC 3.1.1.63) catalyzes the reaction
11-"cis"-retinyl palmitate + H2O formula_0 11-"cis"-retinol + palmitate
This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is 11-"cis"-retinyl-palmitate acylhydrolase. Other names in common use include 11-"cis"-retinol palmitate esterase and RPH. This enzyme participates in retinol metabolism. It has at least one effector, bile salt.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14453066
|
14453080
|
1,4-lactonase
|
Class of enzymes
The enzyme 1,4-lactonase (EC 3.1.1.25) catalyzes the generic reaction
a 1,4-lactone + H2O formula_0 a 4-hydroxyacid
This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name is 1,4-lactone hydroxyacylhydrolase. It is also called γ-lactonase. It participates in galactose metabolism and ascorbate and aldarate metabolism. It employs one cofactor, Ca2+.
Structural studies.
As of late 2007, three structures have been solved for this class of enzymes, with PDB accession codes 2DG0, 2DG1, and 2DSO.
Applications.
In a study by Chen et al., a 1,4-lactonase was expressed in "E. coli" and used as a highly efficient biocatalyst for the asymmetric synthesis of chiral compounds.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14453080
|
14453096
|
1-alkyl-2-acetylglycerophosphocholine esterase
|
Class of enzymes
The enzyme 1-alkyl-2-acetylglycerophosphocholine esterase (EC 3.1.1.47) catalyzes the reaction
1-alkyl-2-acetyl-"sn"-glycero-3-phosphocholine + H2O formula_0 1-alkyl-"sn"-glycero-3-phosphocholine + acetate
The former is also known as platelet-activating factor. There are multiple enzymes with this function.
This enzyme belongs to the family of hydrolases, specifically those acting on carboxylic ester bonds. The systematic name of this enzyme class is 1-alkyl-2-acetyl-"sn"-glycero-3-phosphocholine acetohydrolase. Other names in common use include 1-alkyl-2-acetyl-"sn"-glycero-3-phosphocholine acetylhydrolase and alkylacetyl-GPC:acetylhydrolase. This enzyme participates in ether lipid metabolism.
Structural studies.
As of late 2007, 7 structures have been solved for this class of enzymes, with PDB accession codes 1BWP, 1BWQ, 1BWR, 1ES9, 1FXW, 1VYH, and 1WAB.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14453096
|
14453111
|
2',3'-cyclic-nucleotide 2'-phosphodiesterase
|
Class of enzymes
The enzyme 2′,3′-cyclic-nucleotide 2′-phosphodiesterase (EC 3.1.4.16) catalyzes the reaction
nucleoside 2′,3′-cyclic phosphate + H2O formula_0 nucleoside 3′-phosphate
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric diester bonds. The systematic name is nucleoside-2′,3′-cyclic-phosphate 3′-nucleotidohydrolase. Other names in common use include ribonucleoside 2′,3′-cyclic phosphate diesterase, 2′,3′-cyclic AMP phosphodiesterase, 2′,3′-cyclic nucleotidase, cyclic 2′,3′-nucleotide 2′-phosphodiesterase, cyclic 2′,3′-nucleotide phosphodiesterase, 2′,3′-cyclic nucleoside monophosphate phosphodiesterase, 2′,3′-cyclic AMP 2′-phosphohydrolase, cyclic phosphodiesterase:3′-nucleotidase, 2′,3′-cyclic nucleotide phosphohydrolase, 2′:3′-cyclic phosphodiesterase, and 2′:3′-cyclic nucleotide phosphodiesterase:3′-nucleotidase. This enzyme participates in purine metabolism and pyrimidine metabolism.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14453111
|
14453144
|
2-Carboxy-D-arabinitol-1-phosphatase
|
Class of enzymes
The enzyme 2-carboxy-D-arabinitol-1-phosphatase (CA1Pase; EC 3.1.3.63) catalyzes the reaction
2-carboxy-D-arabinitol 1-phosphate + H2O formula_0 2-carboxy-D-arabinitol + phosphate
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is 2-carboxy-D-arabinitol-1-phosphate 1-phosphohydrolase.
In biology.
The best-studied 2-carboxy-D-arabinitol-1-phosphate phosphatase is the enzyme that inactivates the RuBisCO inhibitor 2-carboxy-D-arabinitol 1-phosphate (CA1P).
When light levels are high, the inactivation occurs after CA1P has been released from RuBisCO by RuBisCO activase. As CA1P is present in many but not all plants, CA1P-mediated regulation of RuBisCO is not universal across photosynthetic life. Amino acid sequences of the CA1Pase enzymes from wheat, French bean, tobacco, and "Arabidopsis thaliana" reveal that the enzymes contain two distinct domains, indicating that CA1Pase is a multifunctional enzyme.
CA1Pase activity varies between species owing to regulation by different redox-active compounds, such as glutathione; however, it has yet to be determined whether this regulation occurs "in vivo". Wheat CA1Pase heterologously expressed in "E. coli" is also able to dephosphorylate the RuBisCO inhibitor D-glycero-2,3-diulose-1,5-bisphosphate.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14453144
|
14453161
|
2-deoxyglucose-6-phosphatase
|
Class of enzymes
The enzyme 2-deoxyglucose-6-phosphatase (EC 3.1.3.68) catalyzes the reaction
2-deoxy-D-glucose 6-phosphate + H2O formula_0 2-deoxy-D-glucose + phosphate
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is 2-deoxy-D-glucose-6-phosphate phosphohydrolase. This enzyme is also called 2-deoxyglucose-6-phosphate phosphatase.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14453161
|
14453179
|
2-phosphosulfolactate phosphatase
|
Class of enzymes
The enzyme 2-phosphosulfolactate phosphatase (EC 3.1.3.71) catalyzes the reaction
(2"R")-2-phospho-3-sulfolactate + H2O formula_0 (2"R")-3-sulfolactate + phosphate
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is ("R")-2-phospho-3-sulfolactate phosphohydrolase. Other names in common use include (2"R")-phosphosulfolactate phosphohydrolase and ComB phosphatase.
Structural studies.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1VR0.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14453179
|
14453215
|
3'(2'),5'-bisphosphate nucleotidase
|
Class of enzymes
The enzyme 3′(2′),5′-bisphosphate nucleotidase (EC 3.1.3.7) catalyzes the reaction
adenosine 3′,5′-bisphosphate + H2O formula_0 AMP + phosphate
This enzyme belongs to the family of hydrolases, specifically those acting on phosphoric monoester bonds. The systematic name is adenosine-3′(2′),5′-bisphosphate 3′(2′)-phosphohydrolase. Other names in common use include phosphoadenylate 3′-nucleotidase, 3′-phosphoadenylylsulfate 3′-phosphatase, and 3′(2′),5′-bisphosphonucleoside 3′(2′)-phosphohydrolase. This enzyme participates in sulfur metabolism.
Structural studies.
As of late 2007, 6 structures have been solved for this class of enzymes, with PDB accession codes 1JP4, 1K9Y, 1K9Z, 1KA0, 1KA1, and 1QGX.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=14453215
|