Ensemble (mathematical physics)
Idealization of a large number of atomic-sized systems
In physics, specifically statistical mechanics, an ensemble (also statistical ensemble) is an idealization consisting of a large number of virtual copies (sometimes infinitely many) of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a set of systems of particles used in statistical mechanics to describe a single
system. The concept of an ensemble was introduced by J. Willard Gibbs in 1902.
A thermodynamic ensemble is a specific variety of statistical ensemble that, among other properties, is in statistical equilibrium (defined below), and is used to derive the properties of thermodynamic systems from the laws of classical or quantum mechanics.
Physical considerations.
The ensemble formalises the notion that an experimenter repeating an experiment again and again under the same macroscopic conditions, but unable to control the microscopic details, may expect to observe a range of different outcomes.
The notional size of ensembles in thermodynamics, statistical mechanics and quantum statistical mechanics can be very large, including every possible microscopic state the system could be in, consistent with its observed macroscopic properties. For many important physical cases, it is possible to calculate averages directly over the whole of the thermodynamic ensemble, to obtain explicit formulas for many of the thermodynamic quantities of interest, often in terms of the appropriate partition function.
The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve. In fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called "stationary" and can be said to be in "statistical equilibrium".
Main types.
The study of thermodynamics is concerned with systems that appear to human perception to be "static" (despite the motion of their internal parts), and which can be described simply by a set of macroscopically observable variables. These systems can be described by statistical ensembles that depend on a few observable parameters, and which are in statistical equilibrium. Gibbs noted that different macroscopic constraints lead to different types of ensembles, with particular statistical characteristics.
""We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing in not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities..." J. W. Gibbs" (1903)
Three important thermodynamic ensembles were defined by Gibbs: the microcanonical ensemble, the canonical ensemble, and the grand canonical ensemble.
The calculations that can be made using each of these ensembles are explored further in their respective articles.
Other thermodynamic ensembles can also be defined, corresponding to different physical requirements, for which analogous formulae can often similarly be derived.
For example, in the reaction ensemble, particle number fluctuations are only allowed to occur according to the stoichiometry of the chemical reactions which are present in the system.
Equivalence.
In the thermodynamic limit, all ensembles should produce identical observables, since they are related to one another by Legendre transforms; deviations from this rule occur under conditions where the state variables are non-convex, such as in measurements on small molecular systems.
Representations.
The precise mathematical expression for a statistical ensemble has a distinct form depending on the type of mechanics under consideration (quantum or classical). In the classical case, the ensemble is a probability distribution over the microstates. In quantum mechanics, this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables.
In classical mechanics, the ensemble is instead written as a probability distribution in phase space; the microstates are the result of partitioning phase space into equal-sized units, although the size of these units can be chosen somewhat arbitrarily.
Requirements for representations.
Putting aside for the moment the question of how statistical ensembles are generated operationally, we should be able to perform the following two operations on ensembles "A", "B" of the same system: test whether "A" and "B" are statistically equivalent; and, for any real number "p" with 0 &lt; "p" &lt; 1, produce a new ensemble by probabilistic sampling from "A" with probability "p" and from "B" with probability 1 − "p".
Under certain conditions, therefore, equivalence classes of statistical ensembles have the structure of a convex set.
Quantum mechanical.
A statistical ensemble in quantum mechanics (also known as a mixed state) is most often represented by a density matrix, denoted by ρ̂. The density matrix provides a fully general tool that can incorporate both quantum uncertainties (present even if the state of the system were completely known) and classical uncertainties (due to a lack of knowledge) in a unified manner. Any physical observable X in quantum mechanics can be written as an operator, X̂. The expectation value of this operator on the statistical ensemble ρ̂ is given by the following trace:
⟨X⟩ = Tr(X̂ ρ̂).
This can be used to evaluate averages (operator X̂), variances (using operator X̂²), covariances (using operator X̂Ŷ), etc. The density matrix must always have a trace of 1: Tr ρ̂ = 1 (this is essentially the condition that the probabilities must add up to one).
In general, the ensemble evolves over time according to the von Neumann equation.
Equilibrium ensembles (those that do not evolve over time, dρ̂/dt = 0) can be written solely as a function of conserved variables. For example, the microcanonical ensemble and canonical ensemble are strictly functions of the total energy, which is measured by the total energy operator Ĥ (the Hamiltonian). The grand canonical ensemble is additionally a function of the particle number, measured by the total particle number operator N̂. Such an equilibrium ensemble is a diagonal matrix in the orthogonal basis of states that simultaneously diagonalizes each conserved variable. In bra–ket notation, the density matrix is
ρ̂ = ∑i Pi |ψi⟩⟨ψi|,
where the |"ψ""i"⟩, indexed by i, are the elements of a complete and orthogonal basis. (Note that in other bases, the density matrix is not necessarily diagonal.)
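As a minimal numerical sketch of these trace formulas (the two-level state and its weights are hypothetical, chosen only for illustration):

```python
import numpy as np

# Hypothetical qubit ensemble: 70% |0>, 30% |1> (weights chosen for illustration).
P = [0.7, 0.3]
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# rho-hat = sum_i P_i |psi_i><psi_i|  (diagonal in this basis)
rho = sum(p * np.outer(v, v.conj()) for p, v in zip(P, basis))

Z = np.diag([1.0, -1.0])          # an observable (Pauli-Z)

expval = np.trace(Z @ rho).real   # <X> = Tr(X-hat rho-hat)
print(expval)                     # 0.7*(+1) + 0.3*(-1) = 0.4
print(np.trace(rho).real)         # trace condition: probabilities sum to 1
```

The same recipe works for any observable; variances follow by replacing Z with Z @ Z.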
Classical mechanical.
In classical mechanics, an ensemble is represented by a probability density function defined over the system's phase space. While an individual system evolves according to Hamilton's equations, the density function (the ensemble) evolves over time according to Liouville's equation.
In a mechanical system with a defined number of parts, the phase space has "n" generalized coordinates called "q"1, ... "q""n", and "n" associated canonical momenta called "p"1, ... "p""n". The ensemble is then represented by a joint probability density function "ρ"("p"1, ... "p""n", "q"1, ... "q""n").
If the number of parts in the system is allowed to vary among the systems in the ensemble (as in a grand ensemble where the number of particles is a random quantity), then it is a probability distribution over an extended phase space that includes further variables such as particle numbers "N"1 (first kind of particle), "N"2 (second kind of particle), and so on up to "N""s" (the last kind of particle; "s" is how many different kinds of particles there are). The ensemble is then represented by a joint probability density function "ρ"("N"1, ... "N""s", "p"1, ... "p""n", "q"1, ... "q""n"). The number of coordinates "n" varies with the numbers of particles.
Any mechanical quantity "X" can be written as a function of the system's phase. The expectation value of any such quantity is given by an integral over the entire phase space of this quantity weighted by "ρ":
⟨X⟩ = ∑_{N1=0}^{∞} … ∑_{Ns=0}^{∞} ∫…∫ ρ X dp1 … dqn.
The condition of probability normalization applies, requiring
∑_{N1=0}^{∞} … ∑_{Ns=0}^{∞} ∫…∫ ρ dp1 … dqn = 1.
Phase space is a continuous space containing an infinite number of distinct physical states within any small region. In order to connect the probability "density" in phase space to a probability "distribution" over microstates, it is necessary to somehow partition the phase space into blocks that represent the different states of the system in a fair way. It turns out that the correct way to do this simply results in equal-sized blocks of canonical phase space, and so a microstate in classical mechanics is an extended region in the phase space of canonical coordinates that has a particular volume. In particular, the probability density function in phase space, "ρ", is related to the probability distribution over microstates, "P", by a factor
ρ = P / (h^n C),
where "h" is an arbitrary but predetermined constant with the units of energy × time, setting the extent of one microstate and providing correct dimensions to "ρ", and "C" is an overcounting correction factor (described below), generally dependent on the number of particles and similar concerns.
Since "h" can be chosen arbitrarily, the notional size of a microstate is also arbitrary. Still, the value of "h" influences the offsets of quantities such as entropy and chemical potential, and so it is important to be consistent with the value of "h" when comparing different systems.
Correcting overcounting in phase space.
Typically, the phase space contains duplicates of the same physical state in multiple distinct locations. This is a consequence of the way that a physical state is encoded into mathematical coordinates; the simplest choice of coordinate system often allows a state to be encoded in multiple ways. An example of this is a gas of identical particles whose state is written in terms of the particles' individual positions and momenta: when two particles are exchanged, the resulting point in phase space is different, and yet it corresponds to an identical physical state of the system. It is important in statistical mechanics (a theory about physical states) to recognize that the phase space is just a mathematical construction, and to not naively overcount actual physical states when integrating over phase space. Overcounting can cause serious problems: it makes derived quantities such as entropy and chemical potential depend spuriously on the number of particles, a discrepancy known as the Gibbs paradox.
It is in general difficult to find a coordinate system that uniquely encodes each physical state. As a result, it is usually necessary to use a coordinate system with multiple copies of each state, and then to recognize and remove the overcounting.
A crude way to remove the overcounting would be to manually define a subregion of phase space that includes each physical state only once and then exclude all other parts of phase space. In a gas, for example, one could include only those phases where the particles' "x" coordinates are sorted in ascending order. While this would solve the problem, the resulting integral over phase space would be tedious to perform due to its unusual boundary shape. (In this case, the factor "C" introduced above would be set to "C" = 1, and the integral would be restricted to the selected subregion of phase space.)
A simpler way to correct the overcounting is to integrate over all of phase space but to reduce the weight of each phase in order to exactly compensate the overcounting. This is accomplished by the factor "C" introduced above, which is a whole number that represents how many ways a physical state can be represented in phase space. Its value does not vary with the continuous canonical coordinates, so overcounting can be corrected simply by integrating over the full range of canonical coordinates, then dividing the result by the overcounting factor. However, "C" does vary strongly with discrete variables such as numbers of particles, and so it must be applied before summing over particle numbers.
As mentioned above, the classic example of this overcounting is for a fluid system containing various kinds of particles, where any two particles of the same kind are indistinguishable and exchangeable. When the state is written in terms of the particles' individual positions and momenta, then the overcounting related to the exchange of identical particles is corrected by using
C = N1! N2! … Ns!.
This is known as "correct Boltzmann counting".
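The effect of the factor "C" can be seen in a toy count (a hypothetical lattice gas, not from the text): for N identical particles, each physically distinct configuration appears N! times among labeled assignments.

```python
from itertools import combinations, permutations
from math import factorial

sites, N = 5, 3   # toy model: 3 identical particles on 5 lattice sites

# Labeled "phase-space points": ordered assignments of distinct sites to labeled particles.
labeled = list(permutations(range(sites), N))    # 5 * 4 * 3 = 60
# Physical states: which sites are occupied, particle labels irrelevant.
physical = list(combinations(range(sites), N))   # C(5, 3) = 10

C = factorial(N)                                 # Boltzmann counting factor N!
print(len(labeled), len(physical), C)            # each physical state counted N! times
assert len(labeled) == C * len(physical)
```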
Ensembles in statistics.
The formulation of statistical ensembles used in physics has now been widely adopted in other fields, in part because it has been recognized that the canonical ensemble or Gibbs measure serves to maximize the entropy of a system, subject to a set of constraints: this is the principle of maximum entropy. This principle has now been widely applied to problems in linguistics, robotics, and the like.
In addition, statistical ensembles in physics are often built on a principle of locality: that all interactions are only between neighboring atoms or nearby molecules. Thus, for example, lattice models, such as the Ising model, model ferromagnetic materials by means of nearest-neighbor interactions between spins. The statistical formulation of the principle of locality is now seen to be a form of the Markov property in the broad sense; nearest neighbors are now Markov blankets. Thus, the general notion of a statistical ensemble with nearest-neighbor interactions leads to Markov random fields, which again find broad applicability; for example in Hopfield networks.
Ensemble average.
In statistical mechanics, the ensemble average is defined as the mean of a quantity that is a function of the microstate of a system, according to the distribution of the system over its microstates in this ensemble.
Since the ensemble average depends on the ensemble chosen, its mathematical expression varies from ensemble to ensemble. However, the mean obtained for a given physical quantity does not depend on the ensemble chosen in the thermodynamic limit.
The grand canonical ensemble is an example of an open system.
Classical statistical mechanics.
For a classical system in thermal equilibrium with its environment, the "ensemble average" takes the form of an integral over the phase space of the system:
Ā = ∫ A e^{−βH(q1, …, qn, p1, …, pn)} dτ / ∫ e^{−βH(q1, …, qn, p1, …, pn)} dτ,
where
Ā is the ensemble average of the system property "A",
β is 1/("kT"), known as thermodynamic beta,
"H" is the Hamiltonian of the classical system in terms of the set of coordinates "q""i" and their conjugate generalized momenta "p""i",
dτ is the volume element of the classical phase space of interest.
The denominator in this expression is known as the partition function and is denoted by the letter "Z".
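A minimal numerical sketch of this phase-space average, using a 1-D harmonic oscillator with m = ω = kT = 1 (units assumed for illustration); equipartition predicts ⟨H⟩ = kT.

```python
import numpy as np

# 1-D harmonic oscillator, H = p^2/(2m) + m w^2 q^2 / 2, in units m = w = kT = 1.
beta, m, w = 1.0, 1.0, 1.0
q = np.linspace(-10.0, 10.0, 801)
p = np.linspace(-10.0, 10.0, 801)
dq, dp = q[1] - q[0], p[1] - p[0]
Q, P = np.meshgrid(q, p)
H = P**2 / (2 * m) + 0.5 * m * w**2 * Q**2

weight = np.exp(-beta * H)
Zpart = weight.sum() * dq * dp                 # partition function Z (numerical)
avg_H = (H * weight).sum() * dq * dp / Zpart   # ensemble average of the energy

print(avg_H)  # equipartition: kT/2 per quadratic term, so 1.0 here
```

The denominator of the average is exactly the partition function Z computed above.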
Quantum statistical mechanics.
In quantum statistical mechanics, for a quantum system in thermal equilibrium with its environment, the weighted average takes the form of a sum over quantum energy states, rather than a continuous integral:
Ā = ∑i Ai e^{−βEi} / ∑i e^{−βEi}.
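A sketch of this discrete sum for harmonic-oscillator levels E_n = (n + 1/2)ħω, in units ħω = 1 (an assumed example); the average occupation should match the Bose–Einstein closed form.

```python
import numpy as np

beta = 0.5                  # inverse temperature, hypothetical value
n = np.arange(200)          # enough levels to converge at this temperature
E = n + 0.5                 # E_n = (n + 1/2) in units hbar*w = 1
boltz = np.exp(-beta * E)   # Boltzmann weights e^{-beta E_i}

avg_n = (n * boltz).sum() / boltz.sum()   # ensemble average of the quantum number
bose = 1.0 / np.expm1(beta)               # closed form: 1/(e^{beta hbar w} - 1)
print(avg_n, bose)
```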
Canonical ensemble average.
The generalized version of the partition function provides the complete framework for working with ensemble averages in thermodynamics, information theory, statistical mechanics and quantum mechanics.
The microcanonical ensemble represents an isolated system in which the energy ("E"), volume ("V") and number of particles ("N") are all constant. The canonical ensemble represents a closed system which can exchange energy ("E") with its surroundings (usually a heat bath), while the volume ("V") and the number of particles ("N") are held constant. The grand canonical ensemble represents an open system which can exchange energy ("E") and particles ("N") with its surroundings, while the volume ("V") is kept constant.
Operational interpretation.
In the discussion given so far, while rigorous, we have taken for granted that the notion of an ensemble is valid a priori, as is commonly done in a physical context. What has not been shown is that the ensemble "itself" (not the consequent results) is a precisely defined object mathematically. For instance, it is not clear where this very large set of systems is supposed to exist, nor what it would mean to physically generate one.
In this section, we attempt to partially answer this question.
Suppose we have a "preparation procedure" for a system in a physics lab: For example, the procedure might involve a physical apparatus and some protocols for manipulating the apparatus. As a result of this preparation procedure, some system is produced and maintained in isolation for some small period of time. By repeating this laboratory preparation procedure we obtain a sequence of systems "X"1, "X"2, ...,"X""k", which in our mathematical idealization, we assume is an infinite sequence of systems. The systems are similar in that they were all produced in the same way. This infinite sequence is an ensemble.
In a laboratory setting, each one of these prepared systems might be used as input for "one" subsequent "testing procedure". Again, the testing procedure involves a physical apparatus and some protocols; as a result of the testing procedure we obtain a "yes" or "no" answer. Given a testing procedure "E" applied to each prepared system, we obtain a sequence of values Meas ("E", "X"1), Meas ("E", "X"2), ..., Meas ("E", "X""k"). Each of these values is 0 (no) or 1 (yes).
Assume the following time average exists:
σ(E) = lim_{N→∞} (1/N) ∑_{k=1}^{N} Meas(E, X_k)
For quantum mechanical systems, an important assumption made in the quantum logic approach to quantum mechanics is the identification of "yes–no" questions to the lattice of closed subspaces of a Hilbert space. With some additional technical assumptions one can then infer that states are given by density operators "S" so that:
σ(E) = Tr(E S).
We see this reflects the definition of quantum states in general: A quantum state is a mapping from the observables to their expectation values.
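This limiting relative frequency can be sketched in a short simulation (a hypothetical qubit preparation; we sample the outcome probabilities directly rather than modeling an apparatus):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical preparation: a qubit in the mixed state S = diag(0.7, 0.3);
# the yes-no question E asks "is the system found in |0>?" (projector onto |0>).
S = np.diag([0.7, 0.3])
E = np.diag([1.0, 0.0])
born = np.trace(E @ S).real                 # Tr(E S) = 0.7

# Repeat the preparation + test many times; Meas(E, X_k) is 0 or 1.
N = 100_000
meas = (rng.random(N) < born).astype(int)
freq = meas.mean()
print(freq, born)                           # the time average approaches Tr(E S)
```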
https://en.wikipedia.org/wiki?curid=59052
Interference channel
In information theory, the interference channel is the basic model used to analyze the effect of interference in communication channels. The model consists of two pairs of users communicating through a shared channel. The problem of interference between two mobile users in close proximity or crosstalk between two parallel landlines are two examples where this model is applicable.
Unlike in the point-to-point channel, where the amount of information that can be sent through the channel is limited by the noise that distorts the transmitted signal, in the interference channel the presence of the signal from the other user may also impair the communication. However, since the transmitted signals are not purely random (otherwise they would not be decodable), the receivers may be able to reduce the effect of the interference by partially or totally decoding the undesired signal.
Discrete memoryless interference channel.
The mathematical model for this channel is the following:
where, for "i" ∈ {1, 2}: "W""i" is the message to be transmitted by user "i"; "X""i" is the channel input symbol ("X""i""n" being the sequence of "n" symbols sent by transmitter "i"); "Y""i" is the channel output symbol ("Y""i""n" being the sequence of "n" symbols observed at receiver "i"); and "Ŵ""i" is the estimate of the transmitted message computed at receiver "i".
The capacity of this channel model is not known in general; the capacity has been calculated only for special cases of the conditional distribution "p"("y"1, "y"2|"x"1, "x"2), e.g., in the case of strong interference or for deterministic channels.
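As a sketch of what the discrete memoryless model looks like in code (a toy deterministic channel chosen for illustration; the text does not fix a particular conditional distribution):

```python
from itertools import product

# Toy deterministic binary interference channel: receiver 1 sees its own bit
# XORed with the interfering signal, receiver 2 is interference-free.
def channel(x1, x2):
    """Return the conditional pmf p(y1, y2 | x1, x2) as a dict over output pairs."""
    y1, y2 = x1 ^ x2, x2
    return {(a, b): 1.0 if (a, b) == (y1, y2) else 0.0
            for a, b in product((0, 1), repeat=2)}

# Each input pair must induce a valid probability distribution over the outputs.
for x1, x2 in product((0, 1), repeat=2):
    assert abs(sum(channel(x1, x2).values()) - 1.0) < 1e-12
```

In this deterministic example the interference is fully decodable: receiver 1 can recover its bit as x1 = y1 XOR x2 once the interfering signal has been decoded, illustrating the "strong interference" idea.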
https://en.wikipedia.org/wiki?curid=59052958
Nonclassical light
Light that cannot be described using classical electromagnetism
Nonclassical light is light that cannot be described using classical electromagnetism; its characteristics are described by the quantized electromagnetic field and quantum mechanics.
The most commonly described forms of nonclassical light are squeezed light, photon-number (Fock) states, and light exhibiting photon antibunching.
Glauber–Sudarshan P representation.
The density matrix for any state of light can be written as:
ρ̂ = ∫ P(α) |α⟩⟨α| d²α,
where |α⟩ is a coherent state. A "classical" state of light is one in which P(α) is a probability density function. If it is not, the state is said to be nonclassical.
Aspects of P(α) that would make it nonclassical are: P(α) taking negative values somewhere, or P(α) being more singular than a Dirac delta function.
The matter is not quite simple. According to Mandel and Wolf: "The different coherent states are not [mutually] orthogonal, so that even if P(α) behaved like a true probability density [function], it would not describe probabilities of mutually exclusive states."
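One standard operational witness of nonclassicality (not mentioned in the text above, but widely used) is the Mandel Q parameter: sub-Poissonian photon statistics (Q &lt; 0) are impossible for any state whose P(α) is a valid probability density. A sketch:

```python
import numpy as np
from math import exp, factorial

def mandel_q(probs):
    """Mandel Q = (Var(n) - <n>) / <n> for a photon-number distribution."""
    n = np.arange(len(probs))
    mean = float((n * probs).sum())
    var = float((n**2 * probs).sum()) - mean**2
    return (var - mean) / mean

nbar = 4.0   # hypothetical mean photon number
# Coherent state: Poissonian photon statistics, the classical boundary (Q = 0).
coherent = np.array([exp(-nbar) * nbar**k / factorial(k) for k in range(60)])

# Single-photon Fock state: maximally sub-Poissonian (Q = -1), hence nonclassical.
fock1 = np.zeros(60)
fock1[1] = 1.0

print(mandel_q(coherent))   # ~0
print(mandel_q(fock1))      # -1
```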
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Citation bibliography.
<templatestyles src="Refbegin/styles.css" />
https://en.wikipedia.org/wiki?curid=5905830
Carbon dioxide
Chemical compound with formula CO₂
Carbon dioxide is a chemical compound with the chemical formula CO2. It is made up of molecules that each have one carbon atom covalently double bonded to two oxygen atoms. It is found in the gas state at room temperature, and as the source of available carbon in the carbon cycle, atmospheric CO2 is the primary carbon source for life on Earth. In the air, carbon dioxide is transparent to visible light but absorbs infrared radiation, acting as a greenhouse gas. Carbon dioxide is soluble in water and is found in groundwater, lakes, ice caps, and seawater. When carbon dioxide dissolves in water, it forms carbonate and mainly bicarbonate (HCO3−), which causes ocean acidification as atmospheric CO2 levels increase.
It is a trace gas in Earth's atmosphere, at 421 parts per million (ppm), or about 0.04%, as of May 2022, having risen from pre-industrial levels of 280 ppm, or about 0.028%. Burning fossil fuels is the primary cause of these increased CO2 concentrations and also the primary cause of climate change.
Its concentration in Earth's pre-industrial atmosphere since late in the Precambrian was regulated by organisms and geological phenomena. Plants, algae and cyanobacteria use energy from sunlight to synthesize carbohydrates from carbon dioxide and water in a process called photosynthesis, which produces oxygen as a waste product. In turn, oxygen is consumed and CO2 is released as waste by all aerobic organisms when they metabolize organic compounds to produce energy by respiration. CO2 is released from organic materials when they decay or combust, such as in forest fires.
Carbon dioxide is 53% more dense than dry air, but is long lived and thoroughly mixes in the atmosphere. About half of excess CO2 emissions to the atmosphere are absorbed by land and ocean carbon sinks. These sinks can become saturated and are volatile, as decay and wildfires result in the CO2 being released back into the atmosphere. CO2 is eventually sequestered (stored for the long term) in rocks and organic deposits like coal, petroleum and natural gas. Sequestered CO2 is released into the atmosphere through burning fossil fuels or naturally by volcanoes, hot springs, geysers, and when carbonate rocks dissolve in water or react with acids.
CO2 is a versatile industrial material, used, for example, as an inert gas in welding and fire extinguishers, as a pressurizing gas in air guns and oil recovery, and as a supercritical fluid solvent in decaffeination and supercritical drying. It is a byproduct of fermentation of sugars in bread, beer and wine making, and is added to carbonated beverages like seltzer and beer for effervescence. It has a sharp and acidic odor and generates the taste of soda water in the mouth, but at normally encountered concentrations it is odorless.
Chemical and physical properties.
Carbon dioxide cannot be liquefied at atmospheric pressure. Low-temperature carbon dioxide is commercially used in its solid form, commonly known as "dry ice". The solid-to-gas phase transition, which occurs at 194.7 K (−78.5 °C) at atmospheric pressure, is called sublimation.
Structure, bonding and molecular vibrations.
The symmetry of a carbon dioxide molecule is linear and centrosymmetric at its equilibrium geometry. The length of the carbon–oxygen bond in carbon dioxide is 116.3 pm, noticeably shorter than the roughly 140 pm length of a typical single C–O bond, and shorter than most other C–O multiply bonded functional groups such as carbonyls. Since it is centrosymmetric, the molecule has no electric dipole moment.
As a linear triatomic molecule, CO2 has four vibrational modes as shown in the diagram. In the symmetric and the antisymmetric stretching modes, the atoms move along the axis of the molecule. There are two bending modes, which are degenerate, meaning that they have the same frequency and same energy, because of the symmetry of the molecule. When a molecule touches a surface or touches another molecule, the two bending modes can differ in frequency because the interaction is different for the two modes. Some of the vibrational modes are observed in the infrared (IR) spectrum: the antisymmetric stretching mode at wavenumber 2349 cm−1 (wavelength 4.25 μm) and the degenerate pair of bending modes at 667 cm−1 (wavelength 15 μm). The symmetric stretching mode does not create an electric dipole so is not observed in IR spectroscopy, but it is detected in Raman spectroscopy at 1388 cm−1 (wavelength 7.2 μm).
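The wavelengths quoted above follow directly from the wavenumbers, since λ in μm equals 10⁴ divided by the wavenumber in cm⁻¹; a quick check:

```python
# lambda (um) = 1e4 / wavenumber (cm^-1)
modes = {"antisymmetric stretch (IR)": 2349,
         "bending, degenerate pair (IR)": 667,
         "symmetric stretch (Raman)": 1388}
for name, wn in modes.items():
    print(f"{name}: {wn} cm^-1 -> {1e4 / wn:.2f} um")
```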
In the gas phase, carbon dioxide molecules undergo significant vibrational motions and do not keep a fixed structure. However, in a Coulomb explosion imaging experiment, an instantaneous image of the molecular structure can be deduced. Such an experiment has been performed for carbon dioxide. The result of this experiment, and the conclusion of theoretical calculations based on an ab initio potential energy surface of the molecule, is that none of the molecules in the gas phase are ever exactly linear. This counter-intuitive result is trivially due to the fact that the nuclear motion volume element vanishes for linear geometries. This is so for all molecules except diatomic molecules.
In aqueous solution.
Carbon dioxide is soluble in water, in which it reversibly forms H2CO3 (carbonic acid), which is a weak acid, because its ionization in water is incomplete.
The hydration equilibrium constant of carbonic acid is, at 25 °C:
Kh = [H2CO3] / [CO2(aq)] ≈ 1.7 × 10−3
Hence, the majority of the carbon dioxide is not converted into carbonic acid, but remains as CO2 molecules, not affecting the pH.
The relative concentrations of CO2, H2CO3, and the deprotonated forms HCO3− (bicarbonate) and CO32− (carbonate) depend on the pH. As shown in a Bjerrum plot, in neutral or slightly alkaline water (pH > 6.5), the bicarbonate form predominates (>50%) becoming the most prevalent (>95%) at the pH of seawater. In very alkaline water (pH > 10.4), the predominant (>50%) form is carbonate. The oceans, being mildly alkaline with typical pH = 8.2–8.5, contain about 120 mg of bicarbonate per liter.
Being diprotic, carbonic acid has two acid dissociation constants, the first one for the dissociation into the bicarbonate (also called hydrogen carbonate) ion (HCO3−):
"K"a1 = 2.5 × 10−4 mol/L; p"K"a1 = 3.6 at 25 °C.
This is the "true" first acid dissociation constant, defined as
"K"a1 = [H+][HCO3−] / [H2CO3]
where the denominator includes only covalently bound and does not include hydrated CO2(aq). The much smaller and often-quoted value near 4.16 × 10−7 (or pKa1 = 6.38) is an "apparent" value calculated on the (incorrect) assumption that all dissolved CO2 is present as carbonic acid, so that
"K"a1(apparent) = [H+][HCO3−] / ([H2CO3] + [CO2(aq)])
Since most of the dissolved CO2 remains as CO2 molecules, "K"a1(apparent) has a much larger denominator and a much smaller value than the true "K"a1.
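The relation between the true and apparent constants can be checked arithmetically; the hydration ratio Kh = [H2CO3]/[CO2(aq)] ≈ 1.7 × 10−3 used below is an assumed literature value, not stated in the text:

```python
from math import log10

Ka1_true = 2.5e-4    # [H+][HCO3-]/[H2CO3], quoted in the text
Kh = 1.7e-3          # [H2CO3]/[CO2(aq)] at 25 C -- assumed literature value

# The apparent constant counts all dissolved CO2 in its denominator:
# Ka1(app) = [H+][HCO3-] / ([H2CO3] + [CO2(aq)]) = Ka1_true * Kh / (1 + Kh)
Ka1_app = Ka1_true * Kh / (1 + Kh)
print(Ka1_app, -log10(Ka1_app))   # close to the quoted 4.16e-7 and pKa1 = 6.38
```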
The bicarbonate ion is an amphoteric species that can act as an acid or as a base, depending on pH of the solution. At high pH, it dissociates significantly into the carbonate ion (CO32−):
"K"a2 = 4.69 × 10−11 mol/L; p"K"a2 = 10.329
In organisms, carbonic acid production is catalysed by the enzyme known as carbonic anhydrase.
In addition to altering its acidity, the presence of carbon dioxide in water also affects its electrical properties. When carbon dioxide dissolves in desalinated water, the electrical conductivity increases significantly from below 1 μS/cm to nearly 30 μS/cm. When heated, the water begins to gradually lose the conductivity induced by the presence of dissolved CO2, especially noticeable as temperatures exceed 30 °C.
The temperature dependence of the electrical conductivity of fully deionized water without CO2 saturation is comparably low in relation to these data.
Chemical reactions.
CO2 is a potent electrophile having an electrophilic reactivity that is comparable to benzaldehyde or strongly electrophilic α,β-unsaturated carbonyl compounds. However, unlike electrophiles of similar reactivity, the reactions of nucleophiles with CO2 are thermodynamically less favored and are often found to be highly reversible. The reversible reaction of carbon dioxide with amines to make carbamates is used in CO2 scrubbers and has been suggested as a possible starting point for carbon capture and storage by amine gas treating.
Only very strong nucleophiles, like the carbanions provided by Grignard reagents and organolithium compounds, react with CO2 to give carboxylates:
RM + CO2 → RCO2M
where M = Li or MgBr and R = alkyl or aryl.
In metal carbon dioxide complexes, CO2 serves as a ligand, which can facilitate the conversion of CO2 to other chemicals.
The reduction of CO2 to CO is ordinarily a difficult and slow reaction:
CO2 + 2 H+ + 2 e− → CO + H2O
The redox potential for this reaction near pH 7 is about −0.53 V "versus" the standard hydrogen electrode. The nickel-containing enzyme carbon monoxide dehydrogenase catalyses this process.
Photoautotrophs (i.e. plants and cyanobacteria) use the energy contained in sunlight to photosynthesize simple sugars from CO2 absorbed from the air and water:
n CO2 + n H2O → (CH2O)n + n O2
Physical properties.
Carbon dioxide is colorless. At low concentrations, the gas is odorless; however, at sufficiently high concentrations, it has a sharp, acidic odor. At standard temperature and pressure, the density of carbon dioxide is around 1.98 kg/m3, about 1.53 times that of air.
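The quoted density and the ratio to air follow, to a good approximation, from the ideal-gas law ρ = pM/(RT); a quick estimate at 0 °C and 1 atm:

```python
# Ideal-gas estimate rho = p*M/(R*T) at 0 degC and 1 atm.
R = 8.314462618                      # J/(mol K)
p, T = 101325.0, 273.15              # Pa, K
M_co2, M_air = 0.04401, 0.028965     # kg/mol (mean molar mass of dry air)

rho_co2 = p * M_co2 / (R * T)
rho_air = p * M_air / (R * T)
print(rho_co2)            # ~1.96 kg/m^3 (the real, slightly non-ideal gas is ~1.98)
print(rho_co2 / rho_air)  # ~1.52, near the quoted factor of 1.53
```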
Carbon dioxide has no liquid state at pressures below 0.51795(10) MPa (5.11177(99) atm). At a pressure of 1 atm (0.101325 MPa), the gas deposits directly to a solid at temperatures below 194.6855(30) K (−78.4645(30) °C) and the solid sublimes directly to a gas above this temperature. In its solid state, carbon dioxide is commonly called dry ice.
Liquid carbon dioxide forms only at pressures above 0.51795(10) MPa (5.11177(99) atm); the triple point of carbon dioxide is 216.592(3) K (−56.558(3) °C) at 0.51795(10) MPa (5.11177(99) atm) (see phase diagram). The critical point is 304.128(15) K (30.978(15) °C) at 7.3773(30) MPa (72.808(30) atm). Another form of solid carbon dioxide observed at high pressure is an amorphous glass-like solid. This form of glass, called "carbonia", is produced by supercooling heated CO2 at extreme pressures (40–48 GPa, or about 400,000 atmospheres) in a diamond anvil. This discovery confirmed the theory that carbon dioxide could exist in a glass state similar to other members of its elemental family, like silicon dioxide (silica glass) and germanium dioxide. Unlike silica and germania glasses, however, carbonia glass is not stable at normal pressures and reverts to gas when pressure is released.
At temperatures and pressures above the critical point, carbon dioxide behaves as a supercritical fluid known as supercritical carbon dioxide.
Table of thermal and physical properties of saturated liquid carbon dioxide:
Table of thermal and physical properties of carbon dioxide (CO2) at atmospheric pressure:
Biological role.
Carbon dioxide is an end product of cellular respiration in organisms that obtain energy by breaking down sugars, fats and amino acids with oxygen as part of their metabolism. This includes all plants, algae and animals, as well as aerobic fungi and bacteria. In vertebrates, the carbon dioxide travels in the blood from the body's tissues to the skin (e.g., amphibians) or the gills (e.g., fish), from where it dissolves in the water, or to the lungs from where it is exhaled. During active photosynthesis, plants can absorb more carbon dioxide from the atmosphere than they release in respiration.
Photosynthesis and carbon fixation.
Carbon fixation is a biochemical process by which atmospheric carbon dioxide is incorporated by plants, algae and cyanobacteria into energy-rich organic molecules such as glucose, thus creating their own food by photosynthesis. Photosynthesis uses carbon dioxide and water to produce sugars from which other organic compounds can be constructed, and oxygen is produced as a by-product.
Ribulose-1,5-bisphosphate carboxylase oxygenase, commonly abbreviated to RuBisCO, is the enzyme involved in the first major step of carbon fixation, the production of two molecules of 3-phosphoglycerate from CO2 and ribulose bisphosphate, as shown in the diagram at left.
RuBisCO is thought to be the single most abundant protein on Earth.
Phototrophs use the products of their photosynthesis as internal food sources and as raw material for the biosynthesis of more complex organic molecules, such as polysaccharides, nucleic acids, and proteins. These are used for their own growth, and also as the basis of the food chains and webs that feed other organisms, including animals such as ourselves. Some important phototrophs, the coccolithophores, synthesise hard calcium carbonate scales. A globally significant species of coccolithophore is "Emiliania huxleyi" whose calcite scales have formed the basis of many sedimentary rocks such as limestone, where what was previously atmospheric carbon can remain fixed for geological timescales.
Plants can grow as much as 50% faster in concentrations of 1,000 ppm CO2 when compared with ambient conditions, though this assumes no change in climate and no limitation on other nutrients. Elevated CO2 levels cause increased growth reflected in the harvestable yield of crops, with wheat, rice and soybean all showing increases in yield of 12–14% under elevated CO2 in FACE experiments.
Increased atmospheric CO2 concentrations result in fewer stomata developing on plants which leads to reduced water usage and increased water-use efficiency. Studies using FACE have shown that CO2 enrichment leads to decreased concentrations of micronutrients in crop plants. This may have knock-on effects on other parts of ecosystems as herbivores will need to eat more food to gain the same amount of protein.
The concentration of secondary metabolites such as phenylpropanoids and flavonoids can also be altered in plants exposed to high concentrations of CO2.
Plants also emit CO2 during respiration, and so the majority of plants and algae, which use C3 photosynthesis, are only net absorbers during the day. Though a growing forest will absorb many tons of CO2 each year, a mature forest will produce as much CO2 from respiration and decomposition of dead specimens (e.g., fallen branches) as is used in photosynthesis in growing plants. Contrary to the long-standing view that they are carbon neutral, mature forests can continue to accumulate carbon and remain valuable carbon sinks, helping to maintain the carbon balance of Earth's atmosphere. Additionally, and crucially to life on earth, photosynthesis by phytoplankton consumes dissolved CO2 in the upper ocean and thereby promotes the absorption of CO2 from the atmosphere.
Toxicity.
Carbon dioxide content in fresh air (averaged between sea-level and 10 kPa level, i.e., about altitude) varies between 0.036% (360 ppm) and 0.041% (412 ppm), depending on the location.
CO2 is an asphyxiant gas and not classified as toxic or harmful in accordance with the Globally Harmonized System of Classification and Labelling of Chemicals standards of the United Nations Economic Commission for Europe, using the OECD Guidelines for the Testing of Chemicals. In concentrations up to 1% (10,000 ppm), it will make some people feel drowsy and give the lungs a stuffy feeling. Concentrations of 7% to 10% (70,000 to 100,000 ppm) may cause suffocation, even in the presence of sufficient oxygen, manifesting as dizziness, headache, visual and hearing dysfunction, and unconsciousness within a few minutes to an hour. The physiological effects of acute carbon dioxide exposure are grouped together under the term hypercapnia, a subset of asphyxiation.
Because it is heavier than air, in locations where the gas seeps from the ground (due to sub-surface volcanic or geothermal activity) in relatively high concentrations, without the dispersing effects of wind, it can collect in sheltered/pocketed locations below average ground level, causing animals located therein to be suffocated. Carrion feeders attracted to the carcasses are then also killed. Children have been killed in the same way near the city of Goma by CO2 emissions from the nearby volcano Mount Nyiragongo. The Swahili term for this phenomenon is .
Adaptation to increased concentrations of CO2 occurs in humans, including modified breathing and kidney bicarbonate production, in order to balance the effects of blood acidification (acidosis). Several studies suggested that 2.0 percent inspired concentrations could be used for closed air spaces (e.g. a submarine) since the adaptation is physiological and reversible, as deterioration in performance or in normal physical activity does not happen at this level of exposure for five days. Yet, other studies show a decrease in cognitive function even at much lower levels. Also, with ongoing respiratory acidosis, adaptation or compensatory mechanisms will be unable to reverse the condition.
Below 1%.
There are few studies of the health effects of long-term continuous CO2 exposure on humans and animals at levels below 1%. Occupational CO2 exposure limits have been set in the United States at 0.5% (5,000 ppm) for an eight-hour period. At this CO2 concentration, International Space Station crew experienced headaches, lethargy, mental slowness, emotional irritation, and sleep disruption. Studies in animals at 0.5% CO2 have demonstrated kidney calcification and bone loss after eight weeks of exposure. A study of humans exposed in 2.5 hour sessions demonstrated significant negative effects on cognitive abilities at concentrations as low as 0.1% (1,000 ppm) CO2, likely due to CO2-induced increases in cerebral blood flow. Another study observed a decline in basic activity level and information usage at 1,000 ppm, when compared to 500 ppm.
However, a review of the literature found that a reliable subset of studies on carbon dioxide-induced cognitive impairment showed only a small effect on high-level decision making (for concentrations below 5,000 ppm). Most of the studies were confounded by inadequate study designs, environmental comfort, uncertainties in exposure doses and differing cognitive assessments used. Similarly, a study on the effects of the concentration of CO2 in motorcycle helmets has been criticized for having dubious methodology in not noting the self-reports of motorcycle riders and taking measurements using mannequins. Further, when normal motorcycle conditions were achieved (such as highway or city speeds) or the visor was raised, the concentration of CO2 declined to safe levels (0.2%).
Ventilation.
Poor ventilation is one of the main causes of excessive CO2 concentrations in closed spaces, leading to poor indoor air quality. The carbon dioxide differential above outdoor concentrations at steady-state conditions (when the occupancy and ventilation system operation are sufficiently long that the CO2 concentration has stabilized) is sometimes used to estimate ventilation rates per person. Higher CO2 concentrations are associated with degraded occupant health, comfort and performance. ASHRAE Standard 62.1–2007 ventilation rates may result in indoor concentrations up to 2,100 ppm above ambient outdoor conditions. Thus if the outdoor concentration is 400 ppm, indoor concentrations may reach 2,500 ppm with ventilation rates that meet this industry consensus standard. Concentrations in poorly ventilated spaces can be even higher (in the range of 3,000 to 4,000 ppm).
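The estimation method described above follows from a simple steady-state mass balance: the indoor-outdoor CO2 differential equals the per-person generation rate divided by the per-person outdoor-air supply rate. A minimal sketch (the 0.0052 L/s generation rate for a sedentary adult is an assumed illustrative value, not a figure from this article):

```python
def ventilation_rate_per_person(indoor_ppm, outdoor_ppm, gen_rate_l_s=0.0052):
    """Estimate the outdoor-air ventilation rate (L/s per person) from the
    steady-state CO2 differential, using Q = G / (C_in - C_out)."""
    delta = (indoor_ppm - outdoor_ppm) * 1e-6  # ppm -> volume fraction
    return gen_rate_l_s / delta

# The article's example: 2,500 ppm indoors against 400 ppm outdoors
print(ventilation_rate_per_person(2500, 400))  # ~2.5 L/s per person
```

A 2,100 ppm differential thus corresponds to roughly 2.5 L/s of outdoor air per occupant under these assumptions; larger supply rates shrink the differential proportionally.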
Miners, who are particularly vulnerable to gas exposure due to insufficient ventilation, referred to mixtures of carbon dioxide and nitrogen as "blackdamp", "choke damp" or "stythe". Before more effective technologies were developed, miners would frequently monitor for dangerous levels of blackdamp and other gases in mine shafts by bringing a caged canary with them as they worked. The canary is more sensitive to asphyxiant gases than humans, and as it became unconscious would stop singing and fall off its perch. The Davy lamp could also detect high levels of blackdamp (which sinks, and collects near the floor) by burning less brightly, while methane, another suffocating gas and explosion risk, would make the lamp burn more brightly.
In February 2020, three people died from suffocation at a party in Moscow when dry ice (frozen CO2) was added to a swimming pool to cool it down. A similar accident occurred in 2018 when a woman died from CO2 fumes emanating from the large amount of dry ice she was transporting in her car.
Indoor air.
Humans spend more and more time in confined atmospheres (around 80–90% of the time in a building or vehicle). According to the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) and various stakeholders in France, the CO2 level in the indoor air of buildings (linked to human or animal occupancy and the presence of combustion installations), weighted by air renewal, is "usually between about 350 and 2,500 ppm".
In homes, schools, nurseries and offices, there are no systematic relationships between the levels of CO2 and other pollutants, and indoor CO2 is statistically not a good predictor of pollutants linked to outdoor road (or air, etc.) traffic. CO2 is the parameter that changes the fastest (with hygrometry and oxygen levels when humans or animals are gathered in a closed or poorly ventilated room). In poor countries, many open hearths are sources of CO2 and CO emitted directly into the living environment.
Outdoor areas with elevated concentrations.
Local concentrations of carbon dioxide can reach high values near strong sources, especially those that are isolated by surrounding terrain. At the Bossoleto hot spring near Rapolano Terme in Tuscany, Italy, situated in a bowl-shaped depression about in diameter, concentrations of CO2 rise to above 75% overnight, sufficient to kill insects and small animals. After sunrise the gas is dispersed by convection. High concentrations of CO2 produced by disturbance of deep lake water saturated with CO2 are thought to have caused 37 fatalities at Lake Monoun, Cameroon in 1984 and 1700 casualties at Lake Nyos, Cameroon in 1986.
Human physiology.
Content.
The body produces approximately of carbon dioxide per day per person, containing of carbon. In humans, this carbon dioxide is carried through the venous system and is breathed out through the lungs, resulting in lower concentrations in the arteries. The carbon dioxide content of the blood is often given as the partial pressure, which is the pressure which carbon dioxide would have had if it alone occupied the volume. In humans, the blood carbon dioxide contents are shown in the adjacent table.
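The partial-pressure convention used here can be illustrated with a short calculation via Dalton's law: the partial pressure of a component is its mole fraction times the total pressure (the 420 ppm atmospheric mole fraction below is an illustrative value):

```python
def partial_pressure(total_pressure, mole_fraction):
    """Dalton's law: the partial pressure a gas would exert if it alone
    occupied the volume is its mole fraction times the total pressure."""
    return total_pressure * mole_fraction

# Atmospheric CO2 at ~420 ppm and 760 mmHg total pressure (illustrative)
print(partial_pressure(760, 420e-6))  # ~0.32 mmHg
```

The same rule applied to alveolar or blood gas mixtures yields the familiar mmHg figures used clinically.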
Transport in the blood.
CO2 is carried in blood in three different ways. Exact percentages vary between arterial and venous blood.
Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. This is known as the Haldane Effect, and is important in the transport of carbon dioxide from the tissues to the lungs. Conversely, a rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect.
Regulation of respiration.
Carbon dioxide is one of the mediators of local autoregulation of blood supply. If its concentration is high, the capillaries expand to allow a greater blood flow to that tissue.
Bicarbonate ions are crucial for regulating blood pH. A person's breathing rate influences the level of CO2 in their blood. Breathing that is too slow or shallow causes respiratory acidosis, while breathing that is too rapid leads to hyperventilation, which can cause respiratory alkalosis.
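The link between breathing, CO2 and blood pH described above is commonly summarized by the Henderson–Hasselbalch equation for the bicarbonate buffer. A sketch using typical textbook constants (pKa of 6.1 and a CO2 solubility of 0.03 mmol/L per mmHg are assumed standard values, not figures from this article):

```python
import math

def blood_ph(bicarbonate_mmol_l, pco2_mmhg):
    """Henderson-Hasselbalch equation for the bicarbonate buffer:
    pH = 6.1 + log10([HCO3-] / (0.03 * pCO2))."""
    return 6.1 + math.log10(bicarbonate_mmol_l / (0.03 * pco2_mmhg))

print(blood_ph(24, 40))  # typical values -> ~7.40
print(blood_ph(24, 60))  # retained CO2 (slow breathing): pH falls, acidosis
print(blood_ph(24, 25))  # blown-off CO2 (hyperventilation): pH rises, alkalosis
```

Note how raising pCO2 alone lowers the pH and lowering it raises the pH, matching the acidosis/alkalosis behaviour described in the text.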
Although the body requires oxygen for metabolism, low oxygen levels normally do not stimulate breathing. Rather, breathing is stimulated by higher carbon dioxide levels. As a result, breathing low-pressure air or a gas mixture with no oxygen at all (such as pure nitrogen) can lead to loss of consciousness without ever experiencing air hunger. This is especially perilous for high-altitude fighter pilots. It is also why flight attendants instruct passengers, in case of loss of cabin pressure, to apply the oxygen mask to themselves first before helping others; otherwise, one risks losing consciousness.
The respiratory centers try to maintain an arterial CO2 pressure of 40 mmHg. With intentional hyperventilation, the CO2 content of arterial blood may be lowered to 10–20 mmHg (the oxygen content of the blood is little affected), and the respiratory drive is diminished. This is why one can hold one's breath longer after hyperventilating than without hyperventilating. This carries the risk that unconsciousness may result before the need to breathe becomes overwhelming, which is why hyperventilation is particularly dangerous before free diving.
Concentrations and role in the environment.
Oceans.
Ocean acidification.
Carbon dioxide dissolves in the ocean to form carbonic acid, bicarbonate, and carbonate. There is about fifty times as much carbon dioxide dissolved in the oceans as exists in the atmosphere. The oceans act as an enormous carbon sink, and have taken up about a third of CO2 emitted by human activity.
Hydrothermal vents.
Carbon dioxide is also introduced into the oceans through hydrothermal vents. The "Champagne" hydrothermal vent, found at the Northwest Eifuku volcano in the Mariana Trench, produces almost pure liquid carbon dioxide, one of only two known sites in the world as of 2004, the other being in the Okinawa Trough. The finding of a submarine lake of liquid carbon dioxide in the Okinawa Trough was reported in 2006.
Production.
Biological processes.
Carbon dioxide is a by-product of the fermentation of sugar in the brewing of beer, whisky and other alcoholic beverages and in the production of bioethanol. Yeast metabolizes sugar to produce CO2 and ethanol, also known as alcohol, as follows:
All aerobic organisms produce CO2 when they oxidize carbohydrates, fatty acids, and proteins. The large number of reactions involved are exceedingly complex and not described easily. Refer to cellular respiration, anaerobic respiration and photosynthesis. The equation for the respiration of glucose and other monosaccharides is:
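The glucose respiration equation referred to above, C6H12O6 + 6 O2 → 6 CO2 + 6 H2O, fixes the mass of CO2 released per gram of sugar oxidized. A quick stoichiometric check from standard atomic masses:

```python
# Standard atomic masses (g/mol)
C, H, O = 12.011, 1.008, 15.999

m_glucose = 6 * C + 12 * H + 6 * O  # ~180.16 g/mol
m_co2 = C + 2 * O                   # ~44.01 g/mol

# C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O: six moles of CO2 per mole of glucose
co2_per_gram_glucose = 6 * m_co2 / m_glucose
print(f"{co2_per_gram_glucose:.3f} g CO2 per g glucose")  # ~1.466
```

The same balanced equation run in reverse gives the CO2 demand of photosynthesis per gram of sugar produced.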
Anaerobic organisms decompose organic material, producing methane and carbon dioxide together with traces of other compounds. Regardless of the type of organic material, the production of gases follows a well-defined kinetic pattern. Carbon dioxide comprises about 40–45% of the gas that emanates from decomposition in landfills (termed "landfill gas"). Most of the remaining 50–55% is methane.
Industrial processes.
Carbon dioxide can be obtained by distillation from air, but the method is inefficient. Industrially, carbon dioxide is predominantly an unrecovered waste product, produced by several methods which may be practiced at various scales.
Combustion.
The combustion of all carbon-based fuels, such as methane (natural gas), petroleum distillates (gasoline, diesel, kerosene, propane), coal, wood and generic organic matter produces carbon dioxide and, except in the case of pure carbon, water. As an example, the chemical reaction between methane and oxygen:
Iron is reduced from its oxides with coke in a blast furnace, producing pig iron and carbon dioxide:
By-product from hydrogen production.
Carbon dioxide is a byproduct of the industrial production of hydrogen by steam reforming and the water gas shift reaction in ammonia production. These processes begin with the reaction of water and natural gas (mainly methane). This is a major source of food-grade carbon dioxide for use in carbonation of beer and soft drinks, and is also used for stunning animals such as poultry. In the summer of 2018 a shortage of carbon dioxide for these purposes arose in Europe due to the temporary shut-down of several ammonia plants for maintenance.
Thermal decomposition of limestone.
It is produced by thermal decomposition of limestone, by heating (calcining) at about , in the manufacture of quicklime (calcium oxide, CaO), a compound that has many industrial uses:
Acids liberate CO2 from most metal carbonates. Consequently, it may be obtained directly from natural carbon dioxide springs, where it is produced by the action of acidified water on limestone or dolomite. The reaction between hydrochloric acid and calcium carbonate (limestone or chalk) is shown below:
The carbonic acid (H2CO3) then decomposes to water and CO2:
Such reactions are accompanied by foaming or bubbling, or both, as the gas is released. They have widespread uses in industry because they can be used to neutralize waste acid streams.
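Both the calcination and the acid–carbonate reactions above release one mole of CO2 per mole of carbonate, so the mass fractions follow directly from standard atomic masses (a sketch; the rounded molar masses are standard reference values):

```python
# Standard atomic masses (g/mol)
Ca, C, O = 40.078, 12.011, 15.999

m_caco3 = Ca + C + 3 * O  # ~100.09 g/mol, calcium carbonate
m_cao = Ca + O            # ~56.08 g/mol, quicklime
m_co2 = C + 2 * O         # ~44.01 g/mol

# CaCO3 -> CaO + CO2: fraction of the limestone mass driven off as gas
print(f"{m_co2 / m_caco3:.3f} of limestone mass leaves as CO2")  # ~0.440
# CO2 released per unit mass of quicklime produced
print(f"{m_co2 / m_cao:.3f} t CO2 per t CaO")                    # ~0.785
```

This is why lime kilns are such concentrated CO2 sources: nearly 44% of the feedstock mass leaves as gas.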
Commercial uses.
Carbon dioxide is used by the food industry, the oil industry, and the chemical industry.
The compound has varied commercial uses, but one of its greatest uses as a chemical is in the production of carbonated beverages; it provides the sparkle in soda water, beer and sparkling wine.
Precursor to chemicals.
In the chemical industry, carbon dioxide is mainly consumed as an ingredient in the production of urea, with a smaller fraction being used to produce methanol and a range of other products. Some carboxylic acid derivatives such as sodium salicylate are prepared using CO2 by the Kolbe–Schmitt reaction.
In addition to conventional processes using CO2 for chemical production, electrochemical methods are also being explored at a research level. In particular, the use of renewable energy for production of fuels from CO2 (such as methanol) is attractive as this could result in fuels that could be easily transported and used within conventional combustion technologies but have no net CO2 emissions.
Agriculture.
Plants require carbon dioxide to conduct photosynthesis. The atmospheres of greenhouses may (and in large greenhouses must) be enriched with additional CO2 to sustain and increase the rate of plant growth. At very high concentrations (100 times atmospheric concentration, or greater), carbon dioxide can be toxic to animal life, so raising the concentration to 10,000 ppm (1%) or higher for several hours will eliminate pests such as whiteflies and spider mites in a greenhouse. Some plants respond more favorably to rising carbon dioxide concentrations than others, which can lead to vegetation regime shifts such as woody plant encroachment.
Foods.
Carbon dioxide is a food additive used as a propellant and acidity regulator in the food industry. It is approved for usage in the EU (listed as E number E290), US, Australia and New Zealand (listed by its INS number 290).
A candy called Pop Rocks is pressurized with carbon dioxide gas at about . When placed in the mouth, it dissolves (just like other hard candy) and releases the gas bubbles with an audible pop.
Leavening agents cause dough to rise by producing carbon dioxide. Baker's yeast produces carbon dioxide by fermentation of sugars within the dough, while chemical leaveners such as baking powder and baking soda release carbon dioxide when heated or if exposed to acids.
Beverages.
Carbon dioxide is used to produce carbonated soft drinks and soda water. Traditionally, the carbonation of beer and sparkling wine came about through natural fermentation, but many manufacturers carbonate these drinks with carbon dioxide recovered from the fermentation process. In the case of bottled and kegged beer, the most common method used is carbonation with recycled carbon dioxide. With the exception of British real ale, draught beer is usually transferred from kegs in a cold room or cellar to dispensing taps on the bar using pressurized carbon dioxide, sometimes mixed with nitrogen.
The taste of soda water (and related taste sensations in other carbonated beverages) is an effect of the dissolved carbon dioxide rather than the bursting bubbles of the gas. Carbonic anhydrase 4 converts carbon dioxide to carbonic acid leading to a sour taste, and also the dissolved carbon dioxide induces a somatosensory response.
Winemaking.
Carbon dioxide in the form of dry ice is often used during the cold soak phase in winemaking to cool clusters of grapes quickly after picking to help prevent spontaneous fermentation by wild yeast. The main advantage of using dry ice over water ice is that it cools the grapes without adding any additional water that might decrease the sugar concentration in the grape must, and thus the alcohol concentration in the finished wine. Carbon dioxide is also used to create a hypoxic environment for carbonic maceration, the process used to produce Beaujolais wine.
Carbon dioxide is sometimes used to top up wine bottles or other storage vessels such as barrels to prevent oxidation, though it has the problem that it can dissolve into the wine, making a previously still wine slightly fizzy. For this reason, other gases such as nitrogen or argon are preferred for this process by professional wine makers.
Stunning animals.
Carbon dioxide is often used to "stun" animals before slaughter. "Stunning" may be a misnomer, as the animals are not knocked out immediately and may suffer distress.
Inert gas.
Carbon dioxide is one of the most commonly used compressed gases for pneumatic (pressurized gas) systems in portable pressure tools. Carbon dioxide is also used as an atmosphere for welding, although in the welding arc it reacts to oxidize most metals. Use in the automotive industry is common despite significant evidence that welds made in carbon dioxide are more brittle than those made in more inert atmospheres. When used for MIG welding, CO2 use is sometimes referred to as MAG welding, for Metal Active Gas, as CO2 can react at these high temperatures. It tends to produce a hotter puddle than truly inert atmospheres, improving the flow characteristics, although this may be due to atmospheric reactions occurring at the puddle site. This is usually the opposite of the desired effect when welding, as it tends to embrittle the site, but may not be a problem for general mild steel welding, where ultimate ductility is not a major concern.
Carbon dioxide is used in many consumer products that require pressurized gas because it is inexpensive and nonflammable, and because it undergoes a phase transition from gas to liquid at room temperature at an attainable pressure of approximately , allowing far more carbon dioxide to fit in a given container than otherwise would. Life jackets often contain canisters of pressurized carbon dioxide for quick inflation. Aluminium capsules of CO2 are also sold as supplies of compressed gas for air guns, paintball markers/guns, inflating bicycle tires, and for making carbonated water. High concentrations of carbon dioxide can also be used to kill pests. Liquid carbon dioxide is used in supercritical drying of some food products and technological materials, in the preparation of specimens for scanning electron microscopy and in the decaffeination of coffee beans.
Fire extinguisher.
Carbon dioxide can be used to extinguish flames by flooding the environment around the flame with the gas. It does not itself react to extinguish the flame, but starves the flame of oxygen by displacing it. Some fire extinguishers, especially those designed for electrical fires, contain liquid carbon dioxide under pressure. Carbon dioxide extinguishers work well on small flammable liquid and electrical fires, but not on ordinary combustible fires, because they do not cool the burning substances significantly, and when the carbon dioxide disperses, the material can reignite upon exposure to atmospheric oxygen. They are mainly used in server rooms.
Carbon dioxide has also been widely used as an extinguishing agent in fixed fire-protection systems for local application of specific hazards and total flooding of a protected space. International Maritime Organization standards recognize carbon dioxide systems for fire protection of ship holds and engine rooms. Carbon dioxide-based fire-protection systems have been linked to several deaths, because it can cause suffocation in sufficiently high concentrations. A review of CO2 systems identified 51 incidents between 1975 and the date of the report (2000), causing 72 deaths and 145 injuries.
Supercritical CO2 as solvent.
Liquid carbon dioxide is a good solvent for many lipophilic organic compounds and is used to decaffeinate coffee. Carbon dioxide has attracted attention in the pharmaceutical and other chemical processing industries as a less toxic alternative to more traditional solvents such as organochlorides. It is also used by some dry cleaners for this reason. It is used in the preparation of some aerogels because of the properties of supercritical carbon dioxide.
Medical and pharmacological uses.
In medicine, up to 5% carbon dioxide (130 times atmospheric concentration) is added to oxygen for stimulation of breathing after apnea and to stabilize the O2/CO2 balance in blood.
Carbon dioxide can be mixed with up to 50% oxygen, forming an inhalable gas; this is known as Carbogen and has a variety of medical and research uses.
Another medical use is the mofette, a dry spa that uses carbon dioxide from post-volcanic discharge for therapeutic purposes.
Energy.
Supercritical CO2 is used as the working fluid in the Allam power cycle engine.
Fossil fuel recovery.
Carbon dioxide is used in enhanced oil recovery where it is injected into or adjacent to producing oil wells, usually under supercritical conditions, when it becomes miscible with the oil. This approach can increase original oil recovery by reducing residual oil saturation by 7–23% additional to primary extraction. It acts as both a pressurizing agent and, when dissolved into the underground crude oil, significantly reduces its viscosity, and changing surface chemistry enabling the oil to flow more rapidly through the reservoir to the removal well. In mature oil fields, extensive pipe networks are used to carry the carbon dioxide to the injection points.
In enhanced coal bed methane recovery, carbon dioxide would be pumped into the coal seam to displace methane, as opposed to current methods which primarily rely on the removal of water (to reduce pressure) to make the coal seam release its trapped methane.
Bio transformation into fuel.
It has been proposed that CO2 from power generation be bubbled into ponds to stimulate growth of algae that could then be converted into biodiesel fuel. A strain of the cyanobacterium "Synechococcus elongatus" has been genetically engineered to produce the fuels isobutyraldehyde and isobutanol from CO2 using photosynthesis.
Researchers have developed an electrocatalytic technique using enzymes isolated from bacteria to power the chemical reactions which convert CO2 into fuels.
Refrigerant.
Liquid and solid carbon dioxide are important refrigerants, especially in the food industry, where they are employed during the transportation and storage of ice cream and other frozen foods. Solid carbon dioxide is called "dry ice" and is used for small shipments where refrigeration equipment is not practical. Solid carbon dioxide is always below at regular atmospheric pressure, regardless of the air temperature.
Liquid carbon dioxide (industry nomenclature R744 or R-744) was used as a refrigerant prior to the use of dichlorodifluoromethane (R12, a chlorofluorocarbon (CFC) compound). CO2 might enjoy a renaissance because one of the main substitutes to CFCs, 1,1,1,2-tetrafluoroethane (R134a, a hydrofluorocarbon (HFC) compound) contributes to climate change more than CO2 does. The physical properties of CO2 are highly favorable for cooling, refrigeration, and heating purposes, as it has a high volumetric cooling capacity. Due to the need to operate at pressures of up to , CO2 systems require highly mechanically resistant reservoirs and components that have already been developed for mass production in many sectors. In automobile air conditioning, in more than 90% of all driving conditions for latitudes higher than 50°, CO2 (R744) operates more efficiently than systems using HFCs (e.g., R134a). Its environmental advantages (GWP of 1, non-ozone depleting, non-toxic, non-flammable) could make it the future working fluid to replace current HFCs in cars, supermarkets, and heat pump water heaters, among others. Coca-Cola has fielded CO2-based beverage coolers and the U.S. Army is interested in CO2 refrigeration and heating technology.
Minor uses.
Carbon dioxide is the lasing medium in a carbon-dioxide laser, which is one of the earliest types of lasers.
Carbon dioxide can be used as a means of controlling the pH of swimming pools, by continuously adding gas to the water, thus keeping the pH from rising. Among the advantages of this is the avoidance of handling (more hazardous) acids. Similarly, it is also used in maintaining reef aquaria, where it is commonly used in calcium reactors to temporarily lower the pH of water being passed over calcium carbonate, allowing the calcium carbonate to dissolve more freely into the water, where it is used by some corals to build their skeletons.
Carbon dioxide is used as the primary coolant in the British advanced gas-cooled reactor for nuclear power generation.
Carbon dioxide induction is commonly used for the euthanasia of laboratory research animals. Methods to administer CO2 include placing animals directly into a closed, prefilled chamber containing CO2, or exposure to a gradually increasing concentration of CO2. The American Veterinary Medical Association's 2020 guidelines for carbon dioxide induction state that a displacement rate of 30–70% of the chamber or cage volume per minute is optimal for the humane euthanasia of small rodents. Percentages of CO2 vary for different species, based on identified optimal percentages to minimize distress.
Carbon dioxide is also used in several related cleaning and surface-preparation techniques.
History of discovery.
Carbon dioxide was the first gas to be described as a discrete substance. In about 1640, the Flemish chemist Jan Baptist van Helmont observed that when he burned charcoal in a closed vessel, the mass of the resulting ash was much less than that of the original charcoal. His interpretation was that the rest of the charcoal had been transmuted into an invisible substance he termed a "gas" (from Greek "chaos") or "wild spirit" ("spiritus sylvestris").
The properties of carbon dioxide were further studied in the 1750s by the Scottish physician Joseph Black. He found that limestone (calcium carbonate) could be heated or treated with acids to yield a gas he called "fixed air". He observed that the fixed air was denser than air and supported neither flame nor animal life. Black also found that when bubbled through limewater (a saturated aqueous solution of calcium hydroxide), it would precipitate calcium carbonate. He used this phenomenon to illustrate that carbon dioxide is produced by animal respiration and microbial fermentation. In 1772, English chemist Joseph Priestley published a paper entitled "Impregnating Water with Fixed Air" in which he described a process of dripping sulfuric acid (or "oil of vitriol" as Priestley knew it) on chalk in order to produce carbon dioxide, and forcing the gas to dissolve by agitating a bowl of water in contact with the gas.
Carbon dioxide was first liquefied (at elevated pressures) in 1823 by Humphry Davy and Michael Faraday. The earliest description of solid carbon dioxide (dry ice) was given by the French inventor Adrien-Jean-Pierre Thilorier, who in 1835 opened a pressurized container of liquid carbon dioxide, only to find that the cooling produced by the rapid evaporation of the liquid yielded a "snow" of solid CO2.
Carbon dioxide in combination with nitrogen was known from earlier times as blackdamp, stythe or choke damp. Along with the other types of damp, it was encountered in mining operations and well sinking. Slow oxidation of coal and biological processes replaced the oxygen, creating a suffocating mixture of nitrogen and carbon dioxide.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_\\mathrm{h} = \\frac{\\ce{[H2CO3]}}{\\ce{[CO2_{(aq)}]}} = 1.70 \\times 10^{-3}"
},
{
"math_id": 1,
"text": "K_\\mathrm{a1} = \\frac{\\ce{[HCO3- ][H+]}}{\\ce{[H2CO3]}}"
},
{
"math_id": 2,
"text": "K_\\mathrm{a1}{\\rm{(apparent)}}=\\frac{\\ce{[HCO3- ][H+]}}{\\ce{[H2CO3] + [CO2_{(aq)}]}}"
},
{
"math_id": 3,
"text": " \\mathrm{CO_{2}} "
},
{
"math_id": 4,
"text": "\\mathrm{CO_{2}} "
}
] |
https://en.wikipedia.org/wiki?curid=5906
|
5906036
|
Optical equivalence theorem
|
The optical equivalence theorem in quantum optics asserts an equivalence between the expectation value of an operator in Hilbert space and the expectation value of its associated function in the phase space formulation with respect to a quasiprobability distribution. The theorem was first reported by George Sudarshan in 1963 for normally ordered operators and generalized later that decade to any ordering.
Let Ω be an ordering of the non-commutative creation and annihilation operators, and let formula_0 be an operator that is expressible as a power series in the creation and annihilation operators that satisfies the ordering Ω. Then the optical equivalence theorem is succinctly expressed as
formula_1
Here, α is understood to be the eigenvalue of the annihilation operator on a coherent state and is substituted formally into the power series expansion of g. The left-hand side of the above equation is an expectation value in the Hilbert space, whereas the right-hand side is an expectation value with respect to the quasiprobability distribution.
We may write each of these explicitly for better clarity. Let formula_2 be the density operator and formula_3 be the ordering "reciprocal" to Ω. The quasiprobability distribution associated with Ω is given, then, at least formally, by
formula_4
The above framed equation becomes
formula_5
For example, let Ω be the normal order. This means that g can be written in a power series of the following form:
formula_6
The quasiprobability distribution associated with the normal order is the Glauber–Sudarshan P representation. In these terms, we arrive at
formula_7
This theorem implies the formal equivalence between expectation values of normally ordered operators in quantum optics and the corresponding complex numbers in classical optics.
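The theorem can be checked numerically in the simplest case: for a coherent state |α⟩, the Glauber–Sudarshan P function is a delta function at α, so the phase-space average of the normally ordered number operator α*α is just |α|². A minimal sketch in Python (the truncation size N and the value of α are assumptions made for illustration):

```python
import numpy as np

N = 60  # Fock-space truncation (an assumption; ample for |alpha| ~ 1.5)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator, a|n> = sqrt(n)|n-1>
adag = a.conj().T

alpha = 1.3 + 0.7j
# Coherent-state amplitudes <n|alpha> = e^{-|alpha|^2/2} alpha^n / sqrt(n!)
psi = np.zeros(N, dtype=complex)
psi[0] = 1.0
for k in range(1, N):
    psi[k] = psi[k - 1] * alpha / np.sqrt(k)
psi *= np.exp(-abs(alpha) ** 2 / 2)

# Left side: Hilbert-space expectation of the normally ordered a^dagger a
lhs = (psi.conj() @ (adag @ a) @ psi).real
# Right side: for |alpha>, P is a delta function at alpha, so the
# phase-space average of alpha'^* alpha' is simply |alpha|^2
rhs = abs(alpha) ** 2
print(lhs, rhs)   # agree up to truncation error
```

The two sides agree to numerical precision, as the theorem requires for any normally ordered power series in the creation and annihilation operators.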
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "g_{\\Omega}(\\hat{a},\\hat{a}^{\\dagger})"
},
{
"math_id": 1,
"text": "\\langle g_{\\Omega}(\\hat{a},\\hat{a}^{\\dagger}) \\rangle = \\langle g_{\\Omega}(\\alpha,\\alpha^*) \\rangle."
},
{
"math_id": 2,
"text": "\\hat{\\rho}"
},
{
"math_id": 3,
"text": "{\\bar{\\Omega}}"
},
{
"math_id": 4,
"text": "\\hat{\\rho} = \\frac{1}{\\pi} \\int f_{\\bar{\\Omega}}(\\alpha,\\alpha^*) |\\alpha\\rangle\\langle\\alpha| \\, d^2\\alpha."
},
{
"math_id": 5,
"text": "\\operatorname{tr}( \\hat{\\rho} \\cdot g_\\Omega(\\hat{a},\\hat{a}^\\dagger)) = \\int f_{\\bar\\Omega}(\\alpha,\\alpha^*) g_\\Omega(\\alpha,\\alpha^*) \\, d^2\\alpha."
},
{
"math_id": 6,
"text": "g_N(\\hat{a}^\\dagger, \\hat{a}) = \\sum_{n,m} c_{nm} \\hat{a}^{\\dagger n} \\hat{a}^m. "
},
{
"math_id": 7,
"text": "\\operatorname{tr}( \\hat{\\rho} \\cdot g_N(\\hat{a}^\\dagger,\\hat{a})) = \\int P(\\alpha) g_N(\\alpha,\\alpha^*) \\, d^2\\alpha."
}
] |
https://en.wikipedia.org/wiki?curid=5906036
|
5906119
|
Glauber–Sudarshan P representation
|
The Glauber–Sudarshan P representation is a suggested way of writing down the phase space distribution of a quantum system in the phase space formulation of quantum mechanics. The P representation is the quasiprobability distribution in which observables are expressed in normal order. In quantum optics, this representation, formally equivalent to several other representations, is sometimes preferred over such alternative representations to describe light in optical phase space, because typical optical observables, such as the particle number operator, are naturally expressed in normal order. It is named after George Sudarshan and Roy J. Glauber, who worked on the topic in 1963.
Despite many useful applications in laser theory and coherence theory, the Sudarshan–Glauber P representation has the peculiarity that it is not always positive, and is not a bona fide probability function.
Definition.
We wish to construct a function formula_0 with the property that the density matrix formula_1 is diagonal in the basis of coherent states formula_2, i.e.,
formula_3
We also wish to construct the function such that the expectation value of a normally ordered operator satisfies the optical equivalence theorem. This implies that the density matrix should be in "anti"-normal order so that we can express the density matrix as a power series
formula_4
Inserting the resolution of the identity
formula_5
we see that
formula_6
and thus we formally assign
formula_7
More useful integral formulas for "P" are necessary for any practical calculation. One method is to define the characteristic function
formula_8
and then take the Fourier transform
formula_9
Another useful integral formula for "P", due to Mehta, is
formula_10
Note that both of these integral formulas do "not" converge in any usual sense for "typical" systems. We may also use the matrix elements of formula_1 in the Fock basis formula_11. The following formula shows that it is "always" possible to write the density matrix in this diagonal form without appealing to operator orderings, using the inversion (given here for a single mode),
formula_12
where r and θ are the amplitude and phase of α. Though this is a complete formal solution, it requires infinitely many derivatives of Dirac delta functions, far beyond the reach of any ordinary tempered distribution theory.
Discussion.
If the quantum system has a classical analog, e.g. a coherent state or thermal radiation, then P is non-negative everywhere like an ordinary probability distribution. If, however, the quantum system has no classical analog, e.g. an incoherent Fock state or entangled system, then P is negative somewhere or more singular than a Dirac delta function. (By a theorem of Schwartz, distributions that are more singular than the Dirac delta function are always negative somewhere.) Such "negative probability" or high degree of singularity is a feature inherent to the representation and does not diminish the meaningfulness of expectation values taken with respect to P. Even if P does behave like an ordinary probability distribution, however, the matter is not quite so simple. According to Mandel and Wolf: "The different coherent states are not [mutually] orthogonal, so that even if formula_13 behaved like a true probability density [function], it would not describe probabilities of mutually exclusive states."
Examples.
Thermal radiation.
From statistical mechanics arguments in the Fock basis, the mean photon number of a mode with wavevector k and polarization state s for a black body at temperature T is known to be
formula_14
The P representation of the black body is
formula_15
In other words, every mode of the black body is normally distributed in the basis of coherent states. Since P is positive and bounded, this system is essentially classical. This is actually quite a remarkable result because for thermal equilibrium the density matrix is also diagonal in the Fock basis, but Fock states are non-classical.
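For a single mode, this can be verified by direct integration: the Gaussian P function above, averaged against |α|², returns the mean photon number, in accordance with the optical equivalence theorem. A small numerical sketch (the value of n̄ is an arbitrary assumption):

```python
import numpy as np

nbar = 2.5  # assumed mean photon number of one black-body mode
# P(alpha) = exp(-|alpha|^2 / nbar) / (pi * nbar); compute the mean
# photon number  Int |alpha|^2 P(alpha) d^2alpha  in polar coordinates,
# where alpha = r e^{i theta} and d^2alpha = r dr dtheta.
r = np.linspace(0.0, 25.0, 20001)
f = r ** 2 * np.exp(-r ** 2 / nbar) / (np.pi * nbar) * r   # |alpha|^2 * P * r
mean_n = 2 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid rule
print(mean_n)   # recovers nbar
```

The integral evaluates to n̄ exactly (analytically, ∫₀^∞ r³ e^(−r²/n̄) dr = n̄²/2), so the numerical result matches the assumed mean photon number.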
Highly singular example.
Even very simple-looking states may exhibit highly non-classical behavior. Consider a superposition of two coherent states
formula_16
where "c"0, "c"1 are constants subject to the normalizing constraint
formula_17
Note that this is quite different from a qubit because formula_18 and formula_19 are not orthogonal. As it is straightforward to calculate formula_20, we can use the Mehta formula above to compute "P",
formula_21
Despite having infinitely many derivatives of delta functions, P still obeys the optical equivalence theorem. If the expectation value of the number operator, for example, is taken with respect to the state vector or as a phase space average with respect to P, the two expectation values match:
formula_22
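This equality can be confirmed by expanding the superposition in a truncated Fock basis and comparing ⟨ψ|n̂|ψ⟩ with the closed-form expression above. A sketch with arbitrarily chosen amplitudes and weights (the values of α0, α1, c0, c1 and the truncation are illustrative assumptions):

```python
import numpy as np

N = 80  # Fock-space truncation (an assumption; ample for these amplitudes)

def coherent(alpha, N):
    # <n|alpha> = e^{-|alpha|^2/2} alpha^n / sqrt(n!)
    amps = np.zeros(N, dtype=complex)
    amps[0] = 1.0
    for k in range(1, N):
        amps[k] = amps[k - 1] * alpha / np.sqrt(k)
    return amps * np.exp(-abs(alpha) ** 2 / 2)

# Arbitrarily chosen (hypothetical) coherent amplitudes and weights
a0, a1 = 1.2 + 0.0j, -0.8 + 0.5j
c0, c1 = 0.6 + 0.0j, 0.7 + 0.0j

# Impose the normalizing constraint, using <a0|a1> = e^{-(|a0|^2+|a1|^2)/2 + a0* a1}
overlap = np.exp(-(abs(a0) ** 2 + abs(a1) ** 2) / 2 + np.conj(a0) * a1)
norm = np.sqrt(abs(c0) ** 2 + abs(c1) ** 2
               + 2 * (np.conj(c0) * c1 * overlap).real)
c0, c1 = c0 / norm, c1 / norm

psi = c0 * coherent(a0, N) + c1 * coherent(a1, N)
n_numeric = float(np.sum(np.arange(N) * np.abs(psi) ** 2))   # <psi| n |psi>

# Closed-form expectation value from the text
n_closed = (abs(c0 * a0) ** 2 + abs(c1 * a1) ** 2
            + 2 * np.exp(-(abs(a0) ** 2 + abs(a1) ** 2) / 2)
              * (np.conj(c0) * c1 * np.conj(a0) * a1
                 * np.exp(np.conj(a0) * a1)).real)
print(n_numeric, n_closed)
```

Both routes give the same number, illustrating that the highly singular P function still reproduces correct expectation values.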
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "P(\\alpha)"
},
{
"math_id": 1,
"text": "\\hat{\\rho}"
},
{
"math_id": 2,
"text": "\\{|\\alpha\\rangle\\}"
},
{
"math_id": 3,
"text": "\\hat{\\rho} = \\int P(\\alpha) |{\\alpha}\\rangle \\langle {\\alpha}|\\, d^{2}\\alpha, \\qquad d^2\\alpha \\equiv d\\, {\\rm Re}(\\alpha) \\, d\\, {\\rm Im}(\\alpha)."
},
{
"math_id": 4,
"text": "\\hat{\\rho}_A=\\sum_{j,k} c_{j,k}\\cdot\\hat{a}^j\\hat{a}^{\\dagger k}."
},
{
"math_id": 5,
"text": "\\hat{I}=\\frac{1}{\\pi} \\int |{\\alpha}\\rangle \\langle {\\alpha}|\\, d^{2}\\alpha ,"
},
{
"math_id": 6,
"text": "\\begin{align}\n\\rho_A(\\hat{a},\\hat{a}^{\\dagger})&=\\frac{1}{\\pi}\\sum_{j,k} \\int c_{j,k}\\cdot\\hat{a}^j|{\\alpha}\\rangle \\langle {\\alpha}|\\hat{a}^{\\dagger k} \\, d^{2}\\alpha \\\\\n&= \\frac{1}{\\pi} \\sum_{j,k} \\int c_{j,k} \\cdot \\alpha^j|{\\alpha}\\rangle \\langle {\\alpha}|\\alpha^{*k} \\, d^{2}\\alpha \\\\\n&= \\frac{1}{\\pi} \\int \\sum_{j,k} c_{j,k} \\cdot \\alpha^j\\alpha^{*k}|{\\alpha}\\rangle \\langle {\\alpha}| \\, d^{2}\\alpha \\\\\n&= \\frac{1}{\\pi} \\int \\rho_A(\\alpha,\\alpha^*)|{\\alpha}\\rangle \\langle {\\alpha}| \\, d^{2}\\alpha,\\end{align}"
},
{
"math_id": 7,
"text": "P(\\alpha)=\\frac{1}{\\pi}\\rho_A(\\alpha,\\alpha^*)."
},
{
"math_id": 8,
"text": "\\chi_N(\\beta)=\\operatorname{tr}(\\hat{\\rho} \\cdot e^{i\\beta\\cdot\\hat{a}^{\\dagger}}e^{i\\beta^*\\cdot\\hat{a}})"
},
{
"math_id": 9,
"text": "P(\\alpha)=\\frac{1}{\\pi^2}\\int \\chi_N(\\beta) e^{-\\beta\\alpha^*+\\beta^*\\alpha} \\, d^2\\beta."
},
{
"math_id": 10,
"text": "P(\\alpha)=\\frac{e^{|\\alpha|^2}}{\\pi^2}\\int \\langle -\\beta|\\hat{\\rho}|\\beta\\rangle e^{|\\beta|^2-\\beta\\alpha^*+\\beta^*\\alpha} \\, d^2\\beta."
},
{
"math_id": 11,
"text": "\\{|n\\rangle\\}"
},
{
"math_id": 12,
"text": "P(\\alpha)=\\sum_{n} \\sum_{k} \\langle n|\\hat{\\rho}|k\\rangle \\frac{\\sqrt{n! k!}}{2 \\pi r (n+k)!} e^{r^2-i(n-k)\\theta} \\left[\\left( - \\frac{\\partial}{\\partial r} \\right)^{n+k} \\delta (r) \\right],"
},
{
"math_id": 13,
"text": "P(\\alpha) "
},
{
"math_id": 14,
"text": "\\langle\\hat{n}_{\\mathbf{k},s}\\rangle=\\frac{1}{e^{\\hbar \\omega / k_B T}-1}."
},
{
"math_id": 15,
"text": "P(\\{\\alpha_{\\mathbf{k},s}\\})=\\prod_{\\mathbf{k},s} \\frac{1}{\\pi \\langle\\hat{n}_{\\mathbf{k},s}\\rangle} e^{-|\\alpha|^2 / \\langle\\hat{n}_{\\mathbf{k},s}\\rangle}."
},
{
"math_id": 16,
"text": "|\\psi\\rangle=c_0|\\alpha_0\\rangle+c_1|\\alpha_1\\rangle"
},
{
"math_id": 17,
"text": "1=|c_0|^2+|c_1|^2+2e^{-(|\\alpha_0|^2+|\\alpha_1|^2)/2}\\operatorname{Re}\\left( c_0^*c_1 e^{\\alpha_0^*\\alpha_1} \\right)."
},
{
"math_id": 18,
"text": "|\\alpha_0\\rangle"
},
{
"math_id": 19,
"text": "|\\alpha_1\\rangle"
},
{
"math_id": 20,
"text": "\\langle -\\alpha|\\hat{\\rho}|\\alpha\\rangle=\\langle -\\alpha|\\psi\\rangle\\langle\\psi|\\alpha\\rangle"
},
{
"math_id": 21,
"text": "\\begin{align}P(\\alpha)= {} & |c_0|^2\\delta^2(\\alpha-\\alpha_0)+|c_1|^2\\delta^2(\\alpha-\\alpha_1) \\\\[5pt]\n& {} +2c_0^*c_1\ne^{|\\alpha|^2-\\frac{1}{2}|\\alpha_0|^2-\\frac{1}{2}|\\alpha_1|^2}\ne^{(\\alpha_1^*-\\alpha_0^*)\\cdot\\partial/\\partial(2\\alpha^*-\\alpha_0^*-\\alpha_1^*)}\ne^{(\\alpha_0-\\alpha_1)\\cdot\\partial/\\partial(2\\alpha-\\alpha_0-\\alpha_1)}\n\\cdot \\delta^2(2\\alpha-\\alpha_0-\\alpha_1) \\\\[5pt]\n& {} +2c_0c_1^*\ne^{|\\alpha|^2-\\frac{1}{2}|\\alpha_0|^2-\\frac{1}{2}|\\alpha_1|^2}\ne^{(\\alpha_0^*-\\alpha_1^*)\\cdot\\partial/\\partial(2\\alpha^*-\\alpha_0^*-\\alpha_1^*)}\ne^{(\\alpha_1-\\alpha_0)\\cdot\\partial/\\partial(2\\alpha-\\alpha_0-\\alpha_1)}\n\\cdot \\delta^2(2\\alpha-\\alpha_0-\\alpha_1).\n\\end{align}"
},
{
"math_id": 22,
"text": "\\begin{align}\\langle\\psi|\\hat{n}|\\psi\\rangle&=\\int P(\\alpha) |\\alpha|^2 \\, d^2\\alpha \\\\\n&=|c_0\\alpha_0|^2+|c_1\\alpha_1|^2+2e^{-(|\\alpha_0|^2+|\\alpha_1|^2)/2}\\operatorname{Re}\\left( c_0^*c_1 \\alpha_0^*\\alpha_1 e^{\\alpha_0^*\\alpha_1} \\right).\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=5906119
|
5906299
|
Charlieplexing
|
Technique for driving a multiplexed display
Charlieplexing (also known as tristate multiplexing, reduced pin-count LED multiplexing, complementary LED drive and crossplexing) is a technique for accessing a large number of LEDs, switches, micro-capacitors or other I/O entities, using relatively few tri-state logic wires from a microcontroller. These I/O entities can be wired as discrete components, x/y arrays, or woven in a diagonally intersecting pattern to form diagonal arrays.
The simplest way to address a single pixel (or input button) is to run a wire out to it and another wire back to ground, but this requires a lot of wiring. A slight improvement is to have everything return on a common ground, but this still requires one wire (and one pin on the microcontroller) for each pixel or button. For an X by Y array, X*Y pins are required.
With tri-state logic pins (high, low, disconnected), matrix wiring needs only X+Y pins and wires. Each X and each Y take turns being on versus being disconnected; the disadvantage is that each light is powered at most 1/(X*Y) of the time. If there is enough fan-out, the Y pins can be left always on, and all checked in parallel. The refresh can then happen every 1/X of the time, but the X wires each need to pass enough current to light up Y lights at once.
Charlieplexing is a further improvement on matrix wiring. Instead of X horizontal wires meeting Y vertical wires, every wire meets every other wire. Assuming diodes are used for the connections (to distinguish between wire 3 meeting wire 5 vs wire 5 meeting wire 3), Charlieplexing needs only about half as many pins as a conventional matrix arrangement, at the cost of more complicated mapping. Alternatively, the same number of pins will support a display nearly four times (doubling in both directions) as large.
This enables these I/O entities (LEDs, switches etc.) to be connected between any two microcontroller I/Os - e.g. with 4 I/Os, each I/O can pair with 3 other I/Os, resulting in 6 unique pairings (1/2, 1/3, 1/4, 2/3, 2/4, 3/4). Only 4 pairings are possible with standard x/y multiplexing (1/3, 1/4, 2/3, 2/4). Also, due to the microcontroller's ability to reverse the polarity of the 6 I/O pairs, the number of LEDs (or diodes) that are uniquely addressable can be doubled to 12 - adding LEDs 2/1, 3/1, 4/1, 3/2, 4/2 and 4/3.
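The pairing count above can be reproduced mechanically; this sketch simply enumerates the pin pairs for 4 I/Os:

```python
from itertools import combinations, permutations

pins = [1, 2, 3, 4]
# Unordered pairs: the 6 places where two I/O lines can meet through a component
pairs = list(combinations(pins, 2))
# Ordered pairs: reversing polarity distinguishes LED a/b from LED b/a,
# doubling the count to 12 uniquely addressable LEDs
leds = list(permutations(pins, 2))
print(len(pairs), pairs)
print(len(leds))
```

For "n" I/Os this generalizes to n(n − 1)/2 pairings and n(n − 1) addressable LEDs.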
Although it is more efficient in its use of I/O, a small amount of address manipulation is required when trying to fit Charlieplexing into a standard x/y array.
Other issues that affect standard multiplexing but are exacerbated by Charlieplexing are:
Origin.
The Charlieplexing technique was introduced by Maxim Integrated in 2001 as a reduced pin-count LED multiplexing scheme in their MAX6951 LED display driver. The name "Charlieplexing", however, first occurred in a 2003 application note. It was named after Charles "Charlie" M. Allen, an applications engineer of MAX232 fame, who had proposed this method internally.
Also in 2001, Don Lancaster illustrated the method as part of his musings about the "N-connectedness" problem, referring to Microchip Technology, who had already discussed it as "complementary LED drive technique" in a 1998 application note and would later include it in a tips & tricks booklet.
While Microchip did not mention the origin of the idea, they might have picked it up in the PICLIST, a mailing list on Microchip PIC microcontrollers, where, also in 1998, Graham Daniel proposed it to the community as a method to drive rows and columns of bidirectional LEDs. Daniel at the time had created simple circuits with PIC 12C508 chips driving 12 LEDs off 5 pins with a mini command set to set various lighting displays in motion.
The method, however, was known and utilized by various parties much earlier in the 1980s, and has been described in detail as early as in 1979 in a patent by Christopher W. Malinowski, Heinz Rinderle and Martin Siegle of the Department of Research and Development, AEG-Telefunken, Heilbronn, Germany for what they called a "three-state signaling system".
Reportedly, similar techniques were already in use as early as 1972 for track signaling applications in model railroading.
Display multiplexing is very different from multiplexing used in data transmission, although it has the same basic principles. In display multiplexing, the data lines of the displays are connected in parallel to a common databus on the microcontroller. Then, the displays are turned on and addressed individually. This allows the use of fewer I/O pins than it would normally take to drive the same number of displays directly. Here, each "display" could, for instance, be one calculator digit, not the complete array of digits.
With traditional multiplexing, formula_0 I/O pins can drive a maximum of formula_1 LEDs, or listen to that many input switches. Charlieplexing can drive formula_2 LEDs, or listen to formula_3 buttons, even if directionality is not enforced by a diode.
Tri-state multiplexing (Charlieplexing).
The Charlieplexing configuration may be viewed as a directed graph, where the drive pins are vertices and the LEDs are directed edges; there is an outward-pointing edge connected from each vertex to each other vertex, hence with "n" drive pins there are ("n")("n"-1) total edges. This equates to "n" pins being able to drive "n"2 − "n" segments or LEDs.
If the number of LEDs ("L") is known, then the number of pins ("n") can be found from the equation: formula_4, the result being rounded to the nearest whole number.
Example: If L = 57, then √L = 7.549, and 1 + √L = 8.549; the nearest whole number to this is 9, so 9 pins are needed to drive 57 LEDs (9 pins could drive up to 72 LEDs, but 8 pins could drive only 56 LEDs at most).
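The rounding rule above can also be stated exactly: the number of pins needed is the smallest "n" with "n"("n" − 1) ≥ "L", obtained from the quadratic formula. A small sketch:

```python
import math

def pins_needed(leds):
    # Smallest n such that n*(n-1) >= leds: the exact version of
    # rounding 1 + sqrt(L), found by solving n^2 - n - L >= 0 for n.
    return math.ceil((1 + math.sqrt(1 + 4 * leds)) / 2)

for L in (56, 57, 72):
    n = pins_needed(L)
    print(L, "LEDs ->", n, "pins (capacity", n * (n - 1), "LEDs)")
```

This reproduces the worked example: 56 LEDs need 8 pins, while 57 LEDs push the requirement up to 9 pins (capacity 72).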
Unlike in a traditional x/y multiplexed array, where a sub-set of conductive elements crosses a different sub-set of conductive elements, in a "fully Charlieplexed" multiplexed array, each conductive element crosses every other conductive element.
Six ("n") conductive elements in a standard x/y multiplexed array form a maximum of nine (("n" / 2)2) unique intersections (see figure on far left).
The other diagrams also show six ("n") conductive elements, but here all six elements cross over themselves, forming a multiplexed array of 36 ("n"2) intersections. LEDs are shown placed at every intersection. However, each conductor also crosses itself at the diagonal. Horizontal conductor 1 crosses vertical conductor 1, horizontal conductor 2 crosses vertical conductor 2, etc. This means that six of these LEDs are short-circuited (e.g. D1 and D5 are short-circuited). The six ("n") diagonal LEDs will, therefore, never light up, because no voltage can ever develop across them, so ("n") has to be subtracted from the total. There is no point in installing these LEDs (they are simply included here for illustrative purposes).
This leaves 30 LEDs ("n"2 − "n") that can be uniquely addressed and lit up independently.
Conductor "a" crossing conductor "b" is distinguishable from conductor "b" crossing conductor "a" because LED polarity is reversed. For example, when conductor 3 is positive and conductor 2 is negative, current flows through, and lights up LED D8, but when conductor 3 is negative and conductor 2 is positive, current flows through, and lights up LED D9.
These reverse polarity LED pairs are called complementary pairs. This diagram has 15 complementary pairs, allowing 30 LEDs to be lit independently.
The 6 unusable diagonal LEDs can be conveniently replaced by actual bidirectional shortcuts (so that there's no longer need to set up the interconnection lines grouped on the left and bottom of the diagrams, to drive the bottom input of vertical connectors from the matching left input of horizontal connectors).
By diagonally adjusting the form of the horizontal and vertical connectors along the short-circuited main diagonal of the original matrix, the arrangement can easily be transformed into an array of 5 × 6 or 6 × 5 LEDs arranged on a regular grid.
A similar pattern could be used for a 10 × 11 matrix that could be used to drive up to 110 keys (including a few indicator LEDs) on a modern PC keyboard, where each key switch includes a small serial diode or LED, so that only 11 pins would be needed to individually control all of them (these individual diodes or LEDs inside each key switch would also avoid all common and undesirable "ghosting" effects, that are hard to eliminate completely when an arbitrary number of keys at any position are pressed at the same time).
Charlieplexing can also be used to significantly reduce the number of controlling pins for much larger matrices, such as modern digital displays with high resolution. E.g. a 4K RGB display at 3840 × 2160 requires more than 8 million individually addressable pixels, each featuring at least 3 colored LEDs or LCD cells, for a total of nearly 25 million LEDs or LCD cells. Using conventional x/y multiplexing would require at least (3840 + 2160 × 3) = 10320 controlling pins and many selection chips for controlling rows and columns all around the panel of LEDs or LCD cells. But with Charlieplexing, this can be reduced to only 63 controlling pins for the selection gate of display columns, plus 46 × 3 controlling pins for the selection and power-activation of RGB display rows, by a single transistor for each row or column (possibly with an extra common shielding ground to limit their mutual coupling); these controlling pins can easily fit around the output pins of one or two controller chips, even if we add the few additional pins needed on the controller for power, ground, clocks and I/O buses, surface-mounted with a high density and low cost on a single-layer PCB, with no need for complex routing and interconnection holes between layers; a dual layer is needed only for the basic Charlieplexing matrix mounted on the borders of the panel itself.
Positions in the Charlieplexed matrix are not reduced to be just LEDs or diodes, they can be filled as well by two pins of a transistor (including its gate pin) so that its third pin is used as output to further control other devices, such as the horizontal and vertical selection lines of a large flat display panel (in that case, the two Charlieplexed matrices of transistors controlling and activating the rows or columns of the panel will be smartly arranged all along a border of that panel).
Complementary drive.
Charlieplexing in its simplest form works by using a diode matrix of complementary pairs of LEDs. The simplest possible Charlieplexed matrix would look like this:
By applying a positive voltage to pin X1 and grounding pin X2, LED 1 will light. Since current cannot flow through LEDs in reverse direction at this low voltage, LED2 will remain unlit. If the voltages on pin X1 and pin X2 are reversed, LED 2 will light and LED1 will be unlit.
The Charlieplexing technique does not actually make a larger matrix possible when only using two pins, because two LEDs can be driven by two pins without any matrix connections, and without even using tri-state mode. In this two-LED example, Charlieplexing would save one ground wire, which would be needed in a common 2-pin driver situation.
However, the 2-pin circuit serves as a simple example to show the basic concepts before moving on to larger circuits where Charlieplexing actually shows an advantage.
Expanding: tri-state logic.
If the circuit above were to be expanded to accommodate three pins and six LEDs, it would look like this:
This presents a problem, however: In order for this circuit to act like the previous one, one of the pins must be disconnected before applying charge to the remaining two. If, for example, LED 5 was intended to be lit, X1 must be charged and X3 must be grounded. However, if X2 is also charged, LED 3 would illuminate as well. If X2 was instead grounded, LED1 would illuminate, meaning that LED 5 cannot be lit by itself. This can be solved by utilizing the tri-state logic properties of microcontroller pins. Microcontroller pins generally have three states: "high" (5 V), "low" (0 V) and "input". Input mode puts the pin into a high-impedance state, which, electrically speaking, "disconnects" that pin from the circuit, meaning little or no current will flow through it. This allows the circuit to see any number of pins connected at any time, simply by changing the state of the pin. In order to drive the six-LED matrix above, the two pins corresponding to the LED to be lit are connected to 5 V (I/O pin "high" = binary number 1) and 0 V (I/O pin "low" = binary 0), while the third pin is set in its input state.
In doing so, current leakage out of the third pin is prevented, ensuring that the LED wished to be lit is the only one lit. Because the desired LED reduces the voltage available after the resistor, current will not flow across alternate paths (an alternate 2-LED path exists for every pair of pins in the 3-pin diagram, for example), so long as the voltage drop in the desired LED path is less than the total voltage drop across each string of alternative LEDs. However, in the variant with individual resistors, this voltage-regulating effect does not protect the alternative paths, so all LEDs used must be of a type that does not light with half the supply voltage applied, because this variant does not benefit from the voltage-regulating effect of the desired-path LED.
By using tri-state logic, the matrix can theoretically be expanded to any size, as long as pins are available. For "n" pins, "n"("n" − 1) LEDs can be in the matrix. Any LED can be lit by applying 5 V and 0 V to its corresponding pins and setting all of the other pins connected to the matrix to input mode. Under the same constraints as discussed above up to "n" − 1 LEDs sharing a common positive or negative path can be lit in parallel.
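The driving rule described above - exactly one pin high, one pin low, and every other pin in high-impedance input mode - can be sketched as a small simulation. The model below is a deliberate simplification (an assumption) that ignores multi-LED sneak paths, which, as explained above, are suppressed by the LEDs' forward-voltage drop:

```python
HIGH, LOW, Z = "high", "low", "z"   # Z models the high-impedance input state

def lit_leds(states):
    """Return (anode_pin, cathode_pin) index pairs that conduct.

    Simplified model: the LED wired from pin i to pin j lights only when
    pin i is driven high and pin j is driven low; pins in input mode (Z)
    source or sink no current."""
    return [(i, j)
            for i, si in enumerate(states)
            for j, sj in enumerate(states)
            if i != j and si == HIGH and sj == LOW]

# The 3-pin example: X1 high, X2 in input mode, X3 grounded
print(lit_leds([HIGH, Z, LOW]))     # one LED: anode on X1, cathode on X3
# Driving two pins high lights two LEDs sharing the grounded cathode pin,
# the "up to n-1 LEDs in parallel" case described above
print(lit_leds([HIGH, HIGH, LOW]))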
Expanding.
The 3-wire circuit can be rearranged to this near-equivalent matrix (resistors have been relocated).
This emphasizes the similarities between ordinary grid multiplex and Charlieplex, and demonstrates the pattern that leads to "the "n"-squared minus "n"" rule.
In typical usage on a circuit board, the resistors would be physically located at the top of the columns and connected to the I/O pin. The rows would then be connected directly to the I/O pin, bypassing the resistor.
The first setup in the image on the left is suitable only when identical LEDs are used since a single resistor is used for current-limiting through more than one LED (though not at the same time—rather, one resistor limits current through only one LED in a given column at one time). This is contrasted to the second configuration with individual resistors for each LED, as shown in the image on the right. In this second configuration, each LED has a unique resistor paired with it. This makes it possible to mix different kinds of LEDs by providing each with its appropriate resistor value.
In both of these configurations, as shown in both the left and the right image, the relocated resistors make it possible to light multiple LEDs at the same time row-by-row, instead of requiring that they be lit individually. The row current capacity could be boosted by an NPN emitter-follower BJT transistor instead of driving the current directly with the typically much weaker I/O pin alone.
Problems with Charlieplexing.
Refresh rate.
Refresh rate is not a problem if Charlieplexed active-matrix addressing is used with a Charlieplexed LED array.
In common with x/y multiplexing, however, there can be refresh rate issues if passive matrix addressing is used.
Because only a single set of LEDs, all having a common anode or cathode, can be lit simultaneously without turning on unintended LEDs, Charlieplexing requires frequent output changes, through a method known as multiplexing. When multiplexing is done, not all LEDs are lit quite simultaneously, but rather one set of LEDs is lit briefly, then another set, and eventually the cycle repeats. If it is done fast enough, they will appear to all be on, all the time, to the human eye because of persistence of vision. In order for a display to not have any noticeable flicker, the refresh rate for each LED must be greater than the Flicker fusion threshold; 50 Hz is often used as an approximation.
As an example, 8 tri-state pins are used to control 56 LEDs through Charlieplexing, which is enough for 8 7-segment displays (without decimal points). Typically, 7-segment displays are made to have a common cathode, sometimes a common anode, but without loss of generality a common cathode is assumed in the following: All LEDs in all 8 7-segment displays cannot be turned on simultaneously in any desired combination using Charlieplexing. It is impossible to get 56 bits of information directly from 8 trits (the term for a base-3 character, as the pins are 3-state) of information, as 8 trits fundamentally comprise 8 × log2(3), or about 12.7 bits of information, which falls far short of the 56 bits required to turn all 56 LEDs on or off in any arbitrary combination. Instead, the human eye must be fooled by use of multiplexing.
Only one 7-segment display - one set of 7 LEDs - can be active at any time. The way this would be done is for the 8 common cathodes of the 8 displays to each get assigned to its own unique pin among the 8 I/O ports. At any time, one and only one of the 8 controlling I/O pins will be actively low, and thus only the 7-segment display with its common cathode connected to that actively low pin can have any of its LEDs on. That is the active 7-segment display. The anodes of the 7 LED segments within the active 7-segment display can then be turned on in any combination by having the other 7 I/O ports either high or in high-impedance mode, in any combination. They are connected to the remaining 7 pins, but through resistors (the common cathode connection is connected to the pin itself, not through a resistor, because otherwise the current through each individual segment would depend on the number of total segments turned on, as they would all have to share a single resistor). But to show a desired number using all 8 digits, only one 7-segment display can be shown at a time, so all 8 must be cycled through separately, within a 50th of a second for the entire period of 8. Thus the display must be refreshed at 400 Hz for the period-8 cycle through all 8 digits to make the LEDs flash no slower than 50 times per second. This requires constant interruption of whatever additional processing the controller performs, 400 times per second.
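The two numbers quoted above - the information capacity of 8 tri-state pins and the required scan rate - follow from simple arithmetic:

```python
import math

pins = 8
# Information content of 8 independent tri-state pins, in bits
bits = pins * math.log2(3)
print(round(bits, 1))               # ≈ 12.7, far short of the 56 bits needed

digits = 8                          # one 7-segment digit active at a time
refresh_per_digit = 50              # Hz, flicker-fusion approximation
print(digits * refresh_per_digit)   # 400 digit-updates per second
```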
Peak current.
Due to the decreased duty cycle, the current requirement of a Charlieplexed display increases much faster than it would with a traditionally multiplexed display. As the display gets larger, the average current flowing through the LED must be (roughly) constant in order for it to maintain constant brightness, thus requiring the peak current to increase proportionally. This causes a number of issues that limit the practical size of a Charlieplexed display.
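The duty-cycle argument can be made concrete: with one digit of many active at a time, each LED conducts for only 1/digits of the time, so the peak current must scale up by the digit count to keep average current (and apparent brightness) constant. A minimal sketch:

```cpp
// With `digits` time slots and one active at a time, the per-LED duty cycle
// is 1/digits, so peak current = average current * digits for constant brightness.
double peakCurrentMa(double averageMa, int digits) {
    return averageMa * digits;
}
```

An LED needing 10 mA average in an 8-digit scan must therefore be pulsed at 80 mA peak, which quickly exceeds what a microcontroller pin can source.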
Requirement for tristate.
All the outputs used to drive a Charlieplexed display must be tristate. If the current is low enough to drive the displays directly by the I/O pins of the microcontroller, this is not a problem, but if external tristates must be used, then each tristate will generally require two output lines to control, eliminating most of the advantage of a Charlieplexed display. Since the current from microcontroller pins is typically limited to about 20 mA, this severely restricts the practical size of a Charlieplexed display. However, it can be done by enabling one segment at a time.
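When driving LEDs directly from microcontroller pins, the series resistor on each anode line is sized with the usual Ohm's-law relation; the values below are illustrative assumptions, not taken from the article:

```cpp
#include <cmath>

// Classic series-resistor sizing: R = (Vsupply - Vforward) / Iled.
// Example values (5 V supply, 2 V LED drop, 20 mA) are illustrative only.
double seriesResistorOhms(double vSupply, double vForward, double iLedAmps) {
    return (vSupply - vForward) / iLedAmps;
}
```

At a 5 V supply with a 2 V LED and the ~20 mA pin limit mentioned above, this yields roughly 150 Ω.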
Complexity.
Diagonally "wired" Charlieplex arrays are very simple to lay out and scan.
If used as a multitouch projected capacitance touchscreen (see figure on left), the first I/O can be set as an output and all the remaining I/Os set as inputs. All these inputs can be sensed simultaneously, if processor resources allow - the input equivalent of Chipiplexing.
When output 1 has been "read" by all these inputs, the second I/O can be set as an output and I/Os 1, 3, 4, 5, etc. set as inputs.
This sequence is repeated until the whole screen has been scanned. This process is repeated, ad infinitum, for subsequent scans.
A very simple, diagonal layout can be used to create a regular, scalable Charlieplexed diode array, where "n" I/O lines control ("n" − 1)² diodes - all of which face the same direction (see diagram on right).
This diagram shows "n" ("n" - 1) diodes, but the diodes in the last column face alternating directions.
X/y Charlieplexed matrices are normally significantly more complicated than prebuilt standard x/y multiplex matrices, both in the required PCB layout and in the microcontroller programming, which increases design time; soldering components can also be more time-consuming. It has been suggested that a balance between complexity and pin use can be achieved by Charlieplexing several pre-built multiplexed LED arrays together.
Forward voltage.
When using LEDs with different forward voltages, such as when using different color LEDs, some LEDs can light when not desired.
In the diagram above it can be seen that if LED 6 has a 4 V forward voltage, and LEDs 1 and 3 have forward voltages of 2 V or less, they will light when LED 6 is intended to, as their current path is shorter. This issue can easily be avoided by comparing forward voltages of the LEDs used in the matrix and checking for compatibility issues. Or, more simply, using LEDs that all have the same forward voltage.
This is also a problem when the LEDs use individual resistors instead of shared resistors: if there is a path through two LEDs whose combined forward drop is less than the supply voltage, those LEDs may also illuminate at unintended times.
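The ghosting condition described above reduces to a simple voltage comparison, sketched here (the function name is ours):

```cpp
// A series path of two LEDs can "ghost" when its combined forward drop is no
// greater than the voltage available across the single intended LED.
bool ghostPathPossible(double vfIntended, double vfPathA, double vfPathB) {
    return vfPathA + vfPathB <= vfIntended;
}
```

For the example in the text, a 4 V LED driven alongside two 2 V LEDs satisfies the condition, so the unintended pair lights; with uniform forward voltages it does not.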
LED failure.
If a single LED fails, by becoming either open-circuit, short-circuit, or leaky (developing a parasitic parallel resistance, which allows current in both directions), the impact will be catastrophic for the display as a whole. Furthermore, the actual problematic LED may be very difficult to identify, because potentially a large set of LEDs which should not be lit may all come on together, and—without detailed knowledge of the circuit—the relation between which LED is bad and what set of LEDs all come on together cannot be easily established.
In a standard x/y array, a LED (D1) going open-circuit causes that LED to cease functioning, with no further consequences.
In a partially "Charlieplexed" array, however, if the failed LED (D1) becomes open circuit, the voltage between the LED's two electrodes may build up until it finds a path through at least "three" other LEDs. If the voltage is high enough, this may cause these other LEDs (such as D2, D3 and D4) to light up unexpectedly.
No deleterious effect is noticed, however, when the polarity is reversed, as D1 would not have conducted under such circumstances anyway, due to it being reverse biased. Current passes through D1's complementary diode (D5) as normal.
If the failed LED becomes an open circuit in a fully "Charlieplexed" array, the voltage between the LED's two electrodes may build up until it finds a path through "two" other LEDs. There are as many such paths as there are pins used to control the array, minus 2: if the LED with anode at node "m" and cathode at node "n" fails in this way, then for every other node "p" (any node other than "m" or "n"), the pair consisting of the LED with anode "m" and cathode "p", together with the LED with anode "p" and cathode "n", may light up.
If there are 8 I/O pins controlling the array, this means that there will be 6 parasitic paths through pairs of 2 LEDs, and 12 LEDs may be unintentionally lit, but fortunately this will only happen when the one bad LED is supposed to come on, which may be a small fraction of the time and will exhibit no deleterious symptoms when the problem LED is not supposed to be lit. If the problem is a short between nodes "x" and "y", then every time any LED "U" with either "x" or "y" as its anode or cathode and some node "z" as its other electrode is supposed to come on (without loss of generality, here "U"'s cathode is connected to "x"), the LED "V" with cathode "y" and anode "z" will light as well, so any time either node "x" or "y" is activated as an anode OR a cathode, two LEDs will come on instead of one. In this case, it lights only one additional LED unintentionally, but it does it far more frequently; not merely when the failed LED is supposed to come on, but when "any" LED that has a pin in common with the failed LED is supposed to come on.
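The parasitic paths around an open-circuit LED can be enumerated directly, which also confirms the counts quoted above (a sketch; node numbering is ours):

```cpp
#include <utility>
#include <vector>

// For an open-circuit LED with anode at node m and cathode at node n in a fully
// Charlieplexed array on `pins` pins, current can sneak through two LEDs via any
// intermediate node p (p != m, p != n): first anode m -> cathode p, then
// anode p -> cathode n. Each element of the result is such a two-LED pair,
// with each LED given as an (anode, cathode) node pair.
std::vector<std::pair<std::pair<int, int>, std::pair<int, int>>>
parasiticPaths(int pins, int m, int n) {
    std::vector<std::pair<std::pair<int, int>, std::pair<int, int>>> paths;
    for (int p = 0; p < pins; ++p) {
        if (p == m || p == n) continue;
        paths.push_back({{m, p}, {p, n}});
    }
    return paths;
}
```

With 8 pins this yields 6 two-LED paths, i.e. 12 LEDs that may light unintentionally, matching the text.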
The problematic elements become especially difficult to identify if there are two or more LEDs at fault. What this means is that unlike most methods in which the loss of a single LED merely causes a single burned-out segment, when Charlieplexing is used, one or two burned-out LEDs, whatever the mode of failure, will almost certainly cause a catastrophic cascade of unintended lightings of the LEDs that still work, very likely rendering the entire device completely and immediately unusable. This must be taken into account when considering the required lifetime and failure characteristics of the device being designed.
"LED failure in a diagonal matrix:"
Because the layout of a standard vertical/horizontal Charlieplexed matrix is quite complicated, the consequences of LED failure are more easily described using a simple diagonal Charlieplexed matrix.
The diagram shows a 6-input Charlieplexed array where one LED (L1) becomes open circuit.
If one LED goes open circuit, and if the voltage is high enough, then current that should have gone through that LED could theoretically find an alternative route through other LEDs. For example, if LED 1 (L1) goes open circuit, then current could still flow from terminal 3 to terminal 2 via L2 in series with L3. Other routes are via L4/L5, L6/L7 and L8/L9. This could possibly cause these LEDs to flicker.
If LED 1 goes short circuit, then both its terminals will always be at the same potential, and so will those of its inverted complementary LED.
Therefore, neither LED will light up, even though one of them may still be fully functional.
If terminal 2 or terminal 3 is negative, then both the red and brown tracks will be negative at the same time. Therefore, some LEDs connected to these tracks could light up unintentionally when terminals 1, 4, 5 or 6 are positive.
Similarly, if terminal 2 or terminal 3 is positive, then both the red and brown tracks will be positive at the same time. Therefore, some LEDs connected to these tracks could light up unintentionally when terminals 1, 4, 5 or 6 are negative.
It has been shown that the failure of one LED can cause other consequences.
If a complementary pair of LEDs are not working, then it is most likely that only one of them is shorting, and a meter may be used to test which one it is.
Otherwise, if one or more LEDs never light up, then they are probably all faulty and should be replaced. Their replacement will, hopefully, make any spurious artefacts disappear.
Alternative use cases and variants.
Input data multiplexing.
Charlieplexing can also be used to multiplex digital input signals into a microcontroller. The same diode circuits are used, except a switch is placed in series with each diode. To read whether a switch is open or closed, the microcontroller configures one pin as an input with an internal pull-up resistor. The other pin is configured as an output and set to the low logic level. If the input pin reads low, then the switch is closed, and if the input pin reads high, then the switch is open.
One potential application for this is to read a standard (4 × 3) 12-key numeric keypad using only 4 I/O lines. The traditional row-column scan method requires 4 + 3 = 7 I/O lines, so Charlieplexing saves 3 I/O lines; however, it adds the expense of 12 diodes (the diodes are only free when LEDs are used). A variation of the circuit with only 4 diodes is possible, but this reduces the rollover of the keyboard. The microcontroller can always detect when the data is corrupt, but there is no guarantee it can sense the original key presses unless only one button is pressed at a time. (It is probably possible to arrange the circuit so that no data loss occurs when at most any two adjacent buttons are pressed.) The input is only lossless in the 4-diode circuit if one button is pressed at a time, or if certain problematic multiple key presses are avoided. In the 12-diode circuit this is not an issue, and there is always a one-to-one correspondence between button presses and input data. However, so many diodes are required (especially for larger arrays) that there is generally no cost saving over the traditional row-column scan method, unless the cost of a diode is only a fraction of the cost of an I/O pin, where that fraction is one over the number of I/O lines.
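The pin-count comparison generalizes: a Charlieplexed input matrix needs the smallest n with n(n − 1) ≥ keys, versus rows + columns for a conventional scan. A minimal sketch:

```cpp
// Smallest number of Charlieplexed pins n such that n * (n - 1) >= keys.
// Compare with rows + columns for a traditional row-column scan matrix.
int minCharliePins(int keys) {
    int n = 2;
    while (n * (n - 1) < keys) ++n;
    return n;
}
```

For the 12-key keypad this gives 4 pins (versus 7 for a 4 × 3 row-column scan); 56 switches would need 8 pins.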
Projected capacitance touchscreens and keypads.
These do not use diodes but rely on the change in capacitance between crossing conductive tracks to detect the proximity of one or more fingers through non-conducting materials such as plastic overlays, wood, glass, etc. - even double glazing.
These tracks can be made from a wide range of materials, such as printed circuit boards, transparent indium tin oxide, insulation-coated fine wire, etc.
The technology can range in size from very small, as in "fingerprint detectors", to very large, as in "touch interactive video walls". Usually, a limit is imposed on the maximum width of an x/y wired touchscreen, because the horizontal track resistance gets too great for the product to function properly. However, a diagonally wired touchscreen (as described later in this section) does not have this problem.
Charlieplexing is ideal for diagonally wired projected capacitance keypads and touchscreens. It almost doubles the number of cross-over points when compared to standard x/y multiplexing, and all I/O tracks come from just one edge.
The left image (above) shows the diagonal wiring arrangement of a 32 I/O projected capacitance touchscreen, manufactured using 10 micron diameter wire. The video shows the same touchscreen in action.
There are no LEDs or diodes and, at any one time, only one I/O line is set as an output, the remaining I/O lines being set as high-impedance inputs or "grounded". This means that power requirements are very small.
Gugaplexing.
In 2008, Dhananjay V. Gadre devised "Gugaplexing", which is like Charlieplexing with multiple drive voltages.
Chipiplexing.
In 2008, Guillermo Jaquenod's so-called "Chipiplexing" adds emitter followers to boost the strength of the row drive, allowing rows wider than a single microcontroller port could drive to be lit simultaneously.
Cross-plexing.
In 2010, the Austrian chip manufacturer austriamicrosystems AG (named ams AG since 2012, and ams-OSRAM AG since 2020) introduced the multiplexing LED driver IC AS1119, followed by the AS1130 in 2011.
Also, the analog & mixed signal (AMS) division (named Lumissil Microsystems since 2020) of Integrated Silicon Solution Inc. (ISSI) introduced the IS31FL3731 in 2012 and the IS31FL3732 in 2015.
They all use a technique they call "cross-plexing", a variant of Charlieplexing with automatic detection of open or shorted connections and anti-ghosting measures.
Diagonal arrays.
In 2015, a diagonal Charlieplex array was invented by Ron Binstead of Binstead Designs Ltd, while searching for a simplified projected capacitance touchscreen design. This greatly simplified the layout of large Charlieplexed arrays which, until then, used some very complex arrangements.
Triangular array - A triangular Charlieplexed array of ("n"² − "n") LEDs can be formed simply by folding a group of n parallel conductors at right angles over themselves, and placing a complementary pair of LEDs at each of the resulting unique intersections - see diagram on left. I/O connections can be made at the ends of the conductors, or at the fold positions - forming split conductors.
Rectangular array - A square/rectangular diagonal array can be formed by double folding the parallel conductors - see diagram on right. Unsplit I/O conductors enter from the end of the array.
Cylindrical array - Split and unsplit diagonal conductors can also be formed into a seamless cylindrical array.
The diagram on the right shows a 6 I/O, split Charlieplexed cylindrical display layout, with 30 intersections, each with a uniquely addressable LED. All the I/Os connect at the bottom edge of the cylinder (standard x/y cylindrical arrays would require the horizontal I/Os to enter from the side, or be "bussed" up a seam in one side).
In the top image, the North-West orientated branch of a split I/O conductor is sometimes used as a current source (logic 1). At other times, the North-East orientated branch of the same conductor, is used as a current sink (logic 0). When not being used to power any LEDs, the I/O is "turned off" (tristate). This prevents other LEDs from being lit unintentionally.
The red and blue LEDs are both connected to the same two conductors, but with reversed polarity, forming a complementary pair. This means that it is not possible to turn on both LEDs at exactly the same time.
The red LED in the display is turned on by: a) setting all the I/Os to "off", b) setting I/O 2 to logical 0, and c) setting I/O 4 to logical 1. The blue LED does not light up because, under these conditions, it is a diode that is reverse biased.
The blue LED in the display is turned on by: a) setting all the I/Os to "off", b) setting I/O 2 to logical 1, and c) setting I/O 4 to logical 0. The red LED does not light up because, under these conditions, it is reverse biased.
This illustrates how Charlieplexing requires all I/Os to be capable of three states (tri-state) - "off", logical 0, or logical 1.
The conductive elements can be formed into a loop - as shown in the top image. This allows current to flow to the LEDs via two routes - similar to a domestic ring main.
The LEDs could alternatively be arranged as vertical or horizontal complementary pairs, at the intersections - vertical being shown in the lower image.
When using complementary LED pairs, an odd number of I/Os may be required in order to obtain full Charlieplexing capability. For example, 6 Charlieplexed I/Os can create an array of 15 unique intersections; one dimension of the array will be 6, so to obtain 15 unique intersections the other dimension would have to be 15/6 = 2.5, which could be problematic. However, 7 I/Os can create 21 unique intersections, and 21/7 = 3, so 7 I/Os create a 7 × 3 array, which does not cause issues.
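The intersection count behind these dimensions is n(n − 1)/2, each intersection hosting one complementary pair (so n(n − 1) LEDs in total). A quick sketch:

```cpp
// With a complementary LED pair at each intersection, n Charlieplexed I/Os
// give n * (n - 1) / 2 unique intersections (and n * (n - 1) LEDs).
int uniqueIntersections(int n) {
    return n * (n - 1) / 2;
}
```

This reproduces the 15 intersections for 6 I/Os and 21 for 7 I/Os quoted above.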
Non-Charlieplexed diagonal arrays can also be formed into cylinders, but 6 I/Os would only create 9 unique intersections.
These cylinders can be physically transformed into complex 3 dimensional shapes, by a range of different methods - such as blow molding, vacuum forming, etc.
A similar layout is possible for a cylindrical touchscreen ( see Touchscreen#Diagonal_touchscreen_arrays).
"Infinitely" wide array - The diagram on the right shows the layout for a multi-touch, projected capacitance, touchscreen of potentially "infinite" width. Diagonal conductor lengths never exceed 1.414 times the height of the touchscreen formula_5, meaning that the screen can be widened "indefinitely" without increasing conductor resistance. This is reduced to 1.12 times the height of the touchscreen formula_6, if the sensing elements intersect at 60 degrees instead of 90 degrees.
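The conductor-length bounds quoted above (√2 ≈ 1.414 and √1.25 ≈ 1.12 times the screen height) follow directly from the intersection geometry; a small sketch:

```cpp
#include <cmath>

// Maximum diagonal conductor length for a touchscreen of height h:
// h * sqrt(2) for 90-degree intersections, h * sqrt(1.25) for 60-degree ones.
double maxConductorLength(double height, bool sixtyDegrees) {
    return height * std::sqrt(sixtyDegrees ? 1.25 : 2.0);
}
```

Because the bound depends only on height, widening the screen does not increase conductor resistance.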
Tucoplexing.
In 2019, Micah Elizabeth Scott developed a method to use 3 pins to run 4 LEDs and 4 switches called "Tucoplexing".
Pulse-width modulation.
Charlieplexing can even be used with pulse-width modulation to control the brightness of 12 LEDs with 4 pins.
Code example.
In the following Arduino code example, the circuit uses an 8-pin ATtiny microcontroller, which has 5 I/O pins, to create a 7-segment display. Since a 7-segment display only requires control of 7 individual LEDs, 4 of the ATtiny I/O pins are used as Charlieplexed outputs: with "n" ("n" − 1) combinations, the 4 pins could control up to 12 individual LEDs (here only 7 of them are used). This leaves the fifth I/O pin free to be used as a digital or analog input, or as another output.
// ATtiny code.
// Reads an analog (or digital) input from pin 4; every time the input goes below
// a set threshold, it increments a count and displays it, either by activating
// one of four LEDs (or transistors) or one of twelve Charlieplexed LEDs.
// SET THESE VALUES:
int threshold = 500;
int maxCount = 7;
boolean sensorTriggered = false;
int count = 0;
int sensorValue = 0;
long lastDebounceTime = 0; // The last time the output pin was toggled.
long debounceDelay = 50; // The debounce time; increase if the output flickers.
void setup() {
  // Use pull-down for disabled output pins rather than pull-up to reduce internal consumption.
  for (int pin = 0; pin < 4; pin++) {
    pinMode(pin, INPUT);
    digitalWrite(pin, LOW);
  }
  // Internal pull-up for the enabled input pin 4.
  pinMode(4, INPUT);
  digitalWrite(4, HIGH);
}
////////////////////////////////////////////////////////////////////////////////
void loop() {
  testDigits();
}

void testDigits() {
  charlieLoop();
}
////////////////////////////////////////////////////////////////////////////////
void readSensor() {
  sensorValue = analogRead(2); // Physical pin 4 is analog channel 2.
  delay(100);
  if (sensorValue < threshold && sensorTriggered == false) {
    sensorTriggered = true;
    count++;
    if (count > maxCount) count = 0;
    charlieLoop();
  }
  if (sensorValue > threshold) sensorTriggered = false;
}
////////////////////////////////////////////////////////////////////////////////
void charlieLoop() {
  count++;
  for (int i = 0; i < 1000; i++) {
    for (int c = 0; c < count; c++) {
      charliePlexPin(c);
    }
  }
  delay(1000);
  if (count > maxCount) count = 0;
}
////////////////////////////////////////////////////////////////////////////////
void charliePlexPin(int myLed) {
  // Make sure we don't feed random voltages to the LEDs
  // during the brief time we are changing pin voltages and modes.
  // Use pull-down for disabled output pins rather than pull-up to reduce internal consumption.
  for (int pin = 0; pin < 4; pin++) {
    pinMode(pin, INPUT);
    digitalWrite(pin, LOW);
  }
  // With 4 pins we could light up to 12 LEDs; we use only 7 here.
  // Make sure to set pin voltages (by internal pull-up or pull-down)
  // before changing pin modes to output.
  typedef struct {
    // Two different pin numbers (between 0 and 3; order is significant),
    // otherwise no LED will be lit.
    int low;
    int high;
  } Pins;
  static const Pins pinsLookup[] = {
    {2, 0}, {2, 3}, {1, 3}, {0, 1}, {1, 0}, {0, 2}, {1, 2},
    // Other possible combinations for up to 12 LEDs:
    // {0, 3}, {2, 1}, {3, 0}, {3, 1}, {3, 2}.
    // Combinations with equal pin numbers are unusable: they light no LED
    // with a significant voltage and current.
  };
  if (myLed >= 0 && myLed < (int)(sizeof(pinsLookup) / sizeof(Pins))) {
    const Pins &pins = pinsLookup[myLed];
    // Note that the digitalWrite to LOW on the low pin is omitted,
    // as all pins were already set LOW above.
    pinMode(pins.low, OUTPUT);       // Low side: the LED's cathode pin.
    digitalWrite(pins.high, HIGH);
    pinMode(pins.high, OUTPUT);      // High side: the LED's anode pin.
  }
}
////////////////////////////////////////////////////////////////////////////////
void spwm(int freq, int pin, int sp) {
  // Call after charliePlexPin() has set the correct pin modes.
  // on:
  digitalWrite(pin, HIGH);
  delayMicroseconds(sp * freq);
  // off:
  digitalWrite(pin, LOW);
  delayMicroseconds(sp * (255 - freq));
}
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "(\\frac{N}{2})^2 = \\frac{N^2}{4}"
},
{
"math_id": 2,
"text": "N^2 - N"
},
{
"math_id": 3,
"text": "\\frac{N^2 - N}{2}"
},
{
"math_id": 4,
"text": " n = \\left\\lceil 1 + \\sqrt{L} \\right\\rfloor"
},
{
"math_id": 5,
"text": " \\left\\lceil H \\sqrt{2} \\right\\rfloor"
},
{
"math_id": 6,
"text": " \\left\\lceil H \\sqrt{1.25} \\right\\rfloor"
}
] |
https://en.wikipedia.org/wiki?curid=5906299
|
590696
|
Definite description
|
Denoting phrase in the form of "the X"
In formal semantics and philosophy of language, a definite description is a denoting phrase in the form of "the X" where X is a noun phrase or a singular common noun. The definite description is "proper" if X applies to a unique individual or object. For example, "the first person in space" and "the 42nd President of the United States of America" are proper. The definite descriptions "the person in space" and "the Senator from Ohio" are "improper" because the noun phrase X applies to more than one thing, and the definite descriptions "the first man on Mars" and "the Senator from Washington D.C." are "improper" because X applies to nothing. Improper descriptions raise some difficult questions about the law of excluded middle, denotation, modality, and mental content.
Russell's analysis.
As France is currently a republic, it has no king. Bertrand Russell pointed out that this raises a puzzle about the truth value of the sentence "The present King of France is bald."
The sentence does not seem to be true: if we consider all the bald things, the present King of France is not among them, since there is no present King of France. But if it is false, then one would expect that the negation of this statement, that is, "It is not the case that the present King of France is bald", or its logical equivalent, "The present King of France is not bald", is true. But this sentence does not seem to be true either: the present King of France is no more among the things that fail to be bald than among the things that are bald. We therefore seem to have a violation of the law of excluded middle.
Is it meaningless, then? One might suppose so (and some philosophers have) since "the present King of France" certainly does fail to refer. But on the other hand, the sentence "The present King of France is bald" (as well as its negation) seem perfectly intelligible, suggesting that "the present King of France" cannot be meaningless.
Russell proposed to resolve this puzzle via his theory of descriptions. A definite description like "the present King of France", he suggested, is not a referring expression, as we might naively suppose, but rather an "incomplete symbol" that introduces quantificational structure into sentences in which it occurs. The sentence "the present King of France is bald", for example, is analyzed as a conjunction of the following three quantified statements:
1. There is an x such that x is currently King of France: formula_0 (existence);
2. For any x and y, if x is currently King of France and y is currently King of France, then x is identical to y: formula_1 (uniqueness);
3. Every x that is currently King of France is bald: formula_2.
More briefly put, the claim is that "The present King of France is bald" says that some x is such that x is currently King of France, and that any y is currently King of France only if y = x, and that x is bald:
<templatestyles src="Block indent/styles.css"/>formula_3
This is "false", since it is "not" the case that some x is currently King of France.
The negation of this sentence, i.e. "The present King of France is not bald", is ambiguous. It could mean one of two things, depending on where we place the negation 'not'. On one reading, it could mean that there is no one who is currently King of France and bald:
<templatestyles src="Block indent/styles.css"/>formula_4
On this disambiguation, the sentence is "true" (since there is indeed no x that is currently King of France).
On a second reading, the negation could be construed as attaching directly to 'bald', so that the sentence means that there is currently a King of France, but that this King fails to be bald:
<templatestyles src="Block indent/styles.css"/>formula_5
On this disambiguation, the sentence is "false" (since there is no x that is currently King of France).
Thus, whether "the present King of France is not bald" is true or false depends on how it is interpreted at the level of logical form: if the negation is construed as taking wide scope (as in the first of the above), it is true, whereas if the negation is construed as taking narrow scope (as in the second of the above), it is false. In neither case does it lack a truth value.
So we do "not" have a failure of the Law of Excluded Middle: "the present King of France is bald" (i.e. formula_3) is false, because there is no present King of France.
The negation of this statement is the one in which 'not' takes wide scope: formula_4. This statement is "true" because there does not exist anything which is currently King of France.
Generalized quantifier analysis.
Stephen Neale, among others, has defended Russell's theory, and incorporated it into the theory of generalized quantifiers. On this view, 'the' is a quantificational determiner like 'some', 'every', 'most' etc. The determiner 'the' has the following denotation (using lambda notation):
<templatestyles src="Block indent/styles.css"/>formula_6
<templatestyles src="Block indent/styles.css"/>formula_7
<templatestyles src="Block indent/styles.css"/>formula_8
we then get the Russellian truth conditions via two steps of function application: 'The present King of France is bald' is true if, and only if, formula_3. On this view, definite descriptions like 'the present King of France' do have a denotation (specifically, definite descriptions denote a function from properties to truth values—they are in that sense not syncategorematic, or "incomplete symbols"); but the view retains the essentials of the Russellian analysis, yielding exactly the truth conditions Russell argued for.
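The two steps of function application can be written out; this is a sketch of the standard composition, using the denotations just given:

```latex
% Step 1: apply [[the]] to the restrictor [[present King of France]] = \lambda x.Kx
[\![\text{the}]\!](\lambda x.Kx)
  = \lambda g.\,\exists x\big(Kx \land \forall y(Ky \rightarrow y = x) \land g(x)=1\big)

% Step 2: apply the result to the scope [[bald]] = \lambda x.Bx
[\![\text{the}]\!](\lambda x.Kx)(\lambda x.Bx)
  = \exists x\big(Kx \land \forall y(Ky \rightarrow y = x) \land Bx\big)
```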
Fregean analysis.
The Fregean analysis of definite descriptions, implicit in the work of Frege and later defended by Strawson among others, represents the primary alternative to the Russellian theory. On the Fregean analysis, definite descriptions are construed as referring expressions rather than quantificational expressions. Existence and uniqueness are understood as a presupposition of a sentence containing a definite description, rather than part of the content asserted by such a sentence. The sentence 'The present King of France is bald', for example, is not used to claim that there exists a unique present King of France who is bald; instead, that there is a unique present King of France is part of what this sentence "presupposes", and what it "says" is that this individual is bald. If the presupposition fails, the definite description "fails to refer", and the sentence as a whole fails to express a proposition.
The Fregean view is thus committed to the kind of truth value gaps (and failures of the law of excluded middle) that the Russellian analysis is designed to avoid. Since there is currently no King of France, the sentence 'The present King of France is bald' fails to express a proposition, and therefore fails to have a truth value, as does its negation, 'The present King of France is not bald'. The Fregean will account for the fact that these sentences are nevertheless "meaningful" by relying on speakers' knowledge of the conditions under which either of these sentences "could" be used to express a true proposition. The Fregean can also hold on to a restricted version of the law of excluded middle: for any sentence whose presuppositions are met (and thus expresses a proposition), either that sentence or its negation is true.
On the Fregean view, the definite article 'the' has the following denotation (using lambda notation):
<templatestyles src="Block indent/styles.css"/>formula_9 [The unique z such that formula_10]
(That is, 'the' denotes a function which takes a property f and yields the unique object z that has property f, if there is such a z, and is undefined otherwise.) The presuppositional character of the existence and uniqueness conditions is here reflected in the fact that the definite article denotes a partial function on the set of properties: it is only defined for those properties f which are true of exactly one object. It is thus undefined on the denotation of the predicate 'currently King of France', since the property of currently being King of France is true of no object; it is similarly undefined on the denotation of the predicate 'Senator of the US', since the property of being a US Senator is true of more than one object.
Mathematical logic.
Following the example of "Principia Mathematica", it is customary to use a definite description operator symbolized using the "turned" (rotated) Greek lower case iota character "℩". The notation ℩formula_11 means "the unique formula_12 such that formula_13", and
<templatestyles src="Block indent/styles.css"/>formula_14℩formula_15
is equivalent to "There is exactly one formula_16 and it has the property
formula_17":
<templatestyles src="Block indent/styles.css"/>formula_18
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\exists xKx"
},
{
"math_id": 1,
"text": "\\forall x \\forall y ((Kx \\land Ky) \\rightarrow x=y)"
},
{
"math_id": 2,
"text": "\\forall x (Kx \\rightarrow Bx)"
},
{
"math_id": 3,
"text": "\\exists x((Kx \\land \\forall y(Ky \\rightarrow y =x)) \\land Bx)"
},
{
"math_id": 4,
"text": "\\lnot \\exists x ((Kx \\land \\forall y (Ky \\rightarrow y = x)) \\land Bx)"
},
{
"math_id": 5,
"text": "\\exists x ((Kx \\land \\forall y (Ky \\rightarrow y = x)) \\land \\lnot Bx)"
},
{
"math_id": 6,
"text": "\\lambda f. \\lambda g.\\exists x(f(x)=1 \\land \\forall y(f(y)=1 \\rightarrow y=x) \\land g(x) = 1)"
},
{
"math_id": 7,
"text": "\\lambda x.Kx"
},
{
"math_id": 8,
"text": "\\lambda x.Bx"
},
{
"math_id": 9,
"text": "\\lambda f: \\exists x(f(x)=1 \\land \\forall y(f(y)=1 \\rightarrow y=x))."
},
{
"math_id": 10,
"text": "f(z)=1"
},
{
"math_id": 11,
"text": "x(\\phi x)"
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "\\phi x"
},
{
"math_id": 14,
"text": "\\psi("
},
{
"math_id": 15,
"text": "x(\\phi x))"
},
{
"math_id": 16,
"text": "\\phi"
},
{
"math_id": 17,
"text": "\\psi"
},
{
"math_id": 18,
"text": "\\exists x (\\forall y (\\phi(y) \\iff y=x) \\land \\psi(x))"
}
] |
https://en.wikipedia.org/wiki?curid=590696
|
59071829
|
August 1972 solar storms
|
Solar storms during solar cycle 20
The solar storms of August 1972 were a historically powerful series of solar storms with intense to extreme solar flare, solar particle event, and geomagnetic storm components in early August 1972, during solar cycle 20. The storm caused widespread electric- and communication-grid disturbances through large portions of North America as well as satellite disruptions. On 4 August 1972 the storm caused the accidental detonation of numerous U.S. naval mines near Haiphong, North Vietnam. The transit time of the coronal mass ejection (CME) from the Sun to the Earth remains the fastest ever recorded.
Solar-terrestrial characteristics.
Sunspot region.
The most significant detected solar flare activity occurred from 2 to 11 August. Most of the significant solar activity emanated from active sunspot region McMath 11976 (MR 11976; active regions being clusters of sunspot pairs). McMath 11976 was extraordinarily magnetically complex. Its size was large although not exceptionally so. McMath 11976 produced 67 solar flares (4 of these X-class) during the time it was facing Earth, from 29 July to 11 August. It also produced multiple relatively rare white light flares over multiple days. The same active area was long-lived. It persisted through five solar rotation cycles, first receiving the designation as Region 11947 as it faced Earth, going unseen as it rotated past the far side of the Sun, then returning Earthside as Region 11976, before cycling as Regions 12007, 12045, and 12088, respectively.
Flare of 4 August.
Electromagnetic effects.
The 4 August flare was among the largest since records began. It saturated the Solrad 9 X-ray sensor at approximately X5.3 but was estimated to be in the vicinity of X20, the threshold of the very rarely reached R5 on the NOAA radio blackout space weather scale. A radio burst of 76,000 sfu was measured at 1 GHz. This was an exceptionally long duration flare, generating X-ray emissions above background level for more than 16 hours. Rare emissions in the gamma ray (formula_0-ray) spectrum were detected for the first time, on both 4 and 7 August, by the Orbiting Solar Observatory (OSO 7). The broad spectrum electromagnetic emissions of the largest flare are estimated to total 1–5 × 10³² ergs in energy released.
CMEs.
The arrival time of the associated coronal mass ejection (CME) and its coronal cloud, 14.6 hours, remains the record shortest duration as of November 2023, indicating an exceptionally fast and typically an exceptionally geoeffective event (normal transit time is two to three days). A preceding series of solar flares and CMEs cleared the interplanetary medium of particles, enabling the rapid arrival in a process similar to the July 2012 solar storm. Normalizing the transit times of other known extreme events to a standard 1 AU to account for the varying distance of the Earth from the Sun throughout the year, one study found the ultrafast 4 August flare to be an outlier to all other events, even compared to the great solar storm of 1859, the overall most extreme known solar storm, which is known as the "Carrington Event". This corresponds to an ejecta speed of an estimated .
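As a back-of-envelope illustration (my own arithmetic, not from the article), the average Sun-to-Earth speed implied by a given transit time over 1 AU shows how exceptional a 14.6-hour transit is compared with the normal two-to-three-day transit:

```python
# Back-of-envelope check (not from the article): average transit speed
# implied by a Sun-to-Earth travel time, assuming a straight-line path
# of 1 AU. Real CMEs decelerate en route, so peak speeds are higher still.

AU_KM = 1.496e8  # 1 astronomical unit in km

def mean_transit_speed_km_s(hours):
    """Average speed in km/s for a 1 AU transit completed in `hours`."""
    return AU_KM / (hours * 3600.0)

print(f"14.6 h transit: {mean_transit_speed_km_s(14.6):7.0f} km/s")
print(f"2-day transit : {mean_transit_speed_km_s(48.0):7.0f} km/s")
print(f"3-day transit : {mean_transit_speed_km_s(72.0):7.0f} km/s")
```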
The near-Earth solar wind velocity may also have been record-breaking and is calculated to have exceeded (about 0.7% of light speed). The velocity was not directly measurable as instrumentation was off-scale high. Analysis of a Guam magnetogram indicated a shockwave traversing the magnetosphere at and an astonishing sudden storm commencement (SSC) time of 62 s. Estimated magnetic field strengths of 73–103 nT and electric field strengths of >200 mV/m were calculated at 1 AU.
Solar particle event.
Reanalysis based on IMP-5 (a.k.a. Explorer 41) space solar observatory data suggests that >10-MeV ion flux reached 70,000 particles·s⁻¹·sr⁻¹·cm⁻² (i.e. 70,000 particles per second, per steradian, per square centimeter; see Radiance), bringing it near the exceedingly rarely reached NOAA S5 level on the solar radiation scale. Fluxes at other energy levels, from soft to hard, at >1 MeV, >30 MeV, and >60 MeV, also reached extreme levels, as was inferred for >100 MeV. The particle storm led to northern hemisphere polar stratospheric ozone depletion of about 46% at altitude, lasting several days before the atmosphere recovered, and persisting for 53 days at the lower altitude of .
The intense solar wind and particle storm associated with the CMEs led to one of the largest decreases in cosmic ray radiation from outside the Solar System, known as a Forbush decrease, ever observed. Solar energetic particle (SEP) onslaught was so strong that the Forbush decrease in fact partially abated. SEPs reached the Earth's surface, causing a ground level event (GLE).
Geomagnetic storm.
The 4 August flare and ejecta caused significant to extreme effects on the Earth's magnetosphere, which responded in an unusually complex manner. The disturbance storm time index (Dst) was only −125 nT, falling merely within the relatively common "intense" storm category. An exceptional geomagnetic response occurred initially, and some extreme storming followed locally (possibly within substorms), but the arrival of subsequent CMEs with northward-oriented magnetic fields is thought to have shifted the interplanetary magnetic field (IMF) from an initial southward to a northward orientation, substantially suppressing geomagnetic activity as the solar blast was largely deflected away from rather than toward Earth. An early study found an extraordinary asymmetry range of ≈450 nT. A 2006 study found that, had a favorable southward IMF orientation been present, the Dst might have surpassed −1,600 nT, comparable to the 1859 Carrington Event.
Magnetometers in Boulder, Colorado, Honolulu, Hawaii, and elsewhere went off-scale high. Stations in India recorded geomagnetic sudden impulses (GSIs) of 301-486 nT. Estimated AE index peaked at over 3,000 nT and Kp reached 9 at several hourly intervals (corresponding to NOAA G5 level).
The magnetosphere compressed rapidly and substantially with the magnetopause reduced to 4-5 RE and the plasmapause (boundary of the plasmasphere, or lower magnetosphere) reduced to 2 RE or less. This is a contraction of at least one half and up to two-thirds the size of the magnetosphere under normal conditions, to a distance of less than . Solar wind dynamic pressure increased to about 100 times normal, based upon data from Prognoz 1.
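A rough consistency check (my own sketch, with assumed typical quiet-time values): the standard pressure-balance scaling for the magnetopause standoff distance, r ∝ P_dyn^(−1/6), applied to a roughly hundredfold pressure increase, lands in the reported 4–5 RE range:

```python
# Consistency sketch (assumptions mine, not from the article): magnetopause
# standoff distance from pressure balance scales as r ~ P_dyn^(-1/6).
# With a quiet-time standoff of ~10 Earth radii (R_E) and a ~100x jump in
# solar wind dynamic pressure, the predicted compression falls in the
# 4-5 R_E range reported for this storm.

QUIET_STANDOFF_RE = 10.0  # assumed typical quiet-time standoff, in R_E

def standoff_re(pressure_ratio, quiet_re=QUIET_STANDOFF_RE):
    """Standoff distance when dynamic pressure is scaled by pressure_ratio."""
    return quiet_re * pressure_ratio ** (-1.0 / 6.0)

print(f"100x pressure -> {standoff_re(100.0):.1f} R_E")
```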
Impacts.
Spacecraft.
Astronomers first reported unusual flares on 2 August, later corroborated by orbiting spacecraft. On 3 August, Pioneer 9 detected a shock wave and a sudden increase in solar wind speed from approximately . A shockwave passed Pioneer 10, which was 2.2 AU from the Sun at the time. The greatly constricted magnetosphere caused many satellites to cross outside Earth's protective magnetic field; these boundary crossings into the magnetosheath exposed the spacecraft to erratic space weather conditions and potentially destructive solar particle bombardment. Power generation by the Intelsat IV F-2 communications satellite's solar panel arrays was degraded by 5%, about two years' worth of normal wear. An on-orbit power failure ended the mission of a Defense Satellite Communications System (DSCS II) satellite. Disruptions of Defense Meteorological Satellite Program (DMSP) scanner electronics caused anomalous dots of light in southern polar cap imagery.
Terrestrial effects and aurora.
On 4 August, an aurora shone so luminously that shadows were cast on the southern coast of the United Kingdom and, shortly thereafter, as far south as Bilbao, Spain, at magnetic latitude 46°. Extending into 5 August, intense geomagnetic storming continued, with bright red (a relatively rare color associated with extreme events) and fast-moving aurora visible at midday from dark regions of the Southern Hemisphere.
Radio frequency (RF) effects were rapid and intense. Radio blackouts commenced nearly instantaneously on the sunlit side of Earth on HF and other vulnerable bands. A nighttime mid-latitude E layer developed.
Geomagnetically induced currents (GICs) were generated and produced significant electrical grid disturbances throughout Canada and across much of the eastern and central United States, with strong anomalies reported as far south as Maryland and Ohio, moderate anomalies in Tennessee, and weak anomalies in Alabama and north Texas. The voltage collapse of 64% on the North Dakota to Manitoba interconnection would have been sufficient to cause a system breakup had it occurred during high-export conditions on the line, which would have precipitated a large power outage. Many U.S. utilities in these regions reported no disturbances, with the presence of igneous rock geology a suspected factor, along with geomagnetic latitude and differences in the operational characteristics of the respective electrical grids. Manitoba Hydro reported that power flowing in the other direction, from Manitoba to the U.S., plummeted by 120 MW within a few minutes. Protective relays were repeatedly activated in Newfoundland.
An outage was reported along American Telephone and Telegraph (now AT&T)'s L4 coaxial cable between Illinois and Iowa. Magnetic field variations (dB/dt) of ≈800 nT/min were estimated locally at the time, and the peak rate of change of magnetic field intensity reached >2,200 nT/min in central and western Canada, although the outage was most likely caused by swift intensification of the eastward electrojet of the ionosphere. AT&T also experienced a surge of 60 volts on its telephone cable between Chicago and Nebraska. The induced electric field, measured at 7.0 V/km, exceeded the cable's high-current shutdown threshold. The storm was detected in low-latitude areas such as the Philippines and Brazil, as well as Japan.
Military operations.
The U.S. Air Force's Vela nuclear detonation detection satellites registered the event as a possible explosion, but this false alarm was quickly resolved by personnel monitoring the data in real time.
The U.S. Navy concluded, as shown in declassified documents, that the seemingly spontaneous detonation of dozens of Destructor magnetic-influence sea mines (DSTs) within about 30 seconds in the Hon La area (magnetic latitude ≈9°) was highly likely the result of an intense solar storm. One account claims that 4,000 mines were detonated. It was known that solar storms caused terrestrial geomagnetic disturbances but it was as yet unknown to the military whether these effects could be sufficiently intense. It was confirmed as possible in a meeting of Navy investigators at the NOAA Space Environment Center (SEC) as well as by other facilities and experts.
Human spaceflight.
Although it occurred between Apollo missions, the storm has long been chronicled within NASA. Apollo 16 returned to Earth on April 27, 1972, with the subsequent (and ultimately final) Apollo Moon landing scheduled to depart on December 7 that same year. Had a mission been taking place during August, those inside the Apollo command module would have been shielded from 90% of the incoming radiation. However, this reduced dose could still have caused acute radiation sickness if the astronauts were located outside the protective magnetic field of Earth, which was the case for much of a lunar mission. An astronaut engaged in EVA in orbit or on a moonwalk could have experienced severe radiation poisoning, or even absorbed a potentially lethal dose. Regardless of location, an astronaut would have suffered an enhanced risk of contracting cancer after being exposed to that amount of radiation.
This was one of only a handful of solar storms which have occurred in the Space Age that could cause severe illness, and was potentially the most hazardous. Had the most intense solar activity of early August occurred during a mission, it would have forced the crew to abort the flight and resort to contingency measures, including an emergency return and landing for medical treatment.
Implications for heliophysics and society.
The storm was an important event in the field of heliophysics, the study of space weather, with numerous studies published in the next few years and throughout the 1970s and 1980s, as well as leading to several influential internal investigations and to significant policy changes. Almost fifty years after the fact, the storm was reexamined in an October 2018 article published in the American Geophysical Union (AGU) journal "Space Weather". The initial and early studies as well as the later reanalysis studies were only possible due to initial monitoring facilities installed during the International Geophysical Year (IGY) in 1957-1958 and subsequent global scientific cooperation to maintain the data sets. That initial terrestrial data from ground stations and balloons was later combined with spaceborne observatories to form far more complete information than had been previously possible, with this storm being one of the first widely documented of the then young Space Age. It convinced both the military and NASA to take space weather seriously and accordingly devote resources to its monitoring and study.
The authors of the 2018 paper compared the 1972 storm to the great storm of 1859 in some aspects of intensity. They posit that it was a Carrington-class storm. Other researchers conclude that the 1972 event could have been comparable to 1859 for geomagnetic storming if magnetic field orientation parameters were favorable, or as a "failed Carrington-type storm" based on related considerations, which is also the finding of a 2013 Royal Academy of Engineering report.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\gamma"
}
] |
https://en.wikipedia.org/wiki?curid=59071829
|
5907185
|
Jaynes–Cummings model
|
Model in quantum optics
The Jaynes–Cummings model (sometimes abbreviated JCM) is a theoretical model in quantum optics. It describes the system of a two-level atom interacting with a quantized mode of an optical cavity (or a bosonic field), with or without the presence of light (in the form of a bath of electromagnetic radiation that can cause spontaneous emission and absorption). It was originally developed to study the interaction of atoms with the quantized electromagnetic field in order to investigate the phenomena of spontaneous emission and absorption of photons in a cavity.
The Jaynes–Cummings model is of great interest to atomic physics, quantum optics, solid-state physics and quantum information circuits, both experimentally and theoretically. It also has applications in coherent control and quantum information processing.
Historical development.
1963: Edwin Jaynes & Fred Cummings.
The model was originally developed in a 1963 article by Edwin Jaynes and Fred Cummings to elucidate the effects of giving a fully quantum mechanical treatment to the behavior of atoms interacting with an electromagnetic field. In order to simplify the math and allow for a tractable calculation, Jaynes and Cummings restricted their attention to the interaction of an atom with a "single mode" of quantum electromagnetic field. (See below for further mathematical details.)
This approach is in contrast to the earlier semi-classical method, in which only the dynamics of the atom are treated quantum mechanically, while the field with which it interacts is assumed to behave according to classical electromagnetic theory. The quantum mechanical treatment of the field in the Jaynes–Cummings model reveals a number of novel features, including the eventual collapse and subsequent revival of the atomic population oscillations, which have no semi-classical analogue.
To realize the dynamics predicted by the Jaynes–Cummings model experimentally requires a quantum mechanical resonator with a very high quality factor, so that the transitions between the states in the two-level system (typically two energy sub-levels in an atom) are coupled very strongly by the interaction of the atom with the field mode. This simultaneously suppresses any coupling between other sub-levels in the atom and any coupling to other modes of the field, and thus makes the losses small enough to observe the dynamics predicted by the Jaynes–Cummings model. Because of the difficulty in realizing such an apparatus, the model remained a mathematical curiosity for quite some time. In 1985, several groups using Rydberg atoms along with a maser in a microwave cavity demonstrated the predicted Rabi oscillations. However, this effect was later found to have a semi-classical explanation.
1987: Rempe, Walther & Klein.
It was not until 1987 that Rempe, Walther, & Klein were finally able to use a single-atom maser to demonstrate the revivals of probabilities predicted by the model. Before that time, research groups were unable to build experimental setups capable of enhancing the coupling of an atom with a single field mode, simultaneously suppressing other modes. Experimentally, the quality factor of the cavity must be high enough to consider the dynamics of the system as equivalent to the dynamics of a single mode field. This successful demonstration of dynamics that could only be explained by a quantum mechanical model of the field spurred further development of high quality cavities for use in this research.
With the advent of one-atom masers it was possible to study the interaction of a single atom (usually a Rydberg atom) with a single resonant mode of the electromagnetic field in a cavity from an experimental point of view, and study different aspects of the Jaynes–Cummings model.
It was found that an hourglass geometry could be used to maximize the volume occupied by the mode while simultaneously maintaining a high quality factor, in order to maximize the coupling strength and thus better approximate the parameters of the model. For observing strong atom–field coupling at visible light frequencies, hourglass-type optical modes are helpful because of their large mode volume, which coincides with a strong field inside the cavity. A quantum dot inside a photonic crystal nano-cavity is also a promising system for observing collapse and revival of Rabi cycles at visible light frequencies.
Further developments.
Many recent experiments have focused on the application of the model to systems with potential applications in quantum information processing and coherent control.
Various experiments have demonstrated the dynamics of the Jaynes–Cummings model in the coupling of a quantum dot to the modes of a micro-cavity, potentially allowing it to be applied in a physical system of much smaller size. Other experiments have focused on demonstrating the non-linear nature of the Jaynes-Cummings ladder of energy levels by direct spectroscopic observation. These experiments have found direct evidence for the non-linear behavior predicted from the quantum nature of the field in both superconducting circuits containing an "artificial atom" coupled to a very high quality oscillator in the form of a superconducting RLC circuit, and in a collection of Rydberg atoms coupled via their spins. In the latter case, the presence or absence of a collective Rydberg excitation in the ensemble serves the role of the two level system, while the role of the bosonic field mode is played by the total number of spin flips that take place.
Theoretical work has extended the original model to include the effects of dissipation and damping, typically via a phenomenological approach. Proposed extensions have also included multiple modes of the quantum field, coupling to additional energy levels within the atom, and the presence of multiple atoms interacting with the same field. Some attempt has also been made to go beyond the so-called rotating-wave approximation that is usually employed (see the mathematical derivation below). The coupling of a single quantum field mode with multiple (formula_2) two-state subsystems (equivalent to spins higher than 1/2) is known as the Dicke model or the Tavis–Cummings model. For example, it applies to a high quality resonant cavity containing multiple identical atoms with transitions near the cavity resonance, or a resonator coupled to multiple quantum dots on a superconducting circuit. It reduces to the Jaynes–Cummings model for the case formula_3.
The model provides the possibility to realize several exotic theoretical possibilities in an experimental setting. For example, it was realized that during the periods of collapsed Rabi oscillations, the atom-cavity system exists in a quantum superposition state on a macroscopic scale. Such a state is sometimes referred to as a "Schrödinger cat", since it allows the exploration of the counter intuitive effects of how quantum entanglement manifests in macroscopic systems. It can also be used to model how quantum information is transferred in a quantum field.
Mathematical formulation 1.
The Hamiltonian that describes the full system,
formula_4
consists of the free field Hamiltonian, the atomic excitation Hamiltonian, and the Jaynes–Cummings interaction Hamiltonian:
formula_5
Here, for convenience, the vacuum field energy is set to formula_6.
For deriving the JCM interaction Hamiltonian the quantized radiation field is taken to consist of a single bosonic mode with the field operator
formula_7, where the operators formula_8 and formula_9 are the bosonic creation and annihilation operators and formula_10 is the angular frequency of the mode. On the other hand, the two-level atom is equivalent to a spin-half whose state can be described using a three-dimensional Bloch vector. (It should be understood that "two-level atom" here is not an actual atom "with" spin, but rather a generic two-level quantum system whose Hilbert space is isomorphic "to" a spin-half.) The atom is coupled to the field through its polarization operator formula_11. The operators formula_12 and formula_13 are the raising and lowering operators of the atom. The operator formula_14 is the atomic inversion operator, and formula_15 is the atomic transition frequency.
Jaynes–Cummings Hamiltonian 1.
Moving from the Schrödinger picture into the interaction picture (a.k.a. rotating frame) defined by the choice
formula_16,
we obtain
formula_17
This Hamiltonian contains both quickly formula_18 and slowly formula_19 oscillating components. To get a solvable model, the quickly oscillating "counter-rotating" terms, formula_18, are ignored. This is referred to as the rotating wave approximation, and it is valid since the fast oscillating term couples states of comparatively large energy difference:
When the difference in energy is much larger than the coupling, the mixing of these states will be small, or put differently, the coupling is responsible for very little population transfer between the states. Transforming back into the Schrödinger picture the JCM Hamiltonian is thus written as
formula_20
Eigenstates.
It is possible, and often very helpful, to write the Hamiltonian of the full system as a sum of two commuting parts:
formula_21
where
formula_22
with formula_23 called the detuning (frequency) between the field and the two-level system.
The eigenstates of formula_24, being of tensor product form, are easily found and are denoted by formula_25, where formula_26 denotes the number of radiation quanta in the mode.
As the states formula_27 and formula_28 are degenerate with respect to formula_24 for all formula_29, it is enough to diagonalize formula_30 in the subspaces formula_31. The matrix elements of formula_30 in this subspace, formula_32 read
formula_33
For a given formula_29, the energy eigenvalues of formula_34 are
formula_35
where formula_36 is the Rabi frequency for the specific detuning parameter. The eigenstates formula_37 associated with the energy eigenvalues are given by
formula_38
formula_39
where the angle formula_40 is defined through
formula_41
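The dressed-state construction can be checked numerically. The sketch below uses my own (standard, but assumed) conventions for the 2×2 block within the n-th excitation manifold and for the mixing angle, since the article's exact normalization may differ:

```python
import numpy as np

# Numerical sketch of the dressed states (conventions are my assumption, not
# taken verbatim from the article): within the n-th excitation manifold,
# spanned by {|e,n>, |g,n+1>}, the interaction-relevant part of the JCM
# Hamiltonian is the 2x2 matrix (hbar = 1)
#     H_n = 0.5 * [[ delta,          2*g*sqrt(n+1)],
#                  [ 2*g*sqrt(n+1), -delta        ]]
# with eigenvalues +-0.5*Omega_n, where the generalized Rabi frequency is
# Omega_n = sqrt(delta**2 + 4 * g**2 * (n+1)).

def block(n, g, delta):
    c = 2.0 * g * np.sqrt(n + 1)
    return 0.5 * np.array([[delta, c], [c, -delta]])

g, delta, n = 0.3, 0.5, 2
H = block(n, g, delta)
evals, evecs = np.linalg.eigh(H)          # ascending: [-Omega/2, +Omega/2]

omega_n = np.sqrt(delta**2 + 4 * g**2 * (n + 1))
print("eigenvalues:", evals, " expected +-", omega_n / 2)

# Upper dressed state: cos(a)|e,n> + sin(a)|g,n+1>, tan(2a) = 2g*sqrt(n+1)/delta.
alpha = 0.5 * np.arctan2(2 * g * np.sqrt(n + 1), delta)
plus = evecs[:, 1] * np.sign(evecs[0, 1])  # fix the overall sign convention
print("mixing angle check:", np.allclose(plus, [np.cos(alpha), np.sin(alpha)]))
```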
Schrödinger picture dynamics.
It is now possible to obtain the dynamics of a general state by expanding it on to the noted eigenstates. We consider a superposition of number states as the initial state for the field, formula_42, and assume an atom in the excited state is injected into the field. The initial state of the system is
formula_43
Since the formula_37 are stationary states of the field-atom system, then the state vector for times
formula_44 is just given by
formula_45
The Rabi oscillations can readily be seen in the sin and cos functions in the state vector. Different periods occur for different number states of photons.
What is observed in experiment is the sum of many periodic functions that oscillate at widely different frequencies and can destructively sum to approximately zero at some moment of time, but will become non-zero again at later moments. The finiteness of this revival time results precisely from the discreteness of the set of oscillation frequencies. If the field amplitude were continuous, the revival would never happen at finite time.
Heisenberg picture dynamics.
It is possible in the Heisenberg notation to directly determine the unitary evolution operator from the Hamiltonian:
formula_46
where the operator formula_47 is defined as
formula_48
and formula_49 is given by
formula_50
The unitarity of formula_51 is guaranteed by the identities
formula_52
and their Hermitian conjugates.
By the unitary evolution operator one can calculate the time evolution of the state of the system described by its density matrix formula_53, and from there the expectation value of any observable, given the initial state:
formula_54
formula_55
The initial state of the system is denoted by formula_56 and formula_57 is an operator denoting the observable.
Mathematical formulation 2.
For ease of illustration, consider the interaction of two energy sub-levels of an atom with a quantized electromagnetic field. The behavior of any other two-state system coupled to a bosonic field will be isomorphic to these dynamics. In that case, the Hamiltonian for the atom-field system is:
formula_58
Where we have made the following definitions:
Rotating frame and rotating-wave approximation.
Next, the analysis may be simplified by performing a passive transformation into the so-called "co-rotating" frame. To do this, we use the interaction picture. Take
formula_82. Then the interaction Hamiltonian becomes:
formula_83
We now assume that the resonance frequency of the cavity is near the transition frequency of the atom, that is, we assume formula_84. Under this condition, the exponential terms oscillating at formula_85 are nearly resonant, while the other exponential terms oscillating at formula_86 are nearly anti-resonant. In the time formula_87 that it takes for the resonant terms to complete one full oscillation, the anti-resonant terms will complete many full cycles. Since the anti-resonant terms average to nearly zero over each full cycle formula_88 of their oscillation, their net effect tends to average to 0 over the timescales on which we wish to analyze resonant behavior. We may thus neglect the anti-resonant terms altogether, since their contribution is negligible compared to that of the nearly resonant terms. This approximation is known as the rotating wave approximation, and it accords with the intuition that energy must be conserved. Then the interaction Hamiltonian (taking formula_89 to be real for simplicity) is:
formula_90
With this approximation in hand (and absorbing the negative sign into formula_89), we may transform back to the Schrödinger picture:
formula_91
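The rotating-wave approximation argued for above can also be tested numerically. The sketch below (parameters and conventions, ħ = 1, are my own) evolves an initially excited atom in a vacuum cavity under both the full Rabi Hamiltonian, which retains the counter-rotating terms, and the Jaynes–Cummings Hamiltonian, which drops them; for coupling much weaker than the mode frequency the two populations agree closely:

```python
import numpy as np

# Numerical check of the rotating-wave approximation (a sketch; parameters
# and conventions, hbar = 1, are my own). Compare excited-state populations
# under (a) the full Rabi coupling g*(sigma- + sigma+)(a + a-dagger) and
# (b) the JC coupling g*(sigma+ a + sigma- a-dagger), for g << omega.

N = 12                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # truncated annihilation operator
sm = np.array([[0, 0], [1, 0]])               # sigma-, basis order (|e>, |g>)
I2, IN = np.eye(2), np.eye(N)

def evolve_pop(H, psi0, times):
    """Excited-state population via eigendecomposition of Hermitian H."""
    w, V = np.linalg.eigh(H)
    c = V.conj().T @ psi0
    Pe = np.kron(sm.T @ sm, IN)               # projector |e><e| (x) identity
    out = []
    for t in times:
        psi = V @ (np.exp(-1j * w * t) * c)
        out.append(float(np.real(psi.conj() @ Pe @ psi)))
    return np.array(out)

wc = wa = 1.0                                 # resonant cavity and atom
g = 0.02 * wc                                 # weak coupling: RWA regime
H0 = wc * np.kron(I2, a.T @ a) + 0.5 * wa * np.kron(np.diag([1, -1]), IN)
H_jc = H0 + g * (np.kron(sm.T, a) + np.kron(sm, a.T))   # RWA coupling
H_rabi = H0 + g * np.kron(sm + sm.T, a + a.T)           # full coupling

psi0 = np.zeros(2 * N); psi0[0] = 1.0         # |e, 0>: excited atom, vacuum
t = np.linspace(0.0, np.pi / g, 200)          # one full vacuum Rabi cycle
p_jc = evolve_pop(H_jc, psi0, t)
p_rabi = evolve_pop(H_rabi, psi0, t)
print("max |P_JC - P_Rabi| =", np.max(np.abs(p_jc - p_rabi)))
```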
Jaynes-Cummings Hamiltonian 2.
Using the results gathered in the last two sections, we may now write down the full Jaynes-Cummings Hamiltonian:
formula_92
The constant term formula_93 represents the zero-point energy of the field. It will not contribute to the dynamics, so it may be neglected, giving:
formula_94
Next, define the so-called "number operator" by:
formula_95.
Consider the commutator of this operator with the atom-field Hamiltonian:
formula_96
Thus the number operator commutes with the atom-field Hamiltonian. The eigenstates of the number operator are the basis of tensor product states
formula_97 where the states formula_98 of the field are those with a definite number formula_1 of photons. The number operator formula_99 counts the "total" number formula_1 of quanta in the atom-field system.
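The commutation of the number operator with the atom-field Hamiltonian can be verified directly in a truncated Fock space; this sketch uses my own parameter choices and the ħ = 1 convention:

```python
import numpy as np

# Sketch verifying that the total-excitation ("number") operator commutes
# with the JC Hamiltonian in a truncated Fock space (parameters and the
# hbar = 1 convention are my assumptions). Every nonzero matrix element of
# H connects states with the same total excitation, so [N, H] vanishes.

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # truncated annihilation operator
sm = np.array([[0, 0], [1, 0]])               # sigma-, basis order (|e>, |g>)
I2, IN = np.eye(2), np.eye(N)

wc, wa, g = 1.0, 0.9, 0.1
H = (wc * np.kron(I2, a.T @ a)
     + 0.5 * wa * np.kron(np.diag([1, -1]), IN)
     + g * (np.kron(sm.T, a) + np.kron(sm, a.T)))

# Total number of quanta: photons plus atomic excitation.
Nop = np.kron(I2, a.T @ a) + np.kron(sm.T @ sm, IN)

print("||[N, H]|| =", np.linalg.norm(Nop @ H - H @ Nop))
```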
In this basis of eigenstates of formula_99 (total number states), the Hamiltonian takes on a block diagonal structure:
formula_100
With the exception of the scalar formula_101, each formula_102 on the diagonal is itself a formula_103 matrix of the form:
formula_104
Now, using the relation:
formula_105
We obtain the portion of the Hamiltonian that acts in the nth subspace as:
formula_106
By shifting the energy reference from formula_107 to formula_108 by the amount formula_109, we obtain
formula_110
where we have identified formula_111 as the Rabi frequency of the system, and formula_112 is the so-called "detuning" between the frequencies of the cavity and atomic transition. We have also defined the operators:
formula_113
to be the identity operator and Pauli x and z operators in the Hilbert space of the nth energy level of the atom-field system. This simple formula_114 Hamiltonian is of the same form as what would be found in the Rabi problem. Diagonalization gives the energy eigenvalues and eigenstates to be:
formula_115
Where the angle formula_116 is defined by the relation formula_117.
Vacuum Rabi oscillations.
Consider an atom entering the cavity initially in its excited state, while the cavity is initially in the vacuum. Moreover, one assumes that the angular frequency of the mode can be approximated to the atomic transition frequency, involving formula_118. Then the state of the atom-field system as a function of time is:
formula_119
So the probabilities to find the system in the ground or excited states after interacting with the cavity for a time formula_120 are:
formula_121
Thus the probability amplitude to find the atom in either state oscillates. This is the quantum mechanical explanation for the phenomenon of vacuum Rabi oscillation. In this case, there was only a single quantum in the atom-field system, carried in by the initially excited atom. In general, the Rabi oscillation associated with an atom-field system of formula_1 quanta will have frequency formula_122. As explained below, this discrete spectrum of frequencies is the underlying reason for the collapses and subsequent revivals of probabilities in the model.
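A short numerical sketch of these vacuum Rabi oscillations (with my own, assumed conventions: ħ = 1, exact resonance, coupling g between |e,0⟩ and |g,1⟩):

```python
import numpy as np

# Vacuum Rabi oscillation sketch (conventions mine: hbar = 1, exact
# resonance). The dynamics stay inside the two-dimensional manifold
# {|e,0>, |g,1>}, where the interaction reduces to H = g * sigma_x,
# so P_e(t) = cos^2(g t) and P_g(t) = sin^2(g t).

g = np.pi                                   # coupling strength (arbitrary units)
t = np.linspace(0.0, 2.0, 400)

H = g * np.array([[0.0, 1.0], [1.0, 0.0]])  # basis: {|e,0>, |g,1>}
w, V = np.linalg.eigh(H)
psi0 = np.array([1.0, 0.0])                 # start in |e,0>
c = V.T @ psi0

Pe = np.array([abs((V @ (np.exp(-1j * w * ti) * c))[0]) ** 2 for ti in t])
print("P_e(t) matches cos^2(g t):", np.allclose(Pe, np.cos(g * t) ** 2))
```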
Jaynes-Cummings ladder.
As shown in the previous subsection, if the initial state of the atom-cavity system is formula_123 or formula_124, as is the case for an atom initially in a definite state (ground or excited) entering a cavity containing a known number of photons, then the state of the atom-cavity system at later times becomes a superposition of the "new" eigenstates of the atom-cavity system:
formula_125
This change in eigenstates due to the alteration of the Hamiltonian caused by the atom-field interaction is sometimes called "dressing" the atom, and the new eigenstates are referred to as the dressed states.
The energy difference between the dressed states is:
formula_126
Of particular interest is the case where the cavity frequency is perfectly resonant with the transition frequency of the atom, so formula_127.
In the resonant case, the dressed states are:
formula_128
With energy difference formula_129. Thus the interaction of the atom with the field splits the degeneracy of the states formula_130 and formula_124 by formula_131. This non-linear hierarchy of energy levels scaling as formula_0 is known as the Jaynes-Cummings ladder. This non-linear splitting effect is purely quantum mechanical, and cannot be explained by any semi-classical model.
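The square-root scaling of the ladder can be recovered numerically from a truncated Jaynes-Cummings Hamiltonian; the conventions below (ħ = 1, resonant splitting 2g√(n+1) in the nth manifold) are my own assumptions:

```python
import numpy as np

# Sketch of the Jaynes-Cummings ladder at resonance (conventions mine,
# hbar = 1): the degenerate pair {|e,n>, |g,n+1>} splits by 2*g*sqrt(n+1),
# so successive splittings grow as the square root of the excitation
# number -- the non-linear ladder a semi-classical field cannot reproduce.

N = 10
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # truncated annihilation operator
sm = np.array([[0, 0], [1, 0]])               # sigma-, basis order (|e>, |g>)
I2, IN = np.eye(2), np.eye(N)

w0, g = 1.0, 0.05                             # resonant: wc = wa = w0
H = (w0 * np.kron(I2, a.T @ a)
     + 0.5 * w0 * np.kron(np.diag([1, -1]), IN)
     + g * (np.kron(sm.T, a) + np.kron(sm, a.T)))

E = np.sort(np.linalg.eigvalsh(H))
# Skip the lone ground state |g,0>; the next 2*(N-1) levels form split pairs.
pairs = E[1:2 * (N - 1) + 1].reshape(-1, 2)
splittings = pairs[:, 1] - pairs[:, 0]
expected = 2 * g * np.sqrt(np.arange(1, N))   # 2g*sqrt(n), n = 1..N-1
print("splittings match 2g*sqrt(n):", np.allclose(splittings, expected))
```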
Collapse and revival of probabilities.
Consider an atom initially in the ground state interacting with a field mode initially prepared in a coherent state, so the initial state of the atom-field system is:
formula_132
For simplicity, take the resonant case (formula_133); the Hamiltonian for the nth number subspace is then:
formula_134
Using this, the time evolution of the atom-field system will be:
formula_135
Note that the constant factors formula_136 and formula_137 contribute only an overall phase to the dynamics, since they represent the zero-point energy. In this case, the probability of finding the atom flipped to the excited state at a later time formula_138 is:
formula_139
where we have identified formula_140 as the mean photon number of the coherent state. If the mean photon number is large, then, since the statistics of the coherent state are Poissonian, the relative variance satisfies formula_141. Using this result and expanding formula_142 around formula_143 to lowest non-vanishing order in formula_1 gives:
formula_144
Inserting this into the sum yields a complicated product of exponentials:
formula_145
For "small" times such that formula_146, the inner exponential inside the double exponential in the last term can be expanded up second order to obtain:
formula_147
This result shows that the probability of occupation of the excited state "oscillates" with effective frequency formula_148. It also shows that it should decay over characteristic time:
formula_149
The collapse is easily understood as a consequence of the different frequency components dephasing and destructively interfering over time. However, the discreteness of the frequency spectrum leads to another interesting result in the longer time regime: the periodic nature of the slowly varying double exponential predicts that there should also be a "revival" of probability at time:
formula_150
The revival of probability is due to the re-phasing of the various discrete frequencies. If the field were classical, the frequencies would have a continuous spectrum, and such re-phasing could never occur within a finite time.
A plot of the probability that an atom, initially in the ground state, has transitioned to the excited state after interacting with a cavity prepared in a coherent state, as a function of the unitless parameter formula_151, is shown to the right. Note the initial collapse followed by the clear revival at longer times.
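The collapse and revival can be reproduced by summing the Poisson-weighted series for formula_139 directly. The following Python sketch (the function name and parameter values are illustrative assumptions) evaluates the truncated sum using a numerically stable recursion for the Poisson weights:

```python
import math

def excited_probability(t, nbar, omega=1.0, nmax=200):
    """P_e(t) = sum_n e^{-nbar} nbar^n / n! * sin^2(sqrt(n)*omega*t/2),
    truncated at nmax terms (ample for moderate mean photon numbers)."""
    total = 0.0
    log_pn = -nbar  # log of the Poisson weight, updated term by term
    for n in range(1, nmax):
        log_pn += math.log(nbar) - math.log(n)
        total += math.exp(log_pn) * math.sin(math.sqrt(n) * omega * t / 2) ** 2
    return total

# For nbar = 25 the oscillation collapses toward P_e ~ 1/2 on a time
# scale of order 1/omega and revives near t = 4*pi*sqrt(nbar)/omega.
```

Sampling this function over time reproduces the behavior described above: fast oscillations at the effective frequency formula_148, a Gaussian collapse, and a partial revival near the time formula_150.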
Collapses and revivals of quantum oscillations.
This plot of quantum oscillations of atomic inversion, for the squared scaled detuning parameter formula_152 (where formula_153 is the detuning), was built on the basis of formulas obtained by A. A. Karatsuba and E. A. Karatsuba.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\sqrt{n} "
},
{
"math_id": 1,
"text": " n "
},
{
"math_id": 2,
"text": "N>1"
},
{
"math_id": 3,
"text": "N=1"
},
{
"math_id": 4,
"text": "\\hat{H} = \\hat{H}_{\\text{field}} +\\hat{H}_{\\text{atom}} +\\hat{H}_{\\text{int}}"
},
{
"math_id": 5,
"text": "\n\\begin{align}\n\\hat{H}_\\text{field} &= \\hbar \\omega_c \\hat{a}^{\\dagger}\\hat{a}\\\\\n\\hat{H}_\\text{atom} &= \\hbar \\omega_a \\frac{\\hat{\\sigma}_z}{2}\\\\\n\\hat{H}_\\text{int} &= \\frac{\\hbar \\Omega}{2} \\hat{E} \\hat{S}.\n\\end{align}\n"
},
{
"math_id": 6,
"text": "0"
},
{
"math_id": 7,
"text": "\\hat{E} = E_\\text{ZPF}\\left( \\hat{a} +\\hat{a}^{\\dagger}\\right)"
},
{
"math_id": 8,
"text": "\\hat{a}^{\\dagger}"
},
{
"math_id": 9,
"text": "\\hat{a} "
},
{
"math_id": 10,
"text": "\\omega_c"
},
{
"math_id": 11,
"text": "\\hat{S} = \\hat{\\sigma}_+ +\\hat{\\sigma}_-"
},
{
"math_id": 12,
"text": "\\hat{\\sigma}_+ = |e \\rangle \\langle g |"
},
{
"math_id": 13,
"text": "\\hat{\\sigma}_- = |g \\rangle \\langle e |"
},
{
"math_id": 14,
"text": "\\hat{\\sigma}_z = |e \\rangle \\langle e | - |g \\rangle \\langle g |"
},
{
"math_id": 15,
"text": "\\omega_a"
},
{
"math_id": 16,
"text": "\\hat{H}_0 = \\hat{H}_{\\text{field}} + \\hat{H}_{\\text{atom}}"
},
{
"math_id": 17,
"text": "\\hat{H}_\\text{int}(t) = \\frac{\\hbar \\Omega}{2} \\left(\\hat{a}\\hat{\\sigma}_{-} e^{-i(\\omega_c+\\omega_a)t}\n+\\hat{a}^{\\dagger}\\hat{\\sigma}_{+}e^{i(\\omega_c+\\omega_a)t}\n+\\hat{a}\\hat{\\sigma}_{+} e^{-i (-\\omega_c+\\omega_a) t}\n+\\hat{a}^{\\dagger}\\hat{\\sigma}_{-} e^{i (-\\omega_c+\\omega_a) t}\\right)."
},
{
"math_id": 18,
"text": "(\\omega_c + \\omega_a)"
},
{
"math_id": 19,
"text": "(\\omega_c - \\omega_a)"
},
{
"math_id": 20,
"text": "\\hat{H}_{\\text{JC}} = \\hbar \\omega_c \\hat{a}^{\\dagger}\\hat{a}\n+\\hbar \\omega_a \\frac{\\hat{\\sigma}_z}{2}\n+\\frac{\\hbar \\Omega}{2} \\left(\\hat{a}\\hat{\\sigma}_+\n+\\hat{a}^{\\dagger}\\hat{\\sigma}_-\\right)."
},
{
"math_id": 21,
"text": "\\hat{H}_\\text{JC} = \\hat{H}_\\text{I} +\\hat{H}_\\text{II},"
},
{
"math_id": 22,
"text": "\n\\begin{align}\n\\hat{H}_\\text{I} &= \\hbar \\omega_c \\left(\\hat{a}^{\\dagger}\\hat{a} +\\frac{\\hat{\\sigma}_z}{2}\\right)\\\\\n\\hat{H}_\\text{II} &= \\hbar \\delta \\frac{\\hat{\\sigma}_z}{2}\n+\\frac{\\hbar \\Omega}{2} \\left(\\hat{a}\\hat{\\sigma}_+\n+\\hat{a}^{\\dagger}\\hat{\\sigma}_-\\right)\n\\end{align}\n"
},
{
"math_id": 23,
"text": "\\delta = \\omega_a - \\omega_c"
},
{
"math_id": 24,
"text": "\\hat{H}_{I}"
},
{
"math_id": 25,
"text": "|n+1,g\\rangle, |n,e\\rangle"
},
{
"math_id": 26,
"text": "n \\in \\mathbb{N}"
},
{
"math_id": 27,
"text": "|\\psi_{1n}\\rangle := |n,e\\rangle"
},
{
"math_id": 28,
"text": "|\\psi_{2n}\\rangle := |n+1,g\\rangle"
},
{
"math_id": 29,
"text": "n"
},
{
"math_id": 30,
"text": "\\hat{H}_{\\text{JC}}"
},
{
"math_id": 31,
"text": "\\operatorname{span} \\{ |\\psi_{1n}\\rangle ,|\\psi_{2n}\\rangle\\}"
},
{
"math_id": 32,
"text": "{H}^{(n)}_{ij} := \\langle\\psi_{in}|\\hat{H}_{\\text{JC}}|\\psi_{jn}\\rangle,"
},
{
"math_id": 33,
"text": "H^{(n)} = \\hbar\n\\begin{pmatrix}\nn \\omega_c +\\frac{\\omega_a}{2} & \\frac{\\Omega}{2} \\sqrt{n+1} \\\\[8pt]\n\\frac{\\Omega}{2} \\sqrt{n+1} & (n+1)\\omega_c -\\frac{\\omega_a}{2}\n\\end{pmatrix}\n"
},
{
"math_id": 34,
"text": "H^{(n)}"
},
{
"math_id": 35,
"text": "E_{\\pm}(n) = \\hbar \\omega_c \\left(n+\\frac{1}{2}\\right) \\pm \\frac{1}{2} \\hbar\\Omega_n(\\delta),"
},
{
"math_id": 36,
"text": " \\Omega_n(\\delta) = \\sqrt{\\delta^2 +\\Omega^2(n+1)}"
},
{
"math_id": 37,
"text": "|n,\\pm\\rangle"
},
{
"math_id": 38,
"text": "|n,+\\rangle= \\cos \\left(\\frac{\\alpha_n}{2}\\right)|\\psi_{1n}\\rangle+\\sin \\left(\\frac{\\alpha_n}{2}\\right)|\\psi_{2n}\\rangle"
},
{
"math_id": 39,
"text": "|n,-\\rangle= \\sin \\left(\\frac{\\alpha_n}{2}\\right)|\\psi_{1n}\\rangle-\\cos \\left(\\frac{\\alpha_n}{2}\\right) |\\psi_{2n}\\rangle"
},
{
"math_id": 40,
"text": "\\alpha_n"
},
{
"math_id": 41,
"text": "\\alpha_n := \\tan^{-1}\\left(\\frac{\\Omega \\sqrt{n+1}}{\\delta}\\right)."
},
{
"math_id": 42,
"text": "|\\psi_\\text{field}(0)\\rangle = \\sum_n{C_n|n\\rangle}"
},
{
"math_id": 43,
"text": "|\\psi_\\text{tot}(0)\\rangle=\\sum_n{C_n|n,e\\rangle}= \\sum_n C_n \\left[ \\cos \\left(\\frac{\\alpha_n}{2}\\right)|n,+\\rangle+\\sin \\left(\\frac{\\alpha_n}{2}\\right)|n,-\\rangle\\right]."
},
{
"math_id": 44,
"text": " t > 0 "
},
{
"math_id": 45,
"text": "|\\psi_\\text{tot}(t)\\rangle = e^{-i\\hat{H}_{\\text{JC}}t/\\hbar}|\\psi_\\text{tot}(0)\\rangle = \\sum_n C_n \\left[ \\cos \\left(\\frac{\\alpha_n}{2}\\right)|n,+\\rangle e^{-iE_+(n)t/\\hbar}+ \\sin \\left(\\frac{\\alpha_n}{2}\\right)|n,-\\rangle e^{-iE_-(n)t/\\hbar}\\right]."
},
{
"math_id": 46,
"text": "\\begin{matrix}\\begin{align}\n\\hat{U}(t) &= e^{-i\\hat{H}_{\\text{JC}}t/\\hbar}\\\\\n&=\n\\begin{pmatrix}\ne^{- i \\omega_c t \\left(\\hat{a}^{\\dagger} \\hat{a} + \\frac{1}{2}\\right)}\\left( \\cos t \\sqrt{\\hat{\\varphi} + g^2} - i \\delta/2 \\frac{\\sin t \\sqrt{\\hat{\\varphi} +\ng^2}}{\\sqrt{\\hat{\\varphi} + g^2}}\\right)\n& - i g e^{- i \\omega_c t \\left(\\hat{a}^{\\dagger} \\hat{a} + \\frac{1}{2}\\right)} \\frac{\\sin t \\sqrt{\\hat{\\varphi} + g^2}}{\\sqrt{\\hat{\\varphi} + g^2}} \\,\\hat{a} \\\\\n\n-i g e^{- i \\omega_c t \\left(\\hat{a}^{\\dagger} \\hat{a} - \\frac{1}{2}\\right)} \\frac{\\sin t \\sqrt{\\hat{\\varphi}}} {\\sqrt{\\hat{\\varphi}}} \\hat{a}^{\\dagger}\n& e^{- i \\omega_c t \\left(\\hat{a}^{\\dagger} \\hat{a} - \\frac{1}{2} \\right)} \\left( \\cos t \\sqrt{\\hat{\\varphi}} + i \\delta/2 \\frac{\\sin t \\sqrt{\\hat{\\varphi}}}{\\sqrt{\\hat{\\varphi} }}\\right)\n\\end{pmatrix}\n\\end{align}\\end{matrix}"
},
{
"math_id": 47,
"text": "\\hat{\\varphi}"
},
{
"math_id": 48,
"text": " \\hat{\\varphi} = g^2 \\hat{a}^{\\dagger} \\hat{a} + \\delta^2/4 "
},
{
"math_id": 49,
"text": " g "
},
{
"math_id": 50,
"text": " g = \\frac{\\Omega}{\\hbar}"
},
{
"math_id": 51,
"text": "\\hat{U}"
},
{
"math_id": 52,
"text": "\\begin{align}\n\\frac{\\sin t\\,\\sqrt{\\hat{\\varphi} + g^2}}{\\sqrt{\\hat{\\varphi} + g^2}}\\; \\hat{a} &= \\hat{a}\\; \\frac{\\sin t\\,\\sqrt{\\hat{\\varphi}}}{\\sqrt{\\hat{\\varphi}}} , \\\\\n\\cos t\\, \\sqrt{\\hat{\\varphi} + g^2}\\; \\hat{a} &= \\hat{a}\\; \\cos t \\sqrt{\\hat{\\varphi}},\n\\end{align}"
},
{
"math_id": 53,
"text": "\\hat{\\rho}(t)"
},
{
"math_id": 54,
"text": "\\hat{\\rho}(t) = \\hat{U}^{\\dagger}(t)\\hat{\\rho}(0)\\hat{U}(t)"
},
{
"math_id": 55,
"text": "\\langle\\hat{\\Theta}\\rangle_{t}=\\text{Tr}[\\hat{\\rho}(t)\\hat{\\Theta}]"
},
{
"math_id": 56,
"text": "\\hat{\\rho}(0) "
},
{
"math_id": 57,
"text": " \\hat{\\Theta}"
},
{
"math_id": 58,
"text": " \\hat{H} = \\hat{H}_{A} + \\hat{H}_F + \\hat{H}_{AF}"
},
{
"math_id": 59,
"text": "\\hat{H}_A= E_g|g\\rangle\\langle g| +E_e|e\\rangle\\langle e| "
},
{
"math_id": 60,
"text": " e, g "
},
{
"math_id": 61,
"text": " \\hat{H}_A= E_e|e\\rangle\\langle e|=\\hbar \\omega_{eg}|e\\rangle \\langle e|"
},
{
"math_id": 62,
"text": " \\omega_{eg} "
},
{
"math_id": 63,
"text": " \\hat{H}_F=\\sum_{\\mathbf{k},\\lambda}\\hbar\\omega_{\\mathbf{k}}\\left(\\hat{a}^{\\dagger}_{\\mathbf{k},\\lambda}\\hat{a}_{\\mathbf{k},\\lambda}+\\frac{1}{2}\\right)"
},
{
"math_id": 64,
"text": "\\mathbf{k}"
},
{
"math_id": 65,
"text": "\\lambda"
},
{
"math_id": 66,
"text": " \\hat{a}^{\\dagger}_{\\mathbf{k},\\lambda} "
},
{
"math_id": 67,
"text": " \\hat{a}_{\\mathbf{k},\\lambda} "
},
{
"math_id": 68,
"text": " \\hat{H}_F = \\hbar\\omega_c\\left(\\hat{a}^{\\dagger}_c \\hat{a}_c + \\frac{1}{2}\\right)"
},
{
"math_id": 69,
"text": "c"
},
{
"math_id": 70,
"text": "\\hat{H}_{AF} =-\\hat{\\mathbf{d}}\\cdot\\hat{\\mathbf{E}}(\\mathbf{R})"
},
{
"math_id": 71,
"text": " \\mathbf{R} "
},
{
"math_id": 72,
"text": "\\hat{\\mathbf{E}}(\\mathbf{R})=i \\sum_{\\mathbf{k},\\lambda}\\sqrt{\\frac{2\\pi\\hbar\\omega_\\mathbf{k}}{V}}\n\\mathbf{u}_{\\mathbf{k},\\lambda}\n\\left(\\hat{a}_{\\mathbf{k},\\lambda}e^{i \\mathbf{k}\\cdot\\mathbf{R}}\n-\\hat{a}^\\dagger_{\\mathbf{k},\\lambda}e^{-i \\mathbf{k}\\cdot\\mathbf{R}}\\right)"
},
{
"math_id": 73,
"text": "\\hat{\\mathbf{d}}=\\hat{\\sigma}_+\\langle e| \\hat{\\mathbf{d}}|g\\rangle +\\hat{\\sigma}_- \\langle g| \\hat{\\mathbf{d}}|e\\rangle"
},
{
"math_id": 74,
"text": "\\mathbf{R}=\\mathbf{0}"
},
{
"math_id": 75,
"text": " \\hbar g_{\\mathbf{k},\\lambda} = i\\sqrt{\\frac{2 \\pi \\hbar\\omega_{\\mathbf{k}}}{V}}\\langle e| \\hat{\\mathbf{d}}|g\\rangle\\cdot\\mathbf{u}_{\\mathbf{k},\\lambda},"
},
{
"math_id": 76,
"text": " \\mathbf{u}_{\\mathbf{k},\\lambda} "
},
{
"math_id": 77,
"text": " \\hat{H}_{AF} = -\\sum_{\\mathbf{k},\\lambda}\\hbar\\left(g_{\\mathbf{k},\\lambda}\\hat{\\sigma}_+\\hat{a}_{\\mathbf{k},\\lambda}-g^*_{\\mathbf{k},\\lambda}\\hat{\\sigma}_-\\hat{a}^{\\dagger}_{\\mathbf{k},\\lambda} -g_{\\mathbf{k},\\lambda}\\hat{\\sigma}_+\\hat{a}^{\\dagger}_{\\mathbf{k},\\lambda}+g^*_{\\mathbf{k},\\lambda}\\hat{\\sigma}_-\\hat{a}_{\\mathbf{k},\\lambda}\\right),"
},
{
"math_id": 78,
"text": " \\hat{\\sigma}_ +=|e\\rangle\\langle g|"
},
{
"math_id": 79,
"text": " \\hat{\\sigma}_-=|g\\rangle\\langle e|"
},
{
"math_id": 80,
"text": "\\{|e\\rangle,|g\\rangle\\} "
},
{
"math_id": 81,
"text": " \\hat{H}_{AF} = \\hbar \\left[\\left(g_c \\hat{\\sigma}_+ \\hat{a}_c - g_c^* \\hat{\\sigma}_- \\hat{a}_c^{\\dagger}\\right) + \\left(-g_c \\hat{\\sigma}_+ \\hat{a}_c^{\\dagger} + g_c^* \\hat{\\sigma}_- \\hat{a}_c\\right)\\right]"
},
{
"math_id": 82,
"text": " \\hat{H}_0=\\hat{H}_A+\\hat{H}_F "
},
{
"math_id": 83,
"text": " \\hat{H}_{AF}(t)=e^{i\\hat{H}_0t/\\hbar}\\hat{H}_{AF}e^{-i\\hat{H}_0t/\\hbar}=\\hbar\\left(g_c\\hat{\\sigma}_+\\hat{a}_c^{\\dagger}e^{i(\\omega_c+\\omega_{eg})t}+g_c^*\\hat{\\sigma}_-\\hat{a}_ce^{-i(\\omega_c+\\omega_{eg})t}-g_c^*\\hat{\\sigma}_-\\hat{a}_c^{\\dagger}e^{-i(\\omega_{eg}-\\omega_c)t}-g_c\\hat{\\sigma}_+\\hat{a}_ce^{i(\\omega_{eg}-\\omega_c)t}\\right)"
},
{
"math_id": 84,
"text": " |\\omega_{eg}-\\omega_c| \\ll \\omega_{eg}+\\omega_c"
},
{
"math_id": 85,
"text": " \\omega_{eg} -\\omega_c \\simeq 0"
},
{
"math_id": 86,
"text": " \\omega_{eg}+\\omega_c\\simeq 2\\omega_c "
},
{
"math_id": 87,
"text": " \\tau = \\frac{2\\pi}{\\Delta}, \\Delta \\equiv \\omega_{eg}-\\omega_c "
},
{
"math_id": 88,
"text": " \\frac{2 \\pi}{2\\omega_c} \\ll \\tau "
},
{
"math_id": 89,
"text": " g_c "
},
{
"math_id": 90,
"text": " \\hat{H}_{AF}(t)=-\\hbar g_c \\left(\\hat{\\sigma}_+\\hat{a}_ce^{i(\\omega_{eg}-\\omega_c)t}+\\hat{\\sigma}_-\\hat{a}_c^{\\dagger}e^{-i(\\omega_{eg}-\\omega_c)t}\\right) "
},
{
"math_id": 91,
"text": " \\hat{H}_{AF}=e^{-i\\hat{H}_0t/\\hbar}\\hat{H}_{AF}(t)e^{i\\hat{H}_0t/\\hbar} = \\hbar g_c \\left(\\hat{\\sigma}_+\\hat{a}_c+\\hat{\\sigma}_-\\hat{a}_c^{\\dagger}\\right)"
},
{
"math_id": 92,
"text": "\n\\hat{H}_{JC}= \\hbar \\omega_c\\left(\\hat{a}^{\\dagger}_c\\hat{a}_c+\\frac{1}{2}\\right)+\\hbar\\omega_{eg} |e\\rangle\\langle e|+\\hbar g_c \\left(\\hat{\\sigma}_+\\hat{a}_c+\\hat{\\sigma}_-\\hat{a}_c^{\\dagger}\\right)"
},
{
"math_id": 93,
"text": "\\frac{1}{2}\\hbar \\omega_c"
},
{
"math_id": 94,
"text": " \\hat{H}_{JC}= \\hbar \\omega_c\\hat{a}^{\\dagger}_c\\hat{a}_c+\\hbar\\omega_{eg}|e\\rangle\\langle e|+\\hbar g_c \\left(\\hat{\\sigma}_+\\hat{a}_c+\\hat{\\sigma}_-\\hat{a}_c^{\\dagger}\\right)"
},
{
"math_id": 95,
"text": " \\hat{N}=|e\\rangle\\langle e| +\\hat{a}_c^{\\dagger}\\hat{a}_c "
},
{
"math_id": 96,
"text": "\\begin{align}\n \\left[\\hat{H}_{AF},\\hat{N}\\right] &= \\hbar g_c\\left( \\left[\\hat{a}_c\\hat{\\sigma}_+,|e\\rangle\\langle e| +\\hat{a}_c^{\\dagger}\\hat{a}_c\\right]+\\left[\\hat{a}_c^{\\dagger}\\hat{\\sigma}_-,|e\\rangle\\langle e| +\\hat{a}_c^{\\dagger}\\hat{a}_c\\right]\\right)\\\\\n&= \\hbar g_c \\left(\\hat{a}_c\\left[\\hat{\\sigma}_+,|e\\rangle\\langle e|\\right]+\\left[\\hat{a}_c,\\hat{a}_c^{\\dagger}\\hat{a}_c\\right]\\hat{\\sigma}_++\\hat{a}_c^{\\dagger}\\left[\\hat{\\sigma}_-,|e\\rangle\\langle e|\\right]+\\left[\\hat{a}_c^{\\dagger},\\hat{a}_c^{\\dagger}\\hat{a}_c\\right]\\hat{\\sigma}_-\\right)\\\\\n&=\\hbar g_c \\left( -\\hat{a}_c\\hat{\\sigma}_++\\hat{a}_c\\hat{\\sigma}_++\\hat{a}_c^{\\dagger}\\hat{\\sigma}_--\\hat{a}_c^{\\dagger}\\hat{\\sigma}_-\\right)\\\\\n&=0\n\\end{align} "
},
{
"math_id": 97,
"text": " \\left\\{|g,0\\rangle; |e,0\\rangle ,|g,1\\rangle ; \\cdots ;|e,n-1\\rangle,|g,n\\rangle \\right\\} "
},
{
"math_id": 98,
"text": " \\left\\{ |n\\rangle \\right\\} "
},
{
"math_id": 99,
"text": " \\hat{N} "
},
{
"math_id": 100,
"text": " \\hat{H}_{JC}=\\begin{bmatrix} \nH_0 &0 & 0 & 0&\\cdots &\\cdots &\\cdots\\\\\n0 & \\hat{H}_1 & 0 & 0 &\\ddots &\\ddots &\\ddots \\\\\n0 & 0 & \\hat{H}_2 & 0 & \\ddots & \\ddots &\\ddots \\\\\n\\vdots & \\ddots & \\ddots & \\ddots &\\ddots & \\ddots & \\ddots \\\\\n\\vdots &\\ddots & \\ddots & 0 & \\hat{H}_n & 0 &\\ddots \\\\\n\\vdots &\\ddots&\\ddots&\\ddots&\\ddots&\\ddots & \\ddots\\\\ \n\\end{bmatrix} "
},
{
"math_id": 101,
"text": " H_0 "
},
{
"math_id": 102,
"text": " \\hat{H}_n "
},
{
"math_id": 103,
"text": " 2 \\times 2 "
},
{
"math_id": 104,
"text": " \\hat{H}_n=\\begin{bmatrix}\n\\hbar\\omega_c(n-1)+ \\hbar\\omega_{eg} & \\langle e,n-1|\\hat{H}_{JC}|g,n\\rangle \\\\\n\\langle g,n|\\hat{H}_{JC}|e,n-1 \\rangle & n\\hbar \\omega_c \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 105,
"text": "\n\\langle g,n|\\hat{H}_{JC}|e,n-1\\rangle\n= \\hbar g_c \\langle g,n|\\hat{a}_c^{\\dagger}\\hat{\\sigma}_-|e,n-1\\rangle+\\hbar g_c\\langle g,n|\\hat{a}_c\\hat{\\sigma}_+|e,n-1\\rangle\n=\\sqrt{n}\\hbar g_c\n"
},
{
"math_id": 106,
"text": " \\hat{H}_n=\\begin{bmatrix}\nn\\hbar\\omega_c-\\hbar\\Delta & \\frac{\\sqrt{n}\\hbar\\Omega}{2}\\\\\n\\frac{\\sqrt{n}\\hbar\\Omega}{2} & n\\hbar\\omega_c \\\\\n\\end{bmatrix}"
},
{
"math_id": 107,
"text": " |e\\rangle "
},
{
"math_id": 108,
"text": " |g\\rangle "
},
{
"math_id": 109,
"text": " \\frac{1}{2}\\hbar\\Delta "
},
{
"math_id": 110,
"text": " \\hat{H}_n=\\begin{bmatrix}\nn\\hbar\\omega_c-\\frac{1}{2}\\hbar\\Delta & \\frac{\\sqrt{n}\\hbar\\Omega}{2}\\\\\n\\frac{\\sqrt{n}\\hbar\\Omega}{2} & n\\hbar\\omega_c+\\frac{1}{2}\\hbar\\Delta \\\\\n\\end{bmatrix}\n\n=n\\hbar\\omega_c\\hat{I}^{(n)}-\\frac{\\hbar\\Delta}{2}\\hat{\\sigma}_z^{(n)}+\\frac{1}{2}\\sqrt{n}\\hbar\\Omega\\hat{\\sigma}_x^{(n)}\n"
},
{
"math_id": 111,
"text": " 2g_c = \\Omega "
},
{
"math_id": 112,
"text": " \\Delta=\\omega_c-\\omega_{eg} "
},
{
"math_id": 113,
"text": "\\begin{align}\n\\hat{I}^{(n)} &= \\left|e,n-1\\right\\rangle \\left\\langle e,n-1\\right| + \\left|g,n\\right\\rangle \\left\\langle g,n\\right| \\\\[1ex]\n\\hat{\\sigma}_z^{(n)} &= \\left|e,n-1\\right\\rangle \\left\\langle e,n-1\\right| - \\left|g,n\\right\\rangle \\left\\langle g,n\\right| \\\\[1ex]\n\\hat{\\sigma}_x^{(n)} &= \\left|e,n-1\\right\\rangle \\left\\langle g,n\\right| + \\left|g,n\\right\\rangle \\left\\langle e,n-1\\right|. \\\\[-1ex]\\,\n\\end{align}"
},
{
"math_id": 114,
"text": " 2\\times2 "
},
{
"math_id": 115,
"text": "\\begin{align}\nE_{n,\\pm}&=\\left(n\\hbar\\omega_c-\\frac{1}{2}\\hbar\\Delta\\right) \\pm \\frac{1}{2}\\hbar\\sqrt{\\Delta^2+n\\Omega^2}\\\\\n|n,+\\rangle &=\\cos\\left(\\frac{\\theta_n}{2}\\right)|e,n-1\\rangle+\\sin\\left(\\frac{\\theta_n}{2}\\right)|g,n\\rangle\\\\\n|n,-\\rangle &=\\cos\\left(\\frac{\\theta_n}{2}\\right)|g,n\\rangle -\\sin\\left(\\frac{\\theta_n}{2}\\right)|e,n-1\\rangle\\\\\n\\end{align} "
},
{
"math_id": 116,
"text": " \\theta_n "
},
{
"math_id": 117,
"text": " \\tan\\theta_n=-\\frac{\\sqrt{n}\\Omega}{\\Delta} "
},
{
"math_id": 118,
"text": " \\Delta \\approx 0\n"
},
{
"math_id": 119,
"text": " |\\psi (t)\\rangle = \\cos\\left(\\frac{\\Omega t}{2}\\right)|e,0\\rangle-i\\sin\\left(\\frac{\\Omega t}{2}\\right)|g,1\\rangle"
},
{
"math_id": 120,
"text": " t "
},
{
"math_id": 121,
"text": " \\begin{align}\nP_e(t)&=|\\langle e,0|\\psi (t) \\rangle |^2=\\cos^2\\left(\\frac{\\Omega t}{2}\\right)\\\\\nP_g(t)&=|\\langle g,1|\\psi (t) \\rangle |^2=\\sin^2\\left(\\frac{\\Omega t}{2}\\right)\\\\\n\\end{align} "
},
{
"math_id": 122,
"text": " \\Omega_n=\\frac{\\sqrt{n}\\Omega}{2} "
},
{
"math_id": 123,
"text": " |e,n-1\\rangle "
},
{
"math_id": 124,
"text": " |g,n\\rangle "
},
{
"math_id": 125,
"text": "\\begin{align}\n|n,+\\rangle &=\\cos\\left(\\frac{\\theta_n}{2}\\right)|e,n-1\\rangle+\\sin\\left(\\frac{\\theta_n}{2}\\right)|g,n\\rangle\\\\\n|n,-\\rangle &=\\cos\\left(\\frac{\\theta_n}{2}\\right)|g,n\\rangle -\\sin\\left(\\frac{\\theta_n}{2}\\right)|e,n-1\\rangle\\\\\n\\end{align} "
},
{
"math_id": 126,
"text": "\\delta E=E_+-E_-=\\hbar\\sqrt{\\Delta^2+n\\Omega^2}"
},
{
"math_id": 127,
"text": " \\omega_{eg}=\\omega_c\\implies\\Delta=0"
},
{
"math_id": 128,
"text": "|n,\\pm \\rangle = \\frac{1}{\\sqrt{2}}\\left(|g,n \\rangle\\mp|e,n-1\\rangle\\right)"
},
{
"math_id": 129,
"text": " \\delta E =\\sqrt{n} \\hbar\\Omega "
},
{
"math_id": 130,
"text": " |e,n-1\\rangle "
},
{
"math_id": 131,
"text": " \\sqrt{n} \\hbar \\Omega "
},
{
"math_id": 132,
"text": " |\\psi (0)\\rangle = |g,\\alpha \\rangle = \\sum_{n=0}^\\infty e^{-|\\alpha|^2/2}\\frac{\\alpha ^n}{\\sqrt{n!}}|g,n\\rangle "
},
{
"math_id": 133,
"text": " \\Delta = 0"
},
{
"math_id": 134,
"text": "\\hat{H}_n=\\left(n+\\frac{1}{2}\\right)\\hat{I}^{(n)}+\\frac{\\hbar\\sqrt{n}\\Omega}{2}\\hat{\\sigma}_x^{(n)} "
},
{
"math_id": 135,
"text": "\\begin{align}\n|\\psi (t) \\rangle &= e^{-i\\hat{H}_nt /\\hbar}|\\psi(0) \\rangle \\\\\n&=e^{-|\\alpha|^2/2}|g,0\\rangle+\\sum_{n=1}^\\infty e^{-|\\alpha|^2/2}\\frac{\\alpha^n}{\\sqrt{n!}}e^{-in\\omega_c t} \\left(\\cos{(\\sqrt{n}\\Omega t/2)}\\hat{I}^{(n)}-i\\sin{(\\sqrt{n}\\Omega t /2)}\\hat{\\sigma}_x^{(n)}\\right)|g,n\\rangle\\\\\n&=e^{-|\\alpha|^2/2}|g,0\\rangle+\\sum_{n=1}^\\infty e^{-|\\alpha|^2/2}\\frac{\\alpha^n}{\\sqrt{n!}}e^{-in\\omega_c t} \\left(\\cos{(\\sqrt{n}\\Omega t/2)}|g,n\\rangle-i\\sin{(\\sqrt{n}\\Omega t /2)}|e,n-1\\rangle\\right)\n\\end{align}"
},
{
"math_id": 136,
"text": " \\frac{\\hbar\\omega_c}{2}\\hat{I}^{(n)} "
},
{
"math_id": 137,
"text": " \\hat{H}_0 "
},
{
"math_id": 138,
"text": " t"
},
{
"math_id": 139,
"text": "\\begin{align}\nP_e(t) = \\left|\\langle e|\\psi (t)\\rangle \\right|^2 &= \\sum_{n=1}^\\infty\\frac{e^{-|\\alpha|^2}}{n!}|\\alpha|^{2n} \\sin^2\\left(\\tfrac{1}{2} \\sqrt{n} \\Omega t\\right) \\\\[2ex]\n&= \\sum_{n=1}^\\infty\\frac{e^{-\\langle n \\rangle}\\langle n \\rangle^n}{n!} \\sin^2\\left(\\tfrac{1}{2} \\sqrt{n}\\Omega t \\right) \\\\[2ex]\n&= \\sum_{n=1}^\\infty\\frac{e^{-\\langle n \\rangle}\\langle n \\rangle^n}{n!} \\sin^2(\\Omega_n t) \\\\{}\n\\end{align}"
},
{
"math_id": 140,
"text": " \\langle n \\rangle = |\\alpha|^2 "
},
{
"math_id": 141,
"text": " \\langle (\\Delta n)^2\\rangle /\\langle n \\rangle ^2 \\simeq 1/\\langle n \\rangle "
},
{
"math_id": 142,
"text": " \\Omega_n "
},
{
"math_id": 143,
"text": " \\langle n \\rangle "
},
{
"math_id": 144,
"text": "\\Omega_n\\simeq\\frac{\\Omega}{2}\\sqrt{\\langle n \\rangle}\\left(1+\\frac{1}{2}\\frac{n-\\langle n \\rangle}{\\langle n \\rangle}\\right) "
},
{
"math_id": 145,
"text": " P_e(t)\\simeq \\frac{1}{2}-\\frac{e^{-\\langle n\\rangle}}{4}\\cdot\\left(e^{-i\\sqrt{\\langle n \\rangle }\\Omega t/2} \\exp\\left[\\langle n \\rangle \\exp\\left(-\\frac{i\\Omega t}{2 \\sqrt{\\langle n \\rangle}}\\right)\\right]+e^{i\\sqrt{\\langle n \\rangle }\\Omega t/2} \\exp\\left[\\langle n \\rangle \\exp\\left(\\frac{i\\Omega t}{2 \\sqrt{\\langle n \\rangle}}\\right)\\right]\\right) "
},
{
"math_id": 146,
"text": " \\frac{\\Omega t}{2} \\ll \\sqrt{\\langle n \\rangle} "
},
{
"math_id": 147,
"text": "P_e(t)\\simeq \\frac{1}{2}-\\frac{1}{2}\\cdot \\cos\\left[\\sqrt{\\langle n \\rangle}\\Omega t\\right]e^{-\\Omega^2 t^2/8}"
},
{
"math_id": 148,
"text": " \\Omega_{\\text{eff}} = \\sqrt{\\langle n \\rangle}\\Omega "
},
{
"math_id": 149,
"text": " \\tau_c=\\frac{\\sqrt{2}}{\\Omega} "
},
{
"math_id": 150,
"text": " \\tau_r=\\frac{4\\pi}{\\Omega}\\sqrt{\\langle n \\rangle} ."
},
{
"math_id": 151,
"text": " gt = \\Omega t /2 "
},
{
"math_id": 152,
"text": "a = (\\delta/2g)^2 = 40"
},
{
"math_id": 153,
"text": "\\delta"
}
] |
https://en.wikipedia.org/wiki?curid=5907185
|
5907688
|
Digital topology
|
Properties of 2D or 3D digital images that correspond to classic topological properties
Digital topology deals with properties and features of two-dimensional (2D) or three-dimensional (3D) digital images
that correspond to topological properties (e.g., connectedness) or topological features (e.g., boundaries) of objects.
Concepts and results of digital topology are used to specify and justify important (low-level) image analysis algorithms,
including algorithms for thinning, border or surface tracing, counting of components or tunnels, or region-filling.
History.
Digital topology was first studied in the late 1960s by the computer image analysis researcher Azriel Rosenfeld (1931–2004), whose publications on the subject played a major role in establishing and developing the field. The term "digital topology" was itself coined by Rosenfeld, who first used it in a 1973 publication.
A related concept, the grid cell topology, which can be considered a link to classic combinatorial topology, appeared in the book Topologie I (1935) by Pavel Alexandrov and Heinz Hopf. Rosenfeld "et al." proposed digital connectivity such as 4-connectivity and 8-connectivity in two dimensions, as well as 6-connectivity and 26-connectivity in three dimensions. The labeling method for inferring a connected component was studied in the 1970s. Theodosios Pavlidis (1982) suggested the use of graph-theoretic algorithms such as depth-first search for finding connected components. Vladimir A. Kovalevsky (1989) extended the Alexandrov–Hopf 2D grid cell topology to three and higher dimensions. He also proposed (2008) a more general axiomatic theory of locally finite topological spaces and abstract cell complexes, earlier suggested by Ernst Steinitz (1908); this is the Alexandrov topology. The 2008 book contains new definitions of topological balls and spheres independent of a metric, and numerous applications to digital image analysis.
In the early 1980s, digital surfaces were studied. David Morgenthaler and Rosenfeld (1981) gave a mathematical definition of surfaces in three-dimensional digital space. This definition contains a total of nine types of digital surfaces. The digital manifold was studied in the 1990s. A recursive definition of the digital k-manifold was proposed intuitively by Chen and Zhang in 1993. Many applications were found in image processing and computer vision.
Basic results.
A basic (early) result in digital topology says that 2D binary images require the alternative use of 4- or 8-adjacency or "pixel connectivity" (for "object" or "non-object"
pixels) to ensure the basic topological duality of separation and connectedness. This alternative use corresponds to open or closed
sets in the 2D grid cell topology, and the result generalizes to 3D: the alternative use of 6- or 26-adjacency corresponds
to open or closed sets in the 3D grid cell topology. Grid cell topology also applies to multilevel (e.g., color) 2D or 3D images,
for example based on a total order of possible image values and applying a 'maximum-label rule' (see the book by Klette and Rosenfeld, 2004).
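The role of the adjacency choice is easy to see in a toy labeling experiment. The sketch below (in Python; a generic flood-fill, not any specific published algorithm) counts the connected components of a set of object pixels under either 4- or 8-adjacency:

```python
from collections import deque

def count_components(pixels, adjacency=8):
    """Count connected components of a set of (row, col) object pixels
    under 4- or 8-adjacency, using breadth-first flood fill."""
    if adjacency == 4:
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        steps = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                 if (dr, dc) != (0, 0)]
    remaining = set(pixels)
    components = 0
    while remaining:
        components += 1
        queue = deque([remaining.pop()])
        while queue:
            r, c = queue.popleft()
            for dr, dc in steps:
                if (r + dr, c + dc) in remaining:
                    remaining.remove((r + dr, c + dc))
                    queue.append((r + dr, c + dc))
    return components
```

A single diagonal pair of pixels forms one component under 8-adjacency but two under 4-adjacency, which is why object and background pixels must use complementary adjacencies to preserve the duality of separation and connectedness.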
Digital topology is closely related to combinatorial topology. The main differences between them are: (1) digital topology mainly studies digital objects that are formed by grid cells (the cells of integer lattices), rather than more general cell complexes; and (2) digital topology also deals with non-Jordan manifolds.
A combinatorial manifold is a kind of manifold which is a discretization of a manifold; it usually means a piecewise linear manifold made of simplicial complexes. A digital manifold is a special kind of combinatorial manifold defined in digital space, i.e., grid cell space.
A digital form of the Gauss–Bonnet theorem is: Let "M" be a closed digital 2D manifold in direct adjacency (i.e., a (6,26)-surface in 3D).
The formula for genus is
formula_0,
where formula_1 denotes the number of surface points that each have "i" adjacent points on the surface (Chen and Rong, ICPR 2008).
If "M" is simply connected, i.e., formula_2, then formula_3. (See also Euler characteristic.)
|
[
{
"math_id": 0,
"text": " g = 1 + (M_{5} + 2 M_{6} - M_{3}) / 8"
},
{
"math_id": 1,
"text": "M_i"
},
{
"math_id": 2,
"text": "g=0"
},
{
"math_id": 3,
"text": "M_3= 8+ M_5+ 2M_6"
}
] |
https://en.wikipedia.org/wiki?curid=5907688
|
5908484
|
Dynamic pressure
|
Kinetic energy per unit volume of a fluid
In fluid dynamics, dynamic pressure (denoted by q or Q and sometimes called velocity pressure) is the quantity defined by:
formula_0
where (in SI units):
"q" is the dynamic pressure in pascals (i.e., kg/(m·s²)),
"ρ" is the fluid mass density (in kg/m³), and
"u" is the flow speed (in m/s).
It can be thought of as the fluid's kinetic energy per unit volume.
For incompressible flow, the dynamic pressure of a fluid is the difference between its total pressure and static pressure. From Bernoulli's law, dynamic pressure is given by
formula_1
where "p"0 and "p"s are the total and static pressures, respectively.
Physical meaning.
Dynamic pressure is the kinetic energy per unit volume of a fluid. Dynamic pressure is one of the terms of Bernoulli's equation, which can be derived from the conservation of energy for a fluid in motion.
At a stagnation point the dynamic pressure is equal to the difference between the stagnation pressure and the static pressure, so the dynamic pressure in a flow field can be measured at a stagnation point.
Another important aspect of dynamic pressure is that, as dimensional analysis shows, the aerodynamic stress (i.e. stress within a structure subject to aerodynamic forces) experienced by an aircraft travelling at speed formula_2 is proportional to the air density and the square of formula_2, i.e. proportional to formula_3. Therefore, by looking at the variation of formula_3 during flight, it is possible to determine how the stress will vary and in particular when it will reach its maximum value. The point of maximum aerodynamic load is often referred to as "max q" and it is a critical parameter in many applications, such as launch vehicles.
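The "max q" idea can be illustrated numerically. In the Python sketch below, the density and speed profiles are purely illustrative assumptions (an exponential atmosphere and a linear speed ramp), not flight data:

```python
import math

def dynamic_pressure(rho, u):
    """q = (1/2) * rho * u**2, in Pa for SI inputs."""
    return 0.5 * rho * u ** 2

# Assumed profiles for illustration only: rho(h) = 1.225*exp(-h/8500)
# kg/m^3 (exponential atmosphere) and u(h) = 0.4*h m/s (linear ramp).
def q_at_altitude(h):
    return dynamic_pressure(1.225 * math.exp(-h / 8500.0), 0.4 * h)

# Density falls while speed rises, so q peaks at an intermediate
# altitude (analytically h = 2 * 8500 m for these profiles).
max_q_altitude = max(range(0, 40001, 100), key=q_at_altitude)
```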
Dynamic pressure can also appear as a term in the incompressible Navier-Stokes equation which may be written:
formula_4
By a vector calculus identity (formula_5)
formula_6
so that for incompressible, irrotational flow (formula_7), the second term on the left in the Navier-Stokes equation is just the gradient of the dynamic pressure. In hydraulics, the term formula_8 is known as the hydraulic velocity head (hv) so that the dynamic pressure is equal to formula_9.
Uses.
The dynamic pressure, along with the static pressure and the pressure due to elevation, is used in Bernoulli's principle as an energy balance on a closed system. The three terms are used to define the state of a closed system of an incompressible, constant-density fluid.
When the dynamic pressure is divided by the product of fluid density and acceleration due to gravity, g, the result is called velocity head, which is used in head equations like the one used for pressure head and hydraulic head. In a venturi flow meter, the "differential pressure head" can be used to calculate the "differential velocity head", to which it is equivalent. An alternative name for "velocity head" is "dynamic head".
Compressible flow.
Many authors define "dynamic pressure" only for incompressible flows. (For compressible flows, these authors use the concept of impact pressure.) However, the definition of "dynamic pressure" can be extended to include compressible flows.
For compressible flow the isentropic relations can be used (also valid for incompressible flow):
formula_10
where:
"p"s is the static pressure,
"γ" is the heat capacity ratio (ratio of specific heats), and
"M" is the Mach number.
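A short numerical check of this relation against the incompressible result is sketched below in Python (γ = 1.4 for air and the function names are assumptions made here; the incompressible form uses ½ρu² rewritten as (γ/2)·p·M² via the ideal-gas sound speed a² = γp/ρ):

```python
def compressible_q(p_static, mach, gamma=1.4):
    """q = p_s * ((1 + (gamma-1)/2 * M**2)**(gamma/(gamma-1)) - 1)."""
    return p_static * ((1 + (gamma - 1) / 2 * mach ** 2)
                       ** (gamma / (gamma - 1)) - 1)

def incompressible_q(p_static, mach, gamma=1.4):
    """Low-Mach limit: (1/2)*rho*u**2 = (gamma/2) * p_s * M**2."""
    return 0.5 * gamma * p_static * mach ** 2
```

At low Mach number the two expressions agree closely, while near Mach 1 the compressible value is noticeably larger, which is the usual compressibility correction.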
References.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "q = \\frac{1}{2}\\rho\\, u^2"
},
{
"math_id": 1,
"text": " p_0 - p_\\text{s} = \\frac{1}{2}\\rho\\, u^2"
},
{
"math_id": 2,
"text": "v"
},
{
"math_id": 3,
"text": "q"
},
{
"math_id": 4,
"text": "\\rho\\frac{\\partial \\mathbf{u}}{\\partial t} + \\rho(\\mathbf{u} \\cdot \\nabla) \\mathbf{u} - \\rho\\nu \\,\\nabla^2 \\mathbf{u} = - \\nabla p + \\rho\\mathbf{g}"
},
{
"math_id": 5,
"text": "u=| \\mathbf{u} |"
},
{
"math_id": 6,
"text": "\\nabla (u^2/2)=(\\mathbf{u}\\cdot \\nabla) \\mathbf{u} + \\mathbf{u} \\times (\\nabla \\times \\mathbf{u})"
},
{
"math_id": 7,
"text": "\\nabla \\times \\mathbf{u}=0"
},
{
"math_id": 8,
"text": "u^2/2g"
},
{
"math_id": 9,
"text": "\\rho g h_v"
},
{
"math_id": 10,
"text": " q=p_s\\left(1+\\frac{\\gamma-1}{2}M^2\\right)^{\\frac{\\gamma}{\\gamma-1}}-p_s "
}
] |
https://en.wikipedia.org/wiki?curid=5908484
|
59090155
|
Thomas Schick
|
German mathematician
Thomas Schick (born 22 May 1969 in Alzey) is a German mathematician, specializing in algebraic topology and differential geometry.
Education and career.
Schick studied mathematics and physics at the Johannes Gutenberg University Mainz, where he received his Diplom in mathematics in 1994 and his PhD (Promotion) in 1996 under the supervision of Wolfgang Lück with the thesis "Analysis on Manifolds of Bounded Geometry, Hodge-deRham Isomorphism and formula_0-Index Theorem". As a postdoc he was at the University of Münster from 1996 to 1998 and an assistant professor at Pennsylvania State University from 1998 to 2000, where he worked with Nigel Higson and John Roe. Schick received his habilitation in 2000 from the University of Münster and has been a professor of pure mathematics at the University of Göttingen since 2001.
His research deals with topological invariants, "e.g." formula_0-invariants and those invariants which result from the K-theory of operator algebras. Such invariants arise in generalizations of the Atiyah-Singer index theorem.
Schick, with Wolfgang Lück, introduced the strong Atiyah conjecture. Given a discrete group G, the Atiyah conjecture states that the formula_0-Betti numbers of a finite CW-complex that has fundamental group G are integers, provided that G is torsion-free; furthermore, in the general case, the formula_0-Betti numbers are rational numbers with denominators determined by the finite subgroups of G. In 2007 Schick, with Peter Linnell, proved a theorem which established conditions under which the Atiyah conjecture for a torsion-free group G implies the Atiyah conjecture for every finite extension of G; furthermore, they proved that the conditions are satisfied for a certain class of groups. In 2000 Schick proved the Atiyah conjecture for a large class of special cases. In 2007 he presented a method which proved the Baum-Connes conjecture for the full braid groups, and for other classes of groups which arise as (finite) extensions for which the Baum-Connes conjecture is known to be true.
In the 1990s there were proofs of many special cases of the Gromov-Lawson-Rosenberg conjecture concerning criteria for the existence of a metric with positive scalar curvature; in 1997 Schick published the first counterexample.
He is the coordinator of the Courant Research Center's "Strukturen höherer Ordnung in der Mathematik" (Structures of Higher Order in Mathematics) at the University of Göttingen. A major goal of the research center is the investigation of mathematical structures that could play a role in modern theoretical physics, especially string theory and quantum gravity.
He was the managing editor for Mathematische Annalen. In 2014 he was an invited speaker with talk "The topology of scalar curvature" at the International Congress of Mathematicians in Seoul. In 2016 he became a full member of the Göttingen Academy of Sciences and Humanities.
|
[
{
"math_id": 0,
"text": "L^2"
}
] |
https://en.wikipedia.org/wiki?curid=59090155
|
59091992
|
Ranking (statistics)
|
Data transformation of statistics into rank
In statistics, ranking is the data transformation in which numerical or ordinal values are replaced by their rank when the data are sorted.
For example, if the numerical data 3.4, 5.1, 2.6, 7.3 are observed, the ranks of these data items would be 2, 3, 1 and 4 respectively.
As another example, the ordinal data hot, cold, warm would be replaced by 3, 1, 2. In these examples, the ranks are assigned to values in ascending order, although descending ranks can also be used.
Ranks are related to the indexed list of order statistics, which consists of the original dataset rearranged into ascending order.
Use for testing.
Some kinds of statistical tests employ calculations based on ranks. Examples include:
The distribution of values in decreasing order of rank is often of interest when values vary widely in scale; this is the rank-size distribution (or rank-frequency distribution), for example for city sizes or word frequencies. These often follow a power law.
Some ranks can have non-integer values for tied data values. For example, when there is an even number of copies of the same data value, the fractional statistical rank of the tied data ends in ½.
Percentile rank is another type of statistical ranking.
Computation.
Microsoft Excel provides two ranking functions: the Rank.EQ function, which assigns competition ranks ("1224"), and the Rank.AVG function, which assigns fractional ranks ("1 2.5 2.5 4"). The functions have an order argument, which by default is set to "descending", i.e. the largest number will have rank 1. This is generally uncommon in statistics, where the ranking is usually ascending, with the smallest number having rank 1.
Comparison of rankings.
A rank correlation can be used to compare two rankings for the same set of objects.
For example, Spearman's rank correlation coefficient is useful to measure the statistical dependence between the rankings of athletes in two tournaments. And the Kendall rank correlation coefficient is another approach.
Alternatively, intersection/overlap-based approaches offer additional flexibility.
One example is the "Rank–rank hypergeometric overlap" approach, which is designed to compare ranking of the genes that are at the "top" of two ordered lists of differentially expressed genes.
A similar approach is taken by the "Rank Biased Overlap (RBO)", which also implements an adjustable probability, p, to customize the weight assigned at a desired depth of ranking.
These approaches have the advantages of addressing disjoint sets, sets of different sizes, and top-weightedness (taking into account the absolute ranking position, which may be ignored in standard non-weighted rank correlation approaches).
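As an illustrative sketch (not an implementation from any of the approaches cited above), Spearman's coefficient for two rankings without ties can be computed with the well-known shortcut formula ρ = 1 − 6Σd²/(n(n²−1)):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the ranks.
    Assumes no ties, so the shortcut 1 - 6*sum(d^2)/(n*(n^2-1)) applies."""
    n = len(x)
    rank = lambda v: [sorted(v).index(e) + 1 for e in v]  # ascending ranks
    d2 = sum((a - b) ** 2 for a, b in zip(rank(x), rank(y)))
    return 1 - 6 * d2 / (n * (n**2 - 1))

# Hypothetical finishing positions of four athletes in two tournaments:
print(spearman_rho([1, 2, 3, 4], [1, 3, 2, 4]))  # 0.8
```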
Definition.
Let formula_0 be a set of random variables. By sorting them into order, we have defined their order statistics
formula_1
If all the values are unique, the rank of variable number formula_2 is the unique solution formula_3 to the equation formula_4.
In the presence of ties, we may either use a midrank (corresponding to the "fractional rank" mentioned above), defined as the average of all indices formula_2 such that formula_5, or the uprank (corresponding to the "modified competition ranking") defined by formula_6.
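The midrank and competition conventions above can be sketched in a few lines of Python (an illustrative implementation, not a standard library function):

```python
from collections import defaultdict

def rank(data, method="average"):
    """Assign ascending ranks; ties are handled per `method`:
    'average' -> fractional ranks / midranks  (1 2.5 2.5 4),
    'min'     -> standard competition ranks   (1 2 2 4),
    'max'     -> modified competition ranks / upranks (1 3 3 4)."""
    order = sorted(range(len(data)), key=lambda i: data[i])
    positions = defaultdict(list)          # 1-based sorted positions per value
    for pos, i in enumerate(order, start=1):
        positions[data[i]].append(pos)
    out = []
    for x in data:
        ps = positions[x]
        if method == "average":
            out.append(sum(ps) / len(ps))
        elif method == "min":
            out.append(float(min(ps)))
        else:  # "max"
            out.append(float(max(ps)))
    return out

print(rank([3.4, 5.1, 2.6, 7.3]))  # [2.0, 3.0, 1.0, 4.0]
print(rank([1, 2, 2, 4]))          # [1.0, 2.5, 2.5, 4.0]
```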
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X_1, \\ldots, X_n"
},
{
"math_id": 1,
"text": " X_{n,(1)}\\leq ... \\leq X_{n,(n)}"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "R_{n,i}"
},
{
"math_id": 4,
"text": "X_i = X_{n,(R_{n,i})}"
},
{
"math_id": 5,
"text": "X_j = X_{n,(R_{n,j})}"
},
{
"math_id": 6,
"text": "\\sum_{j=1}^{n}1\\{X_j \\leq X_i\\}"
}
] |
https://en.wikipedia.org/wiki?curid=59091992
|
59095024
|
List of viscosities
|
Dynamic viscosity is a material property which describes the resistance of a fluid to shearing flows. It corresponds roughly to the intuitive notion of a fluid's 'thickness'. For instance, honey has
a much higher viscosity than water. Viscosity is measured using a viscometer. Measured values span several orders
of magnitude. Of all fluids, gases have the lowest viscosities, and thick liquids have the highest.
The values listed in this article are representative estimates only, as they do not account for measurement uncertainties, variability in material definitions, or non-Newtonian behavior.
Kinematic viscosity is dynamic viscosity divided by fluid density. This page lists only dynamic viscosity.
Units and conversion factors.
For dynamic viscosity, the SI unit is the pascal-second (Pa·s). In engineering, the unit is usually the poise or centipoise, with 1 poise = 0.1 pascal-second, and 1 centipoise = 0.01 poise.
For kinematic viscosity, the SI unit is m^2/s. In engineering, the unit is usually the stokes or centistokes, with 1 stokes = 0.0001 m^2/s, and 1 centistokes = 0.01 stokes.
For liquids, the dynamic viscosity is usually in the range of 0.001 to 1 pascal-second, or 1 to 1000 centipoise. The density is usually on the order of 1000 kg/m^3, i.e. that of water. Consequently, if a liquid has a dynamic viscosity of n centipoise, and its density is not too different from that of water, then its kinematic viscosity is around n centistokes.
For gases, the dynamic viscosity is usually in the range of 10 to 20 micropascal-seconds, or 0.01 to 0.02 centipoise. The density is usually on the order of 0.5 to 5 kg/m^3; consequently, the kinematic viscosity is around 2 to 40 centistokes.
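The conversion described above amounts to a one-line calculation; the sketch below uses only the unit factors, with illustrative water-like and air-like inputs (not values from this article's tables):

```python
def kinematic_cSt(mu_cP, rho_kg_m3):
    """Kinematic viscosity in centistokes from dynamic viscosity in
    centipoise and density in kg/m^3 (1 cP = 1e-3 Pa s, 1 cSt = 1e-6 m^2/s)."""
    mu_Pa_s = mu_cP * 1e-3
    nu_m2_s = mu_Pa_s / rho_kg_m3
    return nu_m2_s / 1e-6

print(kinematic_cSt(1.0, 1000.0))   # a water-like liquid: about 1 cSt
print(kinematic_cSt(0.018, 1.2))    # an air-like gas: about 15 cSt
```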
Viscosities at or near standard conditions.
Here "standard conditions" refers to temperatures of 25 °C and pressures of 1 atmosphere. Where data points are unavailable for 25 °C or 1 atmosphere, values are given at a nearby temperature/pressure.
The temperatures corresponding to each data point are stated explicitly. By contrast, pressure is omitted since gaseous viscosity depends only weakly on it.
Gases.
Noble gases.
The simple structure of noble gas molecules makes them amenable to accurate theoretical treatment. For this reason, measured viscosities of the noble gases serve as important tests of the kinetic-molecular theory of transport processes in gases (see Chapman–Enskog theory). One of the key predictions of the theory is the following relationship between viscosity formula_0, thermal conductivity formula_1, and specific heat formula_2:
formula_3
where formula_4 is a constant which in general depends on the details of intermolecular interactions, but for spherically symmetric molecules is very close to formula_5.
This prediction is reasonably well-verified by experiments, as the following table shows. Indeed, the relation provides a viable means for obtaining thermal conductivities of gases since these are more difficult to measure directly than viscosity.
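As a rough numerical illustration of the relation, one can check formula_4 for helium near room temperature; the viscosity and thermal conductivity below are approximate literature values assumed for this sketch, not values from this article's tables:

```python
# Illustrative check of k = f * mu * c_v for helium near 25 degrees C.
R = 8.314          # J/(mol K), gas constant
M = 4.003e-3       # kg/mol, molar mass of helium
c_v = 1.5 * R / M  # J/(kg K), monatomic ideal-gas specific heat

mu = 19.9e-6       # Pa s, dynamic viscosity of helium (approximate)
k = 0.151          # W/(m K), thermal conductivity of helium (approximate)

f = k / (mu * c_v)
print(round(f, 2))  # comes out near the theoretical value 2.5
```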
Liquids.
n-Alkanes.
Substances composed of longer molecules tend to have larger viscosities due to the increased contact of molecules across layers of flow. This effect can be observed for the n-alkanes and 1-chloroalkanes tabulated below. More dramatically, a long-chain hydrocarbon like squalene (C30H62) has a viscosity an order of magnitude larger than the shorter n-alkanes (roughly 31 mPa·s at 25 °C). This is also the reason oils tend to be highly viscous, since they are usually composed of long-chain hydrocarbons.
Aqueous solutions.
The viscosity of an aqueous solution can either increase or decrease with concentration depending on the solute and the range of concentration. For instance, the table below shows that viscosity increases monotonically with concentration for sodium chloride and calcium chloride, but decreases for potassium iodide and cesium chloride (the latter up to 30% mass percentage, after which viscosity increases).
The increase in viscosity for sucrose solutions is particularly dramatic, and explains in part the common experience of sugar water being "sticky".
Substances of variable composition.
<templatestyles src="Reflist/styles.css" />
Viscosities under nonstandard conditions.
Gases.
All values are given at 1 bar (approximately equal to atmospheric pressure).
Liquids (including liquid metals).
In the following table, the temperature is given in kelvins.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "c_v"
},
{
"math_id": 3,
"text": "\nk = f \\mu c_v\n"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "2.5"
}
] |
https://en.wikipedia.org/wiki?curid=59095024
|
5909536
|
Poset topology
|
In mathematics, the poset topology associated to a poset ("S", ≤) is the Alexandrov topology (open sets are upper sets) on the poset of finite chains of ("S", ≤), ordered by inclusion.
Let "V" be a set of vertices. An abstract simplicial complex Δ is a set of finite sets of vertices, known as faces formula_0, such that
formula_1
Given a simplicial complex Δ as above, we define a (point set) topology on Δ by declaring a subset formula_2 to be closed if and only if Γ is a simplicial complex, i.e.
formula_3
This is the Alexandrov topology on the poset of faces of Δ.
The order complex associated to a poset ("S", ≤) has the set "S" as vertices, and the finite chains of ("S", ≤) as faces. The poset topology associated to a poset ("S", ≤) is then the Alexandrov topology on the order complex associated to ("S", ≤).
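A small computational sketch: the faces of the order complex are exactly the nonempty chains, which can be enumerated by brute force (the divisibility poset below is a hypothetical small example):

```python
from itertools import combinations

def order_complex(elements, leq):
    """All nonempty chains of the poset (elements, leq): the faces of its
    order complex. A chain is a subset totally ordered by leq."""
    faces = []
    for r in range(1, len(elements) + 1):
        for subset in combinations(elements, r):
            if all(leq(a, b) or leq(b, a) for a, b in combinations(subset, 2)):
                faces.append(frozenset(subset))
    return faces

def divides(a, b):
    return b % a == 0

# Hypothetical example: the divisibility poset on {1, 2, 3, 6}.
faces = order_complex([1, 2, 3, 6], divides)
print(len(faces))  # 11 chains; {2, 3} is not one, since 2 and 3 are incomparable
```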
|
[
{
"math_id": 0,
"text": "\\sigma \\subseteq V"
},
{
"math_id": 1,
"text": "\\forall \\rho \\, \\forall \\sigma \\!: \\ \\rho \\subseteq \\sigma \\in \\Delta \\Rightarrow \\rho \\in \\Delta."
},
{
"math_id": 2,
"text": "\\Gamma \\subseteq \\Delta"
},
{
"math_id": 3,
"text": "\\forall \\rho \\, \\forall \\sigma \\!: \\ \\rho \\subseteq \\sigma \\in \\Gamma \\Rightarrow \\rho \\in \\Gamma."
}
] |
https://en.wikipedia.org/wiki?curid=5909536
|
590971
|
Haversine formula
|
Formula for the great-circle distance between two points on a sphere
The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes. Important in navigation, it is a special case of a more general formula in spherical trigonometry, the law of haversines, that relates the sides and angles of spherical triangles.
The first table of haversines in English was published by James Andrew in 1805, but Florian Cajori credits an earlier use by José de Mendoza y Ríos in 1801. The term "haversine" was coined in 1835 by James Inman.
These names follow from the fact that they are customarily written in terms of the haversine function, given by hav "θ" = sin2("θ"/2). The formulas could equally be written in terms of any multiple of the haversine, such as the older versine function (twice the haversine). Prior to the advent of computers, the elimination of division and multiplication by factors of two proved convenient enough that tables of haversine values and logarithms were included in 19th- and early 20th-century navigation and trigonometric texts. These days, the haversine form is also convenient in that it has no coefficient in front of the sin2 function.
Formulation.
Let the central angle "θ" between any two points on a sphere be:
formula_0
where
The "haversine formula" allows the haversine of "θ" to be computed directly from the latitude (represented by "φ") and longitude (represented by "λ") of the two points:
formula_1
where
Finally, the haversine function hav("θ"), applied above to both the central angle "θ" and the differences in latitude and longitude, is
formula_4
The haversine function computes half the versine of the angle "θ", or the square of half the chord subtended by the angle on a unit circle (sphere).
To solve for the distance "d", apply the archaversine (inverse haversine) to hav("θ") or use the arcsine (inverse sine) function:
formula_5
or more explicitly:
formula_6
where
formula_7.
When using these formulae, one must ensure that "h" = hav("θ") does not exceed 1 due to a floating point error ("d" is real only for 0 ≤ "h" ≤ 1). "h" only approaches 1 for "antipodal" points (on opposite sides of the sphere)—in this region, relatively large numerical errors tend to arise in the formula when finite precision is used. Because "d" is then large (approaching π"R", half the circumference) a small error is often not a major concern in this unusual case (although there are other great-circle distance formulas that avoid this problem). (The formula above is sometimes written in terms of the arctangent function, but this suffers from similar numerical problems near "h" = 1.)
As described below, a similar formula can be written using cosines (sometimes called the spherical law of cosines, not to be confused with the law of cosines for plane geometry) instead of haversines, but if the two points are close together (e.g. a kilometer apart, on the Earth) one might end up with cos("d"/"R") = 0.99999999, leading to an inaccurate answer. Since the haversine formula uses sines, it avoids that problem.
Either formula is only an approximation when applied to the Earth, which is not a perfect sphere: the "Earth radius" "R" varies from 6356.752 km at the poles to 6378.137 km at the equator. More importantly, the radius of curvature of a north-south line on the earth's surface is 1% greater at the poles (≈6399.594 km) than at the equator (≈6335.439 km)—so the haversine formula and law of cosines cannot be guaranteed correct to better than 0.5%. More accurate methods that consider the Earth's ellipticity are given by Vincenty's formulae and the other formulas in the geographical distance article.
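A direct transcription of the formula, including a clamp on "h" against floating-point overshoot (the mean Earth radius of 6371 km is a conventional assumption, not a value fixed by the formula):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance via the haversine formula. R = 6371 km is a
    conventional mean Earth radius; the true radius varies, as noted above."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    h = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    h = min(1.0, h)  # clamp: rounding error can push h slightly past 1
    return 2 * R * asin(sqrt(h))

# Paris (48.8566 N, 2.3522 E) to London (51.5074 N, 0.1278 W): roughly 344 km
print(round(haversine_km(48.8566, 2.3522, 51.5074, -0.1278)))
```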
The law of haversines.
Given a unit sphere, a "triangle" on the surface of the sphere is defined by the great circles connecting three points "u", "v", and "w" on the sphere. If the lengths of these three sides are "a" (from "u" to "v"), "b" (from "u" to "w"), and "c" (from "v" to "w"), and the angle of the corner opposite "c" is "C", then the law of haversines states:
formula_8
Since this is a unit sphere, the lengths "a", "b", and "c" are simply equal to the angles (in radians) subtended by those sides from the center of the sphere (for a non-unit sphere, each of these arc lengths is equal to its central angle multiplied by the radius "R" of the sphere).
In order to obtain the haversine formula of the previous section from this law, one simply considers the special case where "u" is the north pole, while "v" and "w" are the two points whose separation "d" is to be determined. In that case, "a" and "b" are π/2 − "φ"1,2 (that is, the co-latitudes), "C" is the longitude separation "λ"2 − "λ"1, and "c" is the desired "d"/"R". Noting that sin(π/2 − "φ") = cos("φ"), the haversine formula immediately follows.
To derive the law of haversines, one starts with the spherical law of cosines:
formula_9
As mentioned above, this formula is an ill-conditioned way of solving for "c" when "c" is small. Instead, we substitute the identity cos("θ") = 1 − 2 hav("θ"), and also employ the addition identity cos("a" − "b") = cos("a") cos("b") + sin("a") sin("b"), to obtain the law of haversines, above.
Proof.
One can prove the formula:
formula_10
by transforming the points given by their latitude and longitude into cartesian coordinates, then taking their dot product.
Consider two points formula_11 on the unit sphere, given by their latitude formula_12 and longitude formula_13:
formula_14
These representations are very similar to spherical coordinates, however latitude is measured as angle from the equator and not the north pole. These points have the following representations in cartesian coordinates:
formula_15
From here we could directly attempt to calculate the dot product and proceed, however the formulas become significantly simpler when we consider the following fact: the distance between the two points will not change if we rotate the sphere along the z-axis. This will in effect add a constant to formula_16. Note that similar considerations do not apply to transforming the latitudes - adding a constant to the latitudes may change the distance between the points. By choosing our constant to be formula_17, and setting formula_18, our new points become:
formula_19
With formula_20 denoting the angle between formula_21 and formula_22, we now have that:
formula_23
|
[
{
"math_id": 0,
"text": "\\theta = \\frac{d}{r}"
},
{
"math_id": 1,
"text": "\n \\operatorname{hav}\\theta =\n \\operatorname{hav}\\left(\\Delta \\varphi \\right) + \\cos\\left(\\varphi_1\\right)\\cos\\left(\\varphi_2\\right)\\operatorname{hav}\\left(\\Delta \\lambda \\right)\n"
},
{
"math_id": 2,
"text": "\\Delta \\varphi = \\varphi_2 - \\varphi_1"
},
{
"math_id": 3,
"text": "\\Delta \\lambda = \\lambda_2 - \\lambda_1"
},
{
"math_id": 4,
"text": "\\operatorname{hav}\\theta= \\sin^2\\left(\\frac{\\theta}{2}\\right) = \\frac{1 - \\cos(\\theta)}{2}"
},
{
"math_id": 5,
"text": "d = r\\operatorname{archav}(\\operatorname{hav}\\theta) = 2r\\arcsin\\left(\\sqrt{\\operatorname{hav}\\theta}\\right)"
},
{
"math_id": 6,
"text": "\\begin{align}\n d &= 2r \\arcsin\\left(\\sqrt{\\operatorname{hav}(\\Delta \\varphi ) + ( 1 - \\operatorname{hav}(\\Delta \\varphi) - \\operatorname{hav}(2 \\varphi_\\text{m} ))\\cdot\\operatorname{hav}(\\Delta \\lambda)}\\right) \\\\\n &= 2r \\arcsin\\left(\\sqrt{\\sin^2\\left(\\frac{\\Delta \\varphi }{2}\\right) + \\left(1- \\sin^2\\left(\\frac{\\Delta \\varphi }{2}\\right) - \\sin^2\\left(\\varphi_\\text{m}\\right)\\right) \\cdot \\sin^2\\left(\\frac{\\Delta \\lambda}{2}\\right)}\\right) \\\\\n &= 2r \\arcsin\\left(\\sqrt{\\sin^2\\left(\\frac{\\Delta \\varphi }{2}\\right) + \\cos \\varphi_1 \\cdot \\cos \\varphi_2 \\cdot \\sin^2\\left(\\frac{\\Delta \\lambda}{2}\\right)}\\right) \\\\\n &= 2r \\arcsin\\left(\\sqrt{\\sin^2\\left(\\frac{\\Delta \\varphi }{2}\\right) \\cdot \\cos^2\\left(\\frac{\\Delta \\lambda}{2}\\right) + \\cos^2\\left(\\varphi_\\text{m}\\right) \\cdot \\sin^2\\left(\\frac{\\Delta \\lambda}{2}\\right)}\\right) \\\\\n &= 2r \\arcsin\\left(\\sqrt{\\frac{1 - \\cos\\left(\\Delta \\varphi \\right) + \\cos \\varphi_1 \\cdot \\cos \\varphi_2 \\cdot \\left(1 - \\cos\\left(\\Delta \\lambda\\right)\\right)}{2}}\\right)\n\\end{align}"
},
{
"math_id": 7,
"text": "\\varphi_\\text{m} = \\frac{\\varphi_2 + \\varphi_1}{2}"
},
{
"math_id": 8,
"text": "\\operatorname{hav}(c) = \\operatorname{hav}(a - b) + \\sin(a)\\sin(b)\\operatorname{hav}(C)."
},
{
"math_id": 9,
"text": "\\cos(c) = \\cos(a)\\cos(b) + \\sin(a)\\sin(b)\\cos(C). \\,"
},
{
"math_id": 10,
"text": "\n \\operatorname{hav}\\left(\\theta\\right) =\n \\operatorname{hav}\\left(\\Delta \\varphi \\right) + \\cos\\left(\\varphi_1\\right)\\cos\\left(\\varphi_2\\right)\\operatorname{hav}\\left(\\Delta \\lambda\\right)\n"
},
{
"math_id": 11,
"text": "\\bf p_1,p_2"
},
{
"math_id": 12,
"text": "\\varphi"
},
{
"math_id": 13,
"text": "\\lambda"
},
{
"math_id": 14,
"text": "\\begin{align}\n{\\bf p_2} &= (\\lambda_2, \\varphi_2) \\\\\n{\\bf p_1} &= (\\lambda_1, \\varphi_1)\n\\end{align}"
},
{
"math_id": 15,
"text": "\\begin{align}\n{\\bf p_2} &= (\\cos(\\lambda_2)\\cos(\\varphi_2), \\;\\sin(\\lambda_2)\\cos(\\varphi_2), \\;\\sin(\\varphi_2)) \\\\\n{\\bf p_1} &= (\\cos(\\lambda_1)\\cos(\\varphi_1), \\;\\sin(\\lambda_1)\\cos(\\varphi_1), \\;\\sin(\\varphi_1))\n\\end{align}"
},
{
"math_id": 16,
"text": "\\lambda_1, \\lambda_2"
},
{
"math_id": 17,
"text": "-\\lambda_1"
},
{
"math_id": 18,
"text": "\\lambda' = \\Delta \\lambda"
},
{
"math_id": 19,
"text": "\\begin{align}\n{\\bf p_2'}\t&= (\\cos(\\lambda')\\cos(\\varphi_2), \\;\\sin(\\lambda')\\cos(\\varphi_2), \\;\\sin(\\varphi_2)) \\\\\n{\\bf p_1'}\t&= (\\cos(0)\\cos(\\varphi_1), \\;\\sin(0)\\cos(\\varphi_1), \\;\\sin(\\varphi_1)) \\\\\n\t\t\t&= (\\cos(\\varphi_1), \\;0, \\;\\sin(\\varphi_1))\n\\end{align}"
},
{
"math_id": 20,
"text": "\\theta"
},
{
"math_id": 21,
"text": "{\\bf p_1}"
},
{
"math_id": 22,
"text": "{\\bf p_2}"
},
{
"math_id": 23,
"text": "\\begin{align}\n\\cos(\\theta) &= \\langle{\\bf p_1},{\\bf p_2}\\rangle = \\langle{\\bf p_1'},{\\bf p_2'}\\rangle = \\cos(\\lambda')\\cos(\\varphi_1)\\cos(\\varphi_2) + \\sin(\\varphi_1)\\sin(\\varphi_2) \\\\\n\t\t\t&= \\sin(\\varphi_2)\\sin(\\varphi_1) + \\cos(\\varphi_2)\\cos(\\varphi_1) - \\cos(\\varphi_2)\\cos(\\varphi_1) + \\cos(\\lambda')\\cos(\\varphi_2)\\cos(\\varphi_1) \\\\\n\t\t\t&= \\cos(\\Delta \\varphi) + \\cos(\\varphi_2)\\cos(\\varphi_1)(-1 + \\cos(\\lambda')) \\Rightarrow \\\\\n\\operatorname{hav}\\left(\\theta\\right)\n\t\t\t&= \\operatorname{hav}\\left(\\Delta \\varphi \\right) + \\cos(\\varphi_2)\\cos(\\varphi_1)\\operatorname{hav}\\left(\\lambda' \\right)\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=590971
|
590995
|
Intermodulation
|
Non-linear effect in amplitude modulation
Intermodulation (IM) or intermodulation distortion (IMD) is the amplitude modulation of signals containing two or more different frequencies, caused by nonlinearities or time variance in a system. The intermodulation between frequency components will form additional components at frequencies that are not just at harmonic frequencies (integer multiples) of either, like harmonic distortion, but also at the sum and difference frequencies of the original frequencies and at sums and differences of multiples of those frequencies.
Intermodulation is caused by non-linear behaviour of the signal processing (physical equipment or even algorithms) being used. The theoretical outcome of these non-linearities can be calculated by generating a Volterra series of the characteristic, or more approximately by a Taylor series.
Practically all audio equipment has some non-linearity, so it will exhibit some amount of IMD, which may, however, be low enough to be imperceptible to humans. Due to the characteristics of the human auditory system, a given percentage of IMD is perceived as more bothersome than the same percentage of harmonic distortion.
Intermodulation is also usually undesirable in radio, as it creates unwanted spurious emissions, often in the form of sidebands. For radio transmissions this increases the occupied bandwidth, leading to adjacent channel interference, which can reduce audio clarity or increase spectrum usage.
IMD is only distinct from harmonic distortion in that the stimulus signal is different. The same nonlinear system will produce both total harmonic distortion (with a solitary sine wave input) and IMD (with more complex tones). In music, for instance, IMD is intentionally applied to electric guitars using overdriven amplifiers or effects pedals to produce new tones at "sub"harmonics of the tones being played on the instrument. See Power chord#Analysis.
IMD is also distinct from intentional modulation (such as a frequency mixer in superheterodyne receivers) where signals to be modulated are presented to an intentional nonlinear element (multiplied). See non-linear mixers such as mixer diodes and even single-transistor oscillator-mixer circuits. However, while the intermodulation products of the received signal with the local oscillator signal are intended, superheterodyne mixers can, at the same time, also produce unwanted intermodulation effects from strong signals near in frequency to the desired signal that fall within the passband of the receiver.
Causes of intermodulation.
A linear time-invariant system cannot produce intermodulation. If the input of a linear time-invariant system is a signal of a single frequency, then the output is a signal of the same frequency; only the amplitude and phase can differ from the input signal.
Non-linear systems generate harmonics in response to sinusoidal input, meaning that if the input of a non-linear system is a signal of a single frequency, formula_0 then the output is a signal which includes a number of integer multiples of the input frequency signal; (i.e. some of formula_1).
Intermodulation occurs when the input to a non-linear system is composed of two or more frequencies. Consider an input signal that contains three frequency components at formula_2, formula_3, and formula_4, which may be expressed as
formula_5
where the formula_6 and formula_7 are the amplitudes and phases of the three components, respectively.
We obtain our output signal, formula_8, by passing our input through a non-linear function formula_9:
formula_10
formula_8 will contain the three frequencies of the input signal, formula_2, formula_3, and formula_4 (which are known as the "fundamental" frequencies), as well as a number of linear combinations of the fundamental frequencies, each in the form
formula_11
where formula_12, formula_13, and formula_14 are arbitrary integers which can assume positive or negative values. These are the intermodulation products (or IMPs).
In general, each of these frequency components will have a different amplitude and phase, which depends on the specific non-linear function being used, and also on the amplitudes and phases of the original input components.
More generally, given an input signal containing an arbitrary number formula_15 of frequency components formula_16, the output signal will contain a number of frequency components, each of which may be described by
formula_17
where the coefficients formula_18 are arbitrary integer values.
Intermodulation order.
The "order" formula_19 of a given intermodulation product is the sum of the absolute values of the coefficients,
formula_20
For example, in our original example above, third-order intermodulation products (IMPs) occur where formula_21:
In many radio and audio applications, odd-order IMPs are of most interest, as they fall within the vicinity of the original frequency components and may therefore interfere with the desired behaviour. For example, third-order intermodulation distortion (IMD3) of a circuit can be seen by looking at a signal that is made up of two sine waves, one at formula_32 and one at formula_33. Cubing the sum of these sine waves yields sine waves at various frequencies, including formula_34 and formula_35. If formula_32 and formula_33 are large but very close together, then formula_34 and formula_35 will be very close to formula_32 and formula_33.
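The appearance of these third-order products can be demonstrated numerically by passing two closely spaced tones through a memoryless cubic nonlinearity (the tone frequencies and the coefficient 0.1 below are arbitrary choices for illustration):

```python
import numpy as np

fs = 48000                  # sample rate, Hz; a 1-second record gives 1 Hz bins
t = np.arange(fs) / fs
f1, f2 = 1000.0, 1100.0     # two input tones (arbitrary example values)
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

y = x + 0.1 * x**3          # weakly non-linear system with a cubic term

spec = np.abs(np.fft.rfft(y)) / len(t)
# Third-order products appear at 2*f1 - f2 = 900 Hz and 2*f2 - f1 = 1200 Hz,
# right next to the original tones:
print(spec[900] > 1e-3, spec[1200] > 1e-3)  # True True
```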
Passive intermodulation (PIM).
As explained in a previous section, intermodulation can only occur in non-linear systems. Non-linear systems are generally composed of "active" components, meaning that the components must be biased with an external power source which is not the input signal (i.e. the active components must be "turned on").
Passive intermodulation (PIM), however, occurs in passive devices (which may include cables, antennas etc.) that are subjected to two or more high power tones. The PIM product is the result of the two (or more) high power tones mixing at device nonlinearities such as junctions of dissimilar metals or metal-oxide junctions, such as loose corroded connectors. The higher the signal amplitudes, the more pronounced the effect of the nonlinearities, and the more prominent the intermodulation that occurs — even though upon initial inspection, the system would appear to be linear and unable to generate intermodulation.
The requirement for "two or more high power tones" need not be discrete tones. Passive intermodulation can also occur between different frequencies (i.e. different "tones") within a single broadband carrier. These PIMs would show up as sidebands in a telecommunication signal, which interfere with adjacent channels and impede reception.
Passive intermodulation is a major concern in modern communication systems when a single antenna is used for both high-power transmit signals and low-power receive signals (or when a transmit antenna is in close proximity to a receive antenna). Although the power in the passive intermodulation signal is typically many orders of magnitude lower than the power of the transmit signal, it is often on the same order of magnitude as (and possibly higher than) the power of the receive signal. Therefore, if passive intermodulation finds its way into the receive path, it cannot be filtered or separated from the receive signal, and the receive signal is degraded by the passive intermodulation signal.
Sources of passive intermodulation.
Ferromagnetic materials are the most common materials to avoid and include ferrites, nickel, (including nickel plating) and steels (including some stainless steels). These materials exhibit hysteresis when exposed to reversing magnetic fields, resulting in PIM generation.
Passive intermodulation can also be generated in components with manufacturing or workmanship defects, such as cold or cracked solder joints or poorly made mechanical contacts. If these defects are exposed to high radio frequency currents, passive intermodulation can be generated. As a result, radio frequency equipment manufacturers perform factory PIM tests on components, to eliminate passive intermodulation caused by these design and manufacturing defects.
Passive intermodulation can also be inherent in the design of a high power radio frequency component where radio frequency current is forced to narrow channels or restricted.
In the field, passive intermodulation can be caused by components that were damaged in transit to the cell site, installation workmanship issues and by external passive intermodulation sources. Some of these include:
Passive intermodulation testing.
IEC 62037 is the international standard for passive intermodulation testing and gives specific details as to passive intermodulation measurement setups. The standard specifies the use of two +43 dBm (20 W) tones for the test signals for passive intermodulation testing. This power level has been used by radio frequency equipment manufacturers for more than a decade to establish PASS / FAIL specifications for radio frequency components.
Intermodulation in electronic circuits.
Slew-induced distortion (SID) can produce intermodulation distortion (IMD) when the first signal is slewing (changing voltage) at the limit of the amplifier's power bandwidth product. This induces an effective reduction in gain, partially amplitude-modulating the second signal. If SID only occurs for a portion of the signal, it is called "transient" intermodulation distortion.
Measurement.
Intermodulation distortion in audio is usually specified as the root mean square (RMS) value of the various sum-and-difference signals as a percentage of the original signal's root mean square voltage, although it may be specified in terms of individual component strengths, in decibels, as is common with radio frequency work. Audio system measurements (Audio IMD) include SMPTE standard RP120-1994, where two signals (at 60 Hz and 7 kHz, in a 4:1 amplitude ratio) are used for the test; many other standards (such as DIN, CCIF) use other frequencies and amplitude ratios. Opinion varies over the ideal ratio of test frequencies (e.g. 3:4, or almost but not exactly 3:1).
After feeding the equipment under test with low distortion input sinewaves, the output distortion can be measured by using an electronic filter to remove the original frequencies, or spectral analysis may be made using Fourier transformations in software or a dedicated spectrum analyzer, or when determining intermodulation effects in communications equipment, may be made using the receiver under test itself.
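The spectral-analysis approach described above can be sketched numerically. In the following Python example, all parameters (the tone frequencies, the cubic nonlinearity coefficient, and the sample rate) are illustrative assumptions, not values taken from any standard; a two-tone signal is passed through a toy nonlinearity and the third-order products at 2f1 − f2 and 2f2 − f1 are read off the spectrum.

```python
import numpy as np

fs = 48_000                     # sample rate (Hz); 1 s of data gives exactly 1 Hz bins
t = np.arange(fs) / fs
f1, f2 = 1_000.0, 1_300.0       # hypothetical two-tone test frequencies

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + 0.1 * x**3              # toy "device under test" with a cubic nonlinearity

spec = np.abs(np.fft.rfft(y)) / len(y)   # single-sided magnitude spectrum

def level(f):
    """Spectral magnitude at frequency f (bins are exactly 1 Hz wide)."""
    return spec[int(round(f))]

# Third-order intermodulation products appear at 2*f1 - f2 and 2*f2 - f1
assert level(2 * f1 - f2) > 1e-3    # 700 Hz product is clearly present
assert level(2 * f2 - f1) > 1e-3    # 1600 Hz product is clearly present
assert level(500.0) < 1e-9          # an unrelated bin stays empty
```

Because the test tones fall exactly on FFT bins, no windowing is needed in this sketch; a real measurement would window the data and average several spectra.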
In radio applications, intermodulation may be measured as adjacent channel power ratio. Intermodulation signals in the GHz range generated by passive devices (PIM: passive intermodulation) are difficult to test. Manufacturers of scalar PIM instruments include Summitek and Rosenberger. The newest developments are PIM instruments that also measure the distance to the PIM source: Anritsu offers a radar-based solution with low accuracy, and Heuermann offers a frequency-converting vector network analyzer solution with high accuracy.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "~f_a,"
},
{
"math_id": 1,
"text": "~ f_a, 2f_a, 3f_a, 4f_a, \\ldots"
},
{
"math_id": 2,
"text": "~f_a"
},
{
"math_id": 3,
"text": "~ f_b"
},
{
"math_id": 4,
"text": "~f_c"
},
{
"math_id": 5,
"text": "\\ x(t) = M_a \\sin(2 \\pi f_a t + \\phi_a) + M_b \\sin(2 \\pi f_b t + \\phi_b) + M_c \\sin(2 \\pi f_c t + \\phi_c)"
},
{
"math_id": 6,
"text": "\\ M"
},
{
"math_id": 7,
"text": "\\ \\phi"
},
{
"math_id": 8,
"text": "\\ y(t)"
},
{
"math_id": 9,
"text": "G"
},
{
"math_id": 10,
"text": "\\ y(t) = G\\left(x(t)\\right)\\,"
},
{
"math_id": 11,
"text": "\\ k_af_a + k_bf_b + k_cf_c"
},
{
"math_id": 12,
"text": "~k_a"
},
{
"math_id": 13,
"text": "~ k_b"
},
{
"math_id": 14,
"text": "~k_c"
},
{
"math_id": 15,
"text": "N"
},
{
"math_id": 16,
"text": "f_a, f_b, \\ldots, f_N"
},
{
"math_id": 17,
"text": "k_a f_a + k_b f_b + \\cdots + k_N f_N,\\,"
},
{
"math_id": 18,
"text": "k_a, k_b, \\ldots, k_N"
},
{
"math_id": 19,
"text": "\\ O"
},
{
"math_id": 20,
"text": "\\ O = \\left|k_a\\right| + \\left|k_b\\right| + \\cdots + \\left|k_N\\right|,"
},
{
"math_id": 21,
"text": "\\ |k_a|+|k_b|+|k_c| = 3"
},
{
"math_id": 22,
"text": "f_a + f_b + f_c"
},
{
"math_id": 23,
"text": "f_a + f_b - f_c"
},
{
"math_id": 24,
"text": "f_a + f_c - f_b"
},
{
"math_id": 25,
"text": "f_b + f_c - f_a"
},
{
"math_id": 26,
"text": "2f_a - f_b"
},
{
"math_id": 27,
"text": "2f_a - f_c"
},
{
"math_id": 28,
"text": "2f_b - f_a"
},
{
"math_id": 29,
"text": "2f_b - f_c"
},
{
"math_id": 30,
"text": "2f_c - f_a"
},
{
"math_id": 31,
"text": "2f_c - f_b"
},
{
"math_id": 32,
"text": "f_1"
},
{
"math_id": 33,
"text": "f_2"
},
{
"math_id": 34,
"text": "2\\times f_2-f_1"
},
{
"math_id": 35,
"text": "2\\times f_1-f_2"
}
] |
https://en.wikipedia.org/wiki?curid=590995
|
59099571
|
Virtual breakdown mechanism
|
The virtual breakdown mechanism is a concept in the field of electrochemistry. In electrochemical reactions, when the cathode and the anode are close enough to each other ("i.e.", in so-called "nanogap electrochemical cells"), the double-layer regions of the two electrodes overlap, forming a large electric field uniformly distributed across the entire electrode gap. Such high electric fields can significantly enhance ion migration inside the bulk solution and thus increase the entire reaction rate, akin to a "breakdown" of the reactant(s). However, it is fundamentally different from traditional breakdown.
The Virtual breakdown mechanism was discovered in 2017 when researchers studied pure water electrolysis based on deep-sub-Debye-length nanogap electrochemical cells. Furthermore, researchers found the relation of the gap distance between cathodes and anodes to the performance of electrochemical reactions.
Electric field distribution.
The fundamental difference between traditional cells and nanogap cells is their electric potential distribution. This is the premise of the "virtual breakdown" effect.
For electrochemical reactions with high-concentration electrolyte in a macrosystem, the Debye length is quite small. Due to the screening effect, almost all of the potential drop is confined within the small Debye-length region (or double layer region). The potential in the bulk solution (far from the electrodes) changes very little, meaning that there is nearly zero electric field inside the bulk solution. However, when the counter electrode is within the Debye-length region ("i.e"., nanogap electrochemical cells), the two double layers from the anode and cathode overlap with each other. The electrostatic potential inside the entire gap changes dramatically, meaning that a huge electric field is uniformly distributed across the entire gap.
Pure water electrolysis.
We shall consider pure water electrolysis as an example to explain the concept of the Virtual breakdown mechanism.
Pure water electrolysis in macrosystem.
For the analysis of water electrolysis, we shall use the reaction of H3O+ ions (also known as oxonium ions) at the cathode as an example to explain the traditional reactions.
Water molecules self-ionize to H3O+ and OH− ions. Near the cathode surface (within the double layer region), newly generated H3O+ ions become hydrogen gas after obtaining electrons from the cathode; however, because there is nearly no electric field inside the bulk solution (see section "Electric field distribution"), OH− ions can only transport through the bulk solution very slowly by diffusion. Moreover, in pure water the intrinsic H3O+ concentration is only 10−7 mol/L, not enough to neutralize the newly generated OH− ions. In this way OH− ions accumulate locally at the cathode surface (turning the solution near the cathode alkaline). Due to Le Chatelier's principle for water self-ionization,
<chem>H3O+ + OH- <=> 2H2O </chem>
the accumulation of OH− ions impedes further self-ionization of the water, which reduces the hydrogen evolution rate and eventually prevents water electrolysis. In this case water electrolysis becomes very slow or even halts; this manifests as a large equivalent resistance between the two electrodes.
This is why pure water cannot be electrolyzed efficiently in the macrosystem: the fundamental reason is the lack of rapid ion transport inside the bulk solution.
Pure water electrolysis in nanogap cell.
In nanogap cells the high electric field can distribute uniformly across the entire gap (see section "Electric field distribution"). This is different from ion transport in the macrosystem: now newly generated OH− ions can immediately migrate from cathode to anode. In the case where the two electrodes are close enough, the mass transport rate can be even larger than the electron-transfer rate. This results in OH− ions clustering for electron-transfer at the anode, rather than accumulating at the cathode. In this way the entire reaction can keep going and not self-limit.
Notice that for pure water electrolysis in nanogap cells, the net OH− ion accumulation near the anode not only increases the local reactant concentration but also decreases the overpotential requirement (as in the Frumkin effect). According to Butler–Volmer equation, such ion accumulation increases the electrolysis current, i.e. the water splitting throughput and efficiency.
Thus even pure water can be efficiently electrolyzed, when the electrode gap is small enough.
Virtual breakdown mechanism.
In reality, water molecule dissociation (the splitting into H3O+ and OH− ions) occurs only at the electrode regions (because ions are continuously consumed at the two electrodes); however, it effectively appears as if the molecules split in the middle of the gap, with H3O+ ions migrating towards the cathode and OH− ions migrating towards the anode. The huge electric field in the nanogap (see section "Electric field distribution") not only increases the ion transport rate but also enhances the ionization of water molecules ("i.e.", the local ion concentration is enhanced). Taken together, the total effect appears like the breakdown of water molecules.
However this effect is not traditional breakdown, which in fact requires a much larger electric field around 1 V/Å. In the nanogap cells the huge electric field is still not large enough to split water molecules directly. However it can take advantage of the self-ionization of water, facilitating the equilibrium reaction to shift in the ionization direction.
<chem>2H2O -> H3O+ + OH- </chem>
Such field-assisted ionization, together with the fast ion transport (mainly migration), behaves very similarly to the breakdown of water molecules; that is why this field-assisted effect is named the "virtual breakdown mechanism".
Consider the equation of conductivity,
formula_0
Here the ion charges are not changed. The ion concentration is enhanced but contributes to the conductivity only partially. The fundamental change is that the "apparent mobility" has been significantly enhanced, which manifests as the "breakdown" effect. (In traditional electrochemical cells, although the intrinsic ion mobility is high, it cannot contribute to the conductivity because there is nearly zero electric field inside the bulk solution.) Consider the equivalent resistance between the two electrodes, as given by:
formula_1
When we decrease the gap distance between the two electrodes, not only does the value of "L" decrease, but the value of the resistivity decreases as well; the latter in fact contributes more to the decrease of the total resistance.
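The double benefit of shrinking the gap can be illustrated with a rough back-of-the-envelope calculation. In the Python sketch below, the electrode area, gap lengths, and especially the two mobility values are purely hypothetical assumptions; the enhanced "apparent mobility" stands in for the field-assisted transport described above.

```python
# Compare the equivalent resistance R = rho * L / S = L / (sigma * S)
# of a macroscale cell and a nanogap cell, modelling the "virtual
# breakdown" transport as a larger apparent mobility (sigma = n*q*mu).
q = 1.602e-19            # elementary charge, C
n = 6.0e19               # ion pairs per m^3 (~1e-7 mol/L, pure water)
S = 1.0e-12              # electrode area, m^2 (1 um x 1 um, assumed)

mu_bulk = 2.0e-7         # hypothetical effective mobility, near-zero bulk field
mu_gap  = 2.0e-5         # hypothetical apparent mobility under the nanogap field

def resistance(L, mu):
    sigma = n * q * mu   # conductivity sigma = n * q * mu
    return L / (sigma * S)

R_macro = resistance(1.0e-3, mu_bulk)   # 1 mm gap
R_nano  = resistance(1.0e-7, mu_gap)    # 100 nm gap

# Shrinking the gap wins twice: L drops AND the apparent resistivity drops.
assert R_nano < R_macro / 1e5
```

With these (assumed) numbers the resistance falls by roughly six orders of magnitude: four from the shorter path and two more from the enhanced apparent mobility.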
This "virtual breakdown mechanism" can be applied to almost all kinds of weakly-ionized materials; in fact, such weaker ionization can lead to larger Debye-length inside the solution. At the same size scale it actually helps to achieve the virtual breakdown effect.
Gap size effect.
The phase diagram shows the importance of the electrode gap distance to the performance of electrochemical reactions. For traditional macrosystems, where the electrode gap distance is much larger than the Debye-length, two half-reactions are decoupled and cannot influence each other. Normally the electrochemical current is limited by a slow diffusion step. When the gap distance is reduced to around the Debye-length, a large electric field can form between the two electrodes (due to double layers and the two regions overlapping with each other); this enhances the mass transport rate. In this region the electrolysis current is very sensitive to the gap distance and the reactions are migration-rate limited. When the gap distance is further reduced to the deep-sub-Debye-length region, the mass transport can be enhanced further to a level even faster than the electron-transfer step. In this region, even when we shrink the gap distance further, the current cannot be enlarged any more, meaning that the current has reached saturation. Here the two half-reactions are coupled together and the reactions are limited by the electron-transfer steps.
Therefore, by just adjusting the gap distance, the fundamental performance of the electrochemical reactions can be significantly changed.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sigma=nq\\mu"
},
{
"math_id": 1,
"text": "R=\\rho{L \\over S}"
}
] |
https://en.wikipedia.org/wiki?curid=59099571
|
59100475
|
Akivis algebra
|
In mathematics, and in particular the study of algebra, an Akivis algebra is a nonassociative algebra equipped with a binary operator, the commutator formula_0 and a ternary operator, the associator formula_1 that satisfy a particular relationship known as the Akivis identity. They are named in honour of Russian mathematician Maks A. Akivis.
Formally, if formula_2 is a vector space over a field formula_3 of characteristic zero, we say formula_2 is an Akivis algebra if the operation formula_4 is bilinear and anticommutative, and the trilinear operator formula_5 satisfies the "Akivis identity":
formula_6
An Akivis algebra with formula_7 is a Lie algebra, for the Akivis identity reduces to the Jacobi identity. Note that the terms on the right hand side have positive sign for even permutations and negative sign for odd permutations of formula_8.
Any algebra (even if nonassociative) is an Akivis algebra if we define formula_9 and formula_10. It is known that all Akivis algebras may be represented as a subalgebra of a (possibly nonassociative) algebra in this way (for associative algebras, the associator is identically zero, and the Akivis identity reduces to the Jacobi identity).
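The last remark can be checked numerically: for an arbitrary bilinear product (here given by random structure constants, a purely illustrative choice), the commutator and associator defined as above always satisfy the Akivis identity. The following Python sketch verifies this on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
c = rng.normal(size=(d, d, d))   # random structure constants: an arbitrary
                                 # (generally nonassociative) bilinear product

def mul(x, y):                   # (x*y)_k = sum_ij c[i,j,k] * x_i * y_j
    return np.einsum('ijk,i,j->k', c, x, y)

def comm(x, y):                  # commutator [x,y] = xy - yx
    return mul(x, y) - mul(y, x)

def assoc(x, y, z):              # associator [x,y,z] = (xy)z - x(yz)
    return mul(mul(x, y), z) - mul(x, mul(y, z))

x, y, z = (rng.normal(size=d) for _ in range(3))

lhs = comm(comm(x, y), z) + comm(comm(y, z), x) + comm(comm(z, x), y)
rhs = (assoc(x, y, z) + assoc(y, z, x) + assoc(z, x, y)
       - assoc(x, z, y) - assoc(y, x, z) - assoc(z, y, x))

assert np.allclose(lhs, rhs)     # the Akivis identity holds identically
```

Note that the identity holds for *every* bilinear product, which is exactly why any algebra becomes an Akivis algebra under these definitions; for an associative product the right-hand side vanishes and the identity reduces to the Jacobi identity.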
|
[
{
"math_id": 0,
"text": "[x,y]"
},
{
"math_id": 1,
"text": "[x,y,z]"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "\\mathbb{F}"
},
{
"math_id": 4,
"text": "\\left(x,y\\right)\\mapsto\\left[x,y\\right]"
},
{
"math_id": 5,
"text": "\\left(x,y,z\\right)\\mapsto\\left[x,y,z\\right]"
},
{
"math_id": 6,
"text": "\n\\left[\\left[x,y\\right],z\\right]+\n\\left[\\left[y,z\\right],x\\right]+\n\\left[\\left[z,x\\right],y\\right]=\n\\left[x,y,z\\right]+\n\\left[y,z,x\\right]+\n\\left[z,x,y\\right]-\n\\left[x,z,y\\right]-\n\\left[y,x,z\\right]-\n\\left[z,y,x\\right].\n"
},
{
"math_id": 7,
"text": "\\left[x,y,z\\right]=0"
},
{
"math_id": 8,
"text": "x,y,z"
},
{
"math_id": 9,
"text": "\\left[x,y\\right]=xy-yx"
},
{
"math_id": 10,
"text": "\\left[x,y,z\\right]=(xy)z-x(yz)"
}
] |
https://en.wikipedia.org/wiki?curid=59100475
|
59101123
|
MacCullagh ellipsoid
|
The MacCullagh ellipsoid is defined by the equation:
formula_0
where formula_1 is the energy and formula_2 are the components of the angular momentum, given in body's principal reference frame, with corresponding principal moments of inertia formula_3. The construction of such ellipsoid was conceived by James MacCullagh.
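A quick numerical sanity check, using hypothetical values for the principal moments and the body-frame angular velocity: the angular momentum components of a free rigid body do lie on the MacCullagh ellipsoid.

```python
# For a rigid body with principal moments A, B, C and angular velocity
# (w1, w2, w3) in the principal frame, the angular momentum components are
# x = A*w1, y = B*w2, z = C*w3 and the kinetic energy is
# E = (A*w1^2 + B*w2^2 + C*w3^2)/2, so x^2/A + y^2/B + z^2/C = 2E.
A, B, C = 1.0, 2.0, 3.0          # hypothetical principal moments of inertia
w1, w2, w3 = 0.4, -0.7, 0.2      # hypothetical angular velocity components

x, y, z = A * w1, B * w2, C * w3                 # angular momentum components
E = 0.5 * (A * w1**2 + B * w2**2 + C * w3**2)    # rotational kinetic energy

assert abs(x**2 / A + y**2 / B + z**2 / C - 2 * E) < 1e-12
```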
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{x^2}{A} + \\frac{y^2}{B} + \\frac{z^2}{C} = 2 E,"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "x,y,z"
},
{
"math_id": 3,
"text": "A,B,C"
}
] |
https://en.wikipedia.org/wiki?curid=59101123
|
59113526
|
Ionescu-Tulcea theorem
|
Probability theorem
In the mathematical theory of probability, the Ionescu-Tulcea theorem, sometimes called the Ionesco Tulcea extension theorem, deals with the existence of probability measures for probabilistic events consisting of a countably infinite number of individual probabilistic events. In particular, the individual events may be independent or dependent with respect to each other. Thus, the statement goes beyond the mere existence of countable product measures. The theorem was proved by Cassius Ionescu-Tulcea in 1949.
Statement of the theorem.
Suppose that formula_0 is a probability space and formula_1 for formula_2 is a sequence of measurable spaces. For each formula_2 let
formula_3
be the Markov kernel derived from formula_4 and formula_5, where
formula_6
Then there exists a sequence of probability measures
formula_7 defined on the product space for the sequence formula_8, formula_9
and there exists a uniquely defined probability measure formula_10 on formula_11, so that
formula_12
is satisfied for each formula_13 and formula_14. (The measure formula_10 has conditional probabilities equal to the stochastic kernels.)
Applications.
The construction used in the proof of the Ionescu-Tulcea theorem is often used in the theory of Markov decision processes, and, in particular, the theory of Markov chains.
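The construction can be illustrated with a finite toy version (all state spaces and kernels below are hypothetical, chosen only to make the consistency property visible): the joint laws built from an initial distribution and a sequence of Markov kernels are consistent under marginalization, which is what allows a single measure on the infinite product to extend them all.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite toy version: Omega_i = {0, 1, 2} for every i, P0 an initial
# distribution, and kappa_i row-stochastic transition (Markov) kernels.
m, steps = 3, 4
P0 = np.array([0.5, 0.3, 0.2])
kernels = []
for _ in range(steps):
    K = rng.random((m, m))
    kernels.append(K / K.sum(axis=1, keepdims=True))  # rows sum to 1

# Build the joint laws P_i = P0 (x) kappa_1 (x) ... (x) kappa_i step by step:
# the new joint is P(x_0..x_i) * K[x_i, x_{i+1}], via broadcasting.
joints = [P0]
for K in kernels:
    joints.append(joints[-1][..., None] * K)

# Consistency, the heart of the theorem: marginalising out the last
# coordinate of P_{i+1} recovers P_i, so a single measure P on the
# infinite product space can extend the whole sequence.
for i in range(steps):
    assert np.allclose(joints[i + 1].sum(axis=-1), joints[i])
assert np.isclose(joints[-1].sum(), 1.0)   # each P_i is a probability measure
```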
|
[
{
"math_id": 0,
"text": " (\\Omega_0, \\mathcal A_0, P_0) "
},
{
"math_id": 1,
"text": " (\\Omega_i, \\mathcal A_i) "
},
{
"math_id": 2,
"text": " i \\in \\N "
},
{
"math_id": 3,
"text": " \\kappa_i \\colon (\\Omega^{i-1}, \\mathcal A^{i-1}) \\to (\\Omega_i, \\mathcal A_i) "
},
{
"math_id": 4,
"text": " (\\Omega^{i-1}, \\mathcal A^{i-1}) "
},
{
"math_id": 5,
"text": " (\\Omega_i, \\mathcal A_i), "
},
{
"math_id": 6,
"text": " \\Omega^i:=\\prod_{k=0}^i\\Omega_k \\text{ and } \\mathcal A^i:= \\bigotimes_{k=0}^i \\mathcal A_k."
},
{
"math_id": 7,
"text": " P_i:=P_0 \\otimes \\bigotimes_{k=1}^i \\kappa_k "
},
{
"math_id": 8,
"text": " (\\Omega^i, \\mathcal A^i) "
},
{
"math_id": 9,
"text": " i \\in \\N, "
},
{
"math_id": 10,
"text": " P "
},
{
"math_id": 11,
"text": " \\left(\\prod_{k=0}^\\infty \\Omega_k, \\bigotimes_{k=0}^\\infty \\mathcal A_k\\right) "
},
{
"math_id": 12,
"text": " P_i(A)=P\\left( A \\times \\prod_{k=i+1}^\\infty \\Omega_k \\right) "
},
{
"math_id": 13,
"text": " A \\in \\mathcal A^i "
},
{
"math_id": 14,
"text": " i \\in\\N "
}
] |
https://en.wikipedia.org/wiki?curid=59113526
|
59113779
|
Tannery's theorem
|
Mathematical analysis theorem
In mathematical analysis, Tannery's theorem gives sufficient conditions for the interchanging of the limit and infinite summation operations. It is named after Jules Tannery.
Statement.
Let formula_0 and suppose that formula_1. If formula_2 and formula_3, then formula_4.
Proofs.
Tannery's theorem follows directly from Lebesgue's dominated convergence theorem applied to the sequence space formula_5.
An elementary proof can also be given.
Example.
Tannery's theorem can be used to prove that the binomial limit and the infinite series characterizations of the exponential formula_6 are equivalent. Note that
formula_7
Define formula_8. We have that formula_9 and that formula_10, so Tannery's theorem can be applied and
formula_11
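The example above can be illustrated numerically (the value of x below is an arbitrary choice): the partial sums of the binomial expansion of (1 + x/n)^n converge to e^x as n grows, which is the interchange of limit and sum that Tannery's theorem justifies via the summable dominating bound |x|^k / k!.

```python
import math

x = 1.5   # arbitrary illustrative value

def S(n):
    # S_n = sum_{k=0}^n C(n,k) (x/n)^k, i.e. (1 + x/n)^n expanded;
    # accumulated via the term recurrence to avoid huge intermediates.
    total, term = 0.0, 1.0           # term for k = 0
    for k in range(n + 1):
        total += term
        term *= (n - k) / (k + 1) * (x / n)
    return total

# sanity check: the sum really equals the binomial power
assert abs(S(100) - (1 + x / 100) ** 100) < 1e-9

# convergence toward e^x, as Tannery's theorem predicts
errors = [abs(S(n) - math.exp(x)) for n in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 1e-2
```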
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " S_n = \\sum_{k=0}^\\infty a_k(n) "
},
{
"math_id": 1,
"text": " \\lim_{n\\to\\infty} a_k(n) = b_k "
},
{
"math_id": 2,
"text": " |a_k(n)| \\le M_k "
},
{
"math_id": 3,
"text": " \\sum_{k=0}^\\infty M_k < \\infty "
},
{
"math_id": 4,
"text": " \\lim_{n\\to\\infty} S_n = \\sum_{k=0}^{\\infty} b_k "
},
{
"math_id": 5,
"text": "\\ell^1"
},
{
"math_id": 6,
"text": " e^x "
},
{
"math_id": 7,
"text": " \\lim_{n\\to\\infty} \\left(1 + \\frac{x}{n}\\right)^n = \\lim_{n\\to\\infty} \\sum_{k=0}^n {n \\choose k} \\frac{x^k}{n^k}. "
},
{
"math_id": 8,
"text": " a_k(n) = {n \\choose k} \\frac{x^k}{n^k} "
},
{
"math_id": 9,
"text": " |a_k(n)| \\leq \\frac{|x|^k}{k!} "
},
{
"math_id": 10,
"text": " \\sum_{k=0}^\\infty \\frac{|x|^k}{k!} = e^{|x|} < \\infty "
},
{
"math_id": 11,
"text": " \\lim_{n\\to\\infty} \\sum_{k=0}^\\infty {n \\choose k} \\frac{x^k}{n^k}\n=\\sum_{k=0}^\\infty \\lim_{n\\to\\infty} {n \\choose k} \\frac{x^k}{n^k}\n=\\sum_{k=0}^\\infty \\frac{x^k}{k!}\n= e^x. "
}
] |
https://en.wikipedia.org/wiki?curid=59113779
|
5911859
|
Ax–Kochen theorem
|
On the existence of zeros of homogeneous polynomials over the p-adic numbers
The Ax–Kochen theorem, named for James Ax and Simon B. Kochen, states that for each positive integer "d" there is a finite set "Yd" of prime numbers, such that if "p" is any prime not in "Yd" then every homogeneous polynomial of degree "d" over the p-adic numbers in at least "d"2 + 1 variables has a nontrivial zero.
The proof of the theorem.
The proof of the theorem makes extensive use of methods from mathematical logic, such as model theory.
One first proves Serge Lang's theorem, stating that the analogous theorem is true for the field F"p"(("t")) of formal Laurent series over a finite field F"p" with formula_0. In other words, every homogeneous polynomial of degree "d" with more than "d"2 variables has a non-trivial zero (so F"p"(("t")) is a C2 field).
Then one shows that if two Henselian valued fields have equivalent valuation groups and residue fields, and the residue fields have characteristic 0, then they are elementarily equivalent (which means that a first order sentence is true for one if and only if it is true for the other).
Next one applies this to two fields, one given by an ultraproduct over all primes of the fields F"p"(("t")) and the other given by an ultraproduct over all primes of the "p"-adic fields "Q""p".
Both residue fields are given by an ultraproduct over the fields F"p", so are isomorphic and have characteristic 0, and both value groups are the same, so the ultraproducts are elementarily equivalent. (Taking ultraproducts is used to force the residue field to have characteristic 0; the residue fields of F"p"(("t"))
and "Q""p" both have non-zero characteristic "p".)
The elementary equivalence of these ultraproducts implies that for any sentence in the language of valued fields, there is a finite set "Y" of exceptional primes, such that for any "p" not in this set the sentence is true for F"p"(("t")) if and only if it is true for the field of "p"-adic numbers. Applying this to the sentence stating that every non-constant homogeneous polynomial of degree "d" in at least "d"2+1 variables represents 0, and using Lang's theorem, one gets the Ax–Kochen theorem.
Alternative proof.
Jan Denef found a purely geometric proof for a conjecture of Jean-Louis Colliot-Thélène which generalizes the Ax–Kochen theorem.
Exceptional primes.
Emil Artin conjectured this theorem with the finite exceptional set "Yd" being empty (that is, that all "p"-adic fields are C2), but Guy Terjanian found the following 2-adic counterexample for "d" = 4. Define
formula_1
Then "G" has the property that it is 1 mod 4 if some "x" is odd, and 0 mod 16 otherwise. It follows easily from this that the homogeneous form
"G"(x) + "G"(y) + "G"(z) + 4"G"(u) + 4"G"(v) + 4"G"(w)
of degree "d" = 4 in 18 > "d"2 variables has no non-trivial zeros over the 2-adic integers.
Later Terjanian showed that for each prime "p" and multiple "d" > 2 of "p"("p" − 1), there is a form over the "p"-adic numbers of degree "d" with more than "d"2 variables but no nontrivial zeros. In other words, for all "d" > 2, "Yd" contains all primes "p" such that "p"("p" − 1) divides "d".
An explicit but very large bound for the exceptional set of primes "p" is known. If the degree "d" is 1, 2, or 3 the exceptional set is empty. For "d" = 5 the exceptional set has been shown to be bounded by 13; for "d" = 7 it is bounded by 883, and for "d" = 11 by 8053.
|
[
{
"math_id": 0,
"text": "Y_d = \\varnothing"
},
{
"math_id": 1,
"text": " G(x) = G(x_1,x_2,x_3) = \\sum x_i^4 - \\sum_{i\\,<\\,j} x_i^2 x_j^2 - x_1 x_2 x_3 (x_1 + x_2+x_3). "
}
] |
https://en.wikipedia.org/wiki?curid=5911859
|
59120371
|
Jerzy Baksalary
|
Polish mathematician (1944–2005)
Jerzy Kazimierz Baksalary (25 June 1944 – 8 March 2005) was a Polish mathematician who specialized in mathematical statistics and linear algebra. In 1990 he was appointed professor of mathematical sciences. He authored over 170 published academic papers and won one of the Ministry of National Education awards.
Biography.
Early life and education (1944 – 1988).
Baksalary was born in Poznań, Poland on 25 June 1944. From 1969 to 1988, he worked at the Agricultural University of Poznań.
In 1975, Baksalary received a PhD degree from Adam Mickiewicz University in Poznań; his thesis on linear statistical models was supervised by Tadeusz Caliński. He received a Habilitation in 1984, also from Adam Mickiewicz University, where his "Habilitationsschrift" was also on linear statistical models.
Career (1988 – 2005).
In 1988, Baksalary joined the Tadeusz Kotarbiński Pedagogical University in Zielona Góra, Poland, serving as the university's rector from 1990 to 1996. In 1990, he became a "Professor of Mathematical Sciences", a title received from the President of Poland. He spent the 1989–1990 academic year at the University of Tampere in Finland. Later on, he joined the University of Zielona Góra.
2005 death and legacy.
Baksalary died in Poznań on 8 March 2005. His funeral was held there on 15 March 2005. There, Caliński praised Baksalary for his "contributions to the Poznań school of mathematical statistics and biometry".
Memorial events in honor of Baksalary were also held at two conferences after his death:
Research.
In 1979, Baksalary and Radosław Kala proved that the matrix equation formula_0 has a solution for some matrices "X" and "Y" if and only if formula_1. (Here, formula_2 denotes some g-inverse of the matrix "A".) This is equivalent to a 1952 result by W. E. Roth on the same equation, which states that the equation has a solution iff the ranks of the block matrices formula_3 and formula_4 are equal.
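Roth's rank criterion can be demonstrated numerically. In the Python sketch below, the matrix sizes and data are arbitrary illustrative choices: a solvable right-hand side C is manufactured from known X0, Y0 and the two block-matrix ranks are compared; a generic random C is then (generically) unsolvable for these shapes, and the ranks differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# AX - YB = C is solvable iff rank [[A, C], [0, B]] == rank [[A, 0], [0, B]].
m, n = 4, 3
A = rng.normal(size=(m, n))      # rank-deficient in rows (m > n)
B = rng.normal(size=(2, 5))      # rank-deficient in columns (2 < 5)

def roth_rank_equal(A, B, C):
    zA = np.zeros((B.shape[0], A.shape[1]))
    zC = np.zeros((A.shape[0], C.shape[1]))
    top  = np.block([[A, C], [zA, B]])
    diag = np.block([[A, zC], [zA, B]])
    return np.linalg.matrix_rank(top) == np.linalg.matrix_rank(diag)

# A solvable C, built from known X0 and Y0:
X0 = rng.normal(size=(n, 5))
Y0 = rng.normal(size=(m, 2))
C = A @ X0 - Y0 @ B
assert roth_rank_equal(A, B, C)          # ranks match: solvable

# A generic random C is (generically) not of the form AX - YB here:
C_bad = rng.normal(size=(m, 5))
assert not roth_rank_equal(A, B, C_bad)  # ranks differ: unsolvable
```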
In 1980, he and Kala extended this result to the matrix equation formula_5, proving that it can be solved if and only if formula_6, where formula_7 and formula_8. (Here, the notation formula_9, formula_10 is adopted for any matrix "M".)
In 1981, Baksalary and Kala proved that, for a Gauss-Markov model formula_11 in which the vector-valued variable has expectation formula_12 and dispersion matrix "V", a best linear unbiased estimator of formula_12 that is a function of formula_13 exists for a given matrix "F" iff formula_14. The condition is equivalent to stating that formula_15, where formula_16 denotes the rank of the respective matrix.
In 1995, Baksalary and Sujit Kumar Mitra introduced the "left-star" and "right-star" partial orderings on the set of complex matrices, which are defined as follows. The matrix "A" is below the matrix "B" in the left-star ordering, written formula_17, iff formula_18 and formula_19, where formula_20 denotes the column span and formula_21 denotes the conjugate transpose. Similarly, "A" is below "B" in the right-star ordering, written formula_22, iff formula_23 and formula_24.
In 2000, Jerzy Baksalary and Oskar Maria Baksalary characterized all situations in which a linear combination formula_25 of two idempotent matrices can itself be idempotent. These include the three previously known cases formula_26, formula_27, and formula_28, found by Rao and Mitra (1971), and one additional case where formula_29 and formula_30.
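Two of these cases can be illustrated with small hypothetical idempotents: the sum of two "disjoint" orthogonal projectors (P1 P2 = P2 P1 = 0) is again idempotent, and when (P1 − P2)² = 0 every combination c·P1 + (1 − c)·P2 is idempotent.

```python
import numpy as np

def idempotent(M):
    return np.allclose(M @ M, M)

# Case P = P1 + P2: disjoint orthogonal projectors in R^3.
P1 = np.diag([1.0, 0.0, 0.0])
P2 = np.diag([0.0, 1.0, 0.0])
assert idempotent(P1) and idempotent(P2)
assert np.allclose(P1 @ P2, 0) and np.allclose(P2 @ P1, 0)
assert idempotent(P1 + P2)

# Extra case c2 = 1 - c1 with (Q1 - Q2)^2 = 0 (a non-orthogonal pair):
Q1 = np.array([[1.0, 0.0], [0.0, 0.0]])
Q2 = np.array([[1.0, 1.0], [0.0, 0.0]])
assert idempotent(Q1) and idempotent(Q2)
assert np.allclose((Q1 - Q2) @ (Q1 - Q2), 0)
for c in (0.3, -2.0, 5.5):                  # arbitrary coefficients
    assert idempotent(c * Q1 + (1 - c) * Q2)
```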
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "AX - YB = C"
},
{
"math_id": 1,
"text": "(I - A^-A)C(I - B^-B) = 0"
},
{
"math_id": 2,
"text": "A^-"
},
{
"math_id": 3,
"text": "\\begin{bmatrix}\n A & C\\\\\n 0 & B\\\\\n\\end{bmatrix}"
},
{
"math_id": 4,
"text": "\\begin{bmatrix}\n A & 0\\\\\n 0 & B\\\\\n\\end{bmatrix}"
},
{
"math_id": 5,
"text": "AXB + CYD = E"
},
{
"math_id": 6,
"text": "K_GK_AE = 0, K_AER_D = 0, K_CER_B = 0, ER_BR_H = 0"
},
{
"math_id": 7,
"text": "G := K_AC"
},
{
"math_id": 8,
"text": "H := DR_B"
},
{
"math_id": 9,
"text": "K_M := I - MM^-"
},
{
"math_id": 10,
"text": "R_M := I - M^-M"
},
{
"math_id": 11,
"text": "\\{y, X\\beta, V\\}"
},
{
"math_id": 12,
"text": "X\\beta"
},
{
"math_id": 13,
"text": "Fy"
},
{
"math_id": 14,
"text": "C(X)\\subset C(TF')"
},
{
"math_id": 15,
"text": "r(X\\vdots TF') = r(X)"
},
{
"math_id": 16,
"text": "r(\\cdot)"
},
{
"math_id": 17,
"text": "A ~*< B"
},
{
"math_id": 18,
"text": "A^*A = A^*B"
},
{
"math_id": 19,
"text": "\\mathcal{M}(A)\\subseteq \\mathcal{M}(B)"
},
{
"math_id": 20,
"text": "\\mathcal{M}(\\cdot)"
},
{
"math_id": 21,
"text": "A^*"
},
{
"math_id": 22,
"text": "A <*~ B"
},
{
"math_id": 23,
"text": "AA^* = BA^*"
},
{
"math_id": 24,
"text": "\\mathcal{M}(A^*) \\subseteq \\mathcal{M}(B^*)"
},
{
"math_id": 25,
"text": "P = c_1P_1 + c_2P_2"
},
{
"math_id": 26,
"text": "P = P_1 + P_2"
},
{
"math_id": 27,
"text": "P = P_1 - P_2"
},
{
"math_id": 28,
"text": "P = P_2 - P_1"
},
{
"math_id": 29,
"text": "c_2 = 1 - c_1"
},
{
"math_id": 30,
"text": "(P_1 - P_2)^2 = 0"
}
] |
https://en.wikipedia.org/wiki?curid=59120371
|
59125
|
Comma
|
Punctuation mark (,)
The comma , is a punctuation mark that appears in several variants in different languages. Some typefaces render it as a small line, slightly curved or straight, but inclined from the vertical; others give it the appearance of a miniature filled-in figure 9 placed on the baseline. In many typefaces it is the same shape as an apostrophe or single closing quotation mark ’.
The comma is used in many contexts and languages, mainly to separate parts of a sentence such as clauses, and items in lists mainly when there are three or more items listed. The word "comma" comes from the Greek (), which originally meant a cut-off piece, specifically in grammar, a short clause.
A comma-shaped mark is used as a diacritic in several writing systems and is considered distinct from the cedilla. In Byzantine and modern copies of Ancient Greek, the "rough" and "smooth breathings" () appear above the letter. In Latvian, Romanian, and Livonian, the comma diacritic appears below the letter, as in ș.
In spoken language, a common rule of thumb is that the function of a comma is generally performed by a pause.
"In this article," ⟨x⟩ "denotes a grapheme (writing) and" /x/ "denotes a phoneme (sound)."
History.
The development of punctuation is much more recent than the alphabet.
In the 3rd century BC, Aristophanes of Byzantium invented a system of single dots () at varying levels, which separated verses and indicated the amount of breath needed to complete each fragment of the text when reading aloud. The different lengths were signified by a dot at the bottom, middle, or top of the line. For a short passage, a mark in the form of a dot ⟨·⟩ was placed mid-level. This is the origin of the concept of a comma, although the name came to be used for the mark itself instead of the clause it separated.
The mark used today is descended from a diagonal slash ⟨/⟩, known as the "virgula suspensiva", used from the 13th to 17th centuries to represent a pause. The modern comma was first used by Aldus Manutius.
Uses in English.
In general, the comma shows that the words immediately before the comma are less closely or exclusively linked grammatically to those immediately after the comma than they might be otherwise. The comma performs a number of functions in English writing. It is used in generally similar ways in other languages, particularly European ones, although the rules on comma usage – and their rigidity – vary from language to language.
List separator and the serial (Oxford) comma.
Commas are placed between items in lists, as in "They own a cat, a dog, two rabbits, and seven mice."
Whether the final conjunction, most frequently "and", should be preceded by a comma, called the "serial comma", is one of the most disputed linguistic or stylistic questions in English:
The serial comma is used much more often, usually routinely, in the United States. A majority of American style guides mandate its use, including "The Chicago Manual of Style", Strunk and White's classic "The Elements of Style" and the U.S. Government Publishing Office's "Style Manual". Conversely, the "AP Stylebook" for journalistic writing advises against it.
The serial comma is also known as the Oxford comma, Harvard comma, or series comma. Although less common in British English, its usage occurs within both American and British English. It is called the Oxford comma because of its long history of use by Oxford University Press.
According to "New Hart's Rules", "house style will dictate" whether to use the serial comma. "The general rule is that one style or the other should be used consistently." No association with region or dialect is suggested, other than that its use has been strongly advocated by Oxford University Press. Its use is preferred by Fowler's "Modern English Usage". It is recommended by the United States Government Printing Office, Harvard University Press, and the classic "Elements of Style" of Strunk and White.
Use of a comma may prevent ambiguity:
The serial comma does not eliminate all confusion. Consider the following sentence:
As a rule of thumb, "The Guardian Style Guide" suggests that straightforward lists ("he ate ham, eggs and chips") do not need a comma before the final "and", but sometimes it can help the reader ("he ate cereal, kippers, bacon, eggs, toast and marmalade, and tea"). "The Chicago Manual of Style" and other academic writing guides require the serial comma: all lists must have a comma before the "and" prefacing the last item in a series.
If the individual items of a list are long, complex, affixed with description, or themselves contain commas, semicolons may be preferred as separators, and the list may be introduced with a colon.
In news headlines, a comma might replace the word "and", even if there are only two items, in order to save space, as in this headline from Reuters:
Separation of clauses.
Commas are often used to separate clauses. In English, a comma is often used to separate a dependent clause from the independent clause if the dependent clause comes first: "After I fed the cat, I brushed my clothes." (Compare this with "I brushed my clothes after I fed the cat.") A relative clause takes commas if it is non-restrictive, as in "I cut down all the trees, which were over six feet tall." (Without the comma, this would mean that only the trees more than six feet tall were cut down.) Some style guides prescribe that two independent clauses joined by a coordinating conjunction ("for", "and", "nor", "but", "or", "yet", "so") must be separated by a comma placed before the conjunction. In the following sentences, where the second clause is independent (because it can stand alone as a sentence), the comma is considered by those guides to be necessary:
In the following sentences, where the second half of the sentence is a dependent clause (because it does not contain an explicit subject), those guides prescribe that the comma be omitted:
However, such guides permit the comma to be omitted if the second independent clause is very short, typically when the second independent clause is an imperative, as in:
The above guidance is not universally accepted or applied. Long coordinate clauses, particularly when separated by "but", are often separated by commas:
In some languages, such as German and Polish, stricter rules apply on comma use between clauses, with dependent clauses always being set off with commas, and commas being generally proscribed before certain coordinating conjunctions.
The joining of two independent sentences with a comma and no conjunction (as in "It is nearly half past five, we cannot reach town before dark.") is known as a "comma splice" and is sometimes considered an error in English; in most cases a semicolon should be used instead. A comma splice should not be confused, though, with the literary device called "asyndeton", in which coordinating conjunctions are purposely omitted for a specific stylistic effect.
A much debated comma is the one in the Second Amendment to the United States Constitution, which reads "A well regulated Militia being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed" but was ratified by several states as "A well regulated Militia being necessary to the security of a free State, the right of the people to keep and bear Arms shall not be infringed"; the difference has caused much debate over the amendment's interpretation.
Certain adverbs.
Commas are always used to set off certain adverbs at the beginning of a sentence, including "however", "in fact", "therefore", "nevertheless", "moreover", "furthermore", and "still".
If these adverbs appear in the middle of a sentence, they are followed and preceded by a comma. As in the second of the two examples below, if a semicolon separates the two sentences and the second sentence starts with an adverb, this adverb is preceded by a semicolon and followed by a comma.
Using commas to offset certain adverbs is optional, including "then", "so", "yet", "instead", and "too" (meaning "also").
Parenthetical phrases.
Commas are often used to enclose parenthetical words and phrases within a sentence (i.e., information that is not essential to the meaning of the sentence). Such phrases are both preceded and followed by a comma, unless that would result in a doubling of punctuation marks or the parenthetical is at the start or end of the sentence. The following are examples of types of parenthetical phrases:
The parenthesization of phrases may change the connotation, reducing or eliminating ambiguity. In the following example, the thing in the first sentence that is relaxing is the cool day, whereas in the second sentence it is the walk, since the introduction of commas makes "on a cool day" parenthetical:
"They took a walk on a cool day that was relaxing."
"They took a walk, on a cool day, that was relaxing."
As more phrases are introduced, ambiguity accumulates, but when commas separate each phrase, the phrases clearly become modifiers of just one thing. In the second sentence below, that thing is "the walk":
"They took a walk in the park on a cool day that was relaxing."
"They took a walk, in the park, on a cool day, that was relaxing."
Between adjectives.
A comma is used to separate "coordinate adjectives" (i.e., adjectives that directly and equally modify the following noun). Adjectives are considered coordinate if the meaning would be the same if their order were reversed or if "and" were placed between them. For example:
Before quotations.
Some writers precede quoted material that is the grammatical object of an active verb of speaking or writing with a comma, as in "Mr. Kershner says, "You should know how to use a comma."" Quotations that follow and support an assertion are often preceded by a colon rather than a comma.
Other writers do not put a comma before quotations unless one would occur anyway. Thus, they would write "Mr. Kershner says "You should know how to use a comma.""
In dates.
Month day, year.
When a date is written as a month followed by a day followed by a year, a comma separates the day from the year: December 19, 1941. This style is common in American English. The comma avoids confusion between the two consecutive numbers, as in "December 19 1941".
Most style manuals, including "The Chicago Manual of Style"
and the "AP Stylebook",
also recommend that the year be treated as a parenthetical, requiring a second comma after it: "Feb. 14, 1987, was the target date."
If just the month and year are given, no commas are used: "Her daughter may return in June 2009 for the reunion."
Day month year.
When the day precedes the month, the month name separates the numeric day and year, so commas are not necessary to separate them: "The Raid on Alexandria was carried out on 19 December 1941."
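The two conventions can be illustrated with Python's standard strftime formatting (a minimal sketch; the format codes are standard strftime directives):

```python
from datetime import date

d = date(1941, 12, 19)

# Month day, year: a comma separates the two consecutive numbers.
print(d.strftime("%B %d, %Y"))  # December 19, 1941

# Day month year: the month name already separates day and year, so no comma.
print(d.strftime("%d %B %Y"))   # 19 December 1941
```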
In geographical names.
Commas are used to separate parts of geographical references, such as city and state ("Dallas, Texas") or city and country ("Kampala, Uganda"). Additionally, most style manuals, including "The Chicago Manual of Style"
and the "AP Stylebook",
recommend that the second element be treated as a parenthetical, requiring a second comma after: "The plane landed in Kampala, Uganda, that evening."
The United States Postal Service and Royal Mail recommend leaving out punctuation when writing addresses on actual letters and packages, as the marks hinder optical character recognition. Canada Post has similar guidelines, only making very limited use of hyphens.
In mathematics.
Similar to the case in natural languages, commas are often used to delineate the boundary between multiple mathematical objects in a list (e.g., formula_0). Commas are also used to indicate the comma derivative of a tensor.
In numbers.
In representing large numbers, from the right side to the left, English texts usually use commas to separate each group of three digits in front of the decimal. This is almost always done for numbers of six or more digits, and often for four or five digits but not in front of the number itself. However, in much of Europe, Southern Africa and Latin America, periods or spaces are used instead; the comma is used as a decimal separator, equivalent to the use in English of the decimal point. In India, the groups are two digits, except for the rightmost group, which is of three digits. In some styles, the comma may not be used for this purpose at all (e.g. in the SI writing style); a space may be used to separate groups of three digits instead.
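As an illustration, Python's format specification mini-language supports both the English comma grouping and (via underscore grouping, swapped here for spaces) the SI-style spacing described above:

```python
n = 1234567.891

# English convention: a comma between each group of three digits before the decimal.
print(f"{n:,}")                       # 1,234,567.891

# SI style: groups of three digits separated by spaces instead of commas.
# Python's format spec offers underscore grouping, which we replace with spaces.
print(f"{n:_.3f}".replace("_", " "))  # 1 234 567.891
```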
In names.
Commas are used when rewriting names to present the surname first, generally in instances of alphabetization by surname: "Smith, John". They are also used before many titles that follow a name: "John Smith, Ph.D."
It can also be used in regnal names followed by their occupation: "Louis XIII, king of France and Navarre".
Similarly in lists that are presented with an inversion: "socks, green: 3 pairs; socks, red: 2 pairs; tie, regimental: 1".
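Surname-first rewriting for alphabetization can be sketched in Python (the names and the helper function here are illustrative, not from any library):

```python
def surname_first(name):
    """Rewrite "John Smith" as "Smith, John" for alphabetization by surname."""
    *given, surname = name.split()
    return f"{surname}, {' '.join(given)}"

names = ["John Smith", "Ada Lovelace", "Grace Hopper"]
print(sorted(surname_first(n) for n in names))
# ['Hopper, Grace', 'Lovelace, Ada', 'Smith, John']
```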
Ellipsis.
Commas may be used to indicate that a word, or a group of words, has been omitted, as in "The cat was white; the dog, brown." (Here the comma replaces "was".)
Vocative.
Commas are placed before, after, or around a noun or pronoun used independently in speaking to some person, place, or thing:
Between the subject and predicate.
In his 1785 essay "An Essay on Punctuation", Joseph Robertson advocated a comma between the subject and predicate of long sentences for clarity; however, this usage is regarded as an error in modern times.
Differences between American and British usage in placement of commas and quotation marks.
The comma and the quotation mark can be paired in several ways.
In Great Britain and many other parts of the world, punctuation is usually placed within quotation marks only if it is part of what is being quoted or referred to:
In American English, the comma is commonly included inside a quotation mark:
During the Second World War, the British carried the comma over into abbreviations. Specifically, "Special Operations, Executive" was written "S.O.,E.". Nowadays, even the full stops are frequently discarded.
Languages other than English.
Western Europe.
Western European languages like German, French, Italian, Spanish, and Portuguese use the same comma as English, with similar spacing, though usage may be somewhat different. For instance, in Standard German, subordinate clauses are always preceded by commas.
Comma variants.
The basic comma is defined in Unicode as U+002C COMMA, and many variants by typography or language are also defined.
Some languages use a completely different sort of character for the purpose of the comma.
There are also a number of comma-like diacritics with "COMMA" in their Unicode names that are not intended for use as punctuation. A comma-like low quotation mark is also available (shown below; corresponding sets of raised single quotation marks and double-quotation marks are not shown).
There are various other Unicode characters that include commas or comma-like figures with other characters or marks, that are not shown in these tables.
Greece.
<templatestyles src="Template:Visible anchor/styles.css" />Modern Greek uses the same Unicode comma for its () and it is officially romanized as a Latin comma, but it has additional roles owing to its conflation with the former hypodiastole, a curved interpunct used to disambiguate certain homonyms. As such, the comma functions as a silent letter in a handful of Greek words, principally distinguishing (, 'whatever') from (, 'that').
East Asia.
<templatestyles src="Template:Visible anchor/styles.css" />The enumeration or ideographic comma () is used in Chinese and Japanese punctuation, and somewhat in Korean punctuation. In China and Korea, this comma () is usually only used to separate items in lists, while it is the more common form of comma in Japan (, lit. 'clause mark').
In documents that mix Japanese and Latin scripts, the full-width comma () is used; this is the standard form of comma () in China. Since East Asian typography permits commas to join dependent clauses dealing with certain topics or lines of thought, commas may be used in ways that would be considered comma splices in English.
Korean punctuation uses both commas and interpuncts for lists.
In Unicode 5.2.0, "numbers with commas" ( through ) were added to the Enclosed Alphanumeric Supplement block for compatibility with the ARIB STD B24 character set.
West Asia.
<templatestyles src="Template:Visible anchor/styles.css" />The comma in the Arabic script used by languages including Arabic, Urdu, and Persian, is "upside-down" ⟨⟩ (), in order to distinguish it from the Arabic diacritic ⟨⟩ representing the vowel , which is similarly shaped. In Arabic texts, the Western-styled comma () is used as a decimal point.
Hebrew script is also written from right to left. However, Hebrew punctuation includes only a regular comma ⟨⟩.
South Asia.
<templatestyles src="Template:Visible anchor/styles.css" />Reversed comma () is used in Sindhi when written in Arabic script. It is distinct from the standard Arabic comma.
Dravidian languages such as Tamil, Telugu, Kannada, and Malayalam also use the punctuation mark in similar usage to that of European languages with similar spacing.
Computing.
In the common character encoding systems Unicode and ASCII, character 44 (0x2C) corresponds to the comma symbol. The HTML numeric character reference is codice_0.
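The code point can be checked directly in Python:

```python
# The comma occupies code point 44 (0x2C) in both ASCII and Unicode.
assert ord(",") == 44 == 0x2C
print("\u002C")           # the comma character itself
print(f"&#{ord(',')};")   # &#44;  (the HTML numeric character reference)
```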
In many computer languages commas are used as a field delimiter to separate arguments to a function, to separate elements in a list, and to perform data designation on multiple variables at once.
In the C programming language the comma symbol is an operator which evaluates its first argument (which may have side-effects) and then returns the value of its evaluated second argument. This is useful in "for" statements and macros.
In Smalltalk and APL, the comma operator is used to concatenate collections, including strings. In APL, it is also used monadically to rearrange the items of an array into a list.
In Prolog, the comma is used to denote Logical Conjunction ("and").
The comma-separated values (CSV) format is very commonly used in exchanging text data between database and spreadsheet formats.
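A minimal sketch of the format using Python's standard csv module; note how a field that itself contains a comma is quoted so the delimiter stays unambiguous:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["name", "title"])
writer.writerow(["Smith, John", "Ph.D."])  # the embedded comma forces quoting

text = buf.getvalue()
print(text)  # two lines: 'name,title' and '"Smith, John",Ph.D.'

# Reading it back recovers the original fields, embedded comma and all.
rows = list(csv.reader(io.StringIO(text)))
print(rows)  # [['name', 'title'], ['Smith, John', 'Ph.D.']]
```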
Diacritical usage.
The comma is used as a diacritic mark in Romanian under ⟨s⟩ (⟨Ș⟩, ⟨ș⟩), and under ⟨t⟩ (⟨Ț⟩, ⟨ț⟩). A cedilla is occasionally used instead of it, but this is technically incorrect. The symbol ⟨d̦⟩ ('d with comma below') was used as part of the Romanian transitional alphabet (19th century) to indicate the sounds denoted by the Latin letter ⟨z⟩ or letters ⟨dz⟩ where these derived from a Cyrillic ѕ (⟨ѕ⟩, ). The comma and the cedilla are both derivatives of ⟨ʒ⟩ (a small cursive ⟨z⟩) placed below the letter. From this standpoint alone, ⟨ș⟩, ⟨ț⟩, and ⟨d̦⟩ could potentially be regarded as stand-ins for /sz/, /tz/, and /dz/ respectively.
In Latvian, the comma is used on the letters ⟨ģ⟩, ⟨ķ⟩, ⟨ļ⟩, ⟨ņ⟩, and historically also ⟨ŗ⟩, to indicate palatalization. Because the lowercase letter ⟨g⟩ has a descender, the comma is rotated 180° and placed over the letter. Although their Adobe glyph names are 'letter with comma', their names in the Unicode Standard are 'letter with a cedilla'. They were introduced to the Unicode standard before 1992 and, per Unicode Consortium policy, their names cannot be altered. In the late 1920s and 1930s, the Latgalian orthography used in Siberia included additional letters with comma: ⟨⟩, ⟨⟩, ⟨⟩, ⟨⟩, ⟨⟩, ⟨⟩, ⟨⟩, ⟨⟩.
In Livonian, whose alphabet is based on a mixture of Latvian and Estonian alphabets, the comma is used on the letters ⟨ḑ⟩, ⟨ļ⟩, ⟨ņ⟩, ⟨ŗ⟩, ⟨ț⟩ to indicate palatalization in the same fashion as Latvian, except that Livonian uses ⟨ḑ⟩ and ⟨ț⟩ to represent the same palatal plosive phonemes which Latvian writes as ⟨ģ⟩ and ⟨ķ⟩ respectively.
In Czech and Slovak, the diacritic in the characters ⟨ď⟩, ⟨ť⟩, and ⟨ľ⟩ resembles a superscript comma, but it is used instead of a caron because the letter has an ascender. Other ascender letters with carons, such as letters ⟨ȟ⟩ (used in Finnish Romani and Lakota) and ⟨ǩ⟩ (used in Skolt Sami), did not modify their carons to superscript commas.
In 16th-century Guatemala, the archaic letter cuatrillo with a comma (⟨Ꜯ⟩ and ⟨ꜯ⟩) was used to write Mayan languages.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(3, 5, 12)"
}
] |
https://en.wikipedia.org/wiki?curid=59125
|
591253
|
Kirchhoff's circuit laws
|
Two equalities that deal with the current and potential difference
Kirchhoff's circuit laws are two equalities that deal with the current and potential difference (commonly known as voltage) in the lumped element model of electrical circuits. They were first described in 1845 by German physicist Gustav Kirchhoff. This generalized the work of Georg Ohm and preceded the work of James Clerk Maxwell. Widely used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's laws. These laws can be applied in time and frequency domains and form the basis for network analysis.
Both of Kirchhoff's laws can be understood as corollaries of Maxwell's equations in the low-frequency limit. They are accurate for DC circuits, and for AC circuits at frequencies where the wavelengths of electromagnetic radiation are very large compared to the circuits.
Kirchhoff's current law.
This law, also called Kirchhoff's first law, or Kirchhoff's junction rule, states that, for any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node; or equivalently:
"The algebraic sum of currents in a network of conductors meeting at a point is zero."
Recalling that current is a signed (positive or negative) quantity reflecting direction towards or away from a node, this principle can be succinctly stated as: formula_0 where "n" is the total number of branches with currents flowing towards or away from the node.
Kirchhoff's circuit laws were originally obtained from experimental results. However, the current law can be viewed as an extension of the conservation of charge, since charge is the product of current and the time the current has been flowing. If the net charge in a region is constant, the current law will hold on the boundaries of the region. This means that the current law relies on the fact that the net charge in the wires and components is constant.
Uses.
A matrix version of Kirchhoff's current law is the basis of most circuit simulation software, such as SPICE. The current law is used with Ohm's law to perform nodal analysis.
The current law is applicable to any lumped network irrespective of the nature of the network; whether unilateral or bilateral, active or passive, linear or non-linear.
Kirchhoff's voltage law.
This law, also called Kirchhoff's second law, or Kirchhoff's loop rule, states the following:
"The directed sum of the potential differences (voltages) around any closed loop is zero."
Similarly to Kirchhoff's current law, the voltage law can be stated as: formula_1
Here, "n" is the total number of voltages measured.
<templatestyles src="Math_proof/styles.css" />Derivation of Kirchhoff's voltage law
A similar derivation can be found in "The Feynman Lectures on Physics, Volume II, Chapter 22: AC Circuits".
Consider some arbitrary circuit. Approximate the circuit with lumped elements, so that time-varying magnetic fields are confined within each component and the field in the region exterior to the circuit is negligible. Based on this assumption, the Maxwell–Faraday equation reveals that formula_2 in the exterior region. If each of the components has a finite volume, then the exterior region is simply connected, and thus the electric field is conservative in that region. Therefore, for any loop in the circuit, we find that formula_3 where formula_4 are paths around the "exterior" of each of the components, from one terminal to another.
Note that this derivation uses the following definition for the voltage rise from formula_5 to formula_6:
formula_7
However, the electric potential (and thus voltage) can be defined in other ways, such as via the Helmholtz decomposition.
Generalization.
In the low-frequency limit, the voltage drop around any loop is zero. This includes imaginary loops arranged arbitrarily in space – not limited to the loops delineated by the circuit elements and conductors. In the low-frequency limit, this is a corollary of Faraday's law of induction (which is one of Maxwell's equations).
This has practical application in situations involving "static electricity".
Limitations.
Kirchhoff's circuit laws are the result of the lumped-element model and both depend on the model being applicable to the circuit in question. When the model is not applicable, the laws do not apply.
The current law is dependent on the assumption that the net charge in any wire, junction or lumped component is constant. Whenever the electric field between parts of the circuit is non-negligible, such as when two wires are capacitively coupled, this may not be the case. This occurs in high-frequency AC circuits, where the lumped element model is no longer applicable. For example, in a transmission line, the charge density in the conductor may be constantly changing.
On the other hand, the voltage law relies on the fact that the actions of time-varying magnetic fields are confined to individual components, such as inductors. In reality, the induced electric field produced by an inductor is not confined, but the leaked fields are often negligible.
Modelling real circuits with lumped elements.
The lumped element approximation for a circuit is accurate at low frequencies. At higher frequencies, leaked fluxes and varying charge densities in conductors become significant. To an extent, it is possible to still model such circuits using parasitic components. If frequencies are too high, it may be more appropriate to simulate the fields directly using finite element modelling or other techniques.
To model circuits so that both laws can still be used, it is important to understand the distinction between "physical" circuit elements and the "ideal" lumped elements. For example, a wire is not an ideal conductor. Unlike an ideal conductor, wires can inductively and capacitively couple to each other (and to themselves), and have a finite propagation delay. Real conductors can be modeled in terms of lumped elements by considering parasitic capacitances distributed between the conductors to model capacitive coupling, or parasitic (mutual) inductances to model inductive coupling. Wires also have some self-inductance.
Example.
Assume an electric network consisting of two voltage sources and three resistors.
According to the first law:
formula_8
Applying the second law to the closed circuit "s"1, and substituting for voltage using Ohm's law gives:
formula_9
The second law, again combined with Ohm's law, applied to the closed circuit "s"2 gives:
formula_10
This yields a system of linear equations in "i"1, "i"2, "i"3:
formula_11
which is equivalent to
formula_12
Assuming
formula_13
the solution is
formula_14
The current "i"3 has a negative sign which means the assumed direction of "i"3 was incorrect and "i"3 is actually flowing in the direction opposite to the red arrow labeled "i"3. The current in "R"3 flows from left to right.
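As a check, the system above can be solved exactly in Python using stdlib fractions; the solve_linear helper below is a generic Gauss–Jordan elimination sketch written for this example, not part of any circuit library:

```python
from fractions import Fraction as F

def solve_linear(A, b):
    """Solve A x = b by Gauss-Jordan elimination with exact Fraction arithmetic."""
    n = len(A)
    # Build the augmented matrix of Fractions.
    M = [[F(v) for v in row] + [F(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place.
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[r][n] / M[r][r] for r in range(n)]

# Kirchhoff's equations for the example: R1=100, R2=200, R3=300, E1=3 V, E2=4 V.
A = [[1, -1, -1],    # i1 - i2 - i3 = 0           (current law at the node)
     [100, 200, 0],  # R1*i1 + R2*i2 = E1         (voltage law around s1)
     [0, 200, -300]] # R2*i2 - R3*i3 = E1 + E2    (voltage law around s2)
b = [0, 3, 7]

i1, i2, i3 = solve_linear(A, b)
print(i1, i2, i3)  # 1/1100 4/275 -3/220
```

The negative value of the third current confirms that the assumed direction of "i"3 was opposite to the actual flow.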
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sum_{i=1}^n I_i = 0"
},
{
"math_id": 1,
"text": "\\sum_{i=1}^n V_i = 0"
},
{
"math_id": 2,
"text": "\\nabla\\times\\mathbf{E} = -\\frac{\\partial\\mathbf{B}}{\\partial t} = \\mathbf{0}"
},
{
"math_id": 3,
"text": "\\sum_i V_i = - \\sum_i \\int_{\\mathcal{P}_i}\\mathbf{E}\\cdot\\mathrm{d}\\mathbf{l} = \\oint\\mathbf{E}\\cdot\\mathrm{d}\\mathbf{l} = 0"
},
{
"math_id": 4,
"text": "\\mathcal{P}_i"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "V_{a \\to b} = -\\int_{\\mathcal{P}_{a \\to b}}\\mathbf{E}\\cdot \\mathrm{d}\\mathbf{l}"
},
{
"math_id": 8,
"text": " i_1 - i_2 - i_3 = 0"
},
{
"math_id": 9,
"text": "-R_2 i_2 + \\mathcal{E}_1 - R_1 i_1 = 0"
},
{
"math_id": 10,
"text": "-R_3 i_3 - \\mathcal{E}_2 - \\mathcal{E}_1 + R_2 i_2 = 0"
},
{
"math_id": 11,
"text": "\\begin{cases}\n i_1 - i_2 - i_3 & = 0 \\\\\n -R_2 i_2 + \\mathcal{E}_1 - R_1 i_1 & = 0 \\\\\n-R_3 i_3 - \\mathcal{E}_2 - \\mathcal{E}_1 + R_2 i_2 & = 0\n\\end{cases}"
},
{
"math_id": 12,
"text": "\\begin{cases}\n i_1 + (- i_2) + (- i_3) & = 0 \\\\\nR_1 i_1 + R_2 i_2 + 0 i_3 & = \\mathcal{E}_1 \\\\\n0 i_1 + R_2 i_2 - R_3 i_3 & = \\mathcal{E}_1 + \\mathcal{E}_2\n\\end{cases}"
},
{
"math_id": 13,
"text": "\\begin{align}\n R_1 &= 100\\Omega, & R_2 &= 200\\Omega, & R_3 &= 300\\Omega, \\\\\n \\mathcal{E}_1 &= 3\\text{V}, & \\mathcal{E}_2 &= 4\\text{V}\n\\end{align}"
},
{
"math_id": 14,
"text": "\\begin{cases}\ni_1 = \\frac{1}{1100}\\text{A} \\\\[6pt]\ni_2 = \\frac{4}{275}\\text{A} \\\\[6pt]\ni_3 = -\\frac{3}{220}\\text{A}\n\\end{cases}"
}
] |
https://en.wikipedia.org/wiki?curid=591253
|
591280
|
Kirchhoff's law of thermal radiation
|
Law of wavelength-specific emission and absorption
In heat transfer, Kirchhoff's law of thermal radiation refers to wavelength-specific radiative emission and absorption by a material body in thermodynamic equilibrium, including radiative exchange equilibrium. It is a special case of Onsager reciprocal relations as a consequence of the time reversibility of microscopic dynamics, also known as microscopic reversibility.
A body at temperature "T" radiates electromagnetic energy. A perfect black body in thermodynamic equilibrium absorbs all light that strikes it, and radiates energy according to a unique law of radiative emissive power for temperature "T" (Stefan–Boltzmann law), universal for all perfect black bodies. Kirchhoff's law states that:
<templatestyles src="Block indent/styles.css"/>For a body of any arbitrary material emitting and absorbing thermal electromagnetic radiation at every wavelength in thermodynamic equilibrium, the ratio of its emissive power to its dimensionless coefficient of absorption is equal to a universal function only of radiative wavelength and temperature: the perfect black-body emissive power.
Here, the dimensionless coefficient of absorption (or the absorptivity) is the fraction of incident light (power) at each spectral frequency that is absorbed by the body when it is radiating and absorbing in thermodynamic equilibrium.
In slightly different terms, the emissive power of an arbitrary opaque body of fixed size and shape at a definite temperature can be described by a dimensionless ratio, sometimes called the emissivity: the ratio of the emissive power of the body to the emissive power of a black body of the same size and shape at the same fixed temperature. With this definition, Kirchhoff's law states, in simpler language:
<templatestyles src="Block indent/styles.css"/>For an arbitrary body emitting and absorbing thermal radiation in thermodynamic equilibrium, the emissivity function is equal to the absorptivity function.
In some cases, emissive power and absorptivity may be defined to depend on angle, as described below. The condition of thermodynamic equilibrium is necessary in the statement, because the equality of emissivity and absorptivity often does not hold when the material of the body is not in thermodynamic equilibrium.
Kirchhoff's law has another corollary: the emissivity cannot exceed one (because the absorptivity cannot, by conservation of energy), so it is not possible to thermally radiate more energy than a black body, at equilibrium. In negative luminescence the angle and wavelength integrated absorption exceeds the material's emission; however, such systems are powered by an external source and are therefore not in thermodynamic equilibrium.
Principle of detailed balance.
Kirchhoff's law of thermal radiation has a refinement in that not only is thermal emissivity equal to absorptivity, it is equal "in detail". Consider a leaf. It is a poor absorber of green light (around 550 nm), which is why it looks green. By the principle of detailed balance, it is also a poor emitter of green light.
In other words, if a material, illuminated by black-body radiation of temperature formula_0, is dark at a certain frequency formula_1, then its thermal radiation will also be dark at the same frequency formula_1 and the same temperature formula_0.
More generally, all intensive properties are balanced in detail. So for example, the absorptivity at a certain incidence direction, for a certain frequency, of a certain polarization, is the same as the emissivity at the same direction, for the same frequency, of the same polarization. This is the principle of detailed balance.
<templatestyles src="Block indent/styles.css"/>In equilibrium the power radiated and absorbed by the body must be equal for any particular element of area of the body, for any particular direction of polarization, and for any frequency range.
History.
Before Kirchhoff's law was recognized, it had been experimentally established that a good absorber is a good emitter, and a poor absorber is a poor emitter. Naturally, a good reflector must be a poor absorber. This is why, for example, lightweight emergency thermal blankets are based on reflective metallic coatings: they lose little heat by radiation.
Kirchhoff's great insight was to recognize the universality and uniqueness of the function that describes the black body emissive power. But he did not know the precise form or character of that universal function. Attempts were made by Lord Rayleigh and Sir James Jeans between 1900 and 1905 to describe it in classical terms, resulting in the Rayleigh–Jeans law. This law turned out to be inconsistent, yielding the ultraviolet catastrophe. The correct form of the law was found by Max Planck in 1900, assuming quantized emission of radiation, and is termed Planck's law. This marks the advent of quantum mechanics.
Theory.
In a blackbody enclosure that contains electromagnetic radiation with a certain amount of energy at thermodynamic equilibrium, this "photon gas" will have a Planck distribution of energies.
One may suppose a second system, a cavity with walls that are opaque, rigid, and not perfectly reflective to any wavelength, to be brought into connection, through an optical filter, with the blackbody enclosure, both at the same temperature. Radiation can pass from one system to the other. For example, suppose that in the second system the density of photons in a narrow frequency band around wavelength formula_2 were higher than that of the first system. If the optical filter passed only that frequency band, then there would be a net transfer of photons, and their energy, from the second system to the first. This is in violation of the second law of thermodynamics, which requires that there can be no net transfer of heat between two bodies at the same temperature.
In the second system, therefore, at each frequency, the walls must absorb and emit energy in such a way as to maintain the black body distribution. Hence absorptivity and emissivity must be equal. The absorptivity formula_3 of the wall is the ratio of the energy absorbed by the wall to the energy incident on the wall, for a particular wavelength. Thus the absorbed energy is formula_4 where formula_5 is the intensity of black-body radiation at wavelength formula_2 and temperature formula_0. Independent of the condition of thermal equilibrium, the emissivity of the wall is defined as the ratio of emitted energy to the amount that would be radiated if the wall were a perfect black body. The emitted energy is thus formula_6 where formula_7 is the emissivity at wavelength formula_2. For the maintenance of thermal equilibrium, these two quantities must be equal, or else the distribution of photon energies in the cavity will deviate from that of a black body. This yields Kirchhoff's law:
formula_8
By a similar, but more complicated argument, it can be shown that, since black-body radiation is equal in every direction (isotropic), the emissivity and the absorptivity, if they happen to be dependent on direction, must again be equal for any given direction.
Average and overall absorptivity and emissivity data are often given for materials with values which "differ" from each other. For example, white paint is quoted as having an absorptivity of 0.16, while having an emissivity of 0.93. This is because the absorptivity is averaged with weighting for the solar spectrum, while the emissivity is weighted for the emission of the paint itself at normal ambient temperatures. The absorptivity quoted in such cases is calculated by:
formula_9
while the average emissivity is given by:
formula_10
where formula_11 is the emission spectrum of the sun, and formula_6 is the emission spectrum of the paint. Although, by Kirchhoff's law, formula_12 in the above equations, the above "averages" formula_13 and formula_14 are not generally equal to each other. The white paint will serve as a very good insulator against solar radiation, because it is very reflective of the solar radiation, and although it therefore emits poorly in the solar band, its temperature will be around room temperature, and it will emit whatever radiation it has absorbed in the infrared, where its emission coefficient is high.
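The gap between the two weighted averages can be illustrated numerically. The sketch below uses Planck's law for the weighting spectra and a deliberately crude, hypothetical step function for the paint's spectral absorptivity (0.10 below 3 µm, 0.95 above); the numbers are illustrative, not measured data for any real paint:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, speed of light, Boltzmann (SI)

def planck(lam, T):
    """Black-body spectral radiance at wavelength lam (m) and temperature T (K)."""
    return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * T))

def absorptivity(lam):
    # Hypothetical white paint: reflective (low alpha) in the visible/near-IR,
    # strongly absorbing (high alpha) in the thermal infrared.
    return 0.10 if lam < 3e-6 else 0.95

def spectral_average(T):
    """Average absorptivity weighted by a black-body spectrum at temperature T."""
    grid = [i * 1e-8 for i in range(10, 10_000)]  # 0.1 um .. 100 um
    total = sum(planck(l, T) for l in grid)
    return sum(absorptivity(l) * planck(l, T) for l in grid) / total

# Solar-weighted average sits near the visible-band value; the room-temperature
# average sits near the infrared value, mirroring the white-paint example.
print(round(spectral_average(5778), 2))  # close to 0.10
print(round(spectral_average(300), 2))   # close to 0.95
```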
Planck's derivation.
Historically, Planck derived the black body radiation law and detailed balance according to a classical thermodynamic argument, with a single heuristic step, which was later interpreted as a quantization hypothesis.
In Planck's setup, he started with a large Hohlraum at a fixed temperature formula_0. At thermal equilibrium, the Hohlraum is filled with a distribution of EM waves at thermal equilibrium with its walls. Next, he considered connecting the Hohlraum to a single small resonator, such as a Hertzian resonator. The resonator reaches a certain form of thermal equilibrium with the Hohlraum when the spectral input into the resonator equals the spectral output at the resonance frequency.
Next, suppose there are two Hohlraums at the same fixed temperature formula_0. Planck argued that the thermal equilibrium of the small resonator is the same when it is connected to either Hohlraum: we can disconnect the resonator from one Hohlraum and connect it to the other, and if the equilibria were different, energy would thereby be transported from one to the other, violating the second law. Therefore, the spectra of all black bodies at the same temperature are identical.
Using a heuristic of quantization, which he gleaned from Boltzmann, Planck argued that a resonator tuned to frequency formula_1, with average energy formula_15, would contain entropy formula_16 for some constant formula_17 (later termed the Planck constant). Then applying formula_18, Planck obtained the black body radiation law.
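This last step can be checked numerically. The sketch below adopts the convention that the entropy above carries the factor of the Boltzmann constant, so the equilibrium condition reads dS/dE = 1/T; the temperature and frequency are arbitrary illustrative values:

```python
import math

# Illustrative check (convention: S carries the kB factor, so equilibrium
# reads dS/dE = 1/T). Numerically differentiating Planck's entropy at the
# Planck energy E = h v / (exp(h v / (kB T)) - 1) should return 1/T.

kB = 1.380649e-23        # Boltzmann constant, J/K
h = 6.62607015e-34       # Planck constant, J s

def entropy(E, nu):
    x = E / (h * nu)
    return kB * ((1 + x) * math.log(1 + x) - x * math.log(x))

def planck_energy(T, nu):
    return h * nu / math.expm1(h * nu / (kB * T))

T, nu = 5000.0, 5e14     # illustrative temperature (K) and frequency (Hz)
E = planck_energy(T, nu)
dE = E * 1e-6
dSdE = (entropy(E + dE, nu) - entropy(E - dE, nu)) / (2 * dE)
# dSdE * T should be very close to 1
```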
Another argument, which does not depend on the precise form of the entropy function, can be given as follows. Suppose we have a material that violates Kirchhoff's law when integrated, such that the total coefficient of absorption is not equal to the total coefficient of emission at a certain formula_0. If this material at temperature formula_0 is placed into a Hohlraum at temperature formula_0, it would spontaneously emit more than it absorbs, or conversely, thus spontaneously creating a temperature difference and violating the second law.
Finally, suppose we have a material that violates Kirchhoff's law "in detail", such that the coefficient of absorption is not equal to the coefficient of emission at a certain formula_0 and "at a certain frequency" formula_1. Since it does not violate Kirchhoff's law when integrated, there must exist two frequencies formula_19 such that the material absorbs more than it emits at formula_20, and conversely at formula_21. Now, place this material in one Hohlraum. It would spontaneously create a shift in the spectrum, making it higher at formula_21 than at formula_20. However, this then allows us to tap one Hohlraum with a resonator tuned to formula_21, then detach it and attach it to another Hohlraum at the same temperature, thus transporting energy from one to the other, violating the second law.
We may apply the same argument for polarization and direction of radiation, obtaining the full principle of detailed balance.
Black bodies.
Near-black materials.
It has long been known that a lamp-black coating will make a body nearly black. Some other materials are nearly black in particular wavelength bands. Such materials, however, do not survive the very high temperatures that are of interest.
An improvement on lamp-black is found in manufactured carbon nanotubes. Nano-porous materials can achieve refractive indices nearly that of vacuum, in one case obtaining average reflectance of 0.045%.
Opaque bodies.
Bodies that are opaque to thermal radiation that falls on them are valuable in the study of heat radiation. Planck analyzed such bodies with the approximation that they be considered topologically to have an interior and to share an interface. They share the interface with their contiguous medium, which may be rarefied material such as air, or transparent material, through which observations can be made. The interface is not a material body and can neither emit nor absorb. It is a mathematical surface belonging jointly to the two media that touch it. It is the site of refraction of radiation that penetrates it and of reflection of radiation that does not. As such it obeys the Helmholtz reciprocity principle. The opaque body is considered to have a material interior that absorbs all and scatters or transmits none of the radiation that reaches it through refraction at the interface. In this sense the material of the opaque body is black to radiation that reaches it, while the whole phenomenon, including the interior and the interface, does not show perfect blackness. In Planck's model, perfectly black bodies, which he noted do not exist in nature, besides their opaque interior, have interfaces that are perfectly transmitting and non-reflective.
Cavity radiation.
The walls of a cavity can be made of opaque materials that absorb significant amounts of radiation at all wavelengths. It is not necessary that every part of the interior walls be a good absorber at every wavelength. The effective range of absorbing wavelengths can be extended by the use of patches of several differently absorbing materials in parts of the interior walls of the cavity. In thermodynamic equilibrium the cavity radiation will precisely obey Planck's law. In this sense, thermodynamic equilibrium cavity radiation may be regarded as thermodynamic equilibrium black-body radiation to which Kirchhoff's law applies exactly, though no perfectly black body in Kirchhoff's sense is present.
A theoretical model considered by Planck consists of a cavity with perfectly reflecting walls, initially with no material contents, into which is then put a small piece of carbon. Without the small piece of carbon, there is no way for non-equilibrium radiation initially in the cavity to drift towards thermodynamic equilibrium. When the small piece of carbon is put in, it interacts with radiation at all frequencies, so that the cavity radiation comes to thermodynamic equilibrium.
A hole in the wall of a cavity.
For experimental purposes, a hole in a cavity can be devised to provide a good approximation to a black surface, but will not be perfectly Lambertian, and must be viewed from nearly right angles to get the best properties. The construction of such devices was an important step in the empirical measurements that led to the precise mathematical identification of Kirchhoff's universal function, now known as Planck's law.
Kirchhoff's perfect black bodies.
Planck also noted that the perfect black bodies of Kirchhoff do not occur in physical reality. They are theoretical fictions. Kirchhoff's perfect black bodies absorb all the radiation that falls on them, right in an infinitely thin surface layer, with no reflection and no scattering. They emit radiation in perfect accord with Lambert's cosine law.
Original statements.
Gustav Kirchhoff stated his law in several papers in 1859 and 1860, and then in 1862 in an appendix to his collected reprints of those and some related papers.
Prior to Kirchhoff's studies, it was known that for total heat radiation, the ratio of emissive power to absorptivity was the same for all bodies emitting and absorbing thermal radiation in thermodynamic equilibrium. This means that a good absorber is a good emitter; naturally, a good reflector is a poor absorber. For wavelength specificity, prior to Kirchhoff, the ratio was shown experimentally by Balfour Stewart to be the same for all bodies, but the universal value of the ratio had not been explicitly considered in its own right as a function of wavelength and temperature.
Kirchhoff's original contribution to the physics of thermal radiation was his postulate of a perfect black body radiating and absorbing thermal radiation in an enclosure opaque to thermal radiation and with walls that absorb at all wavelengths. Kirchhoff's perfect black body absorbs all the radiation that falls upon it.
Every such black body emits from its surface with a spectral radiance that Kirchhoff labeled "I" (for specific intensity, the traditional name for spectral radiance).
<templatestyles src="Block indent/styles.css"/>"Kirchhoff's postulated spectral radiance I was a universal function, one and the same for all black bodies, only depending on wavelength and temperature."
The precise mathematical expression for that universal function "I" was very much unknown to Kirchhoff, and it was just postulated to exist, until its precise mathematical expression was found in 1900 by Max Planck. It is nowadays referred to as Planck's law.
Then, at each wavelength, for thermodynamic equilibrium in an enclosure, opaque to heat rays, with walls that absorb some radiation at every wavelength:
<templatestyles src="Block indent/styles.css"/>"For an arbitrary body radiating and emitting thermal radiation, the ratio E / A between the emissive spectral radiance, E, and the dimensionless absorptive ratio, A, is one and the same for all bodies at a given temperature. That ratio E / A is equal to the emissive spectral radiance I of a perfect black body, a universal function only of wavelength and temperature."
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "\\nu"
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "\\alpha_\\lambda"
},
{
"math_id": 4,
"text": "\\alpha_\\lambda E_{b \\lambda}(\\lambda,T)"
},
{
"math_id": 5,
"text": "E_{b \\lambda}(\\lambda,T)"
},
{
"math_id": 6,
"text": "\\varepsilon_\\lambda E_{b \\lambda}(\\lambda,T)"
},
{
"math_id": 7,
"text": "\\varepsilon_\\lambda"
},
{
"math_id": 8,
"text": "\\alpha_\\lambda = \\varepsilon_\\lambda"
},
{
"math_id": 9,
"text": "\\alpha_{\\mathrm{sun}}=\\displaystyle\\frac{\\int_0^\\infty \\alpha_\\lambda(\\lambda)I_{\\lambda \\mathrm{sun}} (\\lambda)\\,d\\lambda} {\\int_0^\\infty I_{\\lambda \\mathrm{sun}}(\\lambda)\\,d\\lambda}"
},
{
"math_id": 10,
"text": "\\varepsilon_{\\mathrm{paint}}=\\frac{\\int_0^\\infty \\varepsilon_\\lambda (\\lambda,T) E_{b\\lambda}(\\lambda,T)\\,d\\lambda}{\\int_0^\\infty E_{b \\lambda}(\\lambda,T)\\,d\\lambda}"
},
{
"math_id": 11,
"text": "I_{\\lambda \\mathrm{sun}}"
},
{
"math_id": 12,
"text": "\\varepsilon_\\lambda=\\alpha_\\lambda"
},
{
"math_id": 13,
"text": "\\alpha_{\\mathrm{sun}}"
},
{
"math_id": 14,
"text": "\\varepsilon_{\\mathrm{paint}}"
},
{
"math_id": 15,
"text": "E"
},
{
"math_id": 16,
"text": "S_\\nu = k_B\\left[\\left(1 + \\frac{E}{h\\nu}\\right)\\ln\\left(1 + \\frac{E}{h\\nu}\\right) - \\frac{E}{h\\nu}\\ln \\frac{E}{h\\nu}\\right]"
},
{
"math_id": 17,
"text": "h"
},
{
"math_id": 18,
"text": "k_B T = (\\partial_E S)^{-1}"
},
{
"math_id": 19,
"text": "\\nu_1 \\neq \\nu_2"
},
{
"math_id": 20,
"text": "\\nu_1"
},
{
"math_id": 21,
"text": "\\nu_2"
}
] |
https://en.wikipedia.org/wiki?curid=591280
|
59129360
|
Beilinson–Bernstein localization
|
In mathematics, especially in representation theory and algebraic geometry, the Beilinson–Bernstein localization theorem relates D-modules on flag varieties "G"/"B" to representations of the Lie algebra formula_0 attached to a reductive group "G". It was introduced by Alexander Beilinson and Joseph Bernstein in 1981.
Extensions of this theorem include the case of partial flag varieties "G"/"P", where "P" is a parabolic subgroup, and a theorem relating "D"-modules on the affine Grassmannian to representations of the Kac–Moody algebra formula_1.
Statement.
Let "G" be a reductive group over the complex numbers, and "B" a Borel subgroup. Then there is an equivalence of categories
formula_2
On the left is the category of D-modules on "G/B". On the right "χ" is a homomorphism "χ : Z(U(g)) → C " from the centre of the universal enveloping algebra,
formula_3
corresponding to the weight "-ρ ∈ t*" given by minus half the sum over the positive roots of "g". The above action of "W" on "t* = Spec Sym(t)" is shifted so as to fix "-ρ".
Twisted version.
There is an equivalence of categories
formula_4
for any "λ ∈ t*" such that "λ-ρ" does not pair with any positive root "α" to give a nonpositive integer (it is "regular dominant"):
formula_5
Here "χ" is the central character corresponding to "λ-ρ", and "Dλ" is the sheaf of rings on "G/B" formed by taking the *-pushforward of "DG/U" along the "T"-bundle "G/U → G/B", a sheaf of rings whose center is the constant sheaf of algebras "U(t)", and taking the quotient by the central character determined by "λ" (not "λ-ρ").
Example: "SL2".
The Lie algebra of vector fields on the projective line P1 is identified with "sl2", and
formula_6
via
formula_7
It can be checked that linear combinations of these three vector fields on C ⊂ P1 are the only vector fields extending to ∞ ∈ P1. Here,
formula_8
is sent to zero.
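The identification of the generators with vector fields can be checked by direct computation on polynomials in z. The sketch below is a toy verification (not from the article), with polynomials stored as coefficient lists; it finds [h, e] = 2e and [h, f] = -2f, while [e, f] = -h, the sign on the last bracket reflecting the usual convention that the bracket of vector fields acting on functions reverses a sign relative to the abstract Lie algebra:

```python
# Toy verification: realize e, h, f as the operators d/dz, -2 z d/dz,
# z^2 d/dz acting on polynomials in z, stored as coefficient lists
# [a0, a1, a2, ...], and compute commutators directly.

def deriv(p):                      # d/dz
    return [i * c for i, c in enumerate(p)][1:] or [0]

def shift(p, k):                   # multiply by z^k
    return [0] * k + p

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def neg(p):
    return [-c for c in p]

def trim(p):                       # drop trailing zero coefficients
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

e = deriv
h = lambda p: [-2 * c for c in shift(deriv(p), 1)]
f = lambda p: shift(deriv(p), 2)

def bracket(A, B, p):              # [A, B] applied to p
    return add(A(B(p)), neg(B(A(p))))

p = [1, 2, 3, 4]                   # 1 + 2z + 3z^2 + 4z^3
he = trim(bracket(h, e, p))        # equals 2 e(p)
hf = trim(bracket(h, f, p))        # equals -2 f(p)
ef = trim(bracket(e, f, p))        # equals -h(p): sign flip for vector fields
```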
The only finite dimensional "sl2" representation on which "Ω" acts by zero is the trivial representation "k", which is sent to the constant sheaf, i.e. the ring of functions "O ∈ D-Mod". The Verma module of weight 0 is sent to the D-Module "δ" supported at "0" ∈ P1.
Each finite dimensional representation corresponds to a different twist.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathfrak g"
},
{
"math_id": 1,
"text": "\\widehat \\mathfrak g"
},
{
"math_id": 2,
"text": " \\mathcal{D}\\text{-Mod}(G/B)\\ \\simeq\\ \\left(U(\\mathfrak{g})/\\ker\\chi\\right) \\text{-Mod}."
},
{
"math_id": 3,
"text": " Z(U(\\mathfrak{g}))\\ \\simeq\\ \\text{Sym}(\\mathfrak{t})^{W,\\rho},"
},
{
"math_id": 4,
"text": " \\mathcal{D}_\\lambda\\text{-Mod}(G/B)\\ \\simeq\\ \\left(U(\\mathfrak{g})/\\ker\\chi_\\lambda\\right) \\text{-Mod}."
},
{
"math_id": 5,
"text": " (\\lambda-\\rho, \\alpha)\\ \\in\\ \\mathbf{C}-\\mathbf{Z}_{\\le 0}."
},
{
"math_id": 6,
"text": " U(\\mathfrak{sl}_2)/\\Omega\\ \\simeq \\ \\mathcal{D}(\\mathbf{P}^1)"
},
{
"math_id": 7,
"text": " (e,h,f) \\ \\mapsto \\ (\\partial_z, -2z\\partial_z, z^2\\partial_z)"
},
{
"math_id": 8,
"text": "\\Omega\\ =\\ ef+fe+\\frac{1}{2}h^2"
}
] |
https://en.wikipedia.org/wiki?curid=59129360
|
59130306
|
Unicity (data analysis)
|
Unicity (formula_0) is a risk metric for measuring the re-identifiability of high-dimensional anonymous data. First introduced in 2013, unicity is measured by the number of points "p" needed to uniquely identify an individual in a data set. The fewer points needed, the more unique the traces are and the easier they would be to re-identify using outside information.
In a high-dimensional, human behavioural data set, such as mobile phone meta-data, for each person, there exists potentially thousands of different records. In the case of mobile phone meta-data, credit card transaction histories and many other types of personal data, this information includes the time and location of an individual.
In research, unicity is widely used to illustrate the re-identifiability of anonymous data sets. In 2013, researchers from the MIT Media Lab showed that only 4 points were needed to uniquely identify 95% of individual trajectories in a de-identified data set of 1.5 million mobility trajectories. These "points" were location-time pairs recorded with a temporal resolution of 1 hour and a spatial resolution of 0.15 km² to 15 km². These results were shown to hold for credit card transaction data as well, with 4 points being enough to re-identify 90% of trajectories. Further research has studied the unicity of the apps installed by people on their smartphones, the trajectories of vehicles, mobile phone data from Boston and Singapore, and public transport data in Singapore obtained from smart cards.
Measuring unicity.
Unicity (formula_0) is formally defined as the expected value of the fraction of uniquely identifiable trajectories, given "p" points selected from those trajectories uniformly at random. A full computation of formula_0 of a data set formula_1 requires picking "p" points uniformly at random from each trajectory formula_2, and then checking whether or not any other trajectory also contains those "p" points. Averaging over all possible sets of "p" points for each trajectory results in a value for formula_0. This is usually prohibitively expensive as it requires considering every possible set of "p" points for each trajectory in the data set — trajectories that sometimes contain thousands of points.
Instead, unicity is usually estimated using sampling techniques. Specifically, given a data set formula_1, the estimated unicity is computed by sampling from formula_1 a fraction of the trajectories formula_3 and then checking whether each of the trajectories formula_4 are unique in formula_1 given "p" randomly selected points from each formula_5. The fraction of formula_3 that is uniquely identifiable is then the unicity estimate.
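A minimal sketch of this sampling estimator follows; the data set, names, and parameters below are illustrative toys, not taken from the studies cited above:

```python
import random

# Sketch of the sampling estimator: a "trajectory" is a set of points, and
# a trajectory is uniquely identified by p points if no other trajectory
# in the data set contains all of them.

def is_unique(traj, dataset, p, rng):
    points = rng.sample(sorted(traj), min(p, len(traj)))
    matches = [t for t in dataset if all(pt in t for pt in points)]
    return len(matches) == 1       # only the trajectory itself matched

def estimate_unicity(dataset, p, sample_size, seed=0):
    rng = random.Random(seed)
    sample = rng.sample(dataset, sample_size)
    return sum(is_unique(t, dataset, p, rng) for t in sample) / sample_size

# Toy demonstration: the two duplicated trajectories can never be
# re-identified, the two distinct ones always can, so the estimate is 0.5.
toy = [
    {(0, 0), (1, 1), (2, 2)},
    {(0, 0), (1, 1), (2, 2)},      # exact duplicate of the first
    {(3, 3), (4, 4), (5, 5)},
    {(6, 6), (7, 7), (8, 8)},
]
unicity = estimate_unicity(toy, p=2, sample_size=4)
```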
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\varepsilon_p"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "T_i \\in D"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "T_j \\in S"
},
{
"math_id": 5,
"text": "T_j"
}
] |
https://en.wikipedia.org/wiki?curid=59130306
|
59131
|
Colon (punctuation)
|
Punctuation mark with two dots
The colon, :, is a punctuation mark consisting of two equally sized dots aligned vertically. A colon often precedes an explanation, a list, or a quoted sentence. It is also used between hours and minutes in time, between certain elements in medical journal citations, between chapter and verse in Bible citations, and, in the US, for salutations in business letters and other formal letter writing.
History.
In Ancient Greek, in rhetoric and prosody, the term ("", lit. 'limb, member of a body') did not refer to punctuation, but to a member or section of a complete thought or passage; see also "Colon (rhetoric)". From this usage, in palaeography, a colon is a clause or group of clauses written as a line in a manuscript.
In the 3rd century BC, Aristophanes of Byzantium is alleged to have devised a punctuation system, in which the end of such a was thought to occasion a medium-length breath, and was marked by a middot ·. In practice, evidence is scarce for its early usage, but it was revived later as the "ano teleia", the modern Greek semicolon. Some writers also used a double dot symbol ⁚, that later came to be used as a full stop or to mark a change of speaker. (See also "Punctuation in Ancient Greek".)
In 1589, in "The Arte of English Poesie", the English term "colon" and the corresponding punctuation mark : is attested:
For these respectes the auncient reformers of language, inuented, three maner of pauses [...] The shortest pause or intermission they called "comma" [...] The second they called "colon", not a peece but as it were a member for his larger length, because it occupied twise as much time as the comma. The third they called "periodus", [...]
In 1622, in Nicholas Okes' print of William Shakespeare's "Othello", the typographical construction of a colon followed by a hyphen or dash to indicate a restful pause is attested. This construction, known as the "dog's bollocks", was once common in British English, though this usage is now discouraged.
As late as the 18th century, John Mason related the appropriateness of a colon to the length of the pause taken when reading the text aloud, but silent reading eventually replaced this with other considerations.
Usage in English.
In modern English usage, a complete sentence precedes a colon, while a list, description, explanation, or definition follows it. What follows the colon may or may not be a complete sentence; since the colon is preceded by a full sentence, the construction is grammatical either way. While it is acceptable to capitalise the first letter after the colon in American English, it is not in British English, except where a proper noun immediately follows the colon.
"Daequan was so hungry that he ate everything in the house: chips, cold pizza, pretzels and dip, hot dogs, peanut butter, and candy."
"Bertha is so desperate that she'll date anyone, even William: he's uglier than a squashed toad on the highway, and that's on his good days."
"For years while I was reading Shakespeare's "Othello" and criticism on it, I had to constantly look up the word "egregious" since the villain uses that word: outstandingly bad or shocking."
"I guess I can say I had a rough weekend: I had chest pain and spent all Saturday and Sunday in the emergency room."
Some writers use fragments (incomplete sentences) before a colon for emphasis or stylistic preferences (to show a character's voice in literature), as in this example:
"Dinner: chips and juice. What a well-rounded diet I have."
"The Bedford Handbook" describes several uses of a colon. For example, one can use a colon after an independent clause to direct attention to a list, an appositive, or a quotation, and it can be used between independent clauses if the second summarizes or explains the first. In non-literary or non-expository uses, one may use a colon after the salutation in a formal letter, to indicate hours and minutes, to show proportions, between a title and subtitle, and between city and publisher in bibliographic entries.
Luca Serianni, an Italian scholar who helped to define and develop the colon as a punctuation mark, identified four punctuational modes for it: "syntactical-deductive", "syntactical-descriptive", "appositive", and "segmental".
Syntactical-deductive.
The colon introduces the logical consequence, or effect, of a fact stated before.
"There was only one possible explanation: the train had never arrived."
Syntactical-descriptive.
In this sense the colon introduces a description; in particular, it makes explicit the elements of a set.
"I have three sisters: Daphne, Rose, and Suzanne."
Syntactical-descriptive colons may separate the numbers indicating hours, minutes, and seconds in abbreviated measures of time.
"The concert begins at 21:45."
"The rocket launched at 09:15:05."
British English and Australian English, however, more frequently use a point for this purpose:
"The programme will begin at 8.00 pm."
"You will need to arrive by 14.30."
A colon is also used in the descriptive location of a book verse if the book is divided into verses, such as in the Bible or the Quran:
"Isaiah 42:8"
"Deuteronomy 32:39"
"Quran 10:5"
"Luruns could not speak: he was drunk."
Appositive.
An appositive colon also separates the subtitle of a work from its principal title. (In effect, the example given above illustrates an appositive use of the colon as an abbreviation for the conjunction "because".) Dillon has noted the impact of colons on scholarly articles, but the reliability of colons as a predictor of quality or impact has also been challenged. In titles, neither needs to be a complete sentence as titles do not represent expository writing:
"Star Wars Episode VI: Return of the Jedi"
Segmental.
Like a dash or quotation mark, a segmental colon introduces speech. The segmental function was once a common means of indicating an unmarked quotation on the same line. The following example is from the grammar book "The King's English":
"Benjamin Franklin proclaimed the virtue of frugality: A penny saved is a penny earned."
This form is still used in British industry-standard templates for written performance dialogues, such as in a play. The colon indicates that the words following a character's name are spoken by that character.
"Patient: Doctor, I feel like a pair of curtains."
"Doctor: Pull yourself together!"
The uniform visual pattern of codice_0 placement on a script page assists an actor in scanning for the lines of their assigned character during rehearsal, especially if a script is undergoing rewrites between rehearsals.
Use of capitals.
Use of capitalization or lower-case after a colon varies. In British English, and in most Commonwealth countries, the word following the colon is in lower case unless it is normally capitalized for some other reason, as with proper nouns and acronyms. British English also capitalizes a new sentence introduced by a colon's segmental use.
American English permits writers to similarly capitalize the first word of any independent clause following a colon. This follows the guidelines of some modern American style guides, including those published by the Associated Press and the Modern Language Association. "The Chicago Manual of Style", however, requires capitalization only when the colon introduces a direct quotation, a direct question, or two or more complete sentences.
In many European languages, the colon is usually followed by a lower-case letter unless the upper case is required for other reasons, as with British English. German usage requires capitalization of independent clauses following a colon. Dutch further capitalizes the first word of any quotation following a colon, even if it is not a complete sentence on its own.
Spacing and parentheses.
In print, a thin space was traditionally placed before a colon and a thick space after it. In modern English-language printing, no space is placed before a colon and a single space is placed after it. In French-language typing and printing, the traditional rules are preserved.
One or two spaces may be and have been used after a colon. The older convention (designed to be used by monospaced fonts) was to use "two" spaces after a colon.
In modern typography, a colon will be placed outside the closing parenthesis introducing a list. In very early English typography, it could be placed inside, as seen in Roger Williams' 1643 book about the Native American languages of New England.
Usage in other languages.
Suffix separator.
In Finnish and Swedish, the colon can appear inside words in a manner similar to the apostrophe in the English possessive case, connecting a grammatical suffix to an abbreviation or initialism, a special symbol, or a digit (e.g., Finnish "USA:n" and Swedish "USA:s" for the genitive case of "USA", Finnish "%:ssa" for the inessive case of "%", or Finnish "20:een" for the illative case of "20").
Abbreviation mark.
Written Swedish uses colons in contractions, such as "S:t" for "Sankt" (Swedish for "Saint") – for example in the name of the Stockholm metro station "S:t Eriksplan", and "k:a" for "kyrka" ("church") – for instance Svenska k:a (Svenska kyrkan), the Evangelical Lutheran national Church of Sweden. This can even occur in people's names, for example ("" for "Axelson"). Early Modern English texts also used colons to mark abbreviations.
Word separator.
In Ethiopia, both Amharic and Ge'ez script used and sometimes still use a colon-like mark as word separator.
Historically, a colon-like mark was used as a word separator in Old Turkic script.
End of sentence or verse.
In Armenian, a colon indicates the end of a sentence, similar to a Latin full stop or period.
In liturgical Hebrew, the sof pasuq is used in some writings such as prayer books to signal the end of a verse.
Score divider.
In German, Hebrew, and sometimes in English, a colon divides the scores of opponents in sports and games. A result of 149–0 would be written as 149 : 0 in German and in Hebrew.
Mathematics and logic.
The colon is used in mathematics, cartography, model building, and other fields—in this context it denotes a ratio or a scale, as in 3∶1 (pronounced "three to one").
When a ratio is reduced to a simpler form, such as 10∶15 to 2∶3, this may be expressed with a double colon as 10∶15∶∶2∶3; this would be read "10 is to 15 as 2 is to 3". This form is also used in tests of logic where the question of "Dog is to Puppy as Cat is to _____?" can be expressed as "Dog∶Puppy∶∶Cat∶_____". For these usages the proper Unicode symbol is the ratio character ∶ (U+2236), which is positioned a little higher than the normal colon. Compare 2∶3 (ratio colon) with 2:3 (normal colon).
In some languages (e.g. German, Russian, and French), the colon is the commonly used sign for division (instead of ÷).
<templatestyles src="Crossreference/styles.css" />
The notation |G : H| may also denote the index of a subgroup.
The notation f : X → Y indicates that f is a function with domain X and codomain Y.
The combination with an equal sign (≔) is used for definitions.
In mathematical logic, when using set-builder notation for describing the characterizing property of a set, it is used as an alternative to a vertical bar (which is the ISO 31-11 standard), to mean "such that". Example:
formula_0 ("S" is the set of all x in formula_1 (the real numbers) such that x is strictly greater than 1 and strictly smaller than 3)
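The same "such that" reads naturally as the `if` filter of a Python set comprehension; since the reals cannot be enumerated, the sketch below quantizes to tenths:

```python
# A discrete analogue of the set-builder example: "the set of all x such
# that 1 < x < 3", here over tenths rather than the reals. The "such that"
# condition becomes the `if` filter.
S = {x / 10 for x in range(0, 40) if 1 < x / 10 < 3}
```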
In older literature on mathematical logic, it is used to indicate how expressions should be bracketed (see Glossary of "Principia Mathematica").
In type theory and programming language theory, the colon sign after a term is used to indicate its type, sometimes as a replacement to the "∈" symbol. Example:
formula_2.
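Python's optional type annotations follow the same convention, with a colon after a name introducing its type:

```python
# The colon annotates each parameter's type; "->" gives the return type.
def scale(x: float, factor: float = 2.0) -> float:
    return x * factor

# Variables can be annotated the same way.
count: int = 3
```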
A colon is also sometimes used to indicate a tensor contraction involving two indices, and a double colon (::) for a contraction over four indices.
A colon is also used to denote a parallel sum operation involving two operands (many authors, however, instead use a ∥ sign and a few even a ∗ for this purpose).
Computing.
The character was on early typewriters and therefore appeared in most text encodings, such as Baudot code and EBCDIC. It was placed at code 58 in ASCII and from there inherited into Unicode. Unicode also defines several related characters:
Programming languages.
A number of programming languages, most notably ALGOL, Pascal and Ada, use a colon and equals sign as the assignment operator, to distinguish it from a single equals which is an equality test (C instead used a single equals as assignment, and a double equals as the equality test).
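Python, which otherwise uses a plain equals sign for assignment, later adopted the colon-equals spelling as well: since version 3.8 the "walrus" operator `:=` assigns within an expression:

```python
# The walrus operator binds a name and yields the value in one expression.
values = [1, 2, 3, 4]
if (n := len(values)) > 3:       # assigns n and tests it in one step
    message = f"list is long ({n} elements)"
```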
Many languages including C and Java use the colon to indicate that the text before it is a label, such as a target for a goto or an introduction to a case in a switch statement. In a related use, Python uses a colon to separate a control statement (the "clause header") from the block of statements it controls (the "suite"):
if test(x):
    print("test(x) is true!")
else:
    print("test(x) is not true...")
In a number of languages, including JavaScript, colons are used to define name–value pairs in a dictionary or object. This is also used by data formats such as JSON. Some other languages use an equals sign.
var obj = {
    name: "Charles",
    age: 18,
};
The colon is used as part of the conditional operator in C and many other languages.
C++ uses a double colon as the scope resolution operator, and class member access. Most other languages use a period but C++ had to use this for compatibility with C. Another language using colons for scope resolution is Erlang, which uses a single colon.
In BASIC, it is used as a separator between the statements or instructions in a single line. Most other languages use a semicolon for this, but BASIC had already used the semicolon to separate items in print statements.
In Forth, a colon "precedes" definition of a new word.
Haskell uses a colon (pronounced as "cons", short for "construct") as an operator to add an element to the front of a list:
"child" : ["woman", "man"] -- equals ["child","woman","man"]
while a double colon codice_1 is read as "has type of" (compare scope resolution operator):
The ML languages (such as Standard ML) have the above reversed, where the double colon (codice_1) is used to add an element to the front of a list; and the single colon (codice_3) is used for type guards.
MATLAB uses the colon as a binary operator that generates vectors, as well as to select particular portions of existing matrices.
APL uses the colon:
The colon is also used in many operating systems commands.
In the esoteric programming language INTERCAL, the colon is called "two-spot" and is used to identify a 32-bit variable—distinct from a spot (.) which identifies a 16-bit variable.
Addresses.
Internet URLs use the colon to separate the protocol from the hostname or IP address.
In an IPv6 address, colons (and one optional double colon) separate up to 8 groups of 16 bits in hexadecimal representation. In a URL, a colon follows the initial scheme name (such as HTTP and FTP), and separates a port number from the hostname or IP address.
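Both colon conventions can be exercised with Python's standard library:

```python
import ipaddress
from urllib.parse import urlsplit

# IPv6: colons separate the eight 16-bit groups, and "::" compresses one
# run of zero groups; .exploded expands it back to the full form.
addr = ipaddress.ip_address("2001:db8::ff00:42:8329")
expanded = addr.exploded   # "2001:0db8:0000:0000:0000:ff00:0042:8329"

# URL: one colon ends the scheme, another separates host from port.
parts = urlsplit("http://example.com:8080/path")
scheme, port = parts.scheme, parts.port   # "http", 8080
```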
In Microsoft Windows filenames, the colon is reserved for use in alternate data streams and cannot appear in a filename. It was used as the directory separator in Classic Mac OS, and was difficult to use in early versions of the newer BSD-based macOS due to code swapping the slash and colon to try to preserve this usage. In most systems it is often difficult to put a colon in a filename as the shell interprets it for other purposes.
CP/M and early versions of MSDOS required the colon after the names of devices, such as though this gradually disappeared except for disks (where it had to be between the disk name and the required path representation of the file as in codice_5). This then migrated to use in URLs.
Text markup.
It is often used as a single post-fix delimiter, signifying that a token keyword immediately precedes it, or marking the transition from one mode of character string interpretation to another related mode. Some applications, such as the widely used MediaWiki, utilize the colon as both a pre-fix and post-fix delimiter.
In wiki markup, the colon is often used to indent text. Common usage includes separating or marking comments in a discussion as replies, or to distinguish certain parts of a text.
In human-readable text messages, a colon, or multiple colons, is sometimes used to denote an action (similar to how asterisks are used) or to emote (for example, in vBulletin). In the action denotation usage it has the inverse function of quotation marks, denoting actions where unmarked text is assumed to be dialogue. For example:
Tom: Pluto is so small; it should not be considered a planet. It is tiny!
Mark: Oh really? ::drops Pluto on Tom's head:: Still think it's small now?
Colons may also be used for sounds, e.g., ::click::, though sounds can also be denoted by asterisks or other punctuation marks.
Colons can also be used to represent eyes in emoticons.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S = \\{x \\in \\mathbb{R} : 1 < x < 3 \\}"
},
{
"math_id": 1,
"text": "\\mathbb{R}"
},
{
"math_id": 2,
"text": "\\lambda x . x \\mathrel{:} A \\to A "
}
] |
https://en.wikipedia.org/wiki?curid=59131
|
591359
|
Dialetheism
|
View that there are statements that are both true and false
Dialetheism (; from Greek 'twice' and 'truth') is the view that there are statements that are both true and false. More precisely, it is the belief that there can be a true statement whose negation is also true. Such statements are called "true contradictions", "dialetheia", or nondualisms.
Dialetheism is not a system of formal logic; instead, it is a thesis about truth that influences the construction of a formal logic, often based on pre-existing systems. Introducing dialetheism has various consequences, depending on the theory into which it is introduced. A common mistake resulting from this is to reject dialetheism on the basis that, in traditional systems of logic (e.g., classical logic and intuitionistic logic), every statement becomes a theorem if a contradiction is true, trivialising such systems when dialetheism is included as an axiom. Other logical systems, however, do not explode in this manner when contradictions are introduced; such contradiction-tolerant systems are known as paraconsistent logics. Dialetheists who do not want to allow that every statement is true are free to favour these over traditional, explosive logics.
Graham Priest defines dialetheism as the view that there are true contradictions. Jc Beall is another advocate; his position differs from Priest's in advocating constructive (methodological) deflationism regarding the truth predicate.
Motivations.
Dialetheism resolves certain paradoxes.
The liar paradox and Russell's paradox deal with self-contradictory statements in classical logic and naïve set theory, respectively. Contradictions are problematic in these theories because they cause the theories to explode—if a contradiction is true, then every proposition is true. The classical way to solve this problem is to ban contradictory statements: to revise the axioms of the logic so that self-contradictory statements do not appear (just as with Russell's paradox). Dialetheists, on the other hand, respond to this problem by accepting the contradictions as true. Dialetheism allows for the unrestricted axiom of comprehension in set theory, claiming that any resulting contradiction is a theorem.
However, self-referential paradoxes, such as the Strengthened Liar can be avoided without revising the axioms by abandoning classical logic and accepting more than two truth values with the help of many-valued logic, such as fuzzy logic or Łukasiewicz logic.
Human reasoning.
Ambiguous situations may cause humans to affirm both a proposition and its negation. For example, if John stands in the doorway to a room, it may seem reasonable both to affirm that "John is in the room" and to affirm that "John is not in the room".
Critics argue that this merely reflects an ambiguity in our language rather than a dialetheic quality in our thoughts; if we replace the given statement with one that is less ambiguous (such as "John is halfway in the room" or "John is in the doorway"), the contradiction disappears. The statements appeared contradictory only because of a syntactic play; here, the actual meaning of "being in the room" is not the same in both instances, and thus each sentence is not the exact logical negation of the other: therefore, they are not necessarily contradictory.
Moreover, John appears to be standing in a conjunction of two concepts: he is partly in the room and partly not in the room at the same time, but he is not both in the room and "not in" the room at the same time (which would be a contradiction). The apparent conflict lies in the logical connective, illustrating a recurrent ambiguity of human language that often fails to capture the nature of some logical statements.
Apparent dialetheism in other philosophical doctrines.
The Jain philosophical doctrine of anekantavada—non-one-sidedness—states that all statements are true in some sense and false in another. Some interpret this as saying that dialetheia not only exist but are ubiquitous. Technically, however, a "logical contradiction" is a proposition that is true and false in the "same" sense; a proposition which is true in one sense and false in another does not constitute a logical contradiction. (For example, although in one sense a man cannot both be a "father" and "celibate"—leaving aside such cases as either a celibate man adopting a child or a man fathering a child and only later adopting celibacy—there is no contradiction for a man to be a "spiritual" father and also celibate; the sense of the word father is different here. In another example, although at the same time George W. Bush cannot both be president and not be president, he was president from 2001 to 2009, but was not president before 2001 or after 2009, so at different times he was both president and not president.)
The Buddhist logic system, named "Catuṣkoṭi", similarly implies that a statement and its negation may possibly co-exist.
Graham Priest argues in "Beyond the Limits of Thought" that dialetheia arise at the borders of expressibility, in a number of philosophical contexts other than formal semantics.
Formal consequences.
In classical logics, taking a contradiction formula_0 (see List of logic symbols) as a premise (that is, taking as a premise the truth of both formula_1 and formula_2), allows us to prove any statement formula_3. Indeed, since formula_1 is true, the statement formula_4 is true (by generalization). Taking formula_4 together with formula_2 is a disjunctive syllogism from which we can conclude formula_3. (This is often called the "principle of explosion", since the truth of a contradiction is imagined to make the number of theorems in a system "explode".)
Advantages.
The proponents of dialetheism mainly advocate its ability to avoid problems faced by other more orthodox resolutions as a consequence of their appeals to hierarchies. According to Graham Priest, "the whole point of the dialetheic solution to the semantic paradoxes is to get rid of the distinction between object language and meta-language". Another possibility is to utilize dialetheism along with a paraconsistent logic to resurrect the program of logicism advocated for by Frege and Russell. This even allows one to prove the truth of otherwise unprovable theorems such as the well-ordering theorem and the falsity of others such as the continuum hypothesis.
There are also dialetheic solutions to the sorites paradox.
Criticisms.
One criticism of dialetheism is that it fails to capture a crucial feature about negation, known as absoluteness of disagreement.
Imagine John's utterance of "P". Sally's typical way of disagreeing with John is a consequent utterance of ¬"P". Yet, if we accept dialetheism, Sally's so uttering does not prevent her from also accepting "P"; after all, "P" may be a dialetheia and therefore it and its negation are both true. The absoluteness of disagreement is lost.
A response is that disagreement can be displayed by uttering "¬"P" and, furthermore, "P" is not a dialetheia". However, the most obvious codification of ""P" is not a dialetheia" is ¬("P" formula_5 ¬"P"). But "this itself" could be a dialetheia as well. One dialetheist response is to offer a distinction between assertion and rejection. This distinction might be hashed out in terms of the traditional distinction between logical qualities, or as a distinction between two illocutionary speech acts: assertion and rejection. Another criticism is that dialetheism cannot describe logical consequences, once we believe in the relevance of logical consequences, because of its inability to describe hierarchies.
Absoluteness of disagreement is a powerful criticism that is not rescued by the ability to assert "this statement is not a dialetheia", as self-referential statements regarding dialetheia also prevent absoluteness in assertion, even regarding its own existence. P = "Dialetheia exist". I then assert that "P is a dialetheia".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p \\wedge \\neg p"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "\\neg p"
},
{
"math_id": 3,
"text": "q"
},
{
"math_id": 4,
"text": "p \\vee q"
},
{
"math_id": 5,
"text": "\\wedge"
}
] |
https://en.wikipedia.org/wiki?curid=591359
|
591394
|
Principle of explosion
|
Theorem which states that any statement can be proven from a contradiction
In classical logic, intuitionistic logic, and similar logical systems, the principle of explosion is the law according to which any statement can be proven from a contradiction. That is, from a contradiction, any proposition (including its negation) can be inferred; this is known as deductive explosion.
The proof of this principle was first given by 12th-century French philosopher William of Soissons. Due to the principle of explosion, the existence of a contradiction (inconsistency) in a formal axiomatic system is disastrous; since any statement can be proven, it trivializes the concepts of truth and falsity. Around the turn of the 20th century, the discovery of contradictions such as Russell's paradox at the foundations of mathematics thus threatened the entire structure of mathematics. Mathematicians such as Gottlob Frege, Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem put much effort into revising set theory to eliminate these contradictions, resulting in the modern Zermelo–Fraenkel set theory.
As a demonstration of the principle, consider two contradictory statements—"All lemons are yellow" and "Not all lemons are yellow"—and suppose that both are true. If that is the case, anything can be proven, e.g., the assertion that "unicorns exist", by using the following argument:
In a different solution to the problems posed by the principle of explosion, some mathematicians have devised alternative theories of logic called "paraconsistent logics", which allow some contradictory statements to be proven without affecting the truth value of (all) other statements.
Symbolic representation.
In symbolic logic, the principle of explosion can be expressed schematically in the following way:
<templatestyles src="Block indent/styles.css"/>formula_0
For any statements "P" and "Q", if "P" and not-"P" are both true, then it logically follows that "Q" is true.
Proof.
Below is a formal proof of the principle using symbolic logic.
This is just the symbolic version of the informal argument given in the introduction, with formula_1 standing for "all lemons are yellow" and formula_2 standing for "Unicorns exist". We start out by assuming that (1) all lemons are yellow and that (2) not all lemons are yellow. From the proposition that all lemons are yellow, we infer that (3) either all lemons are yellow or unicorns exist. But then from this and the fact that not all lemons are yellow, we infer that (4) unicorns exist by disjunctive syllogism.
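The steps above can be checked mechanically by enumerating truth assignments. The following Python sketch (an illustration, not part of the original proof) confirms both that every valuation satisfying P and not-P also satisfies Q (vacuously, since no such valuation exists) and that the disjunctive syllogism step is valid:

```python
from itertools import product

# All truth assignments to the pair (P, Q).
valuations = list(product([False, True], repeat=2))

# Explosion: every model of {P, not-P} is (vacuously) a model of Q.
explosion_holds = all(Q for P, Q in valuations if P and not P)

# Disjunctive syllogism: every model of {P or Q, not-P} is a model of Q.
syllogism_holds = all(Q for P, Q in valuations if (P or Q) and not P)

print(explosion_holds, syllogism_holds)
```

Both flags come out true: the explosion check is vacuous because the filter `P and not P` matches no valuation, which is exactly the semantic argument given in the next section.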
Semantic argument.
An alternate argument for the principle stems from model theory. A sentence formula_1 is a "semantic consequence" of a set of sentences formula_3 only if every model of formula_3 is a model of formula_1. However, there is no model of the contradictory set formula_4. A fortiori, there is no model of formula_4 that is not a model of formula_2. Thus, vacuously, every model of formula_4 is a model of formula_2. Thus formula_2 is a semantic consequence of formula_4.
Paraconsistent logic.
Paraconsistent logics have been developed that allow for subcontrary-forming operators. Model-theoretic paraconsistent logicians often deny the assumption that there can be no model of formula_5 and devise semantical systems in which there are such models. Alternatively, they reject the idea that propositions can be classified as true or false. Proof-theoretic paraconsistent logics usually deny the validity of one of the steps necessary for deriving an explosion, typically including disjunctive syllogism, disjunction introduction, and "reductio ad absurdum".
Usage.
The metamathematical value of the principle of explosion is that for any logical system where this principle holds, any derived theory which proves ⊥ (or an equivalent form, formula_6) is worthless because "all" its statements would become theorems, making it impossible to distinguish truth from falsehood. That is to say, the principle of explosion is an argument for the law of non-contradiction in classical logic, because without it all truth statements become meaningless.
The reduction in proof strength of logics without ex falso is discussed in minimal logic.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " P, \\lnot P \\vdash Q"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "\\Gamma"
},
{
"math_id": 4,
"text": "(P \\wedge \\lnot P)"
},
{
"math_id": 5,
"text": "\\{\\phi , \\lnot \\phi \\}"
},
{
"math_id": 6,
"text": "\\phi \\land \\lnot \\phi"
}
] |
https://en.wikipedia.org/wiki?curid=591394
|
59141049
|
Polynomial-time counting reduction
|
Problem transformation for counting solutions
In the computational complexity theory of counting problems, a polynomial-time counting reduction is a type of reduction (a transformation from one problem to another) used to define the notion of completeness for the complexity class ♯P. These reductions may also be called polynomial many-one counting reductions or weakly parsimonious reductions; they are analogous to many-one reductions for decision problems, and they generalize parsimonious reductions.
Definition.
A polynomial-time counting reduction is usually used to transform instances of a known-hard problem formula_0 into instances of another problem formula_1 that is to be proven hard. It consists of two functions formula_2 and formula_3, both of which must be computable in polynomial time. The function formula_2 transforms inputs for formula_0 into inputs for formula_1, and the function formula_3 transforms outputs for formula_1 into outputs for formula_0.
These two functions must preserve the correctness of the output. That is, suppose that one transforms an input formula_4 for problem formula_0 to an input formula_5 for problem formula_1, and then one solves formula_6 to produce an output formula_7. It must be the case that the transformed output formula_8 is the correct output for the original input formula_4. That is, if the input-output relations of formula_0 and formula_1 are expressed as functions, then their function composition must obey the identity formula_9. Alternatively, expressed in terms of algorithms, one possible algorithm for solving formula_0 would be to apply formula_2 to transform the problem into an instance of formula_1, solve that instance, and then apply formula_3 to transform the output of formula_1 into the correct answer for formula_0.
Relation to other kinds of reduction.
As a special case, a parsimonious reduction is a polynomial-time transformation formula_2 on the inputs to problems that preserves the exact values of the outputs. Such a reduction can be viewed as a polynomial-time counting reduction, by using the identity function as the function formula_3.
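As a small illustration (not from the source): counting independent sets reduces parsimoniously to counting vertex covers, with formula_2 mapping a graph to itself and formula_3 the identity on counts, because a set is independent exactly when its complement is a vertex cover. A brute-force Python check on a 3-vertex path graph:

```python
from itertools import combinations

def count_solutions(vertices, edges, is_solution):
    """Count subsets of `vertices` that satisfy the predicate `is_solution`."""
    total = 0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            if is_solution(set(subset), edges):
                total += 1
    return total

def independent(s, edges):
    # No edge has both endpoints inside the set.
    return all(not (u in s and v in s) for u, v in edges)

def vertex_cover(s, edges):
    # Every edge has at least one endpoint inside the set.
    return all(u in s or v in s for u, v in edges)

# Path graph 0 - 1 - 2; f is the identity on graphs, g the identity on counts.
vertices, edges = [0, 1, 2], [(0, 1), (1, 2)]
print(count_solutions(vertices, edges, independent))   # independent sets
print(count_solutions(vertices, edges, vertex_cover))  # vertex covers
```

Both counts come out equal (5 on this graph), as the complementation bijection guarantees, so applying the identity output map yields the correct answer for the original counting problem.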
Applications in complexity theory.
A functional problem (specified by its inputs and desired outputs) belongs to the complexity class ♯P if there exists a non-deterministic Turing machine that runs in polynomial time, for which the output to the problem is the number of accepting paths of the Turing machine. Intuitively, such problems count the number of solutions to problems in the complexity class NP. A functional problem formula_1 is said to be ♯P-hard if there exists a polynomial-time counting reduction from every problem formula_0 in ♯P to formula_1. If, in addition, formula_1 itself belongs to ♯P, then formula_1 is said to be ♯P-complete. (Sometimes, as in Valiant's original paper proving the completeness of the permanent of 0–1 matrices, a weaker notion of reduction, Turing reduction, is instead used for defining ♯P-completeness.)
The usual method of proving a problem formula_1 in ♯P to be ♯P-complete is to start with a single known ♯P-complete problem formula_0 and find a polynomial-time counting reduction from formula_0 to formula_1. If this reduction exists, then there exists a reduction from any other problem in ♯P to formula_1, obtained by composing a reduction from the other problem to formula_0 with the reduction from formula_0 to formula_1.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "g"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "y=f(x)"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": "z"
},
{
"math_id": 8,
"text": "g(z)"
},
{
"math_id": 9,
"text": "X=g\\circ Y\\circ f"
}
] |
https://en.wikipedia.org/wiki?curid=59141049
|
591452
|
UG
|
UG, U.G., or Ug may refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title "UG".
|
[
{
"math_id": 0,
"text": "U_g"
}
] |
https://en.wikipedia.org/wiki?curid=591452
|
591492
|
Complementary good
|
Concept in economics
In economics, a complementary good is a good whose appeal increases with the popularity of its complement. Technically, it displays a negative cross elasticity of demand: demand for it increases when the price of another good decreases. If formula_0 is a complement to formula_1, an increase in the price of formula_0 will result in a negative movement along the demand curve of formula_0 and cause the demand curve for formula_1 to shift inward; less of each good will be demanded. Conversely, a decrease in the price of formula_0 will result in a positive movement along the demand curve of formula_0 and cause the demand curve of formula_1 to shift outward; more of each good will be demanded. This is in contrast to a substitute good, whose demand decreases when its substitute's price decreases.
When two goods are complements, they experience "joint demand": the demand for one good is linked to the demand for the other. Therefore, if a higher quantity is demanded of one good, a higher quantity will also be demanded of the other, and "vice versa". For example, the demand for razor blades may depend on the number of razors in use; this is why razors have sometimes been sold as loss leaders, to increase demand for the associated blades. Another example is that sometimes a toothbrush is packaged free with toothpaste. The toothbrush is a complement to the toothpaste; the cost of producing a toothbrush may be higher than that of toothpaste, but its sales depend on the demand for toothpaste.
All non-complementary goods can be considered substitutes. If formula_2 and formula_3 are rough complements in an everyday sense, then consumers are willing to pay more for each marginal unit of good formula_2 as they accumulate more formula_3. The opposite is true for substitutes: the consumer is willing to pay less for each marginal unit of good "formula_4" as they accumulate more of good "formula_3".
Complementarity may be driven by psychological processes in which the consumption of one good (e.g., cola) stimulates demand for its complements (e.g., a cheeseburger). Consumption of a food or beverage activates a goal to consume its complements: foods that consumers believe would taste better together. Drinking cola increases consumers' willingness to pay for a cheeseburger. This effect appears to be contingent on consumer perceptions of these relationships rather than their sensory properties.
Examples.
An example of this would be the demand for cars and petrol. The supply and demand for cars are represented by the figure, with the initial demand formula_5. Suppose that the initial price of cars is represented by formula_6 with a quantity demanded of formula_7. If the price of petrol were to decrease by some amount, this would result in a higher quantity of cars demanded. This higher quantity demanded would cause the demand curve to shift rightward to a new position formula_8. Assuming a constant supply curve formula_9 of cars, the new increased quantity demanded will be at formula_10 with a new increased price formula_11. Other examples include automobiles and fuel, mobile phones and cellular service, printer and cartridge, among others.
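The shift in the figure can be quantified with the cross-price elasticity of demand. A minimal Python sketch with hypothetical numbers (a 10% fall in the petrol price raising the quantity of cars demanded by 5%):

```python
def cross_price_elasticity(pct_change_qty_a, pct_change_price_b):
    """Percentage change in quantity demanded of good A
    divided by percentage change in the price of good B."""
    return pct_change_qty_a / pct_change_price_b

# Hypothetical figures: petrol price falls 10%, car demand rises 5%.
xed = cross_price_elasticity(0.05, -0.10)
print(xed)  # negative, so cars and petrol are complements
```

A negative value identifies complements, a positive value substitutes, and a value near zero indicates unrelated goods.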
Perfect complement.
A "perfect complement" is a good that "must" be consumed with another good. The indifference curve of a perfect complement exhibits a right angle, as illustrated by the figure. Such preferences can be represented by a Leontief utility function.
Few goods behave as perfect complements. One example is a left shoe and a right; shoes are naturally sold in pairs, and the ratio between sales of left and right shoes will never shift noticeably from 1:1.
The degree of complementarity, however, does not have to be mutual; it can be measured by the cross price elasticity of demand. In the case of video games, a specific video game (the complement good) has to be consumed with a video game console (the base good). It does not work the other way: a video game console does not have to be consumed with that game.
Example.
In marketing, complementary goods give additional market power to the producer. It allows vendor lock-in by increasing switching costs. A few types of pricing strategy exist for a complementary good and its base good:
Gross complements.
Sometimes the complement-relationship between two goods is not intuitive and must be verified by inspecting the cross-elasticity of demand using market data.
Mosak's definition states "a good formula_2 is a gross complement of formula_3 if formula_12 is negative, where formula_13 for formula_14 denotes the ordinary individual demand for a certain good." In fact, in Mosak's case, formula_2 is not a gross complement of formula_3 but formula_3 is a gross complement of formula_2. The elasticity does not need to be symmetrical. Thus, formula_3 is a gross complement of formula_2 while formula_2 can simultaneously be a gross substitute for formula_3.
Proof.
The standard Hicks decomposition of the effect on the ordinary demand for a good formula_2 of a simple price change in a good formula_3, at utility level formula_15 and chosen bundle formula_16, is
formula_17
If formula_2 is a gross substitute for formula_3, the left-hand side of the equation and the first term of the right-hand side are positive. By the symmetry of Mosak's perspective, evaluating the equation with respect to formula_18, the first term of the right-hand side stays the same, while some extreme cases exist where formula_18 is large enough to make the whole right-hand side negative. In this case, formula_3 is a gross complement of formula_2. Overall, formula_2 and formula_3 are not symmetrical.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "D_1"
},
{
"math_id": 6,
"text": "P_1"
},
{
"math_id": 7,
"text": "Q_1"
},
{
"math_id": 8,
"text": "D_2"
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "Q_2"
},
{
"math_id": 11,
"text": "P_2"
},
{
"math_id": 12,
"text": "\\frac{\\partial f_x (p, \\omega)}{\\partial p_y}"
},
{
"math_id": 13,
"text": "f_i (p, \\omega)"
},
{
"math_id": 14,
"text": "i = 1, 2 , \\ldots , n"
},
{
"math_id": 15,
"text": "\\tau^*"
},
{
"math_id": 16,
"text": "z^* = (x^*, y^*, \\dots)"
},
{
"math_id": 17,
"text": "\\frac{\\partial f_x(p, \\omega)}{\\partial p_y} = \\frac{\\partial h_x (p, \\tau^*)}{\\partial p_y} - y^* \\frac{\\partial f_x(p, \\omega)}{\\partial \\omega}"
},
{
"math_id": 18,
"text": "x^*"
}
] |
https://en.wikipedia.org/wiki?curid=591492
|
59150309
|
Cuckoo filter
|
Data structure for approximate set membership
A cuckoo filter is a space-efficient probabilistic data structure that is used to test whether an element is a member of a set, like a Bloom filter does. False positive matches are possible, but false negatives are not – in other words, a query returns either "possibly in set" or "definitely not in set". A cuckoo filter can also delete existing items, which is not supported by Bloom filters. In addition, for applications that store many items and target moderately low false positive rates, cuckoo filters can achieve lower space overhead than space-optimized Bloom filters.
Cuckoo filters were first described in 2014.
Algorithm description.
A cuckoo filter uses a hash table based on cuckoo hashing to store the fingerprints of items. The data structure is broken into buckets of some size formula_0. To insert the fingerprint of an item formula_1, one first computes two potential buckets formula_2 and formula_3 where formula_1 could go. These buckets are calculated using the formula
formula_4
formula_5
Note that, due to the symmetry of the XOR operation, one can compute formula_3 from formula_2, and formula_2 from formula_3. As defined above, formula_6; it follows that formula_7. These properties are what make it possible to store the fingerprints with cuckoo hashing.
The fingerprint of formula_1 is placed into one of buckets formula_2 and formula_3. If the buckets are full, then one of the fingerprints in the bucket is evicted using cuckoo hashing, and placed into the other bucket where it can go. If that bucket, in turn, is also full, then that may trigger another eviction, etc.
The hash table can achieve both high utilization (thanks to cuckoo hashing), and compactness because only fingerprints are stored. Lookup and delete operations of a cuckoo filter are straightforward.
There are at most two buckets to check, given by formula_2 and formula_3. If found, the appropriate lookup or delete operation can be performed in formula_8 time. Often, in practice, formula_0 is a constant.
In order for the hash table to offer theoretical guarantees, the fingerprint size formula_9 must be at least formula_10 bits. Subject to this constraint, cuckoo filters guarantee a false-positive rate of at most formula_11.
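The insert, lookup, and delete logic described above can be sketched in a few dozen lines of Python. This is an illustration, not the reference implementation: the hash and fingerprint functions are arbitrary deterministic choices, and the table size is kept a power of two so the XOR partner computation stays within range:

```python
import hashlib

def _h(data: str) -> int:
    # Deterministic 64-bit hash (illustrative choice, not from the paper).
    return int.from_bytes(hashlib.md5(data.encode()).digest()[:8], "big")

class CuckooFilter:
    def __init__(self, num_buckets=1024, bucket_size=4, max_kicks=500):
        assert num_buckets & (num_buckets - 1) == 0, "power of two required"
        self.m, self.b, self.max_kicks = num_buckets, bucket_size, max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _fingerprint(self, item: str) -> int:
        return 1 + _h("fp:" + item) % 255  # 8-bit, nonzero

    def _indices(self, item: str):
        fp = self._fingerprint(item)
        i1 = _h(item) % self.m
        i2 = (i1 ^ _h(str(fp))) % self.m  # partner bucket via XOR
        return fp, i1, i2

    def insert(self, item: str) -> bool:
        fp, i1, i2 = self._indices(item)
        for i in (i1, i2):
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        # Both buckets full: evict fingerprints cuckoo-style.
        i = i1
        for _ in range(self.max_kicks):
            fp, self.buckets[i][0] = self.buckets[i][0], fp  # swap out a victim
            i = (i ^ _h(str(fp))) % self.m  # victim's other bucket
            if len(self.buckets[i]) < self.b:
                self.buckets[i].append(fp)
                return True
        return False  # table considered full

    def contains(self, item: str) -> bool:
        fp, i1, i2 = self._indices(item)
        return fp in self.buckets[i1] or fp in self.buckets[i2]

    def delete(self, item: str) -> bool:
        fp, i1, i2 = self._indices(item)
        for i in (i1, i2):
            if fp in self.buckets[i]:
                self.buckets[i].remove(fp)
                return True
        return False
```

Because XOR is an involution on the low bits, the partner of the partner bucket is always the original bucket, which is what lets an evicted fingerprint find its alternative location without access to the original item.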
Comparison to Bloom filters.
A cuckoo filter is similar to a Bloom filter in that they both are fast and compact, and they may both return false positives as answers to set-membership queries:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "b"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "h_1(x)"
},
{
"math_id": 3,
"text": "h_2(x)"
},
{
"math_id": 4,
"text": "h_1(x)=\\text{hash}(x)"
},
{
"math_id": 5,
"text": "h_2(x)=h_1(x)\\oplus\\text{hash}(\\text{fingerprint}(x))"
},
{
"math_id": 6,
"text": "h_2(x) = h_1(x)\\oplus\\text{hash}(\\text{fingerprint}(x))"
},
{
"math_id": 7,
"text": "h_1(x) = h_2(x)\\oplus\\text{hash}(\\text{fingerprint}(x))"
},
{
"math_id": 8,
"text": "O(b)"
},
{
"math_id": 9,
"text": "f"
},
{
"math_id": 10,
"text": "\\Omega((\\log n) / b)"
},
{
"math_id": 11,
"text": "\\epsilon \\le b/2^{f - 1}"
},
{
"math_id": 12,
"text": "1.44\\log_2(1/\\epsilon)"
},
{
"math_id": 13,
"text": "\\epsilon"
},
{
"math_id": 14,
"text": "(\\log_2(1/\\epsilon) + 1 + \\log_2 b)/\\alpha"
},
{
"math_id": 15,
"text": "\\alpha"
},
{
"math_id": 16,
"text": "95.5\\%"
},
{
"math_id": 17,
"text": "\\log_2(1/\\epsilon)"
},
{
"math_id": 18,
"text": "2b"
},
{
"math_id": 19,
"text": "O(1)"
}
] |
https://en.wikipedia.org/wiki?curid=59150309
|
5915049
|
Lindley's paradox
|
Statistical paradox
Lindley's paradox is a counterintuitive situation in statistics in which the Bayesian and frequentist approaches to a hypothesis testing problem give different results for certain choices of the prior distribution. The problem of the disagreement between the two approaches was discussed in Harold Jeffreys' 1939 textbook; it became known as Lindley's paradox after Dennis Lindley called the disagreement a paradox in a 1957 paper.
Although referred to as a "paradox", the differing results from the Bayesian and frequentist approaches can be explained as using them to answer fundamentally different questions, rather than actual disagreement between the two methods.
Nevertheless, for a large class of priors the differences between the frequentist and Bayesian approach are caused by keeping the significance level fixed: as even Lindley recognized, "the theory does not justify the practice of keeping the significance level fixed" and even "some computations by Prof. Pearson in the discussion to that paper emphasized how the significance level would have to change with the sample size, if the losses and prior probabilities were kept fixed". In fact, if the critical value increases with the sample size suitably fast, then the disagreement between the frequentist and Bayesian approaches becomes negligible as the sample size increases.
The paradox continues to be a source of active discussion.
Description of the paradox.
The result formula_0 of some experiment has two possible explanations – hypotheses formula_1 and formula_2 – and some prior distribution formula_3 representing uncertainty as to which hypothesis is more accurate before taking into account formula_0.
Lindley's paradox occurs when
These results can occur at the same time when formula_1 is very specific, formula_2 more diffuse, and the prior distribution does not strongly favor one or the other, as seen below.
Numerical example.
The following numerical example illustrates Lindley's paradox. In a certain city 49,581 boys and 48,870 girls have been born over a certain time period. The observed proportion formula_0 of male births is thus 49,581/98,451 ≈ 0.5036. We assume the fraction of male births is a binomial variable with parameter formula_6 We are interested in testing whether formula_7 is 0.5 or some other value. That is, our null hypothesis is formula_8 and the alternative is formula_9
Frequentist approach.
The frequentist approach to testing formula_1 is to compute a p-value, the probability of observing a fraction of boys at least as large as formula_0 assuming formula_1 is true. Because the number of births is very large, we can use a normal approximation for the fraction of male births formula_10 with formula_11 and formula_12 to compute
formula_13
We would have been equally surprised if we had seen 49,581 female births, i.e. formula_14 so a frequentist would usually perform a two-sided test, for which the p-value would be formula_15 In both cases, the p-value is lower than the significance level α = 5%, so the frequentist approach rejects formula_4 as it disagrees with the observed data.
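The two-sided test above can be reproduced with a short calculation (a sketch in Python using only the standard library, and the same normal approximation as the text):

```python
import math

# Observed counts from the example: 49,581 boys out of 98,451 births
n = 98_451
k = 49_581

# Normal approximation under H0 (theta = 0.5), as in the text
mu = n * 0.5                                     # 49225.5
sigma = math.sqrt(n * 0.5 * 0.5)                 # sqrt(24612.75)

z = (k - mu) / sigma                             # about 2.27 standard deviations
p_one_sided = 0.5 * math.erfc(z / math.sqrt(2))  # P(X >= k | H0), ~ 0.0117
p_two_sided = 2 * p_one_sided                    # ~ 0.0235, below alpha = 5%
```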
Bayesian approach.
Assuming no reason to favor one hypothesis over the other, the Bayesian approach would be to assign prior probabilities formula_16 and a uniform distribution to formula_7 under formula_17 and then to compute the posterior probability of formula_1 using Bayes' theorem:
formula_18
After observing formula_19 boys out of formula_20 births, we can compute the posterior probability of each hypothesis using the probability mass function for a binomial variable:
formula_21
where formula_22 is the Beta function.
From these values, we find the posterior probability of formula_23 which strongly favors formula_1 over formula_2.
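The posterior computation can be checked numerically (a sketch; log-gamma is used only to keep the binomial coefficient from overflowing):

```python
import math

n, k = 98_451, 49_581

# log C(n, k) via log-gamma to avoid overflow
log_choose = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

# P(k | H0): binomial probability at theta = 0.5
p_h0 = math.exp(log_choose + n * math.log(0.5))   # ~ 1.95e-4

# P(k | H1): averaging the binomial over a uniform prior gives 1 / (n + 1)
p_h1 = 1.0 / (n + 1)                              # ~ 1.02e-5

# Bayes' theorem with pi(H0) = pi(H1) = 0.5 (the equal priors cancel)
posterior_h0 = p_h0 / (p_h0 + p_h1)               # ~ 0.95
```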
The two approaches—the Bayesian and the frequentist—appear to be in conflict, and this is the "paradox".
Reconciling the Bayesian and frequentist approaches.
Almost sure hypothesis testing.
Naaman proposed an adaptation of the significance level to the sample size in order to control false positives: "α""n", such that "α""n" = "n"^(−"r") with "r" > 1/2. At least in the numerical example, taking "r" = 1/2 results in a significance level of 0.00318, so the frequentist would not reject the null hypothesis, which is in agreement with the Bayesian approach.
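For the numerical example, the sample-size-dependent level is easy to compute (illustrative sketch):

```python
n = 98_451
alpha_n = n ** -0.5   # r = 1/2, so alpha_n = n^(-r), ~ 0.00318

# The two-sided p-value from the frequentist section is ~0.0235 > alpha_n,
# so the null hypothesis is not rejected at this adjusted level.
```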
Uninformative priors.
If we use an uninformative prior and test a hypothesis more similar to that in the frequentist approach, the paradox disappears.
For example, if we calculate the posterior distribution formula_24, using a uniform prior distribution on formula_7 (i.e. formula_25), we find
formula_26
If we use this to check the probability that a newborn is more likely to be a boy than a girl, i.e. formula_27 we find
formula_28
In other words, it is very likely that the proportion of male births is above 0.5.
Neither analysis gives an estimate of the effect size, directly, but both could be used to determine, for instance, if the fraction of boy births is likely to be above some particular threshold.
The lack of an actual paradox.
The apparent disagreement between the two approaches is caused by a combination of factors. First, the frequentist approach above tests formula_1 without reference to formula_2. The Bayesian approach evaluates formula_1 as an alternative to formula_2 and finds the first to be in better agreement with the observations. This is because the latter hypothesis is much more diffuse, as formula_7 can be anywhere in formula_29, which results in it having a very low posterior probability. To understand why, it is helpful to consider the two hypotheses as generators of the observations:
Most of the possible values for formula_7 under formula_2 are very poorly supported by the observations. In essence, the apparent disagreement between the methods is not a disagreement at all, but rather two different statements about how the hypotheses relate to the data:
According to the frequentist test, the observed sex ratio of newborns is improbable if the true ratio is exactly 50/50. Yet 50/50 is a better approximation than most, but not "all", other ratios. The hypothesis formula_31 would have fit the observation much better than almost all other ratios, including formula_32
For example, this choice of hypotheses and prior probabilities implies the statement "if formula_7 > 0.49 and formula_7 < 0.51, then the prior probability of formula_7 being exactly 0.5 is 0.50/0.51 ≈ 98%". Given such a strong preference for formula_33 it is easy to see why the Bayesian approach favors formula_1 in the face of formula_34 even though the observed value of formula_0 lies formula_35 away from 0.5. The deviation of over 2"σ" from formula_1 is considered significant in the frequentist approach, but its significance is overruled by the prior in the Bayesian approach.
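The quoted figure follows directly from the prior masses (a quick check; the 0.50/0.51 ratio comes from the point mass at 0.5 plus the uniform mass on a width-0.02 window):

```python
point_mass = 0.5            # pi(H0): concentrated entirely at theta = 0.5
window_mass = 0.5 * 0.02    # pi(H1) spread uniformly over [0, 1]: slice of width 0.02

# Prior probability that theta is exactly 0.5, given 0.49 < theta < 0.51
conditional = point_mass / (point_mass + window_mass)   # 0.50 / 0.51 ~ 0.98
```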
Looking at it another way, we can see that the prior distribution is essentially flat with a delta function at formula_36 Clearly, this is dubious. In fact, picturing real numbers as being continuous, it would be more logical to assume that it would be impossible for any given number to be exactly the parameter value, i.e., we should assume formula_37
A more realistic distribution for formula_7 in the alternative hypothesis produces a less surprising result for the posterior of formula_38 For example, if we replace formula_2 with formula_39 i.e., the maximum likelihood estimate for formula_40 the posterior probability of formula_1 would be only 0.07 compared to 0.93 for formula_41 (of course, one cannot actually use the MLE as part of a prior distribution).
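The 0.07 figure can be verified with a likelihood-ratio calculation (a sketch; the binomial coefficients cancel between the two point hypotheses):

```python
import math

n, k = 98_451, 49_581
x = k / n                  # maximum likelihood estimate of theta

# Likelihood ratio P(k | H0) / P(k | H2); binomial coefficients cancel
log_ratio = n * math.log(0.5) - (k * math.log(x) + (n - k) * math.log(1 - x))
ratio = math.exp(log_ratio)

# Equal prior probabilities cancel in Bayes' theorem
posterior_h0 = ratio / (1 + ratio)   # ~ 0.07, leaving ~ 0.93 for H2
```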
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "H_0"
},
{
"math_id": 2,
"text": "H_1"
},
{
"math_id": 3,
"text": "\\pi"
},
{
"math_id": 4,
"text": "H_0,"
},
{
"math_id": 5,
"text": "H_1."
},
{
"math_id": 6,
"text": "\\theta."
},
{
"math_id": 7,
"text": "\\theta"
},
{
"math_id": 8,
"text": "H_0: \\theta = 0.5,"
},
{
"math_id": 9,
"text": "H_1: \\theta \\neq 0.5."
},
{
"math_id": 10,
"text": "X \\sim N(\\mu, \\sigma^2),"
},
{
"math_id": 11,
"text": "\\mu = np = n\\theta = 98\\,451 \\times 0.5 = 49\\,225.5"
},
{
"math_id": 12,
"text": "\\sigma^2 = n\\theta (1 - \\theta) = 98\\,451 \\times 0.5 \\times 0.5 = 24\\,612.75,"
},
{
"math_id": 13,
"text": "\\begin{align}\n P(X \\geq x \\mid \\mu = 49\\,225.5) = \\int_{x = 49\\,581}^{98\\,451} \\frac{1}{\\sqrt{2\\pi\\sigma^2}} e^{-\\frac12 \\left(\\frac{u - \\mu}{\\sigma}\\right)^2} \\,du \\\\\n = \\int_{x = 49\\,581}^{98\\,451} \\frac{1}{\\sqrt{2\\pi(24\\,612.75)}} e^{-\\frac{(u - 49\\,225.5)^2}{2 \\times 24\\,612.75}} \\,du \\approx 0.0117.\n\\end{align}\n"
},
{
"math_id": 14,
"text": "x \\approx 0.4964,"
},
{
"math_id": 15,
"text": "p \\approx 2 \\times 0.0117 = 0.0235."
},
{
"math_id": 16,
"text": "\\pi(H_0) = \\pi(H_1) = 0.5"
},
{
"math_id": 17,
"text": "H_1,"
},
{
"math_id": 18,
"text": " P(H_0 \\mid k) = \\frac{P(k \\mid H_0) \\pi(H_0)}{P(k \\mid H_0) \\pi(H_0) + P(k \\mid H_1) \\pi(H_1)}."
},
{
"math_id": 19,
"text": "k = 49\\,581"
},
{
"math_id": 20,
"text": "n = 98\\,451"
},
{
"math_id": 21,
"text": "\\begin{align}\n P(k \\mid H_0) & = {n \\choose k} (0.5)^k (1 - 0.5)^{n-k} \\approx 1.95 \\times 10^{-4}, \\\\\n P(k \\mid H_1) & = \\int_0^1 {n \\choose k} \\theta^k (1 - \\theta)^{n-k} \\,d\\theta = {n \\choose k} \\operatorname{\\Beta}(k + 1, n - k + 1) = 1 / (n + 1) \\approx 1.02 \\times 10^{-5},\n\\end{align}"
},
{
"math_id": 22,
"text": "\\operatorname{\\Beta}(a, b)"
},
{
"math_id": 23,
"text": "P(H_0 \\mid k) \\approx 0.95,"
},
{
"math_id": 24,
"text": "P(\\theta \\mid x, n)"
},
{
"math_id": 25,
"text": "\\pi(\\theta \\in [0, 1]) = 1"
},
{
"math_id": 26,
"text": " P(\\theta \\mid k, n) = \\operatorname{\\Beta}(k + 1, n - k + 1)."
},
{
"math_id": 27,
"text": "P(\\theta > 0.5 \\mid k, n),"
},
{
"math_id": 28,
"text": " \\int_{0.5}^1 \\operatorname{\\Beta}(49\\,582, 48\\,871) \\approx 0.983."
},
{
"math_id": 29,
"text": "[0, 1]"
},
{
"math_id": 30,
"text": "\\theta \\approx 0.500"
},
{
"math_id": 31,
"text": "\\theta \\approx 0.504"
},
{
"math_id": 32,
"text": "\\theta \\approx 0.500."
},
{
"math_id": 33,
"text": "\\theta = 0.5,"
},
{
"math_id": 34,
"text": "x \\approx 0.5036,"
},
{
"math_id": 35,
"text": "2.28 \\sigma"
},
{
"math_id": 36,
"text": "\\theta = 0.5."
},
{
"math_id": 37,
"text": "P(\\theta = 0.5) = 0."
},
{
"math_id": 38,
"text": "H_0."
},
{
"math_id": 39,
"text": "H_2: \\theta = x,"
},
{
"math_id": 40,
"text": "\\theta,"
},
{
"math_id": 41,
"text": "H_2"
}
] |
https://en.wikipedia.org/wiki?curid=5915049
|
591513
|
Optical cavity
|
Arrangement of mirrors forming a cavity resonator for light waves
An optical cavity, resonating cavity or optical resonator is an arrangement of mirrors or other optical elements that forms a cavity resonator for light waves. Optical cavities are a major component of lasers, surrounding the gain medium and providing feedback of the laser light. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times, producing modes with certain resonance frequencies. Modes can be decomposed into longitudinal modes that differ only in frequency and transverse modes that have different intensity patterns across the cross section of the beam. Many types of optical cavities produce standing wave modes.
Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them. Flat mirrors are not often used because of the difficulty of aligning them to the needed precision. The geometry (resonator type) must be chosen so that the beam remains stable, i.e. the size of the beam does not continually grow with multiple reflections. Resonator types are also designed to meet other criteria such as a minimum beam waist or having no focal point (and therefore no intense light at a single point) inside the cavity.
Optical cavities are designed to have a large Q factor, meaning a beam undergoes many oscillation cycles with little attenuation. In the regime of high Q values, this is equivalent to the frequency line width being small compared to the resonant frequency of the cavity.
Resonator modes.
Light confined in a resonator will reflect multiple times from the mirrors, and due to the effects of interference, only certain patterns and frequencies of radiation will be sustained by the resonator, with the others being suppressed by destructive interference. In general, radiation patterns which are reproduced on every round-trip of the light through the resonator are the most stable. These are known as the "modes" of the resonator.
Resonator modes can be divided into two types: longitudinal modes, which differ in frequency from each other; and transverse modes, which may differ in both frequency and the intensity pattern of the light. The basic, or fundamental transverse mode of a resonator is a Gaussian beam.
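For a standing-wave cavity, adjacent longitudinal modes are separated in frequency by the free spectral range c/2L; this is a standard textbook relation not derived in this article, and the 30 cm example length below is purely illustrative:

```python
# Speed of light in vacuum, m/s
C = 299_792_458.0

def free_spectral_range(cavity_length_m: float) -> float:
    """Frequency spacing (Hz) between adjacent longitudinal modes
    of a standing-wave cavity of optical length L: delta_nu = c / (2 L)."""
    return C / (2.0 * cavity_length_m)

# A 30 cm cavity has longitudinal modes spaced by roughly 500 MHz
fsr = free_spectral_range(0.30)
```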
Resonator types.
The most common types of optical cavities consist of two facing plane (flat) or spherical mirrors. The simplest of these is the plane-parallel or Fabry–Pérot cavity, consisting of two opposing flat mirrors. While simple, this arrangement is rarely used in large-scale lasers due to the difficulty of alignment; the mirrors must be aligned parallel within a few seconds of arc, or "walkoff" of the intracavity beam will result in it spilling out of the sides of the cavity. However, this problem is much reduced for very short cavities with a small mirror separation distance ("L" < 1 cm). Plane-parallel resonators are therefore commonly used in microchip and microcavity lasers and semiconductor lasers. In these cases, rather than using separate mirrors, a reflective optical coating may be directly applied to the laser medium itself. The plane-parallel resonator is also the basis of the Fabry–Pérot interferometer.
For a resonator with two mirrors with radii of curvature "R"1 and "R"2, there are a number of common cavity configurations. If the two radii are equal to half the cavity length ("R"1 = "R"2 = "L" /2), a concentric or spherical resonator results. This type of cavity produces a diffraction-limited beam waist in the centre of the cavity, with large beam diameters at the mirrors, filling the whole mirror aperture. Similar to this is the hemispherical cavity, with one plane mirror and one mirror of radius equal to the cavity length.
A common and important design is the confocal resonator, with mirrors of equal radii to the cavity length ("R"1 = "R"2 = "L"). This design produces the smallest possible beam diameter at the cavity mirrors for a given cavity length, and is often used in lasers where the purity of the transverse mode pattern is important.
A concave-convex cavity has one convex mirror with a negative radius of curvature. This design produces no intracavity focus of the beam, and is thus useful in very high-power lasers where the intensity of the light might be damaging to the intracavity medium if brought to a focus.
A transparent dielectric sphere, such as a liquid droplet, can also form an optical cavity. In 1986 Richard K. Chang et al. demonstrated lasing using ethanol microdroplets (20–40 micrometers in radius) doped with rhodamine 6G dye. This type of optical cavity exhibits optical resonances when the size of the sphere, the optical wavelength, or the refractive index is varied. The resonance is known as morphology-dependent resonance.
Stability.
Only certain ranges of values for "R"1, "R"2, and "L" produce stable resonators in which periodic refocussing of the intracavity beam is produced. If the cavity is unstable, the beam size will grow without limit, eventually growing larger than the size of the cavity mirrors and being lost. By using methods such as ray transfer matrix analysis, it is possible to calculate a stability criterion:
formula_0
Values which satisfy the inequality correspond to stable resonators.
The stability can be shown graphically by defining a stability parameter, "g" for each mirror:
formula_1,
and plotting "g"1 against "g"2 as shown. Areas bounded by the line "g"1 "g"2 = 1 and the axes are stable. Cavities at points exactly on the line are marginally stable; small variations in cavity length can cause the resonator to become unstable, and so lasers using these cavities are in practice often operated just inside the stability line.
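The stability criterion can be sketched directly from the g parameters (illustrative Python; a flat mirror is taken as R → ∞, and boundary values count as marginally stable):

```python
def g_param(L: float, R: float) -> float:
    """Stability parameter g = 1 - L/R (a flat mirror has R = infinity, so g = 1)."""
    return 1.0 - L / R

def is_stable(L: float, R1: float, R2: float) -> bool:
    """Stable (including marginally stable) when 0 <= g1*g2 <= 1."""
    prod = g_param(L, R1) * g_param(L, R2)
    return 0.0 <= prod <= 1.0

INF = float("inf")
# L = 1, R1 = R2 = 2      -> g1*g2 = 0.25 (stable)
# Confocal, R1 = R2 = L   -> g1*g2 = 0    (marginally stable)
# Plane-parallel, R = inf -> g1*g2 = 1    (marginally stable)
# Concentric, R = L/2     -> g1*g2 = 1    (marginally stable)
# R1 = R2 = 0.3 L         -> g1*g2 > 1    (unstable)
```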
A simple geometric statement describes the regions of stability: A cavity is stable if the line segments between the mirrors and their centers of curvature overlap, but one does not lie entirely within the other.
In the confocal cavity, if a ray is deviated from its original direction in the middle of the cavity, its displacement after reflecting from one of the mirrors is larger than in any other cavity design. This prevents amplified spontaneous emission and is important for designing high power amplifiers with good beam quality.
Practical resonators.
If the optical cavity is not empty (e.g., a laser cavity which contains the gain medium), the value of "L" needs to be adjusted to account for the index of refraction of the medium. Optical elements such as lenses placed in the cavity alter the stability and mode size. In addition, for most gain media, thermal and other inhomogeneities create a variable lensing effect in the medium, which must be considered in the design of the laser resonator.
Practical laser resonators may contain more than two mirrors; three- and four-mirror arrangements are common, producing a "folded cavity". Commonly, a pair of curved mirrors form one or more confocal sections, with the rest of the cavity being quasi-collimated and using plane mirrors. The shape of the laser beam depends on the type of resonator: The beam produced by stable, paraxial resonators can be well modeled by a Gaussian beam. In special cases the beam can be described as a single transverse mode and the spatial properties can be well described by the Gaussian beam itself. More generally, this beam may be described as a superposition of transverse modes. Accurate description of such a beam involves expansion over some complete, orthogonal set of functions (over two dimensions) such as Hermite polynomials or the Ince polynomials. Unstable laser resonators, on the other hand, have been shown to produce fractal-shaped beams.
Some intracavity elements are usually placed at a beam waist between folded sections. Examples include acousto-optic modulators for cavity dumping and vacuum spatial filters for transverse mode control. For some low power lasers, the laser gain medium itself may be positioned at a beam waist. Other elements, such as filters, prisms and diffraction gratings often need large quasi-collimated beams.
These designs allow compensation of the cavity beam's astigmatism, which is produced by Brewster-cut elements in the cavity. A Z-shaped arrangement of the cavity also compensates for coma while the 'delta' or X-shaped cavity does not.
Out-of-plane resonators lead to rotation of the beam profile and greater stability. The heat generated in the gain medium leads to frequency drift of the cavity; the frequency can therefore be actively stabilized by locking it to an unpowered reference cavity. Similarly, the pointing stability of a laser may be further improved by spatial filtering with an optical fibre.
Alignment.
Precise alignment is important when assembling an optical cavity. For best output power and beam quality, optical elements must be aligned such that the path followed by the beam is centered through each element.
Simple cavities are often aligned with an alignment laser—a well-collimated visible laser that can be directed along the axis of the cavity. Observation of the path of the beam and its reflections from various optical elements allows the elements' positions and tilts to be adjusted.
More complex cavities may be aligned using devices such as electronic autocollimators and laser beam profilers.
Optical delay lines.
Optical cavities can also be used as multipass optical delay lines, folding a light beam so that a long path-length may be achieved in a small space. A plane-parallel cavity with flat mirrors produces a flat zigzag light path, but as discussed above, these designs are very sensitive to mechanical disturbances and walk-off. When curved mirrors are used in a nearly confocal configuration, the beam travels on a circular zigzag path. The latter is called a Herriott-type delay line. A fixed insertion mirror is placed off-axis near one of the curved mirrors, and a mobile pickup mirror is similarly placed near the other curved mirror. A flat linear stage with one pickup mirror is used in the case of flat mirrors, and a rotational stage with two mirrors is used for the Herriott-type delay line.
The rotation of the beam inside the cavity alters the polarization state of the beam. To compensate for this, a single-pass delay line is also needed, made of either three or two mirrors in a 3D or 2D retro-reflection configuration, respectively, on top of a linear stage. To adjust for beam divergence, a second carriage on the linear stage with two lenses can be used. The two lenses act as a telescope, producing a flat phase front of a Gaussian beam on a virtual end mirror.
|
[
{
"math_id": 0,
"text": " 0 \\leqslant \\left( 1 - \\frac{L}{R_1} \\right) \\left( 1 - \\frac{L}{R_2} \\right) \\leqslant 1."
},
{
"math_id": 1,
"text": " g_1 = 1 - \\frac{L}{R_1} ,\\qquad g_2 = 1 - \\frac{L}{R_2}"
}
] |
https://en.wikipedia.org/wiki?curid=591513
|
59152
|
Asterisk
|
Typographical symbol or glyph (*)
The asterisk ( *), from Late Latin "asteriscus", from Ancient Greek ἀστερίσκος ("asterískos"), "little star", is a typographical symbol. It is so called because it resembles a conventional image of a heraldic star.
Computer scientists and mathematicians often vocalize it as star (as, for example, in "the A* search algorithm" or "C*-algebra"). An asterisk is usually five- or six-pointed in print and six- or eight-pointed when handwritten, though more complex forms exist. Its most common use is to call out a footnote. It is also often used to censor offensive words.
In computer science, the asterisk is commonly used as a wildcard character, or to denote pointers, repetition, or multiplication.
History.
The asterisk was already in use as a symbol in ice age cave paintings. There is also a two-thousand-year-old character used by Aristarchus of Samothrace called the "asteriskos", ※, which he used when proofreading Homeric poetry to mark lines that were duplicated. Origen is known to have also used the asteriskos to mark missing Hebrew lines from his Hexapla. The asterisk evolved in shape over time, but its meaning as a symbol used to correct defects remained.
In the Middle Ages, the asterisk was used to emphasize a particular part of text, often linking those parts of the text to a marginal comment. However, an asterisk was not always used.
One hypothesis to the origin of the asterisk is that it stems from the 5000-year-old Sumerian character dingir, 𒀭, though this hypothesis seems to only be based on visual appearance.
Usage.
Censorship.
When toning down expletives, asterisks are often used to replace letters. For example, the word "badword" might become "ba***rd", "b*****d", "b******" or even "*******". Vowels tend to be censored with an asterisk more than consonants, but the intelligibility of censored profanities with multiple syllables such as "b*dw*rd" and "b*****d" or "ba****d", or uncommon ones is higher if put in context with surrounding text.
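A minimal sketch of this kind of letter masking (the keep-first-and-last pattern shown here is just one of the variants mentioned above):

```python
def censor(word: str) -> str:
    """Mask all but the first and last letters of a word with asterisks."""
    if len(word) <= 2:
        return "*" * len(word)
    return word[0] + "*" * (len(word) - 2) + word[-1]

masked = censor("badword")   # "b*****d", as in the example above
```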
When a document containing classified information is published, the document may be "sanitized" (redacted) by replacing the classified information with asterisks. For example, the Intelligence and Security Committee Russia report.
Competitive sports and games.
In colloquial usage, an asterisk attached to a sporting record indicates that it is somehow tainted. This is because results that have been considered dubious or set aside are recorded in the record books with an asterisk referring to a footnote explaining the reason or reasons for concern.
Baseball.
The usage of the term in sports arose during the 1961 baseball season in which Roger Maris of the New York Yankees was threatening to break Babe Ruth's 34-year-old single-season home run record. Ruth had amassed 60 home runs in a season with only 154 games, but Maris was playing the first season in the American League's newly expanded 162-game season. Baseball Commissioner Ford C. Frick, a friend of Ruth's during the legendary slugger's lifetime, held a press conference to announce his "ruling" that should Maris take longer than 154 games both records would be acknowledged by Major League Baseball, but that some "distinctive mark" [his term] be placed next to Maris', which should be listed alongside Ruth's achievement in the "record books". The asterisk as such a mark was suggested at that time by New York Daily News sportswriter Dick Young, not Frick. The reality, however, was that MLB actually had no direct control over any record books until many years later, and it all was merely a suggestion on Frick's part. Within a few years the controversy died down and all prominent baseball record keepers listed Maris as the single-season record holder for as long as he held the record.
Nevertheless, the stigma of holding a tainted record remained with Maris for many years, and the concept of a real or figurative asterisk denoting less-than-accepted "official" records has become widely used in sports and other competitive endeavors. A 2001 TV movie about Maris's record-breaking season was called "61*" (pronounced "sixty-one asterisk") in reference to the controversy.
Uproar over the integrity of baseball records and whether or not qualifications should be added to them arose again in the late 1990s, when a steroid-fueled power explosion led to the shattering of Maris' record. Even though it was obvious, and later admitted by Mark McGwire, that he was heavily on steroids when he hit 70 home runs in 1998, ruling authorities did nothing, to the annoyance of many fans and sportswriters. Three years later, self-confessed steroid user Barry Bonds pushed that record out to 73, and fans once again began to call for an asterisk in the sport's record books.
Fans were especially critical and clamored louder for baseball to act during the 2007 season, as Bonds approached and later broke Hank Aaron's career home run record of 755.
The Houston Astros' 2017 World Series win was marred after an investigation by MLB revealed the team's involvement in a sign-stealing scheme during that season. Fans, appalled by what they perceived to be overly lenient discipline against the Astros players, nicknamed the team the "Houston Asterisks".
In recent years, the asterisk has come into use on baseball scorecards to denote a "great defensive play."
Other sports.
During the first decades of the 21st century, the term "asterisk" to denote a tainted accomplishment caught on in other sports first in North America and then, due in part to North American sports' widespread media exposure, around the world.
Computing.
Programming languages.
Many programming languages and calculators use the asterisk as a symbol for multiplication. It also has a number of special meanings in specific languages, for instance:
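As an illustration, Python alone gives the asterisk several distinct roles (a sketch; other languages assign it different meanings):

```python
# Multiplication and exponentiation
product = 6 * 7        # 42
power = 2 ** 10        # 1024 (double asterisk)

# Collecting a variable number of positional arguments
def total(*args):
    return sum(args)

# Unpacking an iterable into separate arguments
nums = [1, 2, 3]
args_total = total(*nums)      # 6

# Extended unpacking in assignment
first, *rest = [10, 20, 30]    # first = 10, rest = [20, 30]
```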
Comments in programming languages.
In the B programming language and languages that borrow syntax from it, such as C, PHP, Java, or C#, comments in the source code (for information to people, ignored by the compiler) are marked by an asterisk combined with the slash:
/* This section displays message if user input was not valid
(comment ignored by compiler) */
Some Pascal-like programming languages, for example, Object Pascal, Modula-2, Modula-3, and Oberon, as well as several other languages including ML, Wolfram Language (Mathematica), AppleScript, OCaml, Standard ML, and Maple, use an asterisk combined with a parenthesis:
(* Do not change this variable - it is used later
(comment ignored by compiler) *)
CSS also uses the slash-star comment format.
body {
/* This ought to make the text more readable for far-sighted people */
font-size: 24pt;
}
Each computing language has its own way of handling comments; and similar notations are not universal.
History of information technology.
The asterisk was a supported symbol on the IBM 026 Keypunch (introduced in 1949 and used to create punch cards with data for early computer systems). It was also included in the FIELDATA character encoding and the ASCII standard.
Fluid mechanics.
In fluid mechanics an asterisk in superscript is sometimes used to mean a property at sonic speed.
Linguistics.
In linguistics, an asterisk may be used for a range of purposes depending on what is being discussed. The symbol is used to indicate reconstructed words of proto-languages (for which there are no records). For modern languages, it may be placed before posited problematic word forms, phrases or sentences to flag that they are hypothetical, ungrammatical, unpronounceable, etc.
Historical linguist August Schleicher is cited as first using the asterisk for linguistic purposes, specifically for unattested forms that are linguistic reconstructions.
Using the asterisk for descriptive and not just historical purposes arose in the 20th century. By analogy with its use in historical linguistics, the asterisk was variously prepended to "hypothetical" or "unattested" elements in modern language. Its usage also expanded to include "non-existent" or "impossible" forms. Leonard Bloomfield (1933) uses the asterisk with forms such as "*cran," impossible to occur in isolation: "cran-" only occurs within the compound "cranberry". Such usage for a "non-existent form" was also found in French, German and Italian works in the middle of the 20th century.
Asterisk usage in linguistics later came to include not just impossible forms, but "ungrammatical sentences", those that are "ill formed for the native speaker". The expansion of asterisk usage to entire sentences is often credited to Noam Chomsky, but Chomsky in 1968 already describes this usage as "conventional". Linguist Fred Householder claims some credit, but Giorgio Graffi argues that using an asterisk for this purpose predates his works.
The meaning of asterisk usage in a specific linguistic work may go unexplained, and so can be unclear. Linguistics sometimes uses double asterisks (), another symbol such as the question mark, or both symbols (e.g. ) to indicate degrees of unacceptability.
Historical linguistics.
In historical linguistics, the asterisk marks words or phrases that are not directly recorded in texts or other media, and that are therefore reconstructed on the basis of other linguistic material by the comparative method.
In the following example, the Proto-Germanic word is a reconstructed form.
A double asterisk () sometimes indicates an intermediary or proximate reconstructed form (e.g. a single asterisk for reconstructed thirteenth century Chinese and a double asterisk for reconstructions of older Ancient Chinese or a double asterisk for proto-Popolocan and a single asterisk for intermediary forms).
In other cases, the double asterisk denotes a form that would be expected according to a rule, but is not actually found. That is, it indicates a reconstructed form that is not found or used, and in place of which "another" form is found in actual usage:
Ungrammaticality.
In most areas of linguistics, but especially in syntax, an asterisk in front of a word or phrase indicates that the word or phrase is not used because it is ungrammatical.
An asterisk before a parenthesis indicates that the lack of the word or phrase inside is ungrammatical, while an asterisk after the opening bracket of the parenthesis indicates that the existence of the word or phrase inside is ungrammatical—e.g., the following indicates "go the station" would be ungrammatical:
Use of an asterisk to denote forms or sentences that are ungrammatical is often complemented by the use of the question mark () to indicate a word, phrase or sentence that is avoided, questionable or strange, but not necessarily outright ungrammatical.
Other sources go further and use several symbols (e.g. the asterisk, question mark, and degree symbol ) to indicate gradations or a continuum of acceptability.
Ambiguity.
Since a word marked with an asterisk could mean either "unattested" or "impossible", it is important in some contexts to distinguish these meanings. In general, authors retain asterisks for "unattested", and prefix , , , or for the latter meaning. An alternative is to append the asterisk (or another symbol, possibly to differentiate between even more cases) at the end.
Optimality theory.
In optimality theory, asterisks are used as "violation marks" in tableau cells to denote a violation of a constraint by an output form.
Phonetic transcription.
In phonetic transcription using the International Phonetic Alphabet and similar systems, an asterisk was historically used to denote that the word it preceded was a proper noun. See this example from W. Perrett's 1921 transcription of Gottfried Keller's :
("")
This convention is no longer usual.
Mathematics.
The asterisk has many uses in mathematics. The following list highlights some common uses and is not exhaustive.
The asterisk is used in all branches of mathematics to designate a correspondence between two quantities denoted by the same letter – one with the asterisk and one without.
Mathematical typography.
In fine mathematical typography, the Unicode character U+2217 ∗ ASTERISK OPERATOR (in HTML, &lowast;; not to be confused with the ordinary asterisk) is available. This character also appeared in the position of the regular asterisk in the PostScript symbol character set in the "Symbol" font included with Windows and Macintosh operating systems and with many printers. It should be used for a large asterisk that lines up with the other mathematical operators, sitting on the math centerline rather than on the text baseline.
Star of Life.
A Star of Life, a six-bar asterisk overlaid with the Rod of Asclepius (the symbol of health), may be used as an alternative to cross or crescent symbols on ambulances.
Statistical results.
In many scientific publications, the asterisk is employed as a shorthand to denote the statistical significance of results when testing hypotheses. When the likelihood that a result occurred by chance alone is below a certain level, one or more asterisks are displayed. Popular significance levels are <0.05 (*), <0.01 (**), and <0.001 (***).
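The convention maps directly to a small helper (illustrative sketch):

```python
def significance_stars(p: float) -> str:
    """Map a p-value to the conventional asterisk shorthand:
    * for p < 0.05, ** for p < 0.01, *** for p < 0.001."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""
```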
Telephony.
On a tone dialling telephone keypad, the asterisk (called "star") is one of the two special keys (the other is the key – almost invariably replaced by the number sign # (called 'pound sign' (US), 'hash' (other countries), or 'hex'), and is found to the left of the zero). They are used to navigate menus in systems such as voice mail, or in vertical service codes.
Encodings.
The Unicode standard has a variety of asterisk-like characters, compared in the table below. (Characters will display differently in different browsers and fonts.) The reason there are so many is chiefly because of the controversial decision to include in Unicode the entire Zapf Dingbats symbol font.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p^*"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "y"
},
{
"math_id": 3,
"text": "y^*"
},
{
"math_id": 4,
"text": "\\{\\ast\\}"
},
{
"math_id": 5,
"text": "*: A^k \\rightarrow A^{n-k}"
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "f_*"
},
{
"math_id": 8,
"text": "\\bar{z}"
},
{
"math_id": 9,
"text": "\\mathbb{C}^* = \\mathbb{C}\\setminus\\{0\\}."
},
{
"math_id": 10,
"text": "V"
},
{
"math_id": 11,
"text": "V^*"
},
{
"math_id": 12,
"text": "H^k(X)"
},
{
"math_id": 13,
"text": "H^*(X)"
},
{
"math_id": 14,
"text": "z^*"
},
{
"math_id": 15,
"text": "t^*"
},
{
"math_id": 16,
"text": "z"
},
{
"math_id": 17,
"text": "t"
},
{
"math_id": 18,
"text": "f \\ast g"
},
{
"math_id": 19,
"text": "g"
},
{
"math_id": 20,
"text": ":"
},
{
"math_id": 21,
"text": "\\parallel"
}
] |
https://en.wikipedia.org/wiki?curid=59152
|
5915493
|
High-frequency ventilation
|
High-frequency ventilation is a type of mechanical ventilation which utilizes a respiratory rate greater than four times the normal value (a ventilatory frequency (Vf) above 150 breaths per minute) and very small tidal volumes. High-frequency ventilation is thought to reduce ventilator-associated lung injury (VALI), especially in the context of ARDS and acute lung injury. This is commonly referred to as lung protective ventilation. There are different types of high-frequency ventilation. Each type has its own unique advantages and disadvantages. The types of HFV are characterized by the delivery system and the type of exhalation phase.
High-frequency ventilation may be used alone, or in combination with conventional mechanical ventilation. In general, those devices that need conventional mechanical ventilation do not produce the same lung protective effects as those that can operate without tidal breathing. Specifications and capabilities will vary depending on the device manufacturer.
Physiology.
With conventional ventilation where tidal volumes (VT) exceed dead space (VDEAD), gas exchange is largely related to bulk flow of gas to the alveoli. With high-frequency ventilation, the tidal volumes used are smaller than anatomical and equipment dead space and therefore alternative mechanisms of gas exchange occur.
High-frequency jet ventilation (passive).
In the UK, the Mistral or Monsoon jet ventilator (Acutronic Medical Systems) is most commonly used. In the United States the Bunnell LifePulse jet ventilator is most commonly used.
HFJV minimizes movement of the thorax and abdomen and facilitates surgical procedures where even slight motion artifact from spontaneous or intermittent positive pressure ventilation may significantly affect the duration and success of the procedure (for example, atrial fibrillation ablation). HFJV does not allow setting a specific tidal volume or sampling ETCO2; because of the latter, frequent ABGs are required to measure PaCO2. In HFJV a jet is applied with a set driving pressure, followed by passive exhalation for a very short period before the next jet is delivered, creating "auto-PEEP" (called pause pressure by the jet ventilator). The risk of excessive breath-stacking leading to barotrauma and pneumothorax is low but not zero.
In HFJV exhalation is passive (depends on passive lung and chest-wall recoil) whereas in HFOV gas movement is caused by in-and-out movement of the “loudspeaker” oscillator membrane. Thus in HFOV both inspiration and expiration are actively caused by the oscillator, and passive exhalation is not allowed.
Bunnell LifePulse jet ventilator.
High-frequency jet ventilation (HFJV) is provided by the Bunnell Life Pulse High-Frequency Ventilator. HFJV employs an endotracheal tube adaptor in place of the normal 15 mm ET tube adaptor. A high-pressure "jet" of gas flows out of the adaptor and into the airway. This jet of gas occurs for a very brief duration, about 0.02 seconds, and at high frequency: 4-11 hertz. Tidal volumes ≤ 1 mL/kg are used during HFJV. This combination of small tidal volumes delivered for very short periods of time creates the lowest possible distal airway and alveolar pressures produced by a mechanical ventilator. Exhalation is passive. Jet ventilators utilize various I:E ratios—between 1:1.1 and 1:12—to help achieve optimal exhalation. Conventional mechanical breaths are sometimes used to aid in reinflating the lung. Optimal PEEP is used to maintain alveolar inflation and promote ventilation-to-perfusion matching. Jet ventilation has been shown to reduce ventilator-induced lung injury by as much as 20%. Usage of high-frequency jet ventilation is recommended in neonates and adults with severe lung injury.
Indications for use.
The Bunnell Life Pulse High-Frequency Ventilator is indicated for use in ventilating critically ill infants with pulmonary interstitial emphysema (PIE). Infants studied ranged in birth weight from 750 to 3529 grams and in gestation age from 24 to 41 weeks.
The Bunnell Life Pulse High-Frequency Ventilator is also indicated for use in ventilating
critically ill infants with respiratory distress syndrome (RDS) complicated by pulmonary air leaks who are, in the opinion of their physicians, failing on conventional ventilation. Infants of this description studied ranged in birth weight from 600 to 3660 grams and in gestational age from 24 to 38 weeks.
Adverse effects.
The adverse side effects noted during the use of high-frequency ventilation include those
commonly found during the use of conventional positive pressure ventilators. These adverse effects include:
Contraindications.
High-frequency jet ventilation is contraindicated in patients requiring tracheal tubes smaller than 2.5 mm ID.
Settings and parameters.
Settings that can be adjusted in HFJV include 1) inspiratory time, 2) driving pressure, 3) frequency, 4) FiO2, and 5) humidity. Increases in FiO2, inspiratory time, and frequency improve oxygenation (by increasing "auto-PEEP" or pause pressure), while an increase in driving pressure and a decrease in frequency improve ventilation.
Peak inspiratory pressure (PIP).
The peak inspiratory pressure (PIP) window displays the average PIP. During startup a PIP sample is taken with every inhalation cycle and is averaged with all other samples taken over the most recent ten-second period. After regular operation begins, samples are averaged over the most recent twenty-second period.
ΔP (Delta P).
The value displayed in the ΔP (pressure difference) window represents the difference between the PIP value and the PEEP value.
formula_0
Servo pressure.
The servo pressure display indicates the amount of pressure the machine must generate
internally in order to achieve the PIP appearing in the servo-display. Its value can range from 0—20 psi (0—137.9 kPa). If the PIP sensed or approximated at the distal tip of the tracheal tube deviates from the desired PIP, the machine automatically generates more or less internal pressure in an attempt to compensate for the change. The servo-pressure display keeps the operator informed.
The servo display is a general clinical indicator of changes in the compliance or resistance of the patient's lungs, as well as loss of lung volume due to tension pneumothorax.
High-frequency oscillatory ventilation.
In HFOV the airway is pressurized to a set mean airway pressure (called continuous lung-distending pressure) through an adjustable expiratory valve. Small pressure oscillations delivered at a very high rate are superimposed by the action of a “loudspeaker” oscillator membrane. HFOV is often used in premature neonates with respiratory distress syndrome who fail to oxygenate appropriately with lung-protective settings of conventional ventilation. It has also been used in ARDS in adults, but two studies (the OSCAR and OSCILLATE trials) showed negative results for this indication.
Parameters that can be set in HFOV includes the continuous lung-distending pressure, oscillation amplitude and frequency, I:E ratio (positive-oscillation/negative-oscillation ratio), fresh gas flow (called bias flow), and FiO2. Increases in continuous lung-distending pressure and FiO2 will improve oxygenation. Increases in amplitude or fresh gas flow and decreases in frequency will improve ventilation.
High-frequency percussive ventilation.
HFPV — High-frequency percussive ventilation combines HFV plus time cycled, pressure-limited controlled mechanical ventilation (i.e., pressure control ventilation, PCV).
High-frequency positive pressure ventilation.
HFPPV — High-frequency positive pressure ventilation is rarely used anymore, having been replaced by high-frequency jet, oscillatory and percussive types of ventilation. HFPPV is delivered through the endotracheal tube using a conventional ventilator whose frequency is set near its upper limits. HFPV began to be used in selected centres in the 1980s. It is a hybrid of conventional mechanical ventilation and high-frequency oscillatory ventilation. It has been used to salvage patients with persistent hypoxemia when on conventional mechanical ventilation or, in some cases, used as a primary modality of ventilatory support from the start.
High-frequency flow interruption.
HFFI — High Frequency Flow Interruption is similar to high-frequency jet ventilation but the gas control mechanism is different. Frequently a rotating bar or ball with a small opening is placed in the path of a high pressure gas. As the bar or ball rotates and the opening lines-up with the gas flow, a small, brief pulse of gas is allowed to enter the airway. Frequencies for HFFI are typically limited to maximum of about 15 hertz.
High-frequency ventilation (active).
High-frequency ventilation (active) — HFV-A is notable for the active exhalation mechanic included. Active exhalation means a negative pressure is applied to force volume out of the lungs. The CareFusion 3100A and 3100B are similar in all aspects except the target patient size. The 3100A is designed for use on patients up to 35 kilograms and the 3100B is designed for use on patients larger than 35 kilograms.
CareFusion 3100A and 3100B.
High-frequency oscillatory ventilation was first described in 1972 and is used in neonates and adult patient populations to reduce lung injury, or to prevent further lung injury. HFOV is characterized by high respiratory rates between 3.5 and 15 hertz (210 - 900 breaths per minute) and having both inhalation and exhalation maintained by active pressures. The rates used vary widely depending upon patient size, age, and disease process. In HFOV the pressure oscillates around the constant distending pressure (equivalent to mean airway pressure [MAP]) which in effect is the same as positive end-expiratory pressure (PEEP). Thus gas is pushed into the lung during inspiration, and then pulled out during expiration. HFOV generates very low tidal volumes that are generally less than the dead space of the lung. Tidal volume is dependent on endotracheal tube size, power and frequency. Different mechanisms (direct bulk flow - convective, Taylorian dispersion, Pendelluft effect, asymmetrical velocity profiles, cardiogenic mixing and molecular diffusion) of gas transfer are believed to come into play in HFOV compared to normal mechanical ventilation. It is often used in patients who have refractory hypoxemia that cannot be corrected by normal mechanical ventilation such as is the case in the following disease processes: severe ARDS, ALI and other oxygenation diffusion issues. In some neonatal patients HFOV may be used as the first-line ventilator due to the high susceptibility of the premature infant to lung injury from conventional ventilation.
Breath delivery.
The vibrations are created by an electromagnetic valve that controls a piston. The resulting vibrations are similar to those produced by a stereo speaker. The height of the vibrational wave is the amplitude. Higher amplitudes create greater pressure fluctuations which move more gas with each vibration. The number of vibrations per minute is the frequency. One Hertz equals 60 cycles per minute. The higher amplitudes at lower frequencies will cause the greatest fluctuation in pressure and move the most gas.
Altering the % inspiratory time (T%i) changes the proportion of the time in which the vibration or sound wave is above the baseline versus below it. Increasing the % Inspiratory Time will also increase the volume of gas moved or tidal volume. Decreasing the frequency, increasing the amplitude, and increasing the % inspiratory time will all increase tidal volume and eliminate CO2. Increasing the tidal volume will also tend to increase the mean airway pressure.
Settings and measurements.
Bias flow.
The bias flow controls and indicates the rate of continuous flow of humidified blended gas through the patient circuit. The control knob is a 15-turn pneumatic valve which increases flow as it is turned.
Mean pressure adjust.
The mean pressure adjust setting adjusts the mean airway pressure (PAW) by controlling the resistance of the airway pressure control valve. The mean airway pressure will change and requires the mean pressure adjust to be adjusted when the following settings are changed:
During high-frequency oscillatory ventilation (HFOV), PAW is the primary variable affecting oxygenation and is set independent of other variables on the oscillator. Because distal airway pressure changes during HFOV are minimal, the PAW during HFOV can be viewed in a manner similar to the PEEP level in conventional ventilation. The optimal PAW can be considered as a compromise between maximal lung recruitment and minimal overdistention.
Mean pressure limit.
The mean pressure limit controls the limit above which proximal PAW cannot be increased by setting the control pressure of the pressure limit valve. The mean pressure limit range is 10-45 cmH2O.
ΔP and amplitude.
The power setting is set as amplitude to establish a measured change of pressure (ΔP). Amplitude/power is a setting which determines the amount of power driving the oscillator piston forward and backward, resulting in an air-volume (tidal volume) displacement. The amplitude thus determines ΔP through the displacement of the oscillator piston, which generates the oscillatory pressure. The power setting interacts with PAW conditions existing within the patient circuit to produce the resulting ΔP.
% Inspiratory time.
The percent of inspiratory time is a setting which determines the percent of cycle time the piston is traveling toward (or at its final inspiratory position). The inspiratory percent range is 30—50%.
Frequency.
The frequency setting is measured in hertz (Hz). The control knob is a 10-turn clockwise-increasing potentiometer covering a range of 3 Hz to 15 Hz. The set frequency is displayed on a digital meter on the face of the ventilator. One hertz is equal (±5%) to 1 breath per second, or 60 breaths per minute (e.g., 10 Hz = 600 breaths per minute). Changes in frequency are inversely proportional to the amplitude and thus the delivered tidal volume.
formula_1
Oscillation trough pressure.
Oscillation trough pressure is the instantaneous pressure within the HFOV circuit following the oscillating piston reaching its complete negative deflection.
formula_2
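The three relations above (ΔP, the hertz-to-breaths-per-minute conversion, and the oscillation trough pressure) can be sketched as simple arithmetic helpers; the function names are illustrative only:

```python
def delta_p(pip: float, peep: float) -> float:
    """Pressure amplitude: peak inspiratory pressure minus PEEP."""
    return pip - peep

def breaths_per_minute(freq_hz: float) -> float:
    """1 Hz = 1 breath per second = 60 breaths per minute."""
    return freq_hz * 60.0

def trough_pressure(mean_airway_pressure: float, amplitude: float) -> float:
    """Oscillation trough pressure: OTP = MAP - (AMP / 3)."""
    return mean_airway_pressure - amplitude / 3.0

print(breaths_per_minute(10.0))      # 600.0
print(trough_pressure(12.0, 30.0))   # 2.0
```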
Transtracheal jet ventilation.
Transtracheal jet ventilation refers to a type of high-frequency, low tidal volume ventilation provided via a laryngeal catheter by specialized ventilators that are usually only available in the operating room or intensive care unit. This procedure is occasionally employed in the operating room when a difficult airway is anticipated (such as Treacher Collins syndrome, Robin sequence, or head and neck surgery with supraglottic or glottic obstruction).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\Delta p = P_{IP} - P_{EEP}"
},
{
"math_id": 1,
"text": "f = Hz \\cdot 60_{seconds}"
},
{
"math_id": 2,
"text": "OTP = MAP - (AMP/3)"
}
] |
https://en.wikipedia.org/wiki?curid=5915493
|
591568
|
Trigonometric polynomial
|
In the mathematical subfields of numerical analysis and mathematical analysis, a trigonometric polynomial is a finite linear combination of functions sin("nx") and cos("nx") with "n" taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. For complex coefficients, there is no difference between such a function and a finite Fourier series.
Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are used also in the discrete Fourier transform.
The term "trigonometric polynomial" for the real-valued case can be seen as using the analogy: the functions sin("nx") and cos("nx") are similar to the monomial basis for polynomials. In the complex case the trigonometric polynomials are spanned by the positive and negative powers of formula_0, i.e., Laurent polynomials in formula_1 under the change of variables formula_2.
Definition.
Any function "T" of the form
formula_3
with coefficients formula_4 and at least one of the highest-degree coefficients formula_5 and formula_6 non-zero, is called a "complex trigonometric polynomial" of degree "N". Using Euler's formula the polynomial can be rewritten as
formula_7
with formula_8.
Analogously, with coefficients formula_9, and at least one of formula_5 and formula_6 non-zero (or, equivalently, formula_10 and formula_11 for all formula_12), the function
formula_13
is called a "real trigonometric polynomial" of degree "N".
Properties.
A trigonometric polynomial can be considered a periodic function on the real line, with period some divisor of 2π, or as a function on the unit circle.
Trigonometric polynomials are dense in the space of continuous functions on the unit circle, with the uniform norm; this is a special case of the Stone–Weierstrass theorem. More concretely, for every continuous function "f" and every ε > 0 there exists a trigonometric polynomial "T" such that formula_14 for all "z". Fejér's theorem states that the arithmetic means of the partial sums of the Fourier series of "f" converge uniformly to "f" provided "f" is continuous on the circle; these partial sums can be used to approximate "f".
A trigonometric polynomial of degree "N" has a maximum of 2"N" roots in a real interval of length 2π unless it is the zero function.
Fejér-Riesz theorem.
The Fejér-Riesz theorem states that every positive "real" trigonometric polynomial
formula_15
satisfying formula_16 for all formula_17,
can be represented as the square of the modulus of another (usually "complex") trigonometric polynomial formula_18 such that:
formula_19
Or, equivalently, every Laurent polynomial
formula_20
with formula_21 that satisfies formula_22 for all formula_23 can be written as:
formula_24
for some polynomial formula_25.
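A minimal numerical check of the theorem's statement, using the hypothetical factor q(x) = 1 + 0.5e^{ix}, for which |q(x)|² expands to the positive real trigonometric polynomial 1.25 + cos(x):

```python
import cmath
import math

def q(x):
    """Hypothetical complex trigonometric polynomial q(x) = 1 + 0.5 e^{ix}."""
    return 1 + 0.5 * cmath.exp(1j * x)

def t(x):
    """|q(x)|^2 expands to 1.25 + cos(x), a positive real trig polynomial."""
    return 1.25 + math.cos(x)

# t is strictly positive (its minimum is 0.25) and equals |q|^2 everywhere
for k in range(100):
    x = 2 * math.pi * k / 100
    assert t(x) > 0
    assert abs(abs(q(x)) ** 2 - t(x)) < 1e-12
```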
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "e^{ix}"
},
{
"math_id": 1,
"text": "z "
},
{
"math_id": 2,
"text": "x \\mapsto z := e^{ix}"
},
{
"math_id": 3,
"text": "T(x) = a_0 + \\sum_{n=1}^N a_n \\cos (nx) + \\sum_{n=1}^N b_n \\sin(nx) \\qquad (x \\in \\mathbb{R})"
},
{
"math_id": 4,
"text": "a_n, b_n \\in \\mathbb{C}"
},
{
"math_id": 5,
"text": "a_N"
},
{
"math_id": 6,
"text": "b_N"
},
{
"math_id": 7,
"text": "T(x) = \\sum_{n=-N}^N c_n e^{inx} \\qquad (x \\in \\mathbb{R})."
},
{
"math_id": 8,
"text": "c_{n}\\in\\mathbb{C}"
},
{
"math_id": 9,
"text": "a_n, b_n \\in \\mathbb{R}"
},
{
"math_id": 10,
"text": "c_n \\in \\mathbb{R}"
},
{
"math_id": 11,
"text": "c_n = \\bar{c}_{-n}"
},
{
"math_id": 12,
"text": "n\\in[-N,N]"
},
{
"math_id": 13,
"text": "t(x) = a_0 + \\sum_{n=1}^N a_n \\cos (nx) + \\sum_{n=1}^N b_n \\sin(nx) \\qquad (x \\in \\mathbb{R})"
},
{
"math_id": 14,
"text": "|f(z) - T(z)| < \\epsilon"
},
{
"math_id": 15,
"text": "t(x) = \\sum_{n=-N}^{N} c_n e^{i n x},"
},
{
"math_id": 16,
"text": "t(x)>0"
},
{
"math_id": 17,
"text": "x\\in\\mathbb{R}"
},
{
"math_id": 18,
"text": "q(x)"
},
{
"math_id": 19,
"text": "t(x) = |q(x)|^2 = q(x)\\bar{q}(x)."
},
{
"math_id": 20,
"text": "w(z)=\\sum_{n=-N}^{N} w_{n}z^{n},"
},
{
"math_id": 21,
"text": "w_n \\in\\mathbb{C}"
},
{
"math_id": 22,
"text": "w(\\zeta)\\geq 0"
},
{
"math_id": 23,
"text": "\\zeta \\in \\mathbb{T}"
},
{
"math_id": 24,
"text": " w(\\zeta)=|p(\\zeta)|^2=p(\\zeta)\\bar{p}(\\bar{\\zeta}),"
},
{
"math_id": 25,
"text": "p(z)"
}
] |
https://en.wikipedia.org/wiki?curid=591568
|
59156946
|
Tropical projective space
|
In tropical geometry, a tropical projective space is the tropical analog of the classic projective space.
Definition.
Given a module "M" over the tropical semiring T, its projectivization is the usual projective space of a module: the quotient space of the module (omitting the additive identity 0) under scalar multiplication, omitting multiplication by the scalar additive identity 0:
formula_0
In the tropical setting, tropical multiplication is classical addition, with unit real number 0 (not 1); tropical addition is minimum or maximum (depending on convention), with unit extended real number ∞ (not 0), so it is clearer to write this using the extended real numbers, rather than the abstract algebraic units:
formula_1
Just as in the classical case, the standard n-dimensional tropical projective space is defined as the quotient of the standard ("n"+1)-dimensional coordinate space by scalar multiplication, with all operations defined coordinate-wise:
formula_2
Tropical multiplication corresponds to classical addition, so tropical scalar multiplication by "c" corresponds to adding "c" to all coordinates. Thus two elements of the tropical projective space are identified if their coordinates differ by the same additive amount "c":
formula_3
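The identification above can be sketched by normalizing each point so that its first coordinate is 0 (a minimal sketch assuming all coordinates are finite, i.e., no ∞ entries):

```python
def tp_normalize(point):
    """Canonical representative in tropical projective space: tropical
    scalar multiplication is coordinate-wise addition, so subtract the
    first coordinate from every coordinate."""
    c = point[0]
    return tuple(x - c for x in point)

def tp_equal(p, q):
    """Two points are identified iff their coordinates differ by a constant."""
    return tp_normalize(p) == tp_normalize(q)

assert tp_equal((0, 1, 4), (3, 4, 7))      # differ by the constant c = 3
assert not tp_equal((0, 1, 4), (0, 2, 4))
```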
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathbf{T}(M) := (M \\setminus \\mathbf{0})/(\\mathbf{T} \\setminus 0)."
},
{
"math_id": 1,
"text": "\\mathbf{T}(M) := (M \\setminus \\boldsymbol{\\infty})/(\\mathbf{T} \\setminus \\infty)."
},
{
"math_id": 2,
"text": "\\mathbf{TP}^n := (\\mathbf{T}^{n+1} \\setminus \\boldsymbol{\\infty})/(\\mathbf{T} \\setminus \\infty)."
},
{
"math_id": 3,
"text": "(x_0, \\dots, x_n) \\sim (y_0, \\dots, y_n) \\iff (x_0 + c, \\dots, x_n + c) = (y_0, \\dots, y_n)."
}
] |
https://en.wikipedia.org/wiki?curid=59156946
|
59158118
|
Idempotent analysis
|
Area of math
In mathematical analysis, idempotent analysis is the study of idempotent semirings, such as the tropical semiring. The lack of an additive inverse in the semiring is compensated somewhat by the idempotent rule formula_0.
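With min as tropical addition (and classical addition as tropical multiplication), the idempotent rule is immediate:

```python
# Min-plus ("tropical") semiring: addition is min, multiplication is +
def tropical_add(a: float, b: float) -> float:
    return min(a, b)

def tropical_mul(a: float, b: float) -> float:
    return a + b

assert tropical_add(3.5, 3.5) == 3.5    # idempotency: A (+) A = A
assert tropical_mul(2.0, 0.0) == 2.0    # 0 is the multiplicative unit
```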
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "A \\oplus A = A"
}
] |
https://en.wikipedia.org/wiki?curid=59158118
|
59158120
|
Tropical analysis
|
Study of the tropical semiring
In the mathematical discipline of idempotent analysis, tropical analysis is the study of the tropical semiring.
Applications.
The max tropical semiring can be used appropriately to determine marking times within a given Petri net and a vector filled with marking state at the beginning: formula_0 (unit for max, tropical addition) means "never before", while 0 (unit for addition, tropical multiplication) is "no additional time".
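A minimal sketch of this max-plus convention, with a toy two-event delay matrix (the matrix values are illustrative only): a tropical matrix-vector product propagates event times through delays, with −∞ playing the role of "never before" and 0 the role of "no additional time".

```python
NEG_INF = float("-inf")  # tropical additive unit: "never before"

def maxplus_matvec(A, x):
    """Tropical (max-plus) matrix-vector product:
    y[i] = max_j (A[i][j] + x[j])."""
    return [max(A[i][j] + x[j] for j in range(len(x)))
            for i in range(len(A))]

# Toy timing system: entry A[i][j] is the delay from event j to event i
A = [[2, NEG_INF],   # event 0 follows event 0 after 2 time units
     [1, 3]]         # event 1 follows event 0 after 1, or event 1 after 3
x0 = [0, 0]          # both events available at time 0
x1 = maxplus_matvec(A, x0)   # next firing times: [2, 3]
```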
Tropical cryptography is cryptography based on the tropical semiring.
Tropical geometry is an analog to algebraic geometry, using the tropical semiring.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "-\\infty"
}
] |
https://en.wikipedia.org/wiki?curid=59158120
|
59158299
|
Up-and-down design
|
Statistical experiment designs
Up-and-down designs (UDDs) are a family of statistical experiment designs used in dose-finding experiments in science, engineering, and medical research. Dose-finding experiments have "binary responses": each individual outcome can be described as one of two possible values, such as success vs. failure or toxic vs. non-toxic. Mathematically the binary responses are coded as 1 and 0. The goal of dose-finding experiments is to estimate the strength of treatment (i.e., the "dose") that would trigger the "1" response a pre-specified proportion of the time. This dose can be envisioned as a percentile of the distribution of response thresholds. An example where dose-finding is used is in an experiment to estimate the LD50 of some toxic chemical with respect to mice.
Dose-finding designs are sequential and response-adaptive: the dose at a given point in the experiment depends upon previous outcomes, rather than being fixed "a priori". Dose-finding designs are generally more efficient for this task than fixed designs, but their properties are harder to analyze, and some require specialized design software. UDDs use a discrete set of doses rather than varying the dose continuously. They are relatively simple to implement, and are also among the best-understood dose-finding designs. Despite this simplicity, UDDs generate random walks with intricate properties. The original UDD aimed to find the median threshold by increasing the dose one level after a "0" response, and decreasing it one level after a "1" response. Hence the name "up-and-down". Other UDDs break this symmetry in order to estimate percentiles other than the median, or are able to treat groups of subjects rather than one at a time.
UDDs were developed in the 1940s by several research groups independently. The 1950s and 1960s saw rapid diversification with UDDs targeting percentiles other than the median, and expanding into numerous applied fields. The 1970s to early 1990s saw little UDD methods research, even as the design continued to be used extensively. A revival of UDD research since the 1990s has provided deeper understanding of UDDs and their properties, and new and better estimation methods.
UDDs are still used extensively in the two applications for which they were originally developed: psychophysics where they are used to estimate sensory thresholds and are often known as fixed forced-choice staircase procedures, and explosive sensitivity testing, where the median-targeting UDD is often known as the Bruceton test. UDDs are also very popular in toxicity and anesthesiology research. They are also considered a viable choice for Phase I clinical trials.
Mathematical description.
Definition.
Let formula_0 be the sample size of a UDD experiment, and assume for now that subjects are treated one at a time. Then the doses these subjects receive, denoted as random variables formula_1, are chosen from a discrete, finite set of formula_2 increasing "dose levels" formula_3 Furthermore, if formula_4, then formula_5 according to simple constant rules based on recent responses. The next subject must be treated one level up, one level down, or at the same level as the current subject. The responses themselves are denoted formula_6; hereafter, the "1" responses are positive and "0" negative. The repeated application of the same rules (known as "dose-transition rules") over a finite set of dose levels turns formula_1 into a random walk over formula_7. Different dose-transition rules produce different UDD "flavors", such as the three shown in the figure above.
Despite the experiment using only a discrete set of dose levels, the dose-magnitude variable itself, formula_8, is assumed to be continuous, and the probability of positive response is assumed to increase continuously with increasing formula_8. The goal of dose-finding experiments is to estimate the dose formula_8 (on a continuous scale) that would trigger positive responses at a pre-specified target rate formula_9; often known as the "target dose". This problem can be also expressed as estimation of the quantile formula_10 of a cumulative distribution function describing the dose-toxicity curve formula_11. The density function formula_12 associated with formula_11 is interpretable as the distribution of "response thresholds" of the population under study.
Transition probability matrix.
Given that a subject receives dose formula_13, denote the probability that the next subject receives dose formula_14, or formula_15, as formula_16 or formula_17, respectively. These "transition probabilities" obey the constraints formula_18 and the boundary conditions formula_19.
Each specific set of UDD rules enables the symbolic calculation of these probabilities, usually as a function of formula_11. The transition probabilities are assumed fixed in time, depending only upon the current allocation and its outcome, i.e., upon formula_20, and through them upon formula_11 (and possibly on a set of fixed parameters). The probabilities are then best represented via a tri-diagonal transition probability matrix (TPM) formula_21:
formula_22
Balance point.
Usually, UDD dose-transition rules bring the dose down (or at least bar it from escalating) after positive responses, and vice versa. Therefore, UDD random walks have a central tendency: dose assignments tend to meander back and forth around some dose formula_23 that can be calculated from the transition rules, when those are expressed as a function of formula_11. This dose has often been confused with the experiment's formal target formula_10, and the two are often identical - but they do not have to be. The target is the dose that the experiment is tasked with estimating, while formula_23, known as the "balance point", is approximately where the UDD's random walk revolves around.
Stationary distribution of dose allocations.
Since UDD random walks are regular Markov chains, they generate a stationary distribution of dose allocations, formula_24, once the effect of the manually-chosen starting dose wears off. This means, long-term visit frequencies to the various doses will approximate a steady state described by formula_24. According to Markov chain theory the starting-dose effect wears off rather quickly, at a geometric rate. Numerical studies suggest that it would typically take between formula_25 and formula_26 subjects for the effect to wear off nearly completely. formula_24 is also the asymptotic distribution of cumulative dose allocations.
UDDs' central tendencies ensure that long-term, the most frequently visited dose (i.e., the mode of formula_24) will be one of the two doses closest to the balance point formula_23. If formula_23 is outside the range of allowed doses, then the mode will be on the boundary dose closest to it. Under the original median-finding UDD, the mode will be at the closest dose to formula_23 in any case. Away from the mode, asymptotic visit frequencies decrease sharply, at a faster-than-geometric rate. Even though a UDD experiment is still a random walk, long excursions away from the region of interest are very unlikely.
Common UDDs.
Original ("simple" or "classical") UDD.
The original "simple" or "classical" UDD moves the dose up one level upon a negative response, and vice versa. Therefore, the transition probabilities are
formula_27
We use the original UDD as an example for calculating the balance point formula_23. The design's 'up' and 'down' functions are formula_28 We equate them to find formula_29:
formula_30
The "classical" UDD is designed to find the median threshold. This is a case where formula_31
The "classical" UDD can be seen as a special case of each of the more versatile designs described below.
Durham and Flournoy's biased coin design.
This UDD shifts the balance point by adding the option of treating the next subject at the same dose rather than moving only up or down. Whether to stay is determined by a random toss of a metaphoric "coin" with probability formula_32 This biased-coin design (BCD) has two "flavors", one for formula_33 and one for formula_34 whose rules are shown below:
formula_35
The heads probability formula_36 can take any value in formula_37. The balance point is
formula_38
The BCD balance point can be made identical to a target rate formula_10 by setting the heads probability to formula_39. For example, for formula_40 set formula_41. Setting formula_42 makes this design identical to the classical UDD, and inverting the rules by imposing the coin toss upon positive rather than negative outcomes produces above-median balance points. Versions with two coins, one for each outcome, have also been published, but they do not seem to offer an advantage over the simpler single-coin BCD.
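A two-line numeric sketch of these relations, using the worked example from the text (the function names are illustrative, not a published API):

```python
# Sketch: Durham-Flournoy biased-coin design, targeting a rate Gamma < 0.5.
def bcd_heads_probability(target):
    """Heads probability b = Gamma / (1 - Gamma) for target rate Gamma."""
    return target / (1.0 - target)

def bcd_balance_point(b):
    """Balance point F* = b / (1 + b) of the biased-coin design."""
    return b / (1.0 + b)

b = bcd_heads_probability(0.3)
print(b, bcd_balance_point(b))  # b = 3/7, and the balance point recovers 0.3
```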
Group (cohort) UDDs.
Some dose-finding experiments, such as phase I trials, require a waiting period of weeks before determining each individual outcome. It may be preferable, then, to be able to treat several subjects at once or in rapid succession. With group UDDs, the transition rules are applied to cohorts of fixed size formula_43 rather than to individuals. formula_44 becomes the dose given to cohort formula_45, and formula_46 is the number of positive responses in the formula_45-th cohort, rather than a binary outcome. Given that the formula_45-th cohort is treated at formula_4 on the interior of formula_7, the formula_47-th cohort is assigned to
formula_48
formula_46 follows a binomial distribution conditional on formula_44, with parameters formula_43 and formula_49. The up and down probabilities are the binomial distribution's tails, and the stay probability is its center (it is zero if formula_50). A specific choice of parameters can be abbreviated as GUDformula_51
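The tail structure can be sketched as follows (a minimal illustration; the parameter values are arbitrary):

```python
from math import comb

def group_udd_probs(s, l, u, F):
    """Up/down/stay probabilities for GUD_(s,l,u) at a dose with toxicity rate F.
    Y ~ Binomial(s, F); escalate if Y <= l, de-escalate if Y >= u, else stay."""
    pmf = [comb(s, y) * F**y * (1 - F) ** (s - y) for y in range(s + 1)]
    up = sum(pmf[: l + 1])        # lower tail
    down = sum(pmf[u:])           # upper tail
    stay = sum(pmf[l + 1 : u])    # center (empty, i.e. zero, when u = l + 1)
    return up, down, stay

up, down, stay = group_udd_probs(3, 0, 2, 0.2)
print(up, down, stay)
```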
Nominally, group UDDs generate formula_43-order random walks, since the formula_43 most recent observations are needed to determine the next allocation. However, with cohorts viewed as single mathematical entities, these designs generate a first-order random walk having a tri-diagonal TPM as above. Some relevant group UDD subfamilies: symmetric designs with formula_52 such as GUDformula_53 target the median; and the family GUDformula_54 which escalates only when none of the cohort's responses are positive, has 'up' probability formula_55 and setting this probability equal to formula_56 yields the balance point
formula_57
With formula_58 this balance point would be associated with formula_59 and formula_60, respectively. The mirror-image family GUDformula_61 has its balance points at one minus these probabilities.
For general group UDDs, the balance point can be calculated only numerically, by finding the dose formula_23 with toxicity rate formula_29 such that
formula_62
Any numerical root-finding algorithm, e.g., Newton–Raphson, can be used to solve for formula_29.
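For instance, a plain bisection search (sketch code, not a published implementation) solves the tail-balance equation; for GUDformula_65 it recovers the closed form given above, and for a symmetric design it returns the median:

```python
from math import comb

def balance_point(s, l, u, tol=1e-12):
    """Numerically solve for F* where the binomial down-tail equals the up-tail:
    P(Y >= u) = P(Y <= l), with Y ~ Binomial(s, F*).  Simple bisection on (0, 1)."""
    def g(F):
        pmf = [comb(s, y) * F**y * (1 - F) ** (s - y) for y in range(s + 1)]
        return sum(pmf[u:]) - sum(pmf[: l + 1])  # down minus up; increasing in F
    lo, hi = 1e-9, 1 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(balance_point(2, 0, 1))  # close to 1 - (1/2)**(1/2) = 0.2929...
```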
formula_63-in-a-row (or "transformed" or "geometric") UDD.
This is the most commonly used non-median UDD. It was introduced by Wetherill in 1963, and disseminated shortly thereafter by him and colleagues to psychophysics, where it remains one of the standard methods to find sensory thresholds. Wetherill called it a "transformed" UDD; Misrak Gezmu, who was the first to analyze its random-walk properties, called it a "geometric" UDD in the 1990s; and in the 2000s the more straightforward name "formula_63-in-a-row" UDD was adopted. The design's rules are deceptively simple:
formula_64
Every dose escalation requires formula_63 non-toxicities observed on consecutive data points, all at the current dose, while de-escalation only requires a single toxicity. It closely resembles GUDformula_65 described above, and indeed shares the same balance point. The difference is that formula_63-in-a-row can bail out of a dose level upon the first toxicity, whereas its group UDD sibling might treat the entire cohort at once, and therefore might see more than one toxicity before descending.
The method used in sensory studies is actually the mirror-image of the one defined above, with formula_63 successive responses required for a de-escalation and only one non-response for escalation, yielding formula_66 for formula_67.
formula_63-in-a-row generates a formula_63-th order random walk because knowledge of the last formula_63 responses might be needed. It can be represented as a first-order chain with formula_68 states, or as a Markov chain with formula_2 levels, each having formula_63 "internal states" labeled formula_69 to formula_70. The internal state serves as a counter of the number of immediately recent consecutive non-toxicities observed at the current dose. This description is closer to the physical dose-allocation process, because subjects at different internal states of level formula_71 are all assigned the same dose formula_13. Either way, the TPM is formula_72 (or, more precisely, formula_73, because the internal counter is meaningless at the highest dose) - and it is not tridiagonal.
Here is the expanded formula_63-in-a-row TPM with formula_74 and formula_75, using the abbreviation formula_76 Each level's internal states are adjacent to each other.
formula_77
formula_63-in-a-row is often considered for clinical trials targeting a low-toxicity dose. In this case, the balance point and the target are not identical; rather, formula_63 is chosen to aim close to the target rate, e.g., formula_74 for studies targeting the 30th percentile, and formula_78 for studies targeting the 20th percentile.
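As a sketch, the expanded TPM displayed above can be assembled programmatically and checked against its entries; the toxicity rates below are hypothetical illustration values, not trial data.

```python
# Sketch: the expanded k-in-a-row transition matrix for k = 2, M = 5.
k, M = 2, 5
F = [0.05, 0.15, 0.30, 0.50, 0.70]      # stand-ins for F(d_1), ..., F(d_5)
n_states = (M - 1) * k + 1              # the top dose needs no internal counter

P = [[0.0] * n_states for _ in range(n_states)]
for m in range(1, M + 1):               # dose levels, 1-based
    for j in range(1 if m == M else k): # internal states at this level
        i = (m - 1) * k + j             # row index of state (m, j)
        tox = F[m - 1]
        down = (max(m - 1, 1) - 1) * k  # toxicity: drop a level (or stay at bottom)
        P[i][down] += tox
        if m == M:
            P[i][i] += 1 - tox          # top dose: non-toxicity stays put
        elif j < k - 1:
            P[i][i + 1] += 1 - tox      # advance the consecutive-non-toxicity counter
        else:
            P[i][m * k] += 1 - tox      # k in a row: escalate, counter resets
```

Every row sums to one, and the nonzero entries match the displayed matrix.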
Estimating the target dose.
Unlike other design approaches, UDDs do not have a specific estimation method "bundled in" with the design as a default choice. Historically, the more common choice has been some weighted average of the doses administered, usually excluding the first few doses to mitigate the starting-point bias. This approach antedates deeper understanding of UDDs' Markov properties, but its success in numerical evaluations relies upon the eventual sampling from formula_24, since the latter is centered roughly around formula_79
The single most popular among these "averaging estimators" was introduced by Wetherill et al. in 1966, and only includes "reversal points" (points where the outcome switches from 0 to 1 or vice versa) in the average. In recent years, the limitations of averaging estimators have come to light, in particular the many sources of bias that are very difficult to mitigate. Reversal estimators suffer from both multiple biases (although there is some inadvertent cancelling out of biases), and increased variance due to using a subsample of doses. However, the knowledge about averaging-estimator limitations has yet to disseminate outside the methodological literature and affect actual practice.
By contrast, "regression estimators" attempt to approximate the curve formula_80 describing the dose-response relationship, in particular around the target percentile. The raw data for the regression are the doses formula_13 on the horizontal axis, and the observed toxicity frequencies,
formula_81
on the vertical axis. The target estimate is the abscissa of the point where the fitted curve crosses formula_82
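As a minimal sketch of this approach, with made-up dose-outcome data and plain linear interpolation standing in for an isotonic or CIR fit (the empirical frequencies below happen to be monotone, which a real isotonic fit would enforce):

```python
# Sketch: pooled toxicity frequencies and a linear-interpolation target estimate.
doses = [1.0, 1.0, 2.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0, 4.0]   # illustration data
Y     = [0,   0,   0,   1,   0,   1,   0,   1,   1,   1  ]
Gamma = 0.5  # target rate

levels = sorted(set(doses))
F_hat = []
for d in levels:
    outcomes = [y for x, y in zip(doses, Y) if x == d]
    F_hat.append(sum(outcomes) / len(outcomes))   # observed toxicity frequency

# Abscissa where the (assumed increasing) empirical curve crosses Gamma.
target = None
for (d0, f0), (d1, f1) in zip(zip(levels, F_hat), zip(levels[1:], F_hat[1:])):
    if f0 <= Gamma <= f1:
        target = d0 + (Gamma - f0) * (d1 - d0) / (f1 - f0)
        break
print(F_hat, target)
```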
Probit regression has been used for many decades to estimate UDD targets, although far less commonly than the reversal-averaging estimator. In 2002, Stylianou and Flournoy introduced an interpolated version of isotonic regression (IR) to estimate UDD targets and other dose-response data. More recently, a modification called "centered isotonic regression" (CIR) was developed by Oron and Flournoy, promising substantially better estimation performance than ordinary isotonic regression in most cases, and also offering the first viable interval estimator for isotonic regression in general. Isotonic regression estimators appear to be the most compatible with UDDs, because both approaches are nonparametric and relatively robust. The publicly available R package "cir" implements both CIR and IR for dose-finding and other applications.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "X_1,\\ldots,X_n"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "\\mathcal{X}=\\left\\{d_1,\\ldots ,d_M :\\ d_1 <\\cdots <d_M\\right\\}."
},
{
"math_id": 4,
"text": "X_i=d_m"
},
{
"math_id": 5,
"text": "X_{i+1}\\in\\{d_{m-1},d_m,d_{m+1}\\},"
},
{
"math_id": 6,
"text": "Y_1,\\ldots,Y_n \\in\\left\\{0,1\\right\\};"
},
{
"math_id": 7,
"text": "\\mathcal{X}"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "\\Gamma=P\\left\\{Y=1\\mid X=x\\right\\}, \\ \\ \\Gamma\\in(0,1)"
},
{
"math_id": 10,
"text": "F^{-1}(\\Gamma)"
},
{
"math_id": 11,
"text": "F(x)"
},
{
"math_id": 12,
"text": "f(x)"
},
{
"math_id": 13,
"text": "d_m"
},
{
"math_id": 14,
"text": "d_{m-1},d_m"
},
{
"math_id": 15,
"text": "d_{m+1}"
},
{
"math_id": 16,
"text": "p_{m,m-1},p_{mm}"
},
{
"math_id": 17,
"text": "p_{m,m+1}"
},
{
"math_id": 18,
"text": "p_{m,m-1}+p_{mm}+p_{m,m+1}=1"
},
{
"math_id": 19,
"text": "p_{1,0}=p_{M,M+1}=0"
},
{
"math_id": 20,
"text": "\\left(X_i,Y_i\\right)"
},
{
"math_id": 21,
"text": "\\mathbf{P}"
},
{
"math_id": 22,
"text": "\n\\bf{P}=\\left(\n\\begin{array}{cccccc}\n p_{11}& p_{12} & 0 & \\cdots & \\cdots & 0 \\\\\n p_{21} & p_{22} & p_{23} & 0 & \\ddots & \\vdots \\\\\n 0 & \\ddots & \\ddots & \\ddots & \\ddots & \\vdots \\\\\n \\vdots & \\ddots & \\ddots & \\ddots & \\ddots & 0 \\\\\n \\vdots & \\ddots & 0 & p_{M-1,M-2} & p_{M-1,M-1} & p_{M-1,M} \\\\\n 0 & \\cdots & \\cdots & 0 & p_{M,M-1} & p_{MM}\\\\\n\\end{array}\n\\right).\n"
},
{
"math_id": 23,
"text": "x^*"
},
{
"math_id": 24,
"text": "\\pi"
},
{
"math_id": 25,
"text": "2/M"
},
{
"math_id": 26,
"text": "4/M"
},
{
"math_id": 27,
"text": "\\begin{array}{rl}\np_{m,m+1}&=P\\{Y_i=0|X_i=d_m\\}=1-F(d_m);\\\\\np_{m,m-1}&=P\\{Y_i=1|X_i=d_m\\}=F(d_m).\n\\end{array}"
},
{
"math_id": 28,
"text": "p(x)=1-F(x),q(x)=F(x)."
},
{
"math_id": 29,
"text": "F^*"
},
{
"math_id": 30,
"text": "\n1-F^*=F^*\\ \\longrightarrow \\ F^*=0.5.\n"
},
{
"math_id": 31,
"text": "F^*=\\Gamma."
},
{
"math_id": 32,
"text": "b=P\\{\\textrm{heads}\\}."
},
{
"math_id": 33,
"text": "F^*>0.5"
},
{
"math_id": 34,
"text": "F^*<0.5,"
},
{
"math_id": 35,
"text": " X_{i+1} =\n\\begin{array}{ll}\nd_{m+1} & \\textrm{if }\\ \\ Y_i=0\\ \\ \\&\\ \\ \\textrm{ 'heads'};\\\\\nd_{m-1} & \\textrm{if }\\ \\ Y\\_i=1;\\\\\nd_m & \\textrm{if }\\ \\ Y_i=0\\ \\ \\& \\ \\ \\textrm{ 'tails'}.\\\\\n\\end{array}\n"
},
{
"math_id": 36,
"text": "b"
},
{
"math_id": 37,
"text": "[0,1]"
},
{
"math_id": 38,
"text": "\n\\begin{array}{rcl}\n b\\left(1-F^*\\right) &=& F^*\\\\\n F^* &=& \\frac{b}{1+b}\\in[0,0.5].\n\\end{array}\n"
},
{
"math_id": 39,
"text": "b=\\Gamma/(1-\\Gamma)"
},
{
"math_id": 40,
"text": "\\Gamma=0.3"
},
{
"math_id": 41,
"text": "b=3/7"
},
{
"math_id": 42,
"text": "b=1"
},
{
"math_id": 43,
"text": "s"
},
{
"math_id": 44,
"text": "X_i"
},
{
"math_id": 45,
"text": "i"
},
{
"math_id": 46,
"text": "Y_i"
},
{
"math_id": 47,
"text": "i+1"
},
{
"math_id": 48,
"text": "\nX_{i+1}=\n\\begin{cases}\nd_{m+1} &\\textrm{if}\\ \\ Y_i\\le l;\\\\\nd_{m-1} &\\textrm{if}\\ \\ Y_i\\ge u;\\\\\nd_m &\\textrm{if}\\ \\ l<Y_i<u.\n\\end{cases}\n"
},
{
"math_id": 49,
"text": "F(X_i)"
},
{
"math_id": 50,
"text": "u=l+1"
},
{
"math_id": 51,
"text": "_{(s,l,u)}."
},
{
"math_id": 52,
"text": "l+u=s"
},
{
"math_id": 53,
"text": "_{(2,0,2)}"
},
{
"math_id": 54,
"text": "_{(s,0,1)},"
},
{
"math_id": 55,
"text": "\\left(1-F(x)\\right)^s,"
},
{
"math_id": 56,
"text": "1/2"
},
{
"math_id": 57,
"text": "\n F^*=1-\\left(\\frac {1}{2}\\right)^{1/s}.\n"
},
{
"math_id": 58,
"text": "s=2,3,4"
},
{
"math_id": 59,
"text": "F^*\\approx 0.293,0.206"
},
{
"math_id": 60,
"text": "0.159"
},
{
"math_id": 61,
"text": "_{(s,s-1,s)}"
},
{
"math_id": 62,
"text": "\n\\sum_{r=u}^s\n\\left(\\begin{array}{c}\ns\\\\\nr\\\\\n\\end{array}\\right) \\left(F^*\\right)^r(1-F^*)^{s-r}=\n\\sum_{t=0}^{l}\n\\left(\\begin{array}{c}\ns\\\\\nt\\\\\n\\end{array}\\right) \\left(F^*\\right)^t(1-F^*)^{s-t}.\n"
},
{
"math_id": 63,
"text": "k"
},
{
"math_id": 64,
"text": "\n X_{i+1}=\n \\begin{cases}\nd_{m+1} &\\textrm{if}\\ \\ Y_{i-k+1}=\\cdots=Y_i=0,\\ \\ \\textrm{ all}\\ \\textrm{observed}\\ \\textrm{at}\\ \\ d_m;\\\\\nd_{m-1} &\\textrm{if}\\ \\ Y_i=1; \\\\\nd_m &\\textrm{otherwise},\n \\end{cases}\n"
},
{
"math_id": 65,
"text": "_{(s,0,1)}"
},
{
"math_id": 66,
"text": "F^*\\approx 0.707,0.794,0.841,\\ldots"
},
{
"math_id": 67,
"text": "k=2,3,4,\\ldots"
},
{
"math_id": 68,
"text": "Mk"
},
{
"math_id": 69,
"text": "0"
},
{
"math_id": 70,
"text": "k-1"
},
{
"math_id": 71,
"text": "m"
},
{
"math_id": 72,
"text": "Mk\\times Mk"
},
{
"math_id": 73,
"text": "\\left[(M-1)k+1)\\right]\\times \\left[(M-1)k+1)\\right]"
},
{
"math_id": 74,
"text": "k=2"
},
{
"math_id": 75,
"text": "M=5"
},
{
"math_id": 76,
"text": "F_m\\equiv F\\left(d_m\\right)."
},
{
"math_id": 77,
"text": " \n\\begin{bmatrix}\n F_1 & 1-F_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n F_1 & 0 & 1-F_1 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n F_2 & 0 & 0 & 1-F_2 & 0 & 0 & 0 & 0 & 0 \\\\\n F_2 & 0 & 0 & 0 & 1-F_2 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & F_3 & 0 & 0 &1-F_3 & 0 & 0 & 0 \\\\\n 0 & 0 & F_3 & 0 & 0 & 0 & 1-F_3 & 0 & 0 \\\\\n 0 & 0 & 0 & 0 & F_4 & 0 & 0 &1-F_4 & 0 \\\\\n 0 & 0 & 0 & 0 & F_4 & 0 & 0 & 0 & 1-F_4 \\\\\n 0 & 0 & 0 & 0 & 0 & 0 & F_5 & 0 & 1-F_5 \\\\\n\\end{bmatrix}."
},
{
"math_id": 78,
"text": "k=3"
},
{
"math_id": 79,
"text": "x^*."
},
{
"math_id": 80,
"text": "y=F(x)"
},
{
"math_id": 81,
"text": "\n\\hat{F}_m=\\frac{\\sum_{i=1}^n Y_iI\\left[X_i=d_m\\right]}{\\sum_{i=1}^n I\\left[X_i=d_m\\right]},\\ m=1,\\ldots,M,\n"
},
{
"math_id": 82,
"text": "y=\\Gamma."
}
] |
https://en.wikipedia.org/wiki?curid=59158299
|
59158616
|
Liñán's equation
|
Type of ordinary differential equation
In the study of diffusion flames, Liñán's equation is a second-order nonlinear ordinary differential equation which describes the inner structure of the diffusion flame, first derived by Amable Liñán in 1974. The equation reads as
formula_0
subjected to the boundary conditions
formula_1
where formula_2 is the reduced or rescaled Damköhler number and formula_3 is the ratio of excess heat conducted to one side of the reaction sheet to the total heat generated in the reaction zone. If formula_4, more heat is transported to the oxidizer side, thereby reducing the reaction rate on the oxidizer side (since the reaction rate depends on the temperature), and consequently a greater amount of fuel leaks into the oxidizer side. If formula_5, more heat is transported to the fuel side of the diffusion flame, reducing the reaction rate on the fuel side of the flame and increasing the oxidizer leakage into the fuel side. When formula_6 formula_7, all the heat is transported to the oxidizer (fuel) side, and the flame therefore sustains an extremely large amount of fuel (oxidizer) leakage.
The equation is, in some respects, universal (it is also called the canonical equation of the diffusion flame): although Liñán derived it for stagnation-point flow, assuming unity Lewis numbers for the reactants, the same equation is found to represent the inner structure of general laminar flamelets having arbitrary Lewis numbers.
Existence of solutions.
Near the extinction of the diffusion flame, formula_2 is of order unity. The equation has no solution for formula_8, where formula_9 is the extinction Damköhler number. For formula_10 with formula_11, the equation possesses two solutions, of which one is an unstable solution. A unique solution exists if formula_12 and formula_10. The solution is also unique for formula_13, where formula_14 is the ignition Damköhler number.
Liñán also gave a correlation formula for the extinction Damköhler number, which is increasingly accurate for formula_15,
formula_16
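The correlation is straightforward to evaluate numerically; the sketch below (added for illustration) shows it vanishing at formula_6, consistent with the formula's construction in powers of formula_15:

```python
from math import e

def delta_E(gamma):
    """Linan's correlation for the extinction Damkohler number,
    increasingly accurate as 1 - gamma becomes small."""
    u = 1.0 - gamma
    return e * (u - u**2 + 0.26 * u**3 + 0.055 * u**4)

print(delta_E(1.0), delta_E(0.9))
```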
Generalized Liñán's equation.
The generalized Liñán's equation is given by
formula_17
where formula_18 and formula_19 are constant reaction orders of fuel and oxidizer, respectively.
Large Damköhler number limit.
In the Burke–Schumann limit, formula_20. Then the equation reduces to
formula_21
An approximate solution to this equation was developed by Liñán himself, using an integral method, in 1963 for his thesis,
formula_22
where formula_23 is the error function and
formula_24
Here formula_25 is the location where formula_26 reaches its minimum value formula_27. When formula_28, formula_29, formula_30 and formula_31.
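These constants can be evaluated directly; the sketch below (added for illustration) reproduces the quoted values for formula_28:

```python
from math import erf, exp, pi, sqrt

def linan_profile_constants(m, n):
    """Constants of Linan's 1963 integral-method approximation for the
    Burke-Schumann limit with reaction orders m and n."""
    y_m = ((m + n) / 2) * (
        (2 / (pi * m**2 * n**2)) * (sqrt(1 + pi * (m + n) / (2 * m * n)) - 1)
    ) ** (1 / (m + n + 1))
    zeta_m = (n - m) / (n + m) * y_m
    k = (sqrt(pi) / 2) * m**m * n**n * (2 * y_m / (m + n)) ** (m + n)
    return zeta_m, y_m, k

def y_approx(zeta, m, n):
    """The approximate profile y(zeta) built from the constants above."""
    zeta_m, y_m, k = linan_profile_constants(m, n)
    return (y_m + (zeta - zeta_m) * erf(k * (zeta - zeta_m))
            - (1 - exp(-k**2 * (zeta - zeta_m) ** 2)) / (sqrt(pi) * k))

print(linan_profile_constants(1, 1))  # (0.0, ~0.8702, ~0.6711)
```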
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{d^2y}{d\\zeta^2} =(y^2-\\zeta^2)e^{-\\delta^{-1/3}(y+\\gamma \\zeta)} "
},
{
"math_id": 1,
"text": "\n\\begin{align}\n\\zeta\\rightarrow -\\infty : &\\quad \\frac{dy}{d\\zeta}=-1,\\\\\n\\zeta\\rightarrow +\\infty : &\\quad \\frac{dy}{d\\zeta}=+1\n\\end{align}"
},
{
"math_id": 2,
"text": "\\delta"
},
{
"math_id": 3,
"text": "\\gamma"
},
{
"math_id": 4,
"text": "\\gamma>0"
},
{
"math_id": 5,
"text": "\\gamma<0"
},
{
"math_id": 6,
"text": "\\gamma\\rightarrow 1"
},
{
"math_id": 7,
"text": "(\\gamma\\rightarrow -1)"
},
{
"math_id": 8,
"text": "\\delta<\\delta_E"
},
{
"math_id": 9,
"text": "\\delta_E"
},
{
"math_id": 10,
"text": "\\delta>\\delta_E"
},
{
"math_id": 11,
"text": "|\\gamma|<1"
},
{
"math_id": 12,
"text": "|\\gamma|>1"
},
{
"math_id": 13,
"text": "\\delta>\\delta_I"
},
{
"math_id": 14,
"text": "\\delta_I"
},
{
"math_id": 15,
"text": "1-\\gamma \\ll 1"
},
{
"math_id": 16,
"text": "\\delta_E = e[(1-\\gamma)-(1-\\gamma)^2+0.26(1-\\gamma)^3 + 0.055(1-\\gamma)^4]."
},
{
"math_id": 17,
"text": "\\frac{d^2y}{d\\zeta^2} =(y-\\zeta)^m (y+\\zeta)^ne^{-\\delta^{-1/3}(y+\\gamma \\zeta)} "
},
{
"math_id": 18,
"text": "m"
},
{
"math_id": 19,
"text": "n"
},
{
"math_id": 20,
"text": "\\delta\\rightarrow\\infty"
},
{
"math_id": 21,
"text": "\\frac{d^2y}{d\\zeta^2} = (y-\\zeta)^m(y+\\zeta)^n, \\quad \\zeta\\rightarrow\\pm\\infty:\\, \\frac{dy}{d\\zeta}=\\pm 1. "
},
{
"math_id": 22,
"text": "y(\\zeta)=y_m + (\\zeta-\\zeta_m)\\operatorname{erf}[k (\\zeta-\\zeta_m)] - \\frac{1}{\\sqrt{\\pi} k}\\left[1-e^{-k^2 (\\zeta-\\zeta_m)^2}\\right],"
},
{
"math_id": 23,
"text": "\\mathrm{erf}"
},
{
"math_id": 24,
"text": "\\begin{align}\n\\zeta_m &= \\frac{n-m}{n+m} y_m,\\\\\ny_m &= \\frac{m+n}{2}\\left[\\frac{2}{\\pi m^2n^2}\\left(\\sqrt{1+\\frac{\\pi(m+n)}{2mn}}-1\\right)\\right]^{\\frac{1}{m+n+1}},\\\\\nk &= \\frac{\\sqrt{\\pi}}{2} m^m n^n\\left(\\frac{2y_m}{m+n}\\right)^{m+n}.\n\\end{align}\n"
},
{
"math_id": 25,
"text": "\\zeta=\\zeta_m"
},
{
"math_id": 26,
"text": "y(\\zeta)"
},
{
"math_id": 27,
"text": "y(\\zeta_m)=y_m"
},
{
"math_id": 28,
"text": "m=n=1"
},
{
"math_id": 29,
"text": "\\zeta_m=0"
},
{
"math_id": 30,
"text": "y_m=0.8702"
},
{
"math_id": 31,
"text": "k=0.6711"
}
] |
https://en.wikipedia.org/wiki?curid=59158616
|
591587
|
Hurewicz theorem
|
Gives a homomorphism from homotopy groups to homology groups
In mathematics, the Hurewicz theorem is a basic result of algebraic topology, connecting homotopy theory with homology theory via a map known as the Hurewicz homomorphism. The theorem is named after Witold Hurewicz, and generalizes earlier results of Henri Poincaré.
Statement of the theorems.
The Hurewicz theorems are a key link between homotopy groups and homology groups.
Absolute version.
For any path-connected space "X" and positive integer "n" there exists a group homomorphism
formula_0
called the Hurewicz homomorphism, from the "n"-th homotopy group to the "n"-th homology group (with integer coefficients). It is given in the following way: choose a canonical generator formula_1, then a homotopy class of maps formula_2 is taken to formula_3.
The Hurewicz theorem states that if formula_4 and "X" is formula_5-connected, that is, formula_6 for formula_7, then formula_8 for formula_7 as well, and the Hurewicz homomorphism formula_9 is an isomorphism, while formula_10 is a surjection. For formula_11, the Hurewicz homomorphism induces an isomorphism formula_12 between the abelianization of the fundamental group and the first homology group.
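As a concrete illustration (a standard example, added here and not part of the original statement), the "n"-sphere with formula_4 is formula_5-connected, and the theorem then gives:

```latex
% S^n satisfies \pi_i(S^n) = 0 for i < n, so the Hurewicz theorem
% yields an isomorphism in degree n:
\pi_n(S^n) \xrightarrow{\; h_* \;} H_n(S^n) \cong \mathbb{Z},
\qquad \tilde{H}_i(S^n) = 0 \quad \text{for } i < n.
```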
Relative version.
For any pair of spaces formula_13 and integer formula_14 there exists a homomorphism
formula_15
from relative homotopy groups to relative homology groups. The Relative Hurewicz Theorem states that if both formula_16 and formula_17 are connected and the pair is formula_5-connected, then formula_18 for formula_19 and formula_20 is obtained from formula_21 by factoring out the action of formula_22. This can be proved, for example, by induction, proving in turn the absolute version and the Homotopy Addition Lemma.
This relative Hurewicz theorem can be reformulated as a statement about the morphism
formula_23
where formula_24 denotes the cone of formula_17. This statement is a special case of a homotopical excision theorem, involving induced modules for formula_25 (crossed modules if formula_26), which itself is deduced from a higher homotopy van Kampen theorem for relative homotopy groups, whose proof requires development of techniques of a cubical higher homotopy groupoid of a filtered space.
Triadic version.
For any triad of spaces formula_27 (i.e., a space "X" and subspaces "A", "B") and integer formula_28 there exists a homomorphism
formula_29
from triad homotopy groups to triad homology groups. Note that
formula_30
The Triadic Hurewicz Theorem states that if "X", "A", "B", and formula_31 are connected, the pairs formula_32 and formula_33 are formula_34-connected and formula_35-connected, respectively, and the triad formula_27 is formula_36-connected, then formula_37 for formula_38 and formula_39 is obtained from formula_40 by factoring out the action of formula_41 and the generalised Whitehead products. The proof of this theorem uses a higher homotopy van Kampen type theorem for triadic homotopy groups, which requires a notion of the fundamental formula_42-group of an "n"-cube of spaces.
Simplicial set version.
The Hurewicz theorem for topological spaces can also be stated for "n"-connected simplicial sets satisfying the Kan condition.
Rational Hurewicz theorem.
Rational Hurewicz theorem: Let "X" be a simply connected topological space with formula_43 for formula_44. Then the Hurewicz map
formula_45
induces an isomorphism for formula_46 and a surjection for formula_47.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "h_* \\colon \\pi_n(X) \\to H_n(X),"
},
{
"math_id": 1,
"text": "u_n \\in H_n(S^n)"
},
{
"math_id": 2,
"text": "f \\in \\pi_n(X)"
},
{
"math_id": 3,
"text": "f_*(u_n) \\in H_n(X)"
},
{
"math_id": 4,
"text": "n\\ge 2"
},
{
"math_id": 5,
"text": "(n-1)"
},
{
"math_id": 6,
"text": "\\pi_i(X)= 0"
},
{
"math_id": 7,
"text": "i < n"
},
{
"math_id": 8,
"text": "\\tilde{H_i}(X)= 0"
},
{
"math_id": 9,
"text": "h_* \\colon \\pi_n(X) \\to H_n(X)"
},
{
"math_id": 10,
"text": "h_* \\colon \\pi_{n+1}(X) \\to H_{n+1}(X)"
},
{
"math_id": 11,
"text": "n=1"
},
{
"math_id": 12,
"text": "\\tilde{h}_* \\colon \\pi_1(X)/[ \\pi_1(X), \\pi_1(X)] \\to H_1(X)"
},
{
"math_id": 13,
"text": "(X,A)"
},
{
"math_id": 14,
"text": "k>1"
},
{
"math_id": 15,
"text": "h_* \\colon \\pi_k(X,A) \\to H_k(X,A)"
},
{
"math_id": 16,
"text": "X"
},
{
"math_id": 17,
"text": "A"
},
{
"math_id": 18,
"text": "H_k(X,A)=0"
},
{
"math_id": 19,
"text": "k<n"
},
{
"math_id": 20,
"text": "H_n(X,A)"
},
{
"math_id": 21,
"text": "\\pi_n(X,A)"
},
{
"math_id": 22,
"text": "\\pi_1(A)"
},
{
"math_id": 23,
"text": "\\pi_n(X,A) \\to \\pi_n(X \\cup CA),"
},
{
"math_id": 24,
"text": "CA"
},
{
"math_id": 25,
"text": "n>2"
},
{
"math_id": 26,
"text": "n=2"
},
{
"math_id": 27,
"text": "(X;A,B)"
},
{
"math_id": 28,
"text": "k>2"
},
{
"math_id": 29,
"text": "h_*\\colon \\pi_k(X;A,B) \\to H_k(X;A,B)"
},
{
"math_id": 30,
"text": "H_k(X;A,B) \\cong H_k(X\\cup (C(A\\cup B)))."
},
{
"math_id": 31,
"text": "C=A\\cap B"
},
{
"math_id": 32,
"text": "(A,C)"
},
{
"math_id": 33,
"text": "(B,C)"
},
{
"math_id": 34,
"text": "(p-1)"
},
{
"math_id": 35,
"text": "(q-1)"
},
{
"math_id": 36,
"text": "(p+q-2)"
},
{
"math_id": 37,
"text": "H_k(X;A,B)=0"
},
{
"math_id": 38,
"text": "k<p+q-2"
},
{
"math_id": 39,
"text": "H_{p+q-1}(X;A)"
},
{
"math_id": 40,
"text": "\\pi_{p+q-1}(X;A,B)"
},
{
"math_id": 41,
"text": "\\pi_1(A\\cap B)"
},
{
"math_id": 42,
"text": "\\operatorname{cat}^n"
},
{
"math_id": 43,
"text": "\\pi_i(X)\\otimes \\Q = 0"
},
{
"math_id": 44,
"text": "i\\leq r"
},
{
"math_id": 45,
"text": "h\\otimes \\Q \\colon \\pi_i(X)\\otimes \\Q \\longrightarrow H_i(X;\\Q )"
},
{
"math_id": 46,
"text": "1\\leq i \\leq 2r"
},
{
"math_id": 47,
"text": "i = 2r+1"
}
] |
https://en.wikipedia.org/wiki?curid=591587
|
5916
|
Circumference
|
Perimeter of a circle or ellipse
In geometry, the circumference (from Latin "circumferens", meaning "carrying around") is the perimeter of a circle or ellipse. The circumference is the arc length of the circle, as if it were opened up and straightened out to a line segment. More generally, the perimeter is the curve length around any closed figure.
Circumference may also refer to the circle itself, that is, the locus corresponding to the edge of a disk.
The circumference of a sphere is the circumference, or length, of any one of its great circles.
Circle.
The circumference of a circle is the distance around it, but if, as in many elementary treatments, distance is defined in terms of straight lines, this cannot be used as a definition. Under these circumstances, the circumference of a circle may be defined as the limit of the perimeters of inscribed regular polygons as the number of sides increases without bound. The term circumference is used when measuring physical objects, as well as when considering abstract geometric forms.
Relationship with π.
The circumference of a circle is related to one of the most important mathematical constants. This constant, pi, is represented by the Greek letter formula_0 The first few decimal digits of the numerical value of formula_1 are 3.141592653589793 ... Pi is defined as the ratio of a circle's circumference formula_2 to its diameter formula_3
formula_4
Or, equivalently, as the ratio of the circumference to twice the radius. The above formula can be rearranged to solve for the circumference:
formula_5
The ratio of the circle's circumference to its radius is called the circle constant, and is equivalent to formula_6. The value formula_6 is also the number of radians in one turn. The use of the mathematical constant π is ubiquitous in mathematics, engineering, and science.
In "Measurement of a Circle", written circa 250 BCE, Archimedes showed that this ratio (formula_7 since he did not use the name π) was greater than 3 + 10/71 but less than 3 + 1/7 by calculating the perimeters of an inscribed and a circumscribed regular polygon of 96 sides. This method for approximating π was used for centuries, obtaining more accuracy by using polygons with larger and larger numbers of sides. The last such calculation was performed in 1630 by Christoph Grienberger, who used polygons with 10^40 sides.
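Archimedes' polygon method can be reproduced in a few lines (an illustrative sketch using the standard harmonic-mean/geometric-mean doubling recurrences for the circumscribed and inscribed perimeters, starting from hexagons about a circle of unit diameter):

```python
from math import sqrt

# a = perimeter of the circumscribed polygon, b = of the inscribed polygon,
# for a circle of unit diameter; each doubling tightens the bracket on pi.
a, b = 2 * sqrt(3.0), 3.0          # circumscribed / inscribed hexagon
sides = 6
while sides < 96:
    a = 2 * a * b / (a + b)        # circumscribed 2n-gon (harmonic mean)
    b = sqrt(a * b)                # inscribed 2n-gon (geometric mean)
    sides *= 2
print(b, a)  # the 96-gon bracket: 3.1410... < pi < 3.1427...
```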
Ellipse.
Circumference is used by some authors to denote the perimeter of an ellipse. There is no general formula for the circumference of an ellipse in terms of the semi-major and semi-minor axes of the ellipse that uses only elementary functions. However, there are approximate formulas in terms of these parameters. One such approximation, due to Euler (1773), for the canonical ellipse,
formula_8
is
formula_9
Some lower and upper bounds on the circumference of the canonical ellipse with formula_10 are:
formula_11
formula_12
formula_13
Here the upper bound formula_14 is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound formula_15 is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and minor axes.
The circumference of an ellipse can be expressed exactly in terms of the complete elliptic integral of the second kind. More precisely,
formula_16
where formula_17 is the length of the semi-major axis and formula_18 is the eccentricity formula_19
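As a numerical check (a sketch using plain trapezoidal integration rather than a library elliptic-integral routine), the exact integral can be evaluated and compared against Euler's approximation and the bounds above:

```python
from math import pi, sin, sqrt

def ellipse_circumference(a, b, steps=100000):
    """Trapezoidal evaluation of C = 4a * integral_0^{pi/2} sqrt(1 - e^2 sin^2 t) dt,
    where e^2 = 1 - b^2/a^2 is the squared eccentricity (assumes a >= b)."""
    e2 = 1.0 - (b * b) / (a * a)
    h = (pi / 2) / steps
    total = 0.5 * (1.0 + sqrt(1.0 - e2))   # endpoints t = 0 and t = pi/2
    for i in range(1, steps):
        total += sqrt(1.0 - e2 * sin(i * h) ** 2)
    return 4.0 * a * h * total

def euler_approx(a, b):
    """Euler's 1773 approximation pi * sqrt(2 (a^2 + b^2))."""
    return pi * sqrt(2.0 * (a * a + b * b))

print(ellipse_circumference(2.0, 1.0), euler_approx(2.0, 1.0))
```

For a circle the integral collapses to formula_6 times the radius, and for an elongated ellipse Euler's formula acts as an upper bound, consistent with the inequalities above.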
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\pi."
},
{
"math_id": 1,
"text": "\\pi"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "d:"
},
{
"math_id": 4,
"text": "\\pi = \\frac{C}{d}."
},
{
"math_id": 5,
"text": "{C} = \\pi \\cdot{d} = 2\\pi \\cdot{r}.\\!"
},
{
"math_id": 6,
"text": "2\\pi"
},
{
"math_id": 7,
"text": "C/d,"
},
{
"math_id": 8,
"text": "\\frac{x^2}{a^2} + \\frac{y^2}{b^2} = 1,"
},
{
"math_id": 9,
"text": "C_{\\rm{ellipse}} \\sim \\pi \\sqrt{2\\left(a^2 + b^2\\right)}."
},
{
"math_id": 10,
"text": "a\\geq b"
},
{
"math_id": 11,
"text": "2\\pi b \\leq C \\leq 2\\pi a,"
},
{
"math_id": 12,
"text": "\\pi (a+b) \\leq C \\leq 4(a+b),"
},
{
"math_id": 13,
"text": "4\\sqrt{a^2+b^2} \\leq C \\leq \\pi \\sqrt{2\\left(a^2+b^2\\right)}."
},
{
"math_id": 14,
"text": "2\\pi a"
},
{
"math_id": 15,
"text": "4\\sqrt{a^2+b^2}"
},
{
"math_id": 16,
"text": "C_{\\rm{ellipse}} = 4a \\int_0^{\\pi/2} \\sqrt{1 - e^2 \\sin^2\\theta}\\ d\\theta,"
},
{
"math_id": 17,
"text": "a"
},
{
"math_id": 18,
"text": "e"
},
{
"math_id": 19,
"text": "\\sqrt{1 - b^2/a^2}."
}
] |
https://en.wikipedia.org/wiki?curid=5916
|
59168336
|
Lake metabolism
|
The balance between production and consumption of organic matter in lakes
Lake metabolism represents a lake's balance between carbon fixation (gross primary production) and biological carbon oxidation (ecosystem respiration). Whole-lake metabolism includes the carbon fixation and oxidation by all organisms within the lake, from bacteria to fishes, and is typically estimated by measuring changes in dissolved oxygen or carbon dioxide throughout the day.
Ecosystem respiration in excess of gross primary production indicates the lake receives organic material from the surrounding catchment, such as through stream or groundwater inflows or litterfall. Lake metabolism often controls the carbon dioxide emissions from or influx to lakes, but it does not account for all carbon dioxide dynamics since inputs of inorganic carbon from the surrounding catchment also influence carbon dioxide within lakes.
Concept.
Estimates of lake metabolism typically rely on the measurement of dissolved oxygen or carbon dioxide, or measurements of a carbon or oxygen tracer to estimate production and consumption of organic carbon. Oxygen is produced and carbon dioxide consumed through photosynthesis and oxygen is consumed and carbon dioxide produced through respiration. Here, organic matter is symbolized by glucose, though the chemical species produced and respired through these reactions vary widely.
Photosynthesis: formula_0
Respiration: formula_1
Photosynthesis and oxygen production only occur in the presence of light, while the consumption of oxygen via respiration occurs in both the presence and absence of light. Lake metabolism terms include gross primary production (GPP), ecosystem respiration (ER), and net ecosystem production (NEP).
Measurement techniques.
Estimating lake metabolism requires approximating processes that influence the production and consumption of organic carbon by organisms within the lake. Cyclical changes on a daily scale occur in most lakes on Earth because sunlight is available for photosynthesis and production of new carbon only for a portion of the day. Researchers can take advantage of this diel pattern to measure rates of change in carbon itself or changes in dissolved gases such as carbon dioxide or oxygen that occur on a daily scale. Although daily estimates of metabolism are most common, whole-lake metabolism can be integrated over longer time periods such as seasonal or annual rates by estimating a whole-lake carbon budget. The following sections highlight the most common ways to estimate lake metabolism across a variety of temporal and spatial scales and go over some of the assumptions of each of these methods.
Free-water methods.
Measurement of diel changes in dissolved gases within the lake, also known as the "free-water" method, has quickly become the most common method of estimating lake metabolism since the wide adoption of autonomous sensors used to measure dissolved oxygen and carbon dioxide in water. The free-water method is particularly popular since many daily estimates of lake metabolism can be collected relatively cheaply and can give insights into metabolic regimes during difficult-to-observe time periods, such as during storm events. Measured changes in dissolved oxygen and carbon dioxide within a lake represents the sum of all organismal metabolism from bacteria to fishes, after accounting for abiotic changes in dissolved gases. Abiotic changes in dissolved gases include exchanges of dissolved gases between the atmosphere and lake surface, vertical or horizontal entrainment of water with differing concentrations (e.g. low-oxygen water below a lake's thermocline), or import and export of dissolved gases from inflowing streams or a lake outlet. Abiotic changes in dissolved gases can dominate changes of dissolved gases if the lake has a low metabolic rate (e.g. oligotrophic lake, cloudy day), or if there is a large event that causes abiotic factors to exceed biotic (e.g. wind event causing mixing and entrainment of low-oxygenated water). Biotic signals in dissolved gases are most evident when the sun is shining and photosynthesis is occurring, resulting in the production of dissolved oxygen and consumption of carbon dioxide. The conversion of solar energy to chemical energy is termed gross primary production (GPP) and the dissipation of this energy through biological carbon oxidation is termed ecosystem respiration (ER). High-frequency (e.g. 
10 minute interval) measurements of dissolved oxygen or carbon dioxide can be translated into estimates of GPP, ER, and the difference between the two, termed net ecosystem production (NEP), by fitting the high-frequency data to models of lake metabolism. The governing equation for estimating lake metabolism from a single sensor located in the upper mixed layer measuring dissolved oxygen is:
dDO/dt = GPP - ER + F
where F is the flux of gases between the lake and the atmosphere. Additional terms of abiotic gas flux can be added if those abiotic fluxes are deemed significant for a lake (e.g. mixing events, inflowing stream gases). Atmospheric gas exchange (F) is rarely measured directly and is typically modeled by estimating lake surface turbulence from wind-driven and convective mixing. Most often, F is estimated from measurements of wind speed and atmospheric pressure, and different models for estimating F can result in significantly different estimates of lake metabolic rates depending on the study lake. Gross primary production is assumed to be zero during the night due to low or no light, and thus ER can be estimated from nighttime changes in dissolved oxygen (or carbon dioxide) after accounting for abiotic changes in dissolved oxygen. Gross primary production can then be estimated from daytime changes in dissolved oxygen by assuming that ER is equal during the day and night; however, this assumption may not hold in every lake.
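The nighttime-ER/daytime-GPP bookkeeping logic above can be sketched in code. This is a minimal illustration only, not an established implementation: the sensor series, the constant gas-exchange velocity k, and the mixed-layer depth zmix are all hypothetical, and real analyses (e.g. maximum-likelihood or Bayesian model fits) handle noise and time-varying gas exchange.

```python
# Minimal "bookkeeping" estimate of daily metabolism from diel dissolved-
# oxygen (DO) data, following dDO/dt = GPP - ER + F. Inputs are hypothetical.

def daily_metabolism(do, do_sat, is_day, k=0.5, zmix=2.0, dt=1.0 / 24):
    """do, do_sat: DO and saturation DO (mg/L) sampled every dt days;
    is_day: daylight flag per interval; k: gas-exchange velocity (m/day);
    zmix: mixed-layer depth (m). Returns (GPP, ER, NEP) in mg O2/L/day."""
    night_rates, day_rates = [], []
    for i in range(len(do) - 1):
        f = k / zmix * (do_sat[i] - do[i])       # atmospheric exchange term F
        biotic = (do[i + 1] - do[i]) / dt - f    # observed dDO/dt minus F
        (day_rates if is_day[i] else night_rates).append(biotic)
    er = -sum(night_rates) / len(night_rates)    # at night GPP = 0, so biotic = -ER
    nep_day = sum(day_rates) / len(day_rates)    # by day, biotic = GPP - ER
    gpp = nep_day + er                           # assumes daytime ER = nighttime ER
    return gpp, er, gpp - er
```

Nighttime intervals yield ER directly, and daytime intervals then yield GPP under the equal day/night ER assumption that, as noted above, may not hold in every lake.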
Achieving a high signal-to-noise ratio is key to obtaining good estimates of lake metabolism from the free-water technique, and there are choices that a researcher needs to make prior to collecting data and during data analyses to ensure accurate estimates. The location of dissolved gas collection (typically in the surface mixed layer), the number of sensors vertically and horizontally, the frequency and duration of data collection, and the modeling methods all need to be considered.
Free-water metabolism modeling techniques.
The free-water measurement techniques require mathematical models to estimate lake metabolism metrics from high-frequency dissolved gas measurements. These models range in complexity from simple algebraic models to depth-integrated modeling using more advanced statistical techniques. Several statistical techniques have been used to estimate GPP, ER, and NEP or parameters relating to these metabolism terms.
Light and dark bottle methods.
The light and dark bottle method uses the same concept as the free-water method to estimate rates of metabolism - GPP only occurs during the day with solar energy, while ER occurs in both the presence and absence of light. This method incubates lake water in two separate bottles, one that is clear and exposed to a natural or artificial light regime and another that is sealed off from the light by wrapping the bottle in foil, painting it, or another method. Changes in carbon fixation or dissolved gases are then measured over a certain time period (e.g. several hours to a day) to estimate the rate of metabolism for specific lake depths or an integrated lake water column. Carbon fixation is measured by injecting the radioactive carbon isotope 14C into light and dark bottles and sampling the bottles over time - the samples are filtered onto filter paper and the amount of 14C incorporated into algal (and bacterial) cells is estimated by measuring the samples on a scintillation counter. The difference between the light and dark bottle 14C can be considered the rate of primary productivity; however, due to non-photosynthetic uptake of CO2 there is debate as to whether dark bottles should be used with the 14C method or if only a light bottle and a bottle treated with the algicide DCMU should be used. Rates of change in dissolved gases, either carbon dioxide or oxygen, need both the light and dark bottles to estimate rates of productivity and respiration.
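For the dissolved-oxygen variant, the bottle arithmetic is simple: the light bottle records net production (GPP - R) while the dark bottle isolates respiration. A minimal sketch, with invented DO values:

```python
# Light/dark bottle arithmetic with dissolved oxygen (mg O2/L).
# DO rises in the light bottle (net production) and falls in the dark
# bottle (respiration only); all numbers used below are invented.

def bottle_rates(light_start, light_end, dark_start, dark_end, hours):
    npp = (light_end - light_start) / hours   # net production = GPP - R
    r = -(dark_end - dark_start) / hours      # respiration (DO declines in the dark)
    gpp = npp + r                             # gross primary production
    return gpp, r, npp
```

For example, a light bottle going from 8.0 to 9.2 mg/L and a dark bottle from 8.0 to 7.6 mg/L over 4 hours give R = 0.1 and GPP = 0.4 mg O2/L per hour.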
Whole-lake carbon budget methods.
Probably the most labor-intensive method of estimating a metric of lake metabolism is measuring all the inputs and outputs of either organic or inorganic carbon to a lake over a season or year, also known as a whole-lake carbon budget. Measuring all the inputs and outputs of carbon to and from a lake can be used to estimate net ecosystem production (NEP). Since NEP is the difference between gross primary production and respiration (NEP = GPP - R), it can be viewed as the net biological conversion of inorganic carbon to organic carbon (and vice versa), and can thus be determined through a whole-lake mass balance of either inorganic or organic carbon. NEP assessed through inorganic carbon (IC) or organic carbon (OC) can be estimated as:
NEPic = Iic - Eic - Sic
NEPoc = Eoc + Soc - Ioc
where "E" is export (fluvial transport for OC; fluvial transport and carbon gas (e.g. CO2, CH4) exchange between the lake surface and the atmosphere for IC); "S" is storage (in the lake sediments and water column for OC; in the water column for IC); and "I" is the input of OC and IC from fluvial, surrounding wetland, and airborne pathways (e.g. atmospheric deposition, litterfall). A lake that receives more OC from the watershed than it exports downstream or accumulates in the water column and sediments (Ioc > Eoc + Soc) indicates that there was net conversion of OC to IC within the lake and is thus net heterotrophic (negative NEP). Likewise, a lake that accumulates and exports more IC than it received from the watershed (Sic + Eic > Iic) also indicates net conversion of OC to IC within the lake and is thus net heterotrophic.
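The two mass balances reduce to simple arithmetic. A sketch with invented annual fluxes (in, say, g C per m2 per year); a negative result indicates net heterotrophy:

```python
# NEP from whole-lake carbon budgets. Fluxes are hypothetical annual
# values; I = inputs, E = exports, S = storage (see text for terms).

def nep_from_oc(i_oc, e_oc, s_oc):
    # OC balance: inputs exceeding exports + storage imply net OC
    # mineralization within the lake (negative NEP, net heterotrophy)
    return e_oc + s_oc - i_oc

def nep_from_ic(i_ic, e_ic, s_ic):
    # IC balance: exports + storage exceeding inputs imply net IC
    # production within the lake (negative NEP, net heterotrophy)
    return i_ic - e_ic - s_ic
```

For example, a lake receiving 100 units of OC while exporting 40 and storing 30 has NEP = -30: 30 units of OC were respired to IC within the lake, and an internally consistent IC budget would show the same 30-unit surplus of IC outputs over inputs.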
Benthic metabolism methods.
Although the free-water method likely contains some benthic metabolic signal, isolating the benthic contribution to whole-lake metabolism requires benthic-specific methods. Analogous to the light and dark bottle methods described above, lake sediment cores can be collected and changes in dissolved oxygen or carbon fixation can be used to estimate rates of primary productivity and respiration. Relatively new methods isolate the sediment-water interface with transparent domes and measure changes in dissolved oxygen in situ, a hybrid between the free-water method and the light-dark bottle method. These in-situ benthic chamber methods allow for relatively easy multi-day estimates of benthic metabolism, which helps researchers determine how benthic metabolism changes with varying weather patterns and lake characteristics.
Assumptions.
Extrapolating site- or depth-specific measurements to the entire lake can be problematic, as there can be significant metabolic variability both vertically and horizontally within a lake (see variability section). For example, many lake metabolism studies have only a single epilimnetic estimate of metabolism; however, this may overestimate metabolic characteristics of the lake such as NEP, depending on the ratio of mixed layer depth to light extinction depth. Averaging daily metabolism estimates over longer time periods may help overcome some of these single-site extrapolation issues, but one must carefully consider the implications of the metabolic estimates and not over-extrapolate measurements.
Relation to constituents.
Organismal metabolic rate, or the rate at which organisms assimilate, transform, and expend energy, is influenced by a few key constituents, namely light, nutrients, temperature, and organic matter. The influence of these constituents on organismal metabolism ultimately governs metabolism at the whole-lake scale and can dictate whether a lake is a net source or sink of carbon. In the following section, we describe the relationship between these key constituents and organismal and ecosystem-level metabolism. Although the relationships between organisms and constituents described here are well-established, the interacting effects of constituents on metabolic rates from organisms to lake ecosystems make predicting changes in metabolism across lakes or within lakes through time difficult. Many of these complex interacting effects will be discussed in the spatial and temporal variability section.
Temperature.
Temperature is a strong controlling factor on biochemical reaction rates and biological activity. Optimal temperature varies across aquatic organisms, as some organisms are more cold-adapted while others prefer warmer habitats. There are rare cases of extreme thermal tolerance in hypersaline Antarctic lakes (e.g. Don Juan Pond) or hot springs (e.g. Fly Geyser); however, most lake organisms on Earth reside in temperatures ranging from 0 to 40 °C. Metabolic rates typically scale exponentially with temperature; however, the activation energies for primary productivity and respiration often differ, with photosynthesis having a lower activation energy than aerobic respiration. These differences in activation energies could have implications for the net metabolic balance within lake ecosystems as the climate warms. For example, Scharfenberger et al. (2019) show that increasing water temperature resulting from climate change could switch lakes from being net autotrophic to net heterotrophic due to differences in activation energy; however, the temperature at which they switch depends on the amount of nutrients available.
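The differing temperature sensitivities can be illustrated with Boltzmann-Arrhenius scaling. The activation energies used here (roughly 0.32 eV for photosynthesis and 0.65 eV for aerobic respiration) are commonly cited values from the metabolic theory of ecology, and the reference rates are hypothetical:

```python
import math

K_B = 8.617e-5  # Boltzmann constant (eV/K)

def scale_rate(rate_ref, ea, temp_c, ref_c=20.0):
    """Boltzmann-Arrhenius scaling of a metabolic rate measured at ref_c
    to temp_c, with activation energy ea (eV)."""
    t, t0 = temp_c + 273.15, ref_c + 273.15
    return rate_ref * math.exp(-ea / K_B * (1.0 / t - 1.0 / t0))

# With warming, ER (higher Ea) rises faster than GPP (lower Ea),
# nudging the GPP:ER balance toward net heterotrophy.
gpp_warm = scale_rate(5.0, 0.32, 25.0)  # hypothetical GPP of 5 at 20 C
er_warm = scale_rate(5.0, 0.65, 25.0)   # hypothetical ER of 5 at 20 C
```

Starting from equal rates at 20 °C, five degrees of warming leaves ER exceeding GPP in this sketch, mirroring the autotrophy-to-heterotrophy switch discussed above.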
Nutrients.
The amount of material available for assimilation into organismal cells controls the rate of metabolism from the cellular to the lake ecosystem level. In lakes, phosphorus and nitrogen are the most common limiting nutrients of primary production and ecosystem respiration. Foundational work on the positive relationship between phosphorus concentration and lake eutrophication resulted in legislation that limited the amount of phosphorus in laundry detergents, among other regulations. Although phosphorus is often used as a predictor of lake ecosystem productivity and excess phosphorus as an indicator of eutrophication, many studies show that metabolism is co-limited by phosphorus and nitrogen or limited by nitrogen alone. The balance between phosphorus, nitrogen, and other nutrients, termed ecological stoichiometry, can dictate rates of organismal growth and whole-lake metabolism through cellular requirements of these essential nutrients as mediated by life-history traits. For example, fast-growing cladocerans have a much lower nitrogen to phosphorus ratio (N:P) than copepods, mostly due to the high amount of phosphorus-rich RNA in their cells used for rapid growth. Cladocerans residing in lakes with high N:P ratios relative to cladoceran body stoichiometry will be limited in growth and metabolism, with effects on whole-lake metabolism. Furthermore, cascading effects from food web manipulations can cause changes in productivity through changes to nutrient stoichiometry. For example, piscivore addition can reduce predation pressure on fast-growing, low-N:P cladocerans, which increase in population rapidly, retain phosphorus in their cells, and can cause a lake to become phosphorus-limited, consequently reducing whole-lake primary productivity.
Light.
Solar energy is required for converting carbon dioxide and water into organic matter, otherwise known as photosynthesis. As with temperature and nutrients, different algae have different rates of metabolic response to increasing light and different optimal light conditions for growth, as some algae are more adapted to darker environments while others can outcompete in brighter conditions. Light can also interact with nutrients to affect species-specific algal productivity responses to increasing light. These different responses at the organismal level propagate up to influence metabolism at the ecosystem level. Even in low-nutrient lakes where nutrients would be expected to be the limiting resource for primary productivity, light can still be the limiting resource, which has cascading negative effects on higher trophic levels such as fish productivity. Variability in light across different lake zones and within a lake through time creates patchiness in productivity both spatially and temporally.
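Species-specific light responses are often summarized with a photosynthesis-irradiance (P-I) curve; the hyperbolic-tangent form below follows Jassby and Platt (1976), with hypothetical parameter values for a low-light and a high-light specialist:

```python
import math

def pi_curve(light, p_max, alpha):
    """Jassby-Platt P-I curve: photosynthetic rate as a function of
    irradiance. p_max = light-saturated rate; alpha = initial slope
    (low-light affinity). Units are illustrative."""
    return p_max * math.tanh(alpha * light / p_max)

# Hypothetical shade specialist: high alpha, low p_max -> wins in dim light.
# Hypothetical sun specialist: low alpha, high p_max -> wins in bright light.
shade_rate_dim = pi_curve(20, p_max=2.0, alpha=0.2)
sun_rate_dim = pi_curve(20, p_max=10.0, alpha=0.05)
```

At dim irradiance the shade alga out-produces the sun alga, while at bright irradiance the ranking reverses, which is the organismal-level pattern that propagates up to ecosystem metabolism.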
In addition to controlling primary productivity, sunlight can also influence rates of respiration by partially oxidizing organic matter, which makes it easier for bacteria to break down and convert into carbon dioxide. This partial photooxidation essentially increases the amount of organic matter that is available for mineralization. In some lakes, complete or partial photooxidation can account for a majority of the conversion from organic to inorganic matter; however, its proportion relative to bacterial respiration varies greatly among lakes.
Organic carbon.
Primary and secondary consumers in lakes require organic matter (either from plants or animals) to maintain organismal function. Organic matter, including tree leaves, dissolved organic matter, and algae, provides essential resources to these consumers and in the process increases lake ecosystem respiration rates through the conversion of organic matter to cellular growth and organismal maintenance. Some sources of organic matter may impact the availability of other constituents. For example, dissolved organic matter often darkens lake water, which reduces the amount of light available in the lake and thus reduces primary production. However, increases in organic matter loading to a lake can also increase the nutrients associated with the organic matter, which can stimulate primary production and respiration. Increased dissolved organic matter loading can therefore create a tradeoff between increasing light limitation and release from nutrient limitation. This tradeoff can create non-linear relationships between lake primary production and dissolved organic matter loading, depending on how many nutrients are associated with the organic matter and how quickly the dissolved organic matter blocks out light in the water column: at low dissolved organic matter concentrations, the nutrients associated with increasing organic matter enhance GPP, but as dissolved organic matter continues to increase, the darkening of the lake water suppresses GPP as light becomes the limiting resource for primary productivity. Differences in the magnitude and location of maximum GPP in response to increased DOC load are hypothesized to arise based on the ratio of DOC to nutrients coming into the lake as well as the effect of DOC on the lake light climate.
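The hypothesized unimodal GPP-DOC relationship can be captured in a toy model: DOC brings a saturating nutrient subsidy but also deepens light attenuation (a Beer-Lambert depth-averaged light term). Every coefficient here is invented purely to show the shape of the tradeoff:

```python
import math

def gpp_vs_doc(doc, half_sat=5.0, kd_per_doc=0.15, zmix=3.0):
    """Toy unimodal GPP response to DOC load: a saturating nutrient
    subsidy multiplied by depth-averaged light (Beer-Lambert).
    All parameters are hypothetical."""
    nutrient_term = doc / (half_sat + doc)         # nutrients carried with DOC
    ext = kd_per_doc * doc * zmix                  # optical depth from DOC color
    light_term = (1 - math.exp(-ext)) / ext if ext > 0 else 1.0
    return nutrient_term * light_term
```

In this sketch GPP rises with DOC while nutrients are limiting, peaks at intermediate loads, then declines as light limitation takes over, reproducing the non-linear pattern described above.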
The darkening of the lake water can also change thermal regimes within the lake as darker waters typically mean that warmer waters remain at the top of the lake while cooler waters are at the bottom. This change in heat energy distribution can impact the rates of pelagic and benthic productivity (see Temperature above), and change water column stability, with impacts on vertical distribution of nutrients, therefore having effects on vertical distribution of metabolic rates.
Other constituents.
Other lake constituents can influence lake metabolic rates including CO2 concentration, pH, salinity, and silica, among others. CO2 can be a limiting (or co-limiting along with other nutrients) resource for primary productivity and can promote more intense phytoplankton blooms. Some algal species, such as chrysophytes, may not have carbon-concentrating mechanisms or the ability to use bicarbonate as a source of inorganic carbon for photosynthesis, thus, elevated levels of CO2 may increase their rates of photosynthesis. During algal blooms, elevated dissolved CO2 ensures that CO2 is not a limiting resource for growth since rapid increases in production deplete CO2 and raise pH. Changes in pH at short time scales (e.g. sub-daily) from spikes in primary productivity may cause short-term reductions in bacterial growth and respiration, but at longer timescales, bacterial communities can adapt to elevated pH.
Salinity can also cause changes in metabolic rates of lakes through salinity impacts on individual metabolic rates and community composition. Lake metabolic rates can be correlated either positively or negatively with salinity due to interactions of salinity with other drivers of ecosystem metabolism, such as flushing rates or droughts. For example, Moreira-Turcq (2000) found that excess precipitation over evaporation caused reduced salinity in a coastal lagoon, increased nutrient loading, and increased pelagic primary productivity. The positive relationship between primary productivity and salinity might be an indicator of changes in nutrient availability due to increased inflows. However, salinity increases from road salts can cause toxicity in some lake organisms, and extreme increases in salinity can restrict lake mixing, which could change the distribution of metabolic rates throughout the lake water column.
Spatial and temporal variability.
Metabolic rates in lakes and reservoirs are controlled by many environmental factors, such as light and nutrient availability, temperature, and water column mixing regimes. Thus, spatial and temporal changes in those factors cause spatial and temporal variability in metabolic rates, and each of those factors affect metabolism at different spatial and temporal scales.
Spatial variation within lakes.
Variable contributions from different lake zones (i.e. littoral, limnetic, benthic) to whole-lake metabolism depend mostly on patchiness in algal and bacterial biomass, and on light and nutrient availability. In terms of the organisms contributing to metabolism in each of these zones, limnetic metabolism is dominated by phytoplankton, zooplankton, and bacterial metabolism, with low contributions from epiphytes and fish. Benthic metabolism can receive large contributions from macrophytes, macro- and microalgae, invertebrates, and bacteria. Benthic metabolism is usually highest in shallow littoral zones, or in clear-water shallow lakes, in which light reaches the bottom of the lake to stimulate primary production. In dark or turbid deep lakes, primary production may be restricted to shallower waters and aerobic respiration may be reduced or non-existent in deeper waters due to the formation of anoxic deep zones.
The degree of spatial heterogeneity in metabolic rates within a lake depends on lake morphometry, catchment characteristics (e.g. differences in land use throughout the catchment and inputs from streams), and hydrodynamic processes. For example, lakes with more intense hydrodynamic processes, such as strong vertical and lateral mixing, are more laterally and vertically homogeneous in relation to metabolic rates than highly stratified lakes. On the other hand, lakes with more developed littoral areas have greater metabolic heterogeneity laterally than lakes with a more circular shape and low proportions of shallow littoral areas.
Light attenuation throughout the water column, in combination with thermal and chemical stratification and wind- or convective-driven turbulence, contributes to the vertical distribution of nutrients and organisms in the water column. In stratified lakes, organic matter and nutrients tend to be more concentrated in deeper layers, while light is more available in shallower layers. The vertical distribution of primary production responds to a balance between light and nutrient availability, while respiration occurs more independently of light and nutrients and more homogeneously with depth. This often results in strong coupling of gross primary production (GPP) and ecosystem respiration (ER) in lake surface layers but weaker coupling at greater depths. That is, ER rates are strongly dependent on primary production in shallower layers, while in deeper layers ER becomes more dependent on a mixture of organic matter from terrestrial sources and on the sedimentation of algal particles and organic matter produced in shallower layers. In lakes with a low concentration of nutrients in surface waters and with light penetration below the mixed layer, primary production is higher at intermediate depths, where there is sufficient light for photosynthesis and higher nutrient availability. On the other hand, low-transparency polymictic lakes have higher primary production in near-surface layers, usually with a net autotrophic balance (GPP > ER) between primary production and respiration.
Laterally, heterogeneity within lakes is driven by differences in metabolic rates between the open-water limnetic zones and the more benthic-dominated littoral zones. Littoral areas are usually more complex and heterogeneous, in part because of their proximity to the terrestrial system, but also due to low water volume and a high sediment-to-water volume ratio. Thus, littoral zones are more susceptible to changes in temperature, inputs of nutrients and organic matter from the landscape and river inflows, wind shear mixing and wave action, shading from terrestrial vegetation, and resuspension of the sediments (Figure 1). Additionally, littoral zones usually have greater habitat complexity due to the presence of macrophytes, which serve as shelter, nursery, and feeding grounds for many organisms. Consequently, metabolic rates in the littoral areas usually have high short-term variability and are typically greater than limnetic metabolic rates.
Spatial variation across lakes.
In addition to spatial variability within lakes, whole-lake metabolic rates and their drivers also differ across lakes. Each lake has a unique set of characteristics depending on their morphometry, catchment properties, and hydrologic characteristics. These features affect lake conditions, such as water colour, temperature, nutrients, organic matter, light attenuation, vertical and horizontal mixing, with direct and indirect effects on lake metabolism.
As lakes differ in the status of their constituents (e.g. light, nutrients, temperature, and organic matter), there are emerging differences in the magnitude and variability of metabolic rates among lakes. In the previous section (Relation to Constituents), we discussed the expected patterns of metabolic rates in response to variability in these influential constituents. Here, we will discuss how whole-lake metabolism varies across lakes due to differences in these constituents as mediated by differences in lake morphometry, catchment properties, and water residence time.
Lake morphometry (e.g. lake size and shape) and catchment properties (e.g. land use, drainage area, climate, and geological characteristics) determine the flux of external inputs of organic matter and nutrients per unit of lake water volume. As the ratio between catchment size and lake water volume (drainage ratio) increases, the flux of nutrients and organic matter from the surrounding terrestrial landscape generally increases. That is, small lakes with relatively large catchments will receive more external inputs of nutrients and organic matter per unit of lake volume than large lakes with relatively small catchments, thus enhancing both primary production and respiration rates. In lakes with a small drainage ratio (i.e. relatively large lake surface area in relation to catchment area), metabolic processes are expected to be less dependent on external inputs coming from the surrounding catchment. Additionally, small lakes are less exposed to wind-driven mixing and typically have higher terrestrial organic matter inputs, which often results in shallower mixing depths and enhanced light attenuation, thus limiting primary production to the upper portions of small lakes. Considering lakes with similar catchment properties, small lakes are generally more net heterotrophic (GPP < ER) than large lakes, since their higher respiration rates are fueled by the higher load of allochthonous organic matter (i.e. synthesized within the drainage area, but outside of the water body) entering the system, which outpaces primary production that is limited to shallower lake layers.
Catchment properties, namely land cover, land use, and geologic characteristics, influence lake metabolism through their impact on the quality of organic matter and nutrients entering the lake, as well as wind exposure. The organic matter quality can impact light attenuation and, along with wind exposure, can influence heat and light distribution throughout the lake water column. Lakes in landscapes dominated by agriculture have higher nutrient inputs and lower organic matter inputs compared to lakes with a similar drainage ratio in landscapes dominated by forests. Thus, lakes in agriculture-dominated landscapes are expected to have higher primary production rates, more algal blooms, and excessive macrophyte biomass compared to lakes in forest-dominated landscapes. However, the effects of catchment size and catchment type are complex and interactive. Relatively small forested lakes are more shaded and protected from wind exposure and also receive high amounts of allochthonous organic matter. Thus, small forested lakes are generally more humic, with a shallow mixed layer and reduced light penetration. The high inputs of allochthonous organic matter (produced outside the lake) stimulate heterotrophic communities, such as bacteria, zooplankton, and fish, enhancing whole-lake respiration rates. Hence small forested lakes are more likely to be net heterotrophic, with ER rates exceeding primary production rates in the lake. On the other hand, forested lakes with a low drainage ratio receive relatively fewer nutrients and less organic matter, typically resulting in clear-water lakes with low GPP and ER rates. Another important difference among lakes that influences lake metabolism variability is the residence time of the water in the system, especially among lakes that are intensively managed by humans.
Changes to lake level and flushing rates affect nutrient and organic matter concentrations, organism abundance, and rates of ecological processes such as the photodegradation of colored organic matter, thus affecting the magnitude and variability of metabolic rates. Endorheic lakes or lakes with intermediate hydraulic residence time (HRT) typically have high retention of nutrients and organic matter in the system, which favours growth of primary producers and bacterial degradation of organic matter. Thus, these types of lakes are expected to maintain relatively higher and less variable GPP and ER rates than lakes of the same trophic status with low residence time. On the other hand, lakes with long HRT are expected to have reduced metabolic rates due to lower inputs of nutrients and organic matter to the lake. Finally, lentic systems that have frequent and intense changes in water level and accelerated flushing rates have dynamics closer to those of lotic systems, with usually low GPP and ER rates, due to nutrients, organic matter, and algae being flushed out of the system during intense flushing events.
Temporal variation on a daily scale.
On a daily scale, GPP rates are most affected by the diel cycle of photosynthetically active radiation, while ER is largely affected by changes in water temperature. ER rates are also tied to the quantity and quality of the organic substrate and the relative contributions of autotrophic and heterotrophic respiration, as indicated by studies of the patterns of night-time respiration (e.g. Sadro et al. 2014). For example, bacterioplankton respiration can be higher during the day and in the first hours of the night, due to the higher availability of labile dissolved organic matter produced by phytoplankton. As the sun rises, there is a rapid increase in primary production in the lake, often making it autotrophic (NEP > 0) and reducing the dissolved CO2 produced by carbon mineralization during the night. This behavior continues until NEP reaches a peak, typically around the time of maximum light availability. NEP then tends to fall steadily from the hours of maximum light availability until the next day's sunrise.
Day-to-day differences in incoming light and temperature, due to differences in the weather such as cloud cover and storms, affect rates of primary production and, to a lesser extent, respiration. These weather variations also cause short-term variability in mixed layer depth, which in turn affects nutrient, organic matter, and light availability, as well as vertical and horizontal gas exchanges. Deep mixing reduces light availability but increases nutrient and organic matter availability in the upper layers. The effects of short-term variability in mixed layer depth on gross primary production (GPP) will therefore depend on which factors are limiting in each lake at a given period: a deeper mixed layer could either increase or decrease GPP rates depending on the balance between nutrient and light limitation of photosynthesis.
Responses in metabolic rates are as dynamic as the physical and chemical processes occurring in the lake, but changes in algal biomass are less variable, involving growth and loss over longer periods. High light and nutrient availability are associated with the formation of algal blooms in lakes; during these blooms GPP rates are very high, ER rates usually increase almost as much as GPP rates, and the ratio of GPP to ER is close to 1. Right after the bloom, GPP rates start to decrease but ER rates remain high due to the high availability of labile organic matter, which can lead to a fast depletion of dissolved oxygen in the water column, resulting in fish kills.
Temporal variation on an annual scale.
Seasonal variations in metabolism can be driven by seasonal variations in temperature, ice-cover, rainfall, mixing and stratification dynamics, and community succession (e.g. phytoplankton control by zooplankton). Seasonal variations in lake metabolism will depend on how seasons alter the inputs of nutrients and organic matter, and light availability, and on which factors are limiting metabolic rates in each lake.
Light is a primary driver of lake metabolism, thus seasonality in light levels is an important driver of seasonal changes in lake metabolic rates. GPP rates are therefore expected to be higher during seasons such as spring and summer, in which light levels are higher and days are longer. This is especially pronounced for lakes with light-limited GPP, for example, more turbid or stained lakes. Seasonality in light levels also affects ER rates. Ecosystem respiration rates are usually coupled with GPP rates, thus seasons with higher GPP will also show higher ER rates associated with the increased organic matter produced within the lake. Moreover, during seasons with higher light levels photodegradation of organic matter is more pronounced, which stimulates microbial degradation, enhancing heterotrophic respiration rates. Most of the lakes in the world freeze during the winter, a low-irradiance period in which ice and snow cover limit light penetration into the water column. Light limitation occurs mainly through snow cover rather than ice, which makes primary production strongly sensitive to snow cover in those lakes. In addition to light limitation, low temperatures under ice also diminish metabolic rates, but not enough to halt metabolic processes entirely. Therefore, the metabolic balance is usually negative during the majority of the ice season, leading to dissolved oxygen depletion. Shallow lakes in arid climates have no or very little snow cover during the winter; thus, primary production sustained under ice can be enough to prevent dissolved oxygen depletion, as reported by Song and others in a Mongolian lake. Despite the high proportion of the world's lakes that freeze during the winter, few studies have been conducted on lake metabolism under ice, mostly due to technical sampling difficulties.
Lakes that are closer to the equator experience less seasonality in light intensity and daylight hours than lakes at higher latitudes (temperate and polar zones). Thus, lakes at higher latitudes are more likely to experience light limitation of primary production during low-light seasons (winter and autumn). Seasonal differences in temperature are also less important in the tropics than at higher latitudes, so the direct effect of seasonal temperature variations on metabolic rates is more important in higher-latitude lakes than in tropical lakes. In turn, tropical and subtropical lakes are more likely to have seasonal variations following stratification and mixing dynamics and rainfall regimes (wet and dry seasons) than following the four astronomical or meteorological seasons (spring, summer, autumn, and winter). Seasonal changes in temperature and rainfall lead to seasonal changes in water column stability. During periods of low water column stability, a deeper mixed layer (total or partial mixing of the water column, depending on the lake) increases the inputs of nutrients and organic matter from deeper layers and through sediment resuspension, which reduces light availability. Conversely, during periods of strong water column stability, internal loadings of nutrients, organic matter, and the associated bacteria to the water column are suppressed, while algal loss due to sinking is enhanced. Moreover, light availability during this period is higher, due to photobleaching, lower resuspension of sediments, and a shallower mixing depth, which expose phytoplankton to a more light-rich environment. Higher ER rates during low water column stability periods, as a consequence of higher organic matter availability and higher bacterial biomass associated with this organic matter, have been reported for many lakes around the world.
However, primary production has responded differently to these seasonal changes in different lakes. As noted above, the response of metabolic rates to these changes depends on the factors limiting primary production in each lake. During periods of low water column stability, upwelling of nutrient-rich waters can result in higher pelagic GPP rates, as observed in some tropical lakes. Conversely, during such periods, GPP rates can be limited by low light availability, as observed in some temperate and subtropical lakes. The net metabolic balance is usually more negative during de-stratified periods, even in lakes in which the well-mixed season is the most productive period. Despite the high GPP in these systems, ER rates are also enhanced by the increased availability of organic matter stocks from sediments and deeper waters.
Seasonal differences in rainfall also affect metabolic rates. Increased precipitation promotes the entry of organic matter and nutrients into lakes, which can stimulate ER rates and either stimulate or inhibit GPP rates, depending on the balance between increased nutrients and lower light availability. On the other hand, lower precipitation also affects limnological conditions by reducing the water level, thereby increasing the concentrations of nutrients and chlorophyll and changing the thermal stability of aquatic environments. These changes can also enhance ER and GPP rates. Thus, the degree to which metabolic rates respond to seasonal changes in rainfall depends on lake morphometry, catchment properties, and the intensity and duration of the rainfall events. Lakes frequently exposed to strong storms, such as those in the typhoon region of the Northwest Pacific Ocean, receive intense rainfall events that can last for a few days. During these storm seasons, a reduction in metabolic rates is expected due to reduced sunlight and the flushing of water and organisms. This reduction is expected to be more pronounced in GPP than in ER rates, resulting in a more heterotrophic NEP (GPP < ER). In a subtropical lake in Taiwan, for example, a decoupling of GPP and ER rates was observed during typhoon seasons, following a shift in the organic matter pool from autochthonous (organic matter produced within the lake) to allochthonous (organic matter produced outside the lake). This suggests that ER rates were more resistant to the typhoon disturbance than GPP rates.
Interannual variations.
Interannual variability in metabolic rates can be driven by extensive changes in the catchment or by directional and cyclical climate change and climate disturbances, such as the events associated with the El Niño–Southern Oscillation (ENSO). These changes in the catchment, air temperature, and precipitation between years affect metabolic rates by altering nutrient and organic matter inputs to the lake, light attenuation, and mixing dynamics, and through the direct temperature dependence of metabolic processes.
Increased precipitation raises the external loading of organic matter, nutrients, and sediments into lakes, and the discharge events it promotes can also alter mixing dynamics and cause physical flushing of organisms. Conversely, lower precipitation combined with high evaporation rates reduces the water level, thereby increasing the concentrations of nutrients and chlorophyll and changing the thermal stability of aquatic environments. During warmer years, stronger water column stability limits the inputs of nutrients and organic matter to the photic zone. In contrast, during colder years, a less stable water column enhances resuspension of the sediments and the inputs of nutrients and organic matter from deeper waters, which lowers light availability while enhancing nutrient and organic matter availability. Thus, the effects of year-to-year differences in precipitation and temperature on metabolic rates depend on the intensity and duration of these changes, and also on which factors limit GPP and ER in each water body.
In lakes where GPP and ER are limited by nutrients and organic matter, wetter years can enhance GPP and ER rates through higher inputs of nutrients and organic matter from the landscape. The outcome depends on whether the terrestrial inputs are promptly available to the primary producers and heterotrophic communities, or whether they enter the lake through deeper waters, where metabolic processes are very slow or absent; in the latter case, the inputs become available only at the next water column mixing event. Thus, increases in metabolic rates due to rainfall also depend on the stratification and mixing dynamics, hydrology, and morphometry of the lake. On the other hand, drier years can also show enhanced GPP and ER rates if they are accompanied by lower water levels, which lead to higher concentrations of nutrients and organic matter. A lower water level is associated with a less stable water column and closer proximity to the sediments, and thus with increased inputs of nutrients and organic matter from deeper waters. In addition, a reduction in water level through evaporation has a concentrating effect. In turn, during warmer years the water column is more stable and the mixed layer is shallower, reducing the internal inputs of nutrients and organic matter to the mixed layer. Metabolic rates, in this scenario, will be lower in the upper mixed layer. In lakes with a photic zone extending deeper than the mixed layer, metabolic rates will be higher at intermediate depths, coinciding with the deep chlorophyll maxima.
In lakes where primary production is limited mostly by light availability, increased rainfall can reduce light availability through higher loads of dissolved organic matter and total suspended matter. Consequently, increased rainfall would be associated with lower GPP, which in turn reduces the respiration associated with autochthonous production, leading to a decoupling of GPP and ER rates. In addition, the greater availability of allochthonous organic matter during wet years can raise ER, driving the metabolic balance negative (NEP < 0).
Changes in annual precipitation can also affect the spatial variability of metabolic rates within lakes. Williamson and collaborators, for example, found that in a hyper-eutrophic reservoir in North America, the relative spatial variability in GPP and ER rates was higher in a dry year than in a wet one. This suggests a greater influence of internal processes, such as internal loading, nutrient uptake, sedimentation, and resuspension, on metabolic rates during dry years.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "6CO_2+6H_2O\\xrightarrow[]{light} C_6H_{12}O_6 + 6O_2 "
},
{
"math_id": 1,
"text": "C_6H_{12}O_6 + 6O_2\\xrightarrow[]{} 6CO_2+6H_2O "
},
{
"math_id": 2,
"text": "(R_h + R_a)"
},
{
"math_id": 3,
"text": "R_h "
},
{
"math_id": 4,
"text": "R_a "
},
{
"math_id": 5,
"text": "NEP_{OC} = E_{OC} + S_{OC} - I_{OC} "
},
{
"math_id": 6,
"text": "NEP_{IC} = I_{IC} - S_{IC} - E_{IC} "
}
] |
https://en.wikipedia.org/wiki?curid=59168336
|
591703
|
Szemerédi's theorem
|
Dense subsets of the integers contain arbitrarily long arithmetic progressions
In arithmetic combinatorics, Szemerédi's theorem is a result concerning arithmetic progressions in subsets of the integers. In 1936, Erdős and Turán conjectured that every set of integers "A" with positive natural density contains a "k"-term arithmetic progression for every "k". Endre Szemerédi proved the conjecture in 1975.
Statement.
A subset "A" of the natural numbers is said to have positive upper density if
formula_0
Szemerédi's theorem asserts that a subset of the natural numbers with positive upper density contains an arithmetic progression of length "k" for all positive integers "k".
An often-used equivalent finitary version of the theorem states that for every positive integer "k" and real number formula_1, there exists a positive integer
formula_2
such that every subset of {1, 2, ..., "N"} of size at least "δN" contains an arithmetic progression of length "k".
Another formulation uses the function "r""k"("N"), the size of the largest subset of {1, 2, ..., "N"} without an arithmetic progression of length "k". Szemerédi's theorem is equivalent to the asymptotic bound
formula_3
That is, "r""k"("N") grows less than linearly with "N".
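The finitary statement can be checked directly for very small cases. Below is a minimal brute-force sketch (function names are illustrative) that computes "r""k"("N") by testing every subset of {1, 2, ..., "N"}, so it is feasible only for very small "N":

```python
from itertools import combinations

def has_ap(subset, k):
    """Return True if the subset contains a k-term arithmetic progression."""
    s = set(subset)
    top = max(s)
    for a in s:
        # common differences d for which a, a+d, ..., a+(k-1)d can fit below top
        for d in range(1, (top - a) // (k - 1) + 1):
            if all(a + i * d in s for i in range(k)):
                return True
    return False

def r(k, n):
    """Size of the largest subset of {1, ..., n} with no k-term AP (brute force)."""
    for size in range(n, 0, -1):
        if any(not has_ap(c, k) for c in combinations(range(1, n + 1), size)):
            return size
    return 0
```

For example, `r(3, 5)` is 4, witnessed by {1, 2, 4, 5}; for "k" = 2 the value is always 1, since any two integers form a 2-term progression, matching the trivial cases of the theorem.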
History.
Van der Waerden's theorem, a precursor of Szemerédi's theorem, was proven in 1927.
The cases "k" = 1 and "k" = 2 of Szemerédi's theorem are trivial. The case "k" = 3, known as Roth's theorem, was established in 1953 by Klaus Roth via an adaptation of the Hardy–Littlewood circle method. Endre Szemerédi proved the case "k" = 4 through combinatorics. Using an approach similar to the one he used for the case "k" = 3, Roth gave a second proof for this in 1972.
The general case was settled in 1975, also by Szemerédi, who developed an ingenious and complicated extension of his previous combinatorial argument for "k" = 4 (called "a masterpiece of combinatorial reasoning" by Erdős). Several other proofs are now known, the most important being those by Hillel Furstenberg in 1977, using ergodic theory, and by Timothy Gowers in 2001, using both Fourier analysis and combinatorics while also introducing what is now called the Gowers norm. Terence Tao has called the various proofs of Szemerédi's theorem a "Rosetta stone" for connecting disparate fields of mathematics.
Quantitative bounds.
It is an open problem to determine the exact growth rate of "r""k"("N"). The best known general bounds are
formula_4
where formula_5. The lower bound is due to O'Bryant building on the work of Behrend, Rankin, and Elkin. The upper bound is due to Gowers.
For small "k", there are tighter bounds than the general case. When "k" = 3, Bourgain, Heath-Brown, Szemerédi, Sanders, and Bloom established progressively smaller upper bounds, and Bloom and Sisask then proved the first bound that broke the so-called "logarithmic barrier". The current best bounds are
formula_6, for some constant formula_7,
respectively due to O'Bryant, and Bloom and Sisask (the latter built upon the breakthrough result of Kelley and Meka, who obtained the same upper bound, with "1/9" replaced by "1/12").
For "k" = 4, Green and Tao proved that
formula_8
In preprints, Leng, Sah, and Sawhney proved the case "k" = 5 in 2023 and the cases "k" ≥ 5 in 2024, showing that:
formula_9
Extensions and generalizations.
A multidimensional generalization of Szemerédi's theorem was first proven by Hillel Furstenberg and Yitzhak Katznelson using ergodic theory. Timothy Gowers, Vojtěch Rödl and Jozef Skokan with Brendan Nagle, Rödl, and Mathias Schacht, and Terence Tao provided combinatorial proofs.
Alexander Leibman and Vitaly Bergelson generalized Szemerédi's theorem to polynomial progressions: If formula_10 is a set with positive upper density and formula_11 are integer-valued polynomials such that formula_12, then there are infinitely many formula_13 such that formula_14 for all formula_15. Leibman and Bergelson's result also holds in a multidimensional setting.
The finitary version of Szemerédi's theorem can be generalized to finite additive groups including vector spaces over finite fields. The finite field analog can be used as a model for understanding the theorem in the natural numbers. The problem of obtaining bounds in the k=3 case of Szemerédi's theorem in the vector space formula_16 is known as the cap set problem.
The Green–Tao theorem asserts the prime numbers contain arbitrarily long arithmetic progressions. It is not implied by Szemerédi's theorem because the primes have density 0 in the natural numbers. As part of their proof, Ben Green and Tao introduced a "relative" Szemerédi theorem which applies to subsets of the integers (even those with 0 density) satisfying certain pseudorandomness conditions. A more general relative Szemerédi theorem has since been given by David Conlon, Jacob Fox, and Yufei Zhao.
The Erdős conjecture on arithmetic progressions would imply both Szemerédi's theorem and the Green–Tao theorem.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
External links.
<templatestyles src="Div col/styles.css"/>
|
[
{
"math_id": 0,
"text": "\\limsup_{n \\to \\infty}\\frac{|A\\cap \\{1, 2, 3, \\dotsc, n\\}|}{n} > 0."
},
{
"math_id": 1,
"text": "\\delta \\in (0, 1]"
},
{
"math_id": 2,
"text": "N = N(k,\\delta)"
},
{
"math_id": 3,
"text": "r_k(N) = o(N). "
},
{
"math_id": 4,
"text": "CN\\exp\\left(-n2^{(n - 1)/2}\\sqrt[n]{\\log N} + \\frac{1}{2n}\\log \\log N\\right) \\leq r_k(N) \\leq \\frac{N}{(\\log \\log N)^{2^{-2^{k + 9}}}},"
},
{
"math_id": 5,
"text": "n = \\lceil \\log k\\rceil"
},
{
"math_id": 6,
"text": " N 2^{-\\sqrt{8\\log N}} \\leq r_3(N) \\leq N e^{-c(\\log N)^{1/9}} "
},
{
"math_id": 7,
"text": "c>0 "
},
{
"math_id": 8,
"text": "r_4(N) \\leq C\\frac{N}{(\\log N)^c}"
},
{
"math_id": 9,
"text": "r_k(N) \\leq CN\\exp(-(\\log\\log N)^c)"
},
{
"math_id": 10,
"text": "A \\subset \\mathbb{N}"
},
{
"math_id": 11,
"text": "p_1(n),p_2(n),\\dotsc,p_k(n)"
},
{
"math_id": 12,
"text": "p_i(0) = 0"
},
{
"math_id": 13,
"text": "u, n \\in \\mathbb{Z}"
},
{
"math_id": 14,
"text": "u + p_i(n) \\in A"
},
{
"math_id": 15,
"text": "1 \\leq i \\leq k"
},
{
"math_id": 16,
"text": "\\mathbb{F}_3^n"
}
] |
https://en.wikipedia.org/wiki?curid=591703
|
5917746
|
Soft-body dynamics
|
Computer graphics simulation of deformable objects
Soft-body dynamics is a field of computer graphics that focuses on visually realistic physical simulations of the motion and properties of deformable objects (or "soft bodies"). The applications are mostly in video games and films. Unlike in simulation of rigid bodies, the shape of soft bodies can change, meaning that the relative distance of two points on the object is not fixed. While the relative distances of points are not fixed, the body is expected to retain its shape to some degree (unlike a fluid). The scope of soft body dynamics is quite broad, including simulation of soft organic materials such as muscle, fat, hair and vegetation, as well as other deformable materials such as clothing and fabric. Generally, these methods only provide visually plausible emulations rather than accurate scientific/engineering simulations, though there is some crossover with scientific methods, particularly in the case of finite element simulations. Several physics engines currently provide software for soft-body simulation.
Deformable solids.
The simulation of volumetric solid soft bodies can be realised by using a variety of approaches.
Spring/mass models.
In this approach, the body is modeled as a set of point masses (nodes) connected by ideal weightless elastic springs obeying some variant of Hooke's law. The nodes may either derive from the edges of a two-dimensional polygonal mesh representation of the surface of the object, or from a three-dimensional network of nodes and edges modeling the internal structure of the object (or even a one-dimensional system of links, if for example a rope or hair strand is being simulated). Additional springs between nodes can be added, or the force law of the springs modified, to achieve desired effects. Applying Newton's second law to the point masses including the forces applied by the springs and any external forces (due to contact, gravity, air resistance, wind, and so on) gives a system of differential equations for the motion of the nodes, which is solved by standard numerical schemes for solving ODEs. Rendering of a three-dimensional mass-spring lattice is often done using free-form deformation, in which the rendered mesh is embedded in the lattice and distorted to conform to the shape of the lattice as it evolves. Setting all point masses to zero yields the stretched grid method, which is used to solve engineering problems involving elastic grid behavior. These are sometimes known as mass-spring-damper models. In pressurized soft bodies, the spring-mass model is combined with a pressure force based on the ideal gas law.
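As a concrete illustration of the spring/mass approach, the sketch below (all parameter values are arbitrary assumptions) integrates a hanging chain of point masses connected by Hooke's-law springs, with viscous damping, using semi-implicit (symplectic) Euler as the numerical scheme:

```python
def simulate_chain(n_nodes=5, k=100.0, rest=1.0, mass=0.1,
                   damping=2.0, g=9.8, dt=0.001, steps=20000):
    """Semi-implicit Euler integration of a vertical hanging spring chain.
    Node 0 is pinned; y[i] is the vertical position of node i (down = negative)."""
    y = [-rest * i for i in range(n_nodes)]       # start at unstretched spacing
    v = [0.0] * n_nodes
    for _ in range(steps):
        f = [0.0] * n_nodes
        for i in range(n_nodes - 1):              # spring between nodes i and i+1
            stretch = (y[i] - y[i + 1]) - rest    # positive when extended
            fs = k * stretch                      # Hooke's law
            f[i] -= fs                            # pulls node i down toward i+1
            f[i + 1] += fs                        # pulls node i+1 up toward i
        for i in range(1, n_nodes):               # node 0 stays pinned
            f[i] += -mass * g - damping * v[i]    # gravity plus viscous damping
            v[i] += dt * f[i] / mass              # update velocity first...
            y[i] += dt * v[i]                     # ...then position (symplectic)
    return y, v

y, v = simulate_chain()
```

With damping, the chain settles to the static solution in which each spring's stretch exactly balances the weight of the nodes hanging below it.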
Finite element simulation.
This is a more physically accurate approach, which uses the widely used finite element method to solve the partial differential equations which govern the dynamics of an elastic material. The body is modeled as a three-dimensional elastic continuum by breaking it into a large number of solid elements which fit together, and solving for the stresses and strains in each element using a model of the material. The elements are typically tetrahedral, the nodes being the vertices of the tetrahedra (relatively simple methods exist to "tetrahedralize" a three dimensional region bounded by a polygon mesh into tetrahedra, similarly to how a two-dimensional polygon may be "triangulated" into triangles). The strain (which measures the local deformation of the points of the material from their rest state) is quantified by the strain tensor formula_0. The stress (which measures the local forces per-unit area in all directions acting on the material) is quantified by the Cauchy stress tensor formula_1. Given the current local strain, the local stress can be computed via the generalized form of Hooke's law:
formula_2
where formula_3 is the elasticity tensor, which encodes the material properties (parametrized in linear elasticity for an isotropic material by the Poisson ratio and Young's modulus).
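For an isotropic material, the elasticity tensor reduces to the two Lamé parameters, and the generalized Hooke's law becomes σ = λ tr(ε)I + 2με. A minimal sketch (the steel-like constants in the test values are illustrative assumptions):

```python
def lame_parameters(E, nu):
    """Lamé parameters from Young's modulus E and Poisson ratio nu."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))
    mu = E / (2 * (1 + nu))
    return lam, mu

def isotropic_stress(strain, E, nu):
    """Cauchy stress from small strain for an isotropic material:
    sigma = lambda * tr(eps) * I + 2 * mu * eps  (3x3 nested lists)."""
    lam, mu = lame_parameters(E, nu)
    tr = strain[0][0] + strain[1][1] + strain[2][2]
    return [[lam * tr * (1.0 if i == j else 0.0) + 2 * mu * strain[i][j]
             for j in range(3)] for i in range(3)]
```

A traceless shear strain produces a purely off-diagonal stress 2με, while a purely volumetric strain "e"·I produces the hydrostatic diagonal stress (3λ + 2μ)"e".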
The equation of motion of the element nodes is obtained by integrating the stress field over each element and relating this, via Newton's second law, to the node accelerations.
Pixelux (developers of the Digital Molecular Matter system) use a finite-element-based approach for their soft bodies, using a tetrahedral mesh and converting the stress tensor directly into node forces. Rendering is done via a form of free-form deformation.
Energy minimization methods.
This approach is motivated by variational principles and the physics of surfaces, which dictate that a constrained surface will
assume the shape which minimizes the total energy of deformation (analogous to a soap bubble). Expressing the energy of a surface in terms of its local deformation (the energy is due to a combination of stretching and bending), the local force on the surface is given by differentiating the energy with respect to position, yielding an equation of motion which can be solved in the standard ways.
Shape matching.
In this scheme, penalty forces or constraints are applied to the model to drive it towards its original shape (i.e. the material behaves as if it has shape memory). To conserve momentum the rotation of the body must be estimated properly, for example via polar decomposition. To approximate finite element simulation, shape matching can be applied to three dimensional lattices and multiple shape matching constraints blended.
Rigid-body based deformation.
Deformation can also be handled by a traditional rigid-body physics engine, modeling the soft-body motion using a network of multiple rigid bodies connected by constraints, and using (for example) matrix-palette skinning to generate a surface mesh for rendering. This is the approach used for deformable objects in Havok Destruction.
Cloth simulation.
In the context of computer graphics, "cloth simulation" refers to the simulation of soft bodies in the form of two dimensional continuum elastic membranes, that is, for this purpose, the actual structure of real cloth on the yarn level can be ignored (though modeling cloth on the yarn level has been tried). Via rendering effects, this can produce a visually plausible emulation of textiles and clothing, used in a variety of contexts in video games, animation, and film. It can also be used to simulate two dimensional sheets of materials other than textiles, such as deformable metal panels or vegetation. In video games it is often used to enhance the realism of clothed animated characters.
Cloth simulators are generally based on mass-spring models, but a distinction must be made between force-based and position-based solvers.
Force-based cloth.
The mass-spring model (obtained from a polygonal mesh representation of the cloth) determines the internal spring forces acting on the nodes at each timestep (in combination with gravity and applied forces). Newton's second law gives equations of motion which can be solved via standard ODE solvers. To create high resolution cloth with a realistic stiffness is not possible however with simple explicit solvers (such as forward Euler integration), unless the timestep is made too small for interactive applications (since as is well known, explicit integrators are numerically unstable for sufficiently stiff systems). Therefore, implicit solvers must be used, requiring solution of a large sparse matrix system (via e.g. the conjugate gradient method), which itself may also be difficult to achieve at interactive frame rates. An alternative is to use an explicit method with low stiffness, with "ad hoc" methods to avoid instability and excessive stretching (e.g. strain limiting corrections).
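The stability problem described above can be demonstrated on a single stiff spring: at the same timestep, forward Euler gains energy every step and blows up, while the semi-implicit (symplectic) variant stays bounded. A toy sketch (stiffness and timestep values are illustrative):

```python
def oscillator(method, k=10000.0, m=1.0, dt=0.005, steps=200):
    """Integrate x'' = -(k/m) x from x = 1, v = 0 and return the final energy.
    Here omega * dt = 0.5, inside the symplectic Euler stability limit of 2."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        a = -(k / m) * x
        if method == "explicit":   # forward Euler: both updates use the old state
            x, v = x + dt * v, v + dt * a
        else:                      # symplectic Euler: update v first, then x with new v
            v = v + dt * a
            x = x + dt * v
    return 0.5 * m * v * v + 0.5 * k * x * x
```

The initial energy is ½"k"; after 200 steps the forward-Euler energy has grown by many orders of magnitude, while the symplectic energy merely oscillates near its starting value.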
Position-based dynamics.
To avoid needing to do an expensive implicit solution of a system of ODEs, many real-time cloth simulators (notably PhysX, Havok Cloth, and Maya nCloth) use "position based dynamics" (PBD), an approach based on constraint relaxation. The mass-spring model is converted into a system of constraints, which demands that the distance between the connected nodes be equal to the initial distance. This system is solved sequentially and iteratively, by directly moving nodes to satisfy each constraint, until sufficiently stiff cloth is obtained. This is similar to a Gauss-Seidel solution of the implicit matrix system for the mass-spring model. Care must be taken though to solve the constraints in the same sequence each timestep, to avoid spurious oscillations, and to make sure that the constraints do not violate linear and angular momentum conservation. Additional position constraints can be applied, for example to keep the nodes within desired regions of space (sufficiently close to an animated model for example), or to maintain the body's overall shape via shape matching.
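A minimal sketch of the constraint projection at the heart of position-based dynamics (names and the two-particle setup are illustrative): each distance constraint is satisfied by moving the pair of nodes along their connecting line, weighted by inverse mass, so a pinned node (inverse mass zero) never moves:

```python
import math

def solve_distance_constraints(pos, inv_mass, constraints, iterations=50):
    """Gauss-Seidel-style PBD projection: repeatedly move particle pairs so the
    distance between them approaches the rest length of each constraint."""
    for _ in range(iterations):
        for i, j, rest in constraints:
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            d = math.hypot(dx, dy)
            w = inv_mass[i] + inv_mass[j]
            if d == 0.0 or w == 0.0:
                continue
            corr = (d - rest) / (d * w)          # scaled constraint violation
            pos[i][0] += inv_mass[i] * corr * dx  # move i toward j if too long
            pos[i][1] += inv_mass[i] * corr * dy
            pos[j][0] -= inv_mass[j] * corr * dx  # and j toward i
            pos[j][1] -= inv_mass[j] * corr * dy
    return pos

# two-particle example: one node pinned at the origin, the free node pulled in
pos = [[0.0, 0.0], [2.0, 0.0]]
solve_distance_constraints(pos, [0.0, 1.0], [(0, 1, 1.0)])
```

For the pinned pair above, a single pass already places the free particle exactly at the rest length; in a full cloth mesh, many constraints share nodes, so several sweeps are needed, and the constraint order should be kept fixed across timesteps as noted above.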
Collision detection for deformable objects.
Realistic interaction of simulated soft objects with their environment may be important for obtaining visually realistic results. Cloth self-intersection is important in some applications for acceptably realistic simulated garments. This is challenging to achieve at interactive frame rates, particularly in the case of detecting and resolving self collisions and mutual collisions between two or more deformable objects.
Collision detection may be "discrete/a posteriori" (meaning objects are advanced in time through a pre-determined interval, and then any penetrations detected and resolved), or "continuous/a priori" (objects are advanced only until a collision occurs, and the collision is handled before proceeding). The former is easier to implement and faster, but leads to failure to detect collisions (or detection of spurious collisions) if objects move fast enough. Real-time systems generally have to use discrete collision detection, with other "ad hoc" ways to avoid failing to detect collisions.
Detection of collisions between cloth and environmental objects with a well defined "inside" is straightforward since the system can detect unambiguously whether the cloth mesh vertices and faces are intersecting the body and resolve them accordingly. If a well defined "inside" does not exist (e.g. in the case of collision with a mesh which does not form a closed boundary), an "inside" may be constructed via extrusion. Mutual- or self-collisions of soft bodies defined by tetrahedra are straightforward, since they reduce to detection of collisions between solid tetrahedra.
However, detection of collisions between two polygonal cloths (or collision of a cloth with itself) via discrete collision detection is much more difficult, since there is no unambiguous way to locally detect after a timestep whether a cloth node which has penetrated is on the "wrong" side or not. Solutions involve either using the history of the cloth motion to determine if an intersection event has occurred, or doing a global analysis of the cloth state to detect and resolve self-intersections. Pixar has presented a method which uses a global topological analysis of mesh intersections in configuration space to detect and resolve self-interpenetration of cloth. Currently, this is generally too computationally expensive for real-time cloth systems.
To do collision detection efficiently, primitives which are certainly not colliding must be identified as soon as possible and discarded from consideration to avoid wasting time.
To do this, some form of spatial subdivision scheme, such as a uniform grid, an octree, or a bounding volume hierarchy, is essential to avoid a brute force test of formula_4 primitive collisions.
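One common subdivision scheme, a uniform grid (spatial hashing), can be sketched as follows (a 2D toy version with illustrative names). Each point is binned into a cell whose side equals the query radius, so any pair within that radius must lie in the same or an adjacent cell:

```python
from collections import defaultdict

def close_pairs(points, radius):
    """Find index pairs of 2D points within `radius` using a uniform grid,
    avoiding the all-pairs O(n^2) distance test."""
    cell = radius                                 # cell side = query radius
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(x // cell), int(y // cell))].append(idx)
    pairs = set()
    for (cx, cy), members in grid.items():
        # candidates live in this cell or one of its 8 neighbours
        cand = []
        for ox in (-1, 0, 1):
            for oy in (-1, 0, 1):
                cand.extend(grid.get((cx + ox, cy + oy), ()))
        for i in members:
            for j in cand:
                if i < j:
                    xi, yi = points[i]
                    xj, yj = points[j]
                    if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                        pairs.add((i, j))
    return pairs
```

For roughly uniformly distributed primitives this reduces the expected cost from formula_4 to near-linear, since distant cells are never compared.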
Other applications.
Other effects which may be simulated via the methods of soft-body dynamics are:
Simulating fluids in the context of computer graphics would not normally be considered soft-body dynamics, which is usually restricted to mean simulation of materials which have a tendency to retain their shape and form. In contrast, a fluid assumes the shape of whatever vessel contains it, as the particles are bound together by relatively weak forces.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\boldsymbol{\\epsilon}"
},
{
"math_id": 1,
"text": "\\boldsymbol{\\sigma}"
},
{
"math_id": 2,
"text": " \n\\boldsymbol{\\sigma} = \\mathsf{C} \\boldsymbol{\\varepsilon} \\,\n"
},
{
"math_id": 3,
"text": "\\mathsf{C}"
},
{
"math_id": 4,
"text": "O[n^2]"
}
] |
https://en.wikipedia.org/wiki?curid=5917746
|
5918
|
Continuum mechanics
|
Branch of physics which studies the behavior of materials modeled as continuous media
Continuum mechanics is a branch of mechanics that deals with the deformation of and transmission of forces through materials modeled as a continuous medium (also called a continuum) rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such models in the 19th century.
Continuum mechanics deals with "deformable bodies", as opposed to rigid bodies.
A continuum model assumes that the substance of the object completely fills the space it occupies. While ignoring the fact that matter is made of atoms, this provides a sufficiently accurate description of matter on length scales much greater than that of inter-atomic distances. The concept of a continuous medium allows for intuitive analysis of bulk matter by using differential equations that describe the behavior of such matter according to physical laws, such as mass conservation, momentum conservation, and energy conservation. Information about the specific material is expressed in constitutive relationships.
Continuum mechanics treats the physical properties of solids and fluids independently of any particular coordinate system in which they are observed. These properties are represented by tensors, which are mathematical objects with the salient property of being independent of coordinate systems. This permits definition of physical properties at any point in the continuum, according to mathematically convenient continuous functions. The theories of elasticity, plasticity and fluid mechanics are based on the concepts of continuum mechanics.
Concept of a continuum.
The concept of a continuum underlies the mathematical framework for studying large-scale forces and deformations in materials. Although materials are composed of discrete atoms and molecules, separated by empty space or microscopic cracks and crystallographic defects, physical phenomena can often be modeled by considering a substance distributed throughout some region of space. A continuum is a body that can be continually sub-divided into infinitesimal elements with local material properties defined at any particular point. Properties of the bulk material can therefore be described by continuous functions, and their evolution can be studied using the mathematics of calculus.
Apart from the assumption of continuity, two other independent assumptions are often employed in the study of continuum mechanics. These are homogeneity (assumption of identical properties at all locations) and isotropy (assumption of directionally invariant vector properties). If these auxiliary assumptions are not globally applicable, the material may be segregated into sections where they are applicable in order to simplify the analysis. For more complex cases, one or both of these assumptions can be dropped. In these cases, computational methods are often used to solve the differential equations describing the evolution of material properties.
Major areas.
An additional area of continuum mechanics comprises elastomeric foams, which exhibit a curious hyperbolic stress-strain relationship. The elastomer is a true continuum, but a homogeneous distribution of voids gives it unusual properties.
Formulation of models.
Continuum mechanics models begin by assigning a region in three-dimensional Euclidean space to the material body formula_0 being modeled. The points within this region are called particles or material points. Different "configurations" or states of the body correspond to different regions in Euclidean space. The region corresponding to the body's configuration at time formula_1 is labeled formula_2.
A particular particle within the body in a particular configuration is characterized by a position vector
formula_3
where formula_4 are the coordinate vectors in some frame of reference chosen for the problem (See figure 1). This vector can be expressed as a function of the particle position formula_5 in some "reference configuration", for example the configuration at the initial time, so that
formula_6
This function needs to have various properties so that the model makes physical sense. formula_7 needs to be continuous in time, globally invertible at all times, and orientation-preserving, so that the body cannot intersect itself or be reflected into its mirror image.
For the mathematical formulation of the model, formula_7 is also assumed to be twice continuously differentiable, so that differential equations describing the motion may be formulated.
Forces in a continuum.
A solid is a deformable body that possesses shear strength, "sc." a solid can support shear forces (forces parallel to the material surface on which they act). Fluids, on the other hand, do not sustain shear forces.
Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces formula_8 and body forces formula_9. Thus, the total force formula_10 applied to a body or to a portion of the body can be expressed as:
formula_11
Surface forces.
"Surface forces" or "contact forces", expressed as force per unit area, can act either on the bounding surface of the body, as a result of mechanical contact with other bodies, or on imaginary internal surfaces that bound portions of the body, as a result of the mechanical interaction between the parts of the body to either side of the surface (Euler-Cauchy's stress principle). When a body is acted upon by external contact forces, internal contact forces are then transmitted from point to point inside the body to balance their action, according to Newton's third law of motion of conservation of linear momentum and angular momentum (for continuous bodies these laws are called the Euler's equations of motion). The internal contact forces are related to the body's deformation through constitutive equations. The internal contact forces may be mathematically described by how they relate to the motion of the body, independent of the body's material makeup.
The distribution of internal contact forces throughout the volume of the body is assumed to be continuous. Therefore, there exists a "contact force density" or "Cauchy traction field" formula_12 that represents this distribution in a particular configuration of the body at a given time formula_13. It is not a vector field because it depends not only on the position formula_14 of a particular material point, but also on the local orientation of the surface element as defined by its normal vector formula_15.
Any differential area formula_16 with normal vector formula_15 of a given internal surface area formula_17, bounding a portion of the body, experiences a contact force formula_18 arising from the contact between both portions of the body on each side of formula_17, and it is given by
formula_19
where formula_20 is the "surface traction", also called "stress vector", "traction", or "traction vector". The stress vector is a frame-indifferent vector (see Euler-Cauchy's stress principle).
The total contact force on the particular internal surface formula_17 is then expressed as the sum (surface integral) of the contact forces on all differential surfaces formula_16:
formula_21
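As a numerical sketch of this relation (using the fact, made precise by the Cauchy stress tensor introduced later, that the stress vector depends linearly on the surface normal), the contact force on a flat internal surface under a hypothetical uniform stress state can be computed as follows; all numbers are illustrative:

```python
import numpy as np

# Hypothetical uniform Cauchy stress state (Pa); symmetric, as required
# by the balance of angular momentum.
sigma = np.array([[200.0,  50.0,   0.0],
                  [ 50.0, 100.0,   0.0],
                  [  0.0,   0.0, -30.0]])

# Unit normal vector of the internal surface element.
n = np.array([1.0, 0.0, 0.0])

# Stress vector (traction) on the element: T^(n) = sigma . n
traction = sigma @ n          # = (200, 50, 0)

# Total contact force on a flat surface of area A with uniform traction;
# for a curved surface or nonuniform stress this becomes a surface integral.
A = 2.0                       # m^2, illustrative
F_C = traction * A            # = (400, 100, 0)
```

For a nonuniform stress field the last line would be replaced by a numerical quadrature of the traction over the surface.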
In continuum mechanics a body is considered stress-free if the only forces present are those inter-atomic forces (ionic, metallic, and van der Waals forces) required to hold the body together and to keep its shape in the absence of all external influences, including gravitational attraction. Stresses generated during manufacture of the body to a specific configuration are also excluded when considering stresses in a body. Therefore, the stresses considered in continuum mechanics are only those produced by deformation of the body, that is, only relative changes in stress are considered, not the absolute values of stress.
Body forces.
"Body forces" are forces originating from sources outside of the body that act on the volume (or mass) of the body. Saying that body forces are due to outside sources implies that the interaction between different parts of the body (internal forces) is manifested through the contact forces alone. These forces arise from the presence of the body in force fields, "e.g." gravitational field (gravitational forces) or electromagnetic field (electromagnetic forces), or from inertial forces when bodies are in motion. As the mass of a continuous body is assumed to be continuously distributed, any force originating from the mass is also continuously distributed. Thus, body forces are specified by vector fields which are assumed to be continuous over the entire volume of the body, "i.e." acting on every point in it. Body forces are represented by a body force density formula_22 (per unit of mass), which is a frame-indifferent vector field.
In the case of gravitational forces, the intensity of the force depends on, or is proportional to, the mass density formula_23 of the material, and it is specified in terms of force per unit mass (formula_24) or per unit volume (formula_25). These two specifications are related through the material density by the equation formula_26. Similarly, the intensity of electromagnetic forces depends upon the strength (electric charge) of the electromagnetic field.
The total body force applied to a continuous body is expressed as
formula_27
Body forces and contact forces acting on the body lead to corresponding moments of force (torques) relative to a given point. Thus, the total applied torque formula_28 about the origin is given by
formula_29
In certain situations, not commonly considered in the analysis of the mechanical behavior of materials, it becomes necessary to include two other types of forces: these are "couple stresses" (surface couples, contact torques) and "body moments". Couple stresses are moments per unit area applied on a surface. Body moments, or body couples, are moments per unit volume or per unit mass applied to the volume of the body. Both are important in the analysis of stress for a polarized dielectric solid under the action of an electric field, materials where the molecular structure is taken into consideration ("e.g." bones), solids under the action of an external magnetic field, and the dislocation theory of metals.
Materials that exhibit body couples and couple stresses in addition to moments produced exclusively by forces are called "polar materials". "Non-polar materials" are then those materials with only moments of forces. In the classical branches of continuum mechanics the development of the theory of stresses is based on non-polar materials.
Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) in the body can be given by
formula_30
formula_31
Kinematics: motion and deformation.
A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration formula_32 to a current or deformed configuration formula_2 (Figure 2).
The motion of a continuum body is a continuous time sequence of displacements. Thus, the material body will occupy different configurations at different times so that a particle occupies a series of points in space which describe a path line.
There is continuity during motion or deformation of a continuum body in the sense that material points forming a closed curve at any instant always form a closed curve at any subsequent time, and material points forming a closed surface at any instant always form a closed surface at any subsequent time, with the matter enclosed by that surface remaining within it.
It is convenient to identify a reference configuration or initial condition from which all subsequent configurations are referenced. The reference configuration need not be one that the body will ever occupy. Often, the configuration at formula_33 is considered the reference configuration, formula_34. The components formula_35 of the position vector formula_5 of a particle, taken with respect to the reference configuration, are called the material or reference coordinates.
When analyzing the motion or deformation of solids, or the flow of fluids, it is necessary to describe the sequence or evolution of configurations throughout time. One description for motion is made in terms of the material or referential coordinates, called material description or Lagrangian description.
Lagrangian description.
In the Lagrangian description the position and physical properties of the particles are described in terms of the material or referential coordinates and time. In this case the reference configuration is the configuration at formula_33. An observer standing in the frame of reference observes the changes in the position and physical properties as the material body moves in space as time progresses. The results obtained are independent of the choice of initial time and reference configuration, formula_32. This description is normally used in solid mechanics.
In the Lagrangian description, the motion of a continuum body is expressed by the mapping function formula_36 (Figure 2),
formula_37
which is a mapping of the initial configuration formula_32 onto the current configuration formula_2, giving a geometrical correspondence between them, i.e. giving the position vector formula_38 that a particle formula_39, with a position vector formula_5 in the undeformed or reference configuration formula_32, will occupy in the current or deformed configuration formula_2 at time formula_1. The components formula_40 are called the spatial coordinates.
Physical and kinematic properties formula_41, e.g. thermodynamic properties and flow velocity, which describe or characterize features of the material body, are expressed as continuous functions of position and time, i.e. formula_42.
The material derivative of any property formula_41 of a continuum, which may be a scalar, vector, or tensor, is the time rate of change of that property for a specific group of particles of the moving continuum body. The material derivative is also known as the "substantial derivative", or "comoving derivative", or "convective derivative". It can be thought of as the rate at which the property changes when measured by an observer traveling with that group of particles.
In the Lagrangian description, the material derivative of formula_41 is simply the partial derivative with respect to time, and the position vector formula_5 is held constant as it does not change with time. Thus, we have
formula_43
The instantaneous position formula_14 is a property of a particle, and its material derivative is the "instantaneous flow velocity" formula_44 of the particle. Therefore, the flow velocity field of the continuum is given by
formula_45
Similarly, the acceleration field is given by
formula_46
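These definitions can be checked symbolically. The sketch below differentiates a hypothetical mapping with respect to time while holding the material coordinates fixed (the motion itself is an illustrative choice, not from the source):

```python
import sympy as sp

X1, X2, X3, t = sp.symbols('X1 X2 X3 t')

# Hypothetical motion x = chi(X, t): shear growing linearly in time
# plus a uniform extension in the second direction.
chi = sp.Matrix([X1 + t*X2,
                 (1 + t)*X2,
                 X3])

# Flow velocity and acceleration in the Lagrangian description:
# partial time derivatives at fixed material coordinates X.
v = chi.diff(t)   # = (X2, X2, 0)
a = v.diff(t)     # = (0, 0, 0): each particle moves at constant velocity
```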
Continuity in the Lagrangian description is expressed by the spatial and temporal continuity of the mapping from the reference configuration to the current configuration of the material points. All physical quantities characterizing the continuum are described this way. In this sense, the functions formula_36 and formula_47 are single-valued and continuous, with continuous derivatives with respect to space and time to whatever order is required, usually to the second or third.
Eulerian description.
Continuity allows for the inverse of formula_36 to trace backwards where the particle currently located at formula_14 was located in the initial or referenced configuration formula_32. In this case the description of motion is made in terms of the spatial coordinates, and it is called the spatial description or Eulerian description, i.e. the current configuration is taken as the reference configuration.
The Eulerian description, introduced by d'Alembert, focuses on the current configuration formula_2, giving attention to what is occurring at a fixed point in space as time progresses, instead of giving attention to individual particles as they move through space and time. This approach is conveniently applied in the study of fluid flow where the kinematic property of greatest interest is the rate at which change is taking place rather than the shape of the body of fluid at a reference time.
Mathematically, the motion of a continuum using the Eulerian description is expressed by the mapping function
formula_48
which provides a tracing of the particle which now occupies the position formula_14 in the current configuration formula_2 to its original position formula_5 in the initial configuration formula_32.
A necessary and sufficient condition for this inverse function to exist is that the determinant of the Jacobian matrix, often referred to simply as the Jacobian, should be different from zero. Thus,
formula_49
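For a concrete, hypothetical motion this condition is easy to check symbolically (sympy assumed; the motion is illustrative):

```python
import sympy as sp

X1, X2, X3, t = sp.symbols('X1 X2 X3 t')

# Hypothetical motion: time-dependent shear plus uniform extension.
x = sp.Matrix([X1 + t*X2, (1 + t)*X2, X3])

# Jacobian matrix dx_i/dX_J and its determinant.
Jmat = x.jacobian([X1, X2, X3])
J = sp.simplify(Jmat.det())   # = 1 + t

# The motion is invertible wherever J != 0, here for all t > -1.
```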
In the Eulerian description, the physical properties formula_41 are expressed as
formula_50
where the functional form of formula_51 in the Lagrangian description is not the same as the form of formula_52 in the Eulerian description.
The material derivative of formula_53, using the chain rule, is then
formula_54
The first term on the right-hand side of this equation gives the "local rate of change" of the property formula_53 occurring at position formula_14. The second term of the right-hand side is the "convective rate of change" and expresses the contribution of the particle changing position in space (motion).
Continuity in the Eulerian description is expressed by the spatial and temporal continuity and continuous differentiability of the flow velocity field. All physical quantities are defined this way at each instant of time, in the current configuration, as a function of the vector position formula_14.
Displacement field.
The vector joining the positions of a particle formula_55 in the undeformed configuration and deformed configuration is called the displacement vector formula_56, in the Lagrangian description, or formula_57, in the Eulerian description.
A "displacement field" is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as
formula_58
or in terms of the spatial coordinates as
formula_59
where formula_60 are the direction cosines between the material and spatial coordinate systems with unit vectors formula_61 and formula_4, respectively. Thus
formula_62
and the relationship between formula_63 and formula_64 is then given by
formula_65
Knowing that
formula_66
then
formula_67
It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in formula_68, and the direction cosines become Kronecker deltas, i.e.
formula_69
Thus, we have
formula_70
or in terms of the spatial coordinates as
formula_71
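A small symbolic check, for a hypothetical one-dimensional motion with coincident coordinate systems, that the Lagrangian and Eulerian displacement fields describe the same vectors:

```python
import sympy as sp

X, x, t = sp.symbols('X x t')

# Hypothetical motion x = X (1 + t); coincident axes, so b = 0.
chi = X*(1 + t)

# Lagrangian displacement: u(X, t) = x(X, t) - X
u = chi - X                 # = X*t

# Eulerian displacement: U(x, t) = x - X(x, t), with X = x/(1 + t)
U = x - x/(1 + t)           # = x*t/(1 + t)

# Expressed in the same coordinates, the two descriptions agree.
assert sp.simplify(U.subs(x, chi) - u) == 0
```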
Governing equations.
Continuum mechanics deals with the behavior of materials that can be approximated as continuous for certain length and time scales. The equations that govern the mechanics of such materials include the balance laws for mass, momentum, and energy. Kinematic relations and constitutive equations are needed to complete the system of governing equations. Physical restrictions on the form of the constitutive relations can be applied by requiring that the second law of thermodynamics be satisfied under all conditions. In the continuum mechanics of solids, the second law of thermodynamics is satisfied if the Clausius–Duhem form of the entropy inequality is satisfied.
The balance laws express the idea that the rate of change of a quantity (mass, momentum, energy) in a volume must arise from three causes: the quantity flows through the surface bounding the volume, there is a source of the quantity on that surface, or there is a source of the quantity inside the volume.
Let formula_72 be the body (an open subset of Euclidean space) and let formula_73 be its surface (the boundary of formula_72).
Let the motion of material points in the body be described by the map
formula_74
where formula_75 is the position of a point in the initial configuration and formula_76 is the location of the same point in the deformed configuration.
The deformation gradient is given by
formula_77
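For a concrete, hypothetical simple-shear motion the deformation gradient can be computed directly:

```python
import sympy as sp

X1, X2, X3, t = sp.symbols('X1 X2 X3 t')

# Hypothetical simple-shear motion.
x = sp.Matrix([X1 + t*X2, X2, X3])

# Deformation gradient F_iJ = dx_i / dX_J.
F = x.jacobian([X1, X2, X3])   # [[1, t, 0], [0, 1, 0], [0, 0, 1]]

# det F = 1: simple shear preserves volume (isochoric), consistent with
# the balance of mass rho * det(F) = rho_0 holding with rho = rho_0.
assert F.det() == 1
```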
Balance laws.
Let formula_78 be a physical quantity that is flowing through the body. Let formula_79 be sources on the surface of the body and let formula_80 be sources inside the body. Let formula_81 be the outward unit normal to the surface formula_73. Let formula_82 be the flow velocity of the physical particles that carry the physical quantity that is flowing. Also, let the speed at which the bounding surface formula_73 is moving be formula_83 (in the direction formula_84).
Then, balance laws can be expressed in the general form
formula_85
The functions formula_78, formula_79, and formula_80 can be scalar valued, vector valued, or tensor valued, depending on the physical quantity that the balance equation deals with. If there are internal boundaries in the body, jump discontinuities also need to be specified in the balance laws.
If we take the Eulerian point of view, it can be shown that the balance laws of mass, momentum, and energy for a solid can be written as (assuming the source term is zero for the mass and angular momentum equations)
formula_86
In the above equations formula_87 is the mass density (current), formula_88 is the material time derivative of formula_89, formula_82 is the particle velocity, formula_90 is the material time derivative of formula_91, formula_92 is the Cauchy stress tensor, formula_93 is the body force density, formula_94 is the internal energy per unit mass, formula_95 is the material time derivative of formula_96, formula_97 is the heat flux vector, and formula_98 is an energy source per unit mass. The operators used are defined below.
With respect to the reference configuration (the Lagrangian point of view), the balance laws can be written as
formula_99
In the above, formula_100 is the first Piola-Kirchhoff stress tensor, and formula_101 is the mass density in the reference configuration. The first Piola-Kirchhoff stress tensor is related to the Cauchy stress tensor by
formula_102
We can alternatively define the nominal stress tensor formula_103 which is the transpose of the first Piola-Kirchhoff stress tensor such that
formula_104
Then the balance laws become
formula_105
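The relations between the Cauchy, first Piola-Kirchhoff, and nominal stress tensors, and the reference form of the angular momentum balance, can be checked numerically for a hypothetical simple-shear state (all numbers are illustrative):

```python
import numpy as np

# Hypothetical deformation gradient (simple shear) and Cauchy stress (Pa).
F = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
sigma = np.array([[120.0, 40.0,  0.0],
                  [ 40.0, 80.0,  0.0],
                  [  0.0,  0.0, 50.0]])   # symmetric

J = np.linalg.det(F)                      # = 1 for simple shear

# First Piola-Kirchhoff stress: P = J sigma F^{-T}
P = J * sigma @ np.linalg.inv(F).T

# Nominal stress: N = P^T = J F^{-1} sigma
N = J * np.linalg.inv(F) @ sigma
assert np.allclose(N, P.T)

# Balance of angular momentum in the reference description:
# F . N (= F . P^T) must be symmetric; here it equals J sigma.
assert np.allclose(F @ N, (F @ N).T)
```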
Operators.
The operators in the above equations are defined as
formula_106
where formula_91 is a vector field, formula_107 is a second-order tensor field, and formula_108 are the components of an orthonormal basis in the current configuration. Also,
formula_109
where formula_91 is a vector field, formula_107 is a second-order tensor field, and formula_110 are the components of an orthonormal basis in the reference configuration.
The inner product is defined as
formula_111
Clausius–Duhem inequality.
The Clausius–Duhem inequality can be used to express the second law of thermodynamics for elastic-plastic materials. This inequality is a statement concerning the irreversibility of natural processes, especially when energy dissipation is involved.
Just like in the balance laws in the previous section, we assume that there is a flux of a quantity, a source of the quantity, and an internal density of the quantity per unit mass. The quantity of interest in this case is the entropy. Thus, we assume that there is an entropy flux, an entropy source, an internal mass density formula_89 and an internal specific entropy (i.e. entropy per unit mass) formula_112 in the region of interest.
Let formula_72 be such a region and let formula_73 be its boundary. Then the second law of thermodynamics states that the rate of increase of formula_112 in this region is greater than or equal to the sum of that supplied to formula_72 (as a flux or from internal sources) and the change of the internal entropy density formula_113 due to material flowing in and out of the region.
Let formula_73 move with a flow velocity formula_83 and let particles inside formula_72 have velocities formula_91. Let formula_84 be the unit outward normal to the surface formula_73. Let formula_89 be the density of matter in the region, formula_114 be the entropy flux at the surface, and formula_115 be the entropy source per unit mass.
Then the entropy inequality may be written as
formula_116
The scalar entropy flux can be related to the vector flux at the surface by the relation formula_117. Under the assumption of incrementally isothermal conditions, we have
formula_118
where formula_119 is the heat flux vector, formula_120 is an energy source per unit mass, and formula_121 is the absolute temperature of a material point at formula_76 at time formula_1.
We then have the Clausius–Duhem inequality in integral form:
formula_122
We can show that the entropy inequality may be written in differential form as
formula_123
In terms of the Cauchy stress and the internal energy, the Clausius–Duhem inequality may be written as
formula_124
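A brief sketch, assuming sufficient smoothness, of how the differential form is obtained from the integral statement: the transport theorem cancels the boundary convection term, the divergence theorem converts the surface flux into a volume integral, and localization (the region is arbitrary) yields the pointwise inequality.

```latex
% Transport theorem applied to the left-hand side:
\frac{d}{dt}\int_\Omega \rho\,\eta\,\text{dV}
  = \int_\Omega \rho\,\dot{\eta}\,\text{dV}
  + \int_{\partial\Omega} \rho\,\eta\,(u_n - \mathbf{v}\cdot\mathbf{n})\,\text{dA}
% so the convection terms cancel; the divergence theorem then gives
\int_\Omega \rho\,\dot{\eta}\,\text{dV}
  \ge -\int_\Omega \boldsymbol{\nabla}\cdot\left(\frac{\mathbf{q}}{T}\right)\text{dV}
  + \int_\Omega \frac{\rho\,s}{T}\,\text{dV}
% and, since \Omega is arbitrary, the integrand inequality holds pointwise:
\rho\,\dot{\eta} \ge -\boldsymbol{\nabla}\cdot\left(\frac{\mathbf{q}}{T}\right)
  + \frac{\rho\,s}{T}
```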
Validity.
The validity of the continuum assumption may be verified by a theoretical analysis, in which either some clear periodicity is identified or statistical homogeneity and ergodicity of the microstructure exist. More specifically, the continuum hypothesis hinges on the concepts of a representative elementary volume and separation of scales based on the Hill–Mandel condition. This condition provides a link between an experimentalist's and a theoretician's viewpoint on constitutive equations (linear and nonlinear elastic/inelastic or coupled fields) as well as a way of spatial and statistical averaging of the microstructure.
When the separation of scales does not hold, or when one wants to establish a continuum of a finer resolution than the size of the representative volume element (RVE), a statistical volume element (SVE) is employed, which results in random continuum fields. The latter then provide a micromechanics basis for stochastic finite elements (SFE). The levels of SVE and RVE link continuum mechanics to statistical mechanics. Experimentally, the RVE can only be evaluated when the constitutive response is spatially homogeneous.
See also.
<templatestyles src="Div col/styles.css"/>
Explanatory notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal B"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "\\kappa_t(\\mathcal B)"
},
{
"math_id": 3,
"text": "\\mathbf x = \\sum_{i=1}^3 x_i \\mathbf e_i,"
},
{
"math_id": 4,
"text": "\\mathbf e_i"
},
{
"math_id": 5,
"text": "\\mathbf X"
},
{
"math_id": 6,
"text": "\\mathbf{x}=\\kappa_t(\\mathbf X)."
},
{
"math_id": 7,
"text": "\\kappa_t(\\cdot)"
},
{
"math_id": 8,
"text": "\\mathbf F_C"
},
{
"math_id": 9,
"text": "\\mathbf F_B"
},
{
"math_id": 10,
"text": "\\mathcal F"
},
{
"math_id": 11,
"text": "\\mathcal F = \\mathbf F_C + \\mathbf F_B"
},
{
"math_id": 12,
"text": "\\mathbf T(\\mathbf n, \\mathbf x, t)"
},
{
"math_id": 13,
"text": "t\\,\\!"
},
{
"math_id": 14,
"text": "\\mathbf x"
},
{
"math_id": 15,
"text": "\\mathbf n"
},
{
"math_id": 16,
"text": "dS\\,\\!"
},
{
"math_id": 17,
"text": "S\\,\\!"
},
{
"math_id": 18,
"text": "d\\mathbf F_C\\,\\!"
},
{
"math_id": 19,
"text": "d\\mathbf F_C= \\mathbf T^{(\\mathbf n)}\\,dS"
},
{
"math_id": 20,
"text": "\\mathbf T^{(\\mathbf n)}"
},
{
"math_id": 21,
"text": "\\mathbf F_C=\\int_S \\mathbf T^{(\\mathbf n)}\\,dS"
},
{
"math_id": 22,
"text": "\\mathbf b(\\mathbf x, t)"
},
{
"math_id": 23,
"text": "\\mathbf \\rho (\\mathbf x, t)\\,\\!"
},
{
"math_id": 24,
"text": "b_i\\,\\!"
},
{
"math_id": 25,
"text": "p_i\\,\\!"
},
{
"math_id": 26,
"text": "\\rho b_i = p_i\\,\\!"
},
{
"math_id": 27,
"text": "\\mathbf F_B=\\int_V\\mathbf b\\,dm=\\int_V \\rho\\mathbf b\\,dV"
},
{
"math_id": 28,
"text": "\\mathcal M"
},
{
"math_id": 29,
"text": "\\mathcal M= \\mathbf M_C + \\mathbf M_B"
},
{
"math_id": 30,
"text": "\\mathcal F = \\int_V \\mathbf a\\,dm = \\int_S \\mathbf T\\,dS + \\int_V \\rho\\mathbf b\\,dV"
},
{
"math_id": 31,
"text": "\\mathcal M = \\int_S \\mathbf r \\times \\mathbf T\\,dS + \\int_V \\mathbf r \\times \\rho\\mathbf b\\,dV"
},
{
"math_id": 32,
"text": "\\kappa_0(\\mathcal B)"
},
{
"math_id": 33,
"text": "t=0"
},
{
"math_id": 34,
"text": "\\kappa_0 (\\mathcal B)"
},
{
"math_id": 35,
"text": "X_i"
},
{
"math_id": 36,
"text": "\\chi(\\cdot)"
},
{
"math_id": 37,
"text": "\\mathbf x=\\chi(\\mathbf X, t)"
},
{
"math_id": 38,
"text": "\\mathbf{x}=x_i\\mathbf e_i"
},
{
"math_id": 39,
"text": "X"
},
{
"math_id": 40,
"text": "x_i"
},
{
"math_id": 41,
"text": "P_{ij\\ldots}"
},
{
"math_id": 42,
"text": "P_{ij\\ldots}=P_{ij\\ldots}(\\mathbf X,t)"
},
{
"math_id": 43,
"text": "\\frac{d}{dt}[P_{ij\\ldots}(\\mathbf X,t)]=\\frac{\\partial}{\\partial t}[P_{ij\\ldots}(\\mathbf X,t)]"
},
{
"math_id": 44,
"text": "\\mathbf v"
},
{
"math_id": 45,
"text": "\\mathbf v = \\dot{\\mathbf x} =\\frac{d\\mathbf x}{dt}=\\frac{\\partial \\chi(\\mathbf X,t)}{\\partial t} "
},
{
"math_id": 46,
"text": "\\mathbf a= \\dot{\\mathbf v} = \\ddot{\\mathbf x} =\\frac{d^2\\mathbf x}{dt^2}=\\frac{\\partial^2 \\chi(\\mathbf X,t)}{\\partial t^2} "
},
{
"math_id": 47,
"text": "P_{ij\\ldots}(\\cdot)"
},
{
"math_id": 48,
"text": "\\mathbf X=\\chi^{-1}(\\mathbf x, t)"
},
{
"math_id": 49,
"text": "J = \\left| \\frac{\\partial \\chi_i}{\\partial X_J} \\right| = \\left| \\frac{\\partial x_i}{\\partial X_J} \\right| \\neq 0"
},
{
"math_id": 50,
"text": "P_{ij \\ldots}=P_{ij\\ldots}(\\mathbf X,t)=P_{ij\\ldots}[\\chi^{-1}(\\mathbf x,t),t]=p_{ij\\ldots}(\\mathbf x,t)"
},
{
"math_id": 51,
"text": "P_{ij \\ldots}"
},
{
"math_id": 52,
"text": "p_{ij \\ldots}"
},
{
"math_id": 53,
"text": "p_{ij\\ldots}(\\mathbf x,t)"
},
{
"math_id": 54,
"text": "\\frac{d}{dt}[p_{ij\\ldots}(\\mathbf x,t)]=\\frac{\\partial}{\\partial t}[p_{ij\\ldots}(\\mathbf x,t)]+ \\frac{\\partial}{\\partial x_k}[p_{ij\\ldots}(\\mathbf x,t)]\\frac{dx_k}{dt}"
},
{
"math_id": 55,
"text": "P"
},
{
"math_id": 56,
"text": "\\mathbf u(\\mathbf X,t)=u_i\\mathbf e_i"
},
{
"math_id": 57,
"text": "\\mathbf U(\\mathbf x,t)=U_J\\mathbf E_J"
},
{
"math_id": 58,
"text": "\\mathbf u(\\mathbf X,t) = \\mathbf b+\\mathbf x(\\mathbf X,t) - \\mathbf X \\qquad \\text{or}\\qquad u_i = \\alpha_{iJ}b_J + x_i - \\alpha_{iJ}X_J"
},
{
"math_id": 59,
"text": "\\mathbf U(\\mathbf x,t) = \\mathbf b+\\mathbf x - \\mathbf X(\\mathbf x,t) \\qquad \\text{or}\\qquad U_J = b_J + \\alpha_{Ji}x_i - X_J \\,"
},
{
"math_id": 60,
"text": "\\alpha_{Ji}"
},
{
"math_id": 61,
"text": "\\mathbf E_J"
},
{
"math_id": 62,
"text": "\\mathbf E_J \\cdot \\mathbf e_i = \\alpha_{Ji}=\\alpha_{iJ}"
},
{
"math_id": 63,
"text": "u_i"
},
{
"math_id": 64,
"text": "U_J"
},
{
"math_id": 65,
"text": "u_i=\\alpha_{iJ}U_J \\qquad \\text{or} \\qquad U_J=\\alpha_{Ji}u_i"
},
{
"math_id": 66,
"text": "\\mathbf e_i = \\alpha_{iJ}\\mathbf E_J"
},
{
"math_id": 67,
"text": "\\mathbf u(\\mathbf X,t)=u_i\\mathbf e_i=u_i(\\alpha_{iJ}\\mathbf E_J)=U_J\\mathbf E_J=\\mathbf U(\\mathbf x,t)"
},
{
"math_id": 68,
"text": "\\mathbf b=0"
},
{
"math_id": 69,
"text": "\\mathbf E_J \\cdot \\mathbf e_i = \\delta_{Ji}=\\delta_{iJ}"
},
{
"math_id": 70,
"text": "\\mathbf u(\\mathbf X,t) = \\mathbf x(\\mathbf X,t) - \\mathbf X \\qquad \\text{or}\\qquad u_i = x_i - \\delta_{iJ}X_J"
},
{
"math_id": 71,
"text": "\\mathbf U(\\mathbf x,t) = \\mathbf x - \\mathbf X(\\mathbf x,t) \\qquad \\text{or}\\qquad U_J = \\delta_{Ji}x_i - X_J "
},
{
"math_id": 72,
"text": "\\Omega"
},
{
"math_id": 73,
"text": "\\partial \\Omega "
},
{
"math_id": 74,
"text": "\\mathbf{x} = \\boldsymbol{\\chi}(\\mathbf{X}) = \\mathbf{x}(\\mathbf{X})"
},
{
"math_id": 75,
"text": "\\mathbf{X}"
},
{
"math_id": 76,
"text": "\\mathbf{x}"
},
{
"math_id": 77,
"text": "\\boldsymbol{F} = \\frac{\\partial \\mathbf{x}}{\\partial \\mathbf{X}} = \\nabla \\mathbf{x} ~."
},
{
"math_id": 78,
"text": "f(\\mathbf{x},t)"
},
{
"math_id": 79,
"text": "g(\\mathbf{x},t)"
},
{
"math_id": 80,
"text": "h(\\mathbf{x},t)"
},
{
"math_id": 81,
"text": "\\mathbf{n}(\\mathbf{x},t)"
},
{
"math_id": 82,
"text": "\\mathbf{v}(\\mathbf{x},t)"
},
{
"math_id": 83,
"text": "u_n"
},
{
"math_id": 84,
"text": "\\mathbf{n}"
},
{
"math_id": 85,
"text": "\n \\cfrac{d}{dt}\\left[\\int_{\\Omega} f(\\mathbf{x},t)~\\text{dV}\\right] = \n \\int_{\\partial \\Omega } f(\\mathbf{x},t)[u_n(\\mathbf{x},t) - \\mathbf{v}(\\mathbf{x},t)\\cdot\\mathbf{n}(\\mathbf{x},t)]~\\text{dA} + \n \\int_{\\partial \\Omega } g(\\mathbf{x},t)~\\text{dA} + \\int_{\\Omega} h(\\mathbf{x},t)~\\text{dV} ~.\n "
},
{
"math_id": 86,
"text": "\n {\n \\begin{align}\n \\dot{\\rho} + \\rho (\\boldsymbol{\\nabla} \\cdot \\mathbf{v}) & = 0 \n & & \\qquad\\text{Balance of Mass} \\\\\n \\rho~\\dot{\\mathbf{v}} - \\boldsymbol{\\nabla} \\cdot \\boldsymbol{\\sigma} - \\rho~\\mathbf{b} & = 0 \n & & \\qquad\\text{Balance of Linear Momentum (Cauchy's first law of motion)} \\\\\n \\boldsymbol{\\sigma} & = \\boldsymbol{\\sigma}^T\n & & \\qquad\\text{Balance of Angular Momentum (Cauchy's second law of motion)} \\\\\n \\rho~\\dot{e} - \\boldsymbol{\\sigma}:(\\boldsymbol{\\nabla}\\mathbf{v}) + \\boldsymbol{\\nabla} \\cdot \\mathbf{q} - \\rho~s & = 0\n & & \\qquad\\text{Balance of Energy.}\n \\end{align}\n }\n "
},
{
"math_id": 87,
"text": "\\rho(\\mathbf{x},t)"
},
{
"math_id": 88,
"text": "\\dot{\\rho}"
},
{
"math_id": 89,
"text": "\\rho"
},
{
"math_id": 90,
"text": "\\dot{\\mathbf{v}}"
},
{
"math_id": 91,
"text": "\\mathbf{v}"
},
{
"math_id": 92,
"text": "\\boldsymbol{\\sigma}(\\mathbf{x},t)"
},
{
"math_id": 93,
"text": "\\mathbf{b}(\\mathbf{x},t)"
},
{
"math_id": 94,
"text": "e(\\mathbf{x},t)"
},
{
"math_id": 95,
"text": "\\dot{e}"
},
{
"math_id": 96,
"text": "e"
},
{
"math_id": 97,
"text": "\\mathbf{q}(\\mathbf{x},t)"
},
{
"math_id": 98,
"text": "s(\\mathbf{x},t)"
},
{
"math_id": 99,
"text": "\n {\n \\begin{align}\n \\rho~\\det(\\boldsymbol{F}) - \\rho_0 &= 0 & & \\qquad \\text{Balance of Mass} \\\\\n \\rho_0~\\ddot{\\mathbf{x}} - \\boldsymbol{\\nabla}_{\\circ}\\cdot\\boldsymbol{P}^T -\\rho_0~\\mathbf{b} & = 0 & & \n \\qquad \\text{Balance of Linear Momentum} \\\\\n \\boldsymbol{F}\\cdot\\boldsymbol{P}^T & = \\boldsymbol{P}\\cdot\\boldsymbol{F}^T & & \n \\qquad \\text{Balance of Angular Momentum} \\\\ \n \\rho_0~\\dot{e} - \\boldsymbol{P}^T:\\dot{\\boldsymbol{F}} + \\boldsymbol{\\nabla}_{\\circ}\\cdot\\mathbf{q} - \\rho_0~s & = 0\n & & \\qquad\\text{Balance of Energy.} \n \\end{align}\n }\n "
},
{
"math_id": 100,
"text": "\\boldsymbol{P}"
},
{
"math_id": 101,
"text": "\\rho_0"
},
{
"math_id": 102,
"text": "\n \\boldsymbol{P} = J~\\boldsymbol{\\sigma}\\cdot\\boldsymbol{F}^{-T}\n ~\\text{where}~ J = \\det(\\boldsymbol{F})\n "
},
{
"math_id": 103,
"text": "\\boldsymbol{N}"
},
{
"math_id": 104,
"text": "\n \\boldsymbol{N} = \\boldsymbol{P}^T = J~\\boldsymbol{F}^{-1}\\cdot\\boldsymbol{\\sigma} ~.\n "
},
{
"math_id": 105,
"text": "\n {\n \\begin{align}\n \\rho~\\det(\\boldsymbol{F}) - \\rho_0 &= 0 & & \\qquad \\text{Balance of Mass} \\\\\n \\rho_0~\\ddot{\\mathbf{x}} - \\boldsymbol{\\nabla}_{\\circ}\\cdot\\boldsymbol{N} -\\rho_0~\\mathbf{b} & = 0 & & \n \\qquad \\text{Balance of Linear Momentum} \\\\\n \\boldsymbol{F}\\cdot\\boldsymbol{N} & = \\boldsymbol{N}^T\\cdot\\boldsymbol{F}^T & & \n \\qquad \\text{Balance of Angular Momentum} \\\\ \n \\rho_0~\\dot{e} - \\boldsymbol{N}:\\dot{\\boldsymbol{F}} + \\boldsymbol{\\nabla}_{\\circ}\\cdot\\mathbf{q} - \\rho_0~s & = 0\n & & \\qquad\\text{Balance of Energy.} \n \\end{align}\n }\n "
},
{
"math_id": 106,
"text": "\n \\boldsymbol{\\nabla} \\mathbf{v} = \\sum_{i,j = 1}^3 \\frac{\\partial v_i}{\\partial x_j}\\mathbf{e}_i\\otimes\\mathbf{e}_j = \n v_{i,j}\\mathbf{e}_i\\otimes\\mathbf{e}_j ~;~~\n \\boldsymbol{\\nabla} \\cdot \\mathbf{v} = \\sum_{i=1}^3 \\frac{\\partial v_i}{\\partial x_i} = v_{i,i} ~;~~\n \\boldsymbol{\\nabla} \\cdot \\boldsymbol{S} = \\sum_{i,j=1}^3 \\frac{\\partial S_{ij}}{\\partial x_j}~\\mathbf{e}_i \n = \\sigma_{ij,j}~\\mathbf{e}_i ~.\n "
},
{
"math_id": 107,
"text": "\\boldsymbol{S}"
},
{
"math_id": 108,
"text": "\\mathbf{e}_i"
},
{
"math_id": 109,
"text": "\n \\boldsymbol{\\nabla}_{\\circ} \\mathbf{v} = \\sum_{i,j = 1}^3 \\frac{\\partial v_i}{\\partial X_j}\\mathbf{E}_i\\otimes\\mathbf{E}_j = \n v_{i,j}\\mathbf{E}_i\\otimes\\mathbf{E}_j ~;~~\n \\boldsymbol{\\nabla}_{\\circ}\\cdot\\mathbf{v} = \\sum_{i=1}^3 \\frac{\\partial v_i}{\\partial X_i} = v_{i,i} ~;~~\n \\boldsymbol{\\nabla}_{\\circ}\\cdot\\boldsymbol{S} = \\sum_{i,j=1}^3 \\frac{\\partial S_{ij}}{\\partial X_j}~\\mathbf{E}_i = S_{ij,j}~\\mathbf{E}_i \n "
},
{
"math_id": 110,
"text": "\\mathbf{E}_i"
},
{
"math_id": 111,
"text": "\n \\boldsymbol{A}:\\boldsymbol{B} = \\sum_{i,j=1}^3 A_{ij}~B_{ij} = \\operatorname{trace}(\\boldsymbol{A}\\boldsymbol{B}^T) ~.\n "
},
{
"math_id": 112,
"text": "\\eta"
},
{
"math_id": 113,
"text": "\\rho\\eta"
},
{
"math_id": 114,
"text": "\\bar{q}"
},
{
"math_id": 115,
"text": "r"
},
{
"math_id": 116,
"text": "\n \\cfrac{d}{dt}\\left(\\int_{\\Omega} \\rho~\\eta~\\text{dV}\\right) \\ge\n \\int_{\\partial \\Omega} \\rho~\\eta~(u_n - \\mathbf{v}\\cdot\\mathbf{n}) ~\\text{dA} + \n \\int_{\\partial \\Omega} \\bar{q}~\\text{dA} + \\int_{\\Omega} \\rho~r~\\text{dV}.\n "
},
{
"math_id": 117,
"text": "\\bar{q} = -\\boldsymbol{\\psi}(\\mathbf{x})\\cdot\\mathbf{n}"
},
{
"math_id": 118,
"text": "\n \\boldsymbol{\\psi}(\\mathbf{x}) = \\cfrac{\\mathbf{q}(\\mathbf{x})}{T} ~;~~ r = \\cfrac{s}{T}\n "
},
{
"math_id": 119,
"text": "\\mathbf{q}"
},
{
"math_id": 120,
"text": "s"
},
{
"math_id": 121,
"text": "T"
},
{
"math_id": 122,
"text": "\n {\n \\cfrac{d}{dt}\\left(\\int_{\\Omega} \\rho~\\eta~\\text{dV}\\right) \\ge\n \\int_{\\partial \\Omega} \\rho~\\eta~(u_n - \\mathbf{v}\\cdot\\mathbf{n}) ~\\text{dA} - \n \\int_{\\partial \\Omega} \\cfrac{\\mathbf{q}\\cdot\\mathbf{n}}{T}~\\text{dA} + \\int_\\Omega \\cfrac{\\rho~s}{T}~\\text{dV}.\n }\n "
},
{
"math_id": 123,
"text": "\n {\n \\rho~\\dot{\\eta} \\ge - \\boldsymbol{\\nabla} \\cdot \\left(\\cfrac{\\mathbf{q}}{T}\\right)\n + \\cfrac{\\rho~s}{T}.\n }\n "
},
{
"math_id": 124,
"text": "\n {\n \\rho~(\\dot{e} - T~\\dot{\\eta}) - \\boldsymbol{\\sigma}:\\boldsymbol{\\nabla}\\mathbf{v} \\le \n - \\cfrac{\\mathbf{q}\\cdot\\boldsymbol{\\nabla} T}{T}.\n }\n "
}
] |
https://en.wikipedia.org/wiki?curid=5918
|
5918099
|
Sextic equation
|
Polynomial equation of degree 6
In algebra, a sextic (or hexic) polynomial is a polynomial of degree six.
A sextic equation is a polynomial equation of degree six—that is, an equation whose left-hand side is a sextic polynomial and whose right-hand side is zero. More precisely, it has the form:
formula_0
where "a" ≠ 0 and the "coefficients" "a", "b", "c", "d", "e", "f", "g" may be integers, rational numbers, real numbers, complex numbers or, more generally, members of any field.
A sextic function is a function defined by a sextic polynomial. Because they have an even degree, sextic functions appear similar to quartic functions when graphed, except they may possess an additional local maximum and local minimum each. The derivative of a sextic function is a quintic function.
Since a sextic function is defined by a polynomial with even degree, it has the same infinite limit when the argument goes to positive or negative infinity. If the leading coefficient "a" is positive, then the function increases to positive infinity at both sides and thus the function has a global minimum. Likewise, if "a" is negative, the sextic function decreases to negative infinity and has a global maximum.
Solvable sextics.
Some sixth-degree equations, such as "ax"6 + "dx"3 + "g" = 0, can be solved in radicals by treating them as a quadratic equation in "x"3, but other sextics cannot. Évariste Galois developed techniques for determining whether a given equation could be solved by radicals, which gave rise to the field of Galois theory.
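As an illustrative sketch (not part of the article), a trinomial sextic of this form can be solved numerically in Python by substituting "y" = "x"3 and applying the quadratic formula; the function name is my own:

```python
import cmath

def solve_trinomial_sextic(a, d, g):
    """Solve a*x**6 + d*x**3 + g = 0 by substituting y = x**3,
    solving a*y**2 + d*y + g = 0, then taking all cube roots of y."""
    disc = cmath.sqrt(d * d - 4 * a * g)
    roots = []
    for y in ((-d + disc) / (2 * a), (-d - disc) / (2 * a)):
        r = abs(y) ** (1 / 3)            # modulus of each cube root of y
        theta = cmath.phase(y) / 3       # argument of the principal cube root
        for k in range(3):               # the three cube roots of y
            roots.append(cmath.rect(r, theta + 2 * cmath.pi * k / 3))
    return roots

# x**6 - 9*x**3 + 8 = 0 factors as (x**3 - 1)(x**3 - 8)
roots = solve_trinomial_sextic(1, -9, 8)
assert all(abs(x**6 - 9 * x**3 + 8) < 1e-9 for x in roots)
```

The same substitution underlies the classical solution of the cubic via its resolvent sextic.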
It follows from Galois theory that a sextic equation is solvable in terms of radicals if and only if its Galois group is contained either in the group of order 48 which stabilizes a partition of the set of the roots into three subsets of two roots or in the group of order 72 which stabilizes a partition of the set of the roots into two subsets of three roots.
There are formulas to test each case and, if the equation is solvable, to compute the roots in terms of radicals.
The general sextic equation can be solved by the two-variable Kampé de Fériet function. A more restricted class of sextics can be solved by the one-variable generalised hypergeometric function using Felix Klein's approach to solving the quintic equation.
Examples.
Watt's curve, which arose in the context of early work on the steam engine, is a sextic in two variables.
One method of solving the cubic equation involves transforming variables to obtain a sextic equation having terms only of degrees 6, 3, and 0, which can be solved as a quadratic equation in the cube of the variable.
Etymology.
The descriptor "sextic" comes from the Latin stem for six or sixth ("sex-t-") and the Greek suffix meaning "pertaining to" ("-ic"). The much less common "hexic" uses Greek for both its stem ("hex-" six) and its suffix ("-ik-"). In both cases, the prefix refers to the degree of the function. Often, these types of functions are simply referred to as "sixth-degree functions".
|
[
{
"math_id": 0,
"text": "ax^6+bx^5+cx^4+dx^3+ex^2+fx+g=0,\\,"
}
] |
https://en.wikipedia.org/wiki?curid=5918099
|
59182989
|
Translation surface (differential geometry)
|
Surface generated by translations
In differential geometry a translation surface is a surface that is generated by translations: for two space curves formula_0 with a common point formula_1, the curve formula_2 is shifted such that point formula_1 moves along formula_3. By this procedure curve formula_2 generates the translation surface.
If both curves are contained in a common plane, the translation surface is planar (part of a plane). This case is generally ignored.
Simple examples: the elliptic paraboloid formula_4 is generated by the parabolas formula_5 and formula_6, and the hyperbolic paraboloid formula_7 is generated by formula_8 and formula_9.
Translation surfaces are popular in descriptive geometry and architecture, because they can be modelled easily.
In differential geometry minimal surfaces are represented by translation surfaces or as "midchord surfaces" (see below).
The translation surfaces as defined here should not be confused with the translation surfaces in complex geometry.
Parametric representation.
For two space curves formula_10 and formula_11 with formula_12 the translation surface formula_13 can be represented by:
(TS) formula_14
and contains the origin. Obviously this definition is symmetric with regard to the curves formula_2 and formula_3. Therefore, both curves are called generatrices (singular: generatrix). Any point formula_15 of the surface is contained in a shifted copy of formula_2 and of formula_3, respectively. The tangent plane at formula_15 is generated by the tangent vectors of the generatrices at this point, if these vectors are linearly independent.
If the precondition formula_12 is not fulfilled, the surface defined by (TS) may not contain the origin and the curves formula_16. But in any case the surface contains shifted copies of any of the curves formula_16 as parametric curves formula_17 and formula_18 respectively.
The two curves formula_16 can be used to generate the so-called corresponding midchord surface. Its parametric representation is
(MCS) formula_19
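The representations (TS) and (MCS) translate directly into code. The following Python sketch (function names are illustrative) evaluates both and checks the elliptic-paraboloid example formula_4 with generatrices formula_5 and formula_6:

```python
def translation_surface(gamma1, gamma2, u, v):
    """(TS): point x(u, v) = gamma1(u) + gamma2(v)."""
    return tuple(p + q for p, q in zip(gamma1(u), gamma2(v)))

def midchord_surface(delta1, delta2, u, v):
    """(MCS): point x(u, v) = (delta1(u) + delta2(v)) / 2."""
    return tuple((p + q) / 2 for p, q in zip(delta1(u), delta2(v)))

# Generatrices of the elliptic paraboloid z = x**2 + y**2
c1 = lambda u: (u, 0.0, u * u)     # parabola in the x-z plane
c2 = lambda v: (0.0, v, v * v)     # parabola in the y-z plane

x, y, z = translation_surface(c1, c2, 1.5, -2.0)
assert abs(z - (x * x + y * y)) < 1e-12   # the point lies on the paraboloid
```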
Helicoid as translation surface and midchord surface.
A helicoid is a special case of a generalized helicoid and a ruled surface. It is an example of a minimal surface and can be represented as a translation surface.
The helicoid with the parametric representation
formula_20
has a "turn around shift" (German: Ganghöhe) formula_21.
Introducing new parameters formula_22 such that
formula_23
and formula_24 a positive real number, one gets a new parametric representation
formula_25
formula_26
which is the parametric representation of a translation surface with the two "identical" (!) generatrices
formula_27 and
formula_28
The common point used for the diagram is formula_29.
The (identical) generatrices are helices with the turn around shift formula_30 which lie on the cylinder with the equation formula_31. Any parametric curve is a shifted copy of the generatrix formula_2 (in diagram: purple) and is contained in the right circular cylinder with radius formula_24, which contains the "z"-axis.
The new parametric representation represents only such points of the helicoid that are within the cylinder with the equation formula_32.
From the new parametric representation one recognizes, that the helicoid is a midchord surface, too:
formula_33
where
formula_34 and
formula_35
are two identical generatrices.
In diagram: formula_36 lies on the helix formula_37 and formula_38 on the (identical) helix formula_39. The midpoint of the chord is formula_40.
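The reparametrization above can be checked numerically. A minimal Python sketch, with assumed illustrative values formula_24 = 1 and "k" = 0.5, verifies that the sum of the two summand helices lands on the helicoid:

```python
import math

a, k = 1.0, 0.5   # assumed radius and pitch parameter, for illustration

def helicoid(u, v):
    """Parametric helicoid (u*cos(v), u*sin(v), k*v)."""
    return (u * math.cos(v), u * math.sin(v), k * v)

def gamma(t):
    """Summand helix of radius a with turn-around shift k*pi;
    the generatrix c1 is gamma(t) + gamma(0)."""
    return (a * math.cos(t), a * math.sin(t), k * t / 2)

alpha, phi = 0.7, -1.3
X = tuple(p + q for p, q in zip(gamma(alpha), gamma(phi)))   # gamma1 + gamma2
u = 2 * a * math.cos((alpha - phi) / 2)                      # parameter change
v = (alpha + phi) / 2
assert all(abs(p - q) < 1e-12 for p, q in zip(X, helicoid(u, v)))
```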
Advantages of a translation surface.
A surface (for example a roof) can be manufactured using a jig for curve formula_3 and several identical jigs of curve formula_2. The jigs can be designed without any knowledge of mathematics. When positioning the jigs, one only has to respect the rules of a translation surface.
To establish a parallel projection of a translation surface, one 1) produces projections of the two generatrices, 2) makes a jig of curve formula_2 and 3) draws, with the help of this jig, copies of the curve, respecting the rules of a translation surface. The contour of the surface is the envelope of the curves drawn with the jig. This procedure works for orthogonal and oblique projections, but not for central projections.
For a translation surface with parametric representation
formula_41
the partial derivatives of formula_42 are simple derivatives of the curves. Hence the mixed derivatives are always formula_43, and the coefficient formula_44 of the second fundamental form is formula_43, too. This is an essential simplification for showing that (for example) a helicoid is a minimal surface.
|
[
{
"math_id": 0,
"text": "c_1, c_2"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "c_1"
},
{
"math_id": 3,
"text": "c_2"
},
{
"math_id": 4,
"text": "\\; z=x^2+y^2\\; "
},
{
"math_id": 5,
"text": " \\ c_1:\\; (x,0,x^2)\\ "
},
{
"math_id": 6,
"text": "\\ c_2:\\;(0,y,y^2)\\ "
},
{
"math_id": 7,
"text": "z=x^2-y^2"
},
{
"math_id": 8,
"text": "c_1: (x,0,x^2)"
},
{
"math_id": 9,
"text": "c_2:(0,y,-y^2)"
},
{
"math_id": 10,
"text": "\\ c_1: \\; \\vec x=\\gamma_1(u)\\ "
},
{
"math_id": 11,
"text": " \\ c_2:\\; \\vec x=\\gamma_2(v)\\ "
},
{
"math_id": 12,
"text": "\\gamma_1(0)=\\gamma_2(0)=\\vec 0"
},
{
"math_id": 13,
"text": "\\Phi"
},
{
"math_id": 14,
"text": " \\quad \\vec x=\\gamma_1(u)+\\gamma_2(v) \\; "
},
{
"math_id": 15,
"text": "X"
},
{
"math_id": 16,
"text": "c_1,c_2"
},
{
"math_id": 17,
"text": "\\vec x(u_0,v)"
},
{
"math_id": 18,
"text": "\\vec x(u,v_0)"
},
{
"math_id": 19,
"text": "\\quad \\vec x=\\frac{1}{2}(\\gamma_1(u)+\\gamma_2(v)) \\; ."
},
{
"math_id": 20,
"text": "\\vec x(u,v)= (u\\cos v,u\\sin v, kv)"
},
{
"math_id": 21,
"text": "2\\pi k"
},
{
"math_id": 22,
"text": "\\alpha, \\varphi"
},
{
"math_id": 23,
"text": "u=2a\\cos\\left(\\frac{\\alpha-\\varphi} 2 \\right)\\ , \\ \\ v=\\frac{\\alpha+\\varphi}{2}"
},
{
"math_id": 24,
"text": "a"
},
{
"math_id": 25,
"text": "\\vec X(\\alpha,\\varphi)= \\left (a\\cos\\alpha + a\\cos \\varphi \\; ,\\; a\\sin\\alpha + a\\sin \\varphi\\; ,\\; \\frac{k\\alpha}{2}+\\frac{k\\varphi}{2}\\right )"
},
{
"math_id": 26,
"text": "=(a\\cos\\alpha , a\\sin\\alpha , \\frac{k\\alpha}{2} ) \\ +\\ (a\\cos\\varphi , a\\sin\\varphi ,\\frac{k\\varphi}{2} )\\ ,"
},
{
"math_id": 27,
"text": "c_1: \\; \\gamma_1=\\vec X(\\alpha,0)=\\left(a+a\\cos\\alpha , a\\sin\\alpha , \\frac{k\\alpha}{2} \\right) \\quad "
},
{
"math_id": 28,
"text": "c_2: \\; \\gamma_2=\\vec X(0,\\varphi)=\\left(a+a\\cos\\varphi , a\\sin\\varphi ,\\frac{k\\varphi}{2} \\right)\\ ."
},
{
"math_id": 29,
"text": " P=\\vec X(0,0)=(2a,0,0)"
},
{
"math_id": 30,
"text": "k\\pi\\;, "
},
{
"math_id": 31,
"text": "(x-a)^2+y^2=a^2"
},
{
"math_id": 32,
"text": "x^2+y^2=4a^2"
},
{
"math_id": 33,
"text": "\n\\begin{align}\n\\vec X(\\alpha,\\varphi) & = \\left(a\\cos\\alpha , a\\sin\\alpha , \\frac{k\\alpha}{2} \\right) \\ +\\ \\left(a\\cos\\varphi , a\\sin\\varphi ,\\frac{k\\varphi}{2} \\right) \\\\[5pt]\n& =\\frac{1}{2}(\\delta_1(\\alpha) +\\delta_2(\\varphi))\\ ,\\quad\n\\end{align}\n"
},
{
"math_id": 34,
"text": "d_1: \\ \\vec x=\\delta_1(\\alpha)=(2a\\cos\\alpha , 2a\\sin\\alpha , k\\alpha ) \\ ,\\quad "
},
{
"math_id": 35,
"text": "d_2: \\ \\vec x=\\delta_2(\\varphi)=(2a\\cos\\varphi , 2a\\sin\\varphi , k\\varphi ) \\ ,\\quad "
},
{
"math_id": 36,
"text": "P_1: \\delta_1(\\alpha_0) "
},
{
"math_id": 37,
"text": "d_1"
},
{
"math_id": 38,
"text": "P_2: \\delta_2(\\varphi_0)"
},
{
"math_id": 39,
"text": "d_2"
},
{
"math_id": 40,
"text": "\\ M: \\frac{1}{2}(\\delta_1(\\alpha_0) +\\delta_2(\\varphi_0))=\\vec X(\\alpha_0,\\varphi_0)\\ "
},
{
"math_id": 41,
"text": " \\vec x(u,v)=\\gamma_1(u)+\\gamma_2(v) \\; "
},
{
"math_id": 42,
"text": " \\vec x(u,v)"
},
{
"math_id": 43,
"text": "0"
},
{
"math_id": 44,
"text": "M"
}
] |
https://en.wikipedia.org/wiki?curid=59182989
|
59183779
|
Jan van Deemter
|
Dutch physicist (1918–2004)
Jan Jozef van Deemter (31 March 1918 – 10 October 2004) was a Dutch physicist and engineer known for the Van Deemter equation in chromatography.
He obtained his doctorate in physics from the University of Amsterdam in June 1950. In 1947 he had begun working for Royal Dutch Shell as a researcher, and it was there that he developed the equation, publishing it in 1956.
Van Deemter equation.
The van Deemter equation relates the resolving power of a chromatographic column to the various flow and kinetic parameters which cause peak broadening through
formula_0
where HETP is the height equivalent to a theoretical plate, "A" is the eddy-diffusion parameter, "B" is the coefficient of longitudinal diffusion of the eluting material, "C" = "C""s" + "C""m" is the resistance-to-mass-transfer coefficient of the analyte between the mobile and stationary phases, and "u" is the linear velocity of the column flow.
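As a brief illustration (the parameter values below are made up, not taken from van Deemter's paper), setting dH/du = 0 gives the optimal flow velocity "u"opt = √("B"/"C") that minimizes the plate height:

```python
import math

def hetp(u, A, B, C):
    """Van Deemter plate height H(u) = A + B/u + C*u  (with C = Cs + Cm)."""
    return A + B / u + C * u

def optimal_velocity(B, C):
    """dH/du = -B/u**2 + C = 0  =>  u_opt = sqrt(B/C)."""
    return math.sqrt(B / C)

# Illustrative parameters only; not from any real column.
A, B, C = 0.1, 2.0, 0.5
u0 = optimal_velocity(B, C)
assert hetp(u0, A, B, C) <= hetp(0.9 * u0, A, B, C)   # u0 is a minimum
assert hetp(u0, A, B, C) <= hetp(1.1 * u0, A, B, C)
```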
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " HETP = A + \\frac{B}{u} + (C_s +C_m)\\cdot u"
}
] |
https://en.wikipedia.org/wiki?curid=59183779
|
59188974
|
Localization-protected quantum order
|
Many-body localization (MBL) is a dynamical phenomenon which leads to the breakdown of equilibrium statistical mechanics in isolated many-body systems. Such systems never reach local thermal equilibrium, and retain local memory of their initial conditions for infinite times. One can still define a notion of phase structure in these out-of-equilibrium systems. Strikingly, MBL can even enable new kinds of exotic orders that are disallowed in thermal equilibrium – a phenomenon that goes by the name of localization-protected quantum order (LPQO) or eigenstate order.
Background.
The study of phases of matter and the transitions between them has been a central enterprise in physics for well over a century. One of the earliest paradigms for elucidating phase structure, associated most with Landau, classifies phases according to the spontaneous breaking of global symmetries present in a physical system. More recently, we have also made great strides in understanding topological phases of matter which lie outside Landau's framework: the order in topological phases cannot be characterized by local patterns of symmetry breaking, and is instead encoded in global patterns of quantum entanglement.
All of this remarkable progress rests on the foundation of equilibrium statistical mechanics. Phases and phase transitions are only sharply defined for macroscopic systems in the thermodynamic limit, and statistical mechanics allows us to make useful predictions about such macroscopic systems with many (~10^23) constituent particles. A fundamental assumption of statistical mechanics is that systems generically reach a state of thermal equilibrium (such as the Gibbs state) which can be characterized by only a few parameters such as temperature or a chemical potential. Traditionally, phase structure is studied by examining the behavior of "order parameters" in equilibrium states. At zero temperature, these are evaluated in the ground state of the system, and different phases correspond to different quantum orders (topological or otherwise). Thermal equilibrium strongly constrains the allowed orders at finite temperatures. In general, thermal fluctuations at finite temperatures reduce the long-ranged quantum correlations present in ordered phases and, in lower dimensions, can destroy order altogether. As an example, the Peierls-Mermin-Wagner theorems prove that a one-dimensional system cannot spontaneously break a continuous symmetry at any non-zero temperature.
Recent progress on the phenomenon of many-body localization has revealed classes of generic (typically disordered) many-body systems which "never" reach local thermal equilibrium, and thus lie outside the framework of equilibrium statistical mechanics. MBL systems can undergo a dynamical phase transition to a thermalizing phase as parameters such as the disorder or interaction strength are tuned, and the nature of the MBL-to-thermal phase transition is an active area of research. The existence of MBL raises the interesting question of whether one can have different kinds of MBL phases, just as there are different kinds of thermalizing phases. Remarkably, the answer is affirmative, and out-of-equilibrium systems can also display a rich phase structure. What's more, the suppression of thermal fluctuations in localized systems can even allow for new kinds of order that are forbidden in equilibrium—which is the essence of localization-protected quantum order. The recent discovery of time-crystals in periodically driven MBL systems is a notable example of this phenomenon.
Phases out of equilibrium: eigenstate order.
Studying phase structure in localized systems requires us to first formulate a sharp notion of a phase away from thermal equilibrium. This is done via the notion of eigenstate order: one can measure order parameters and correlation functions in "individual" energy eigenstates of a many-body system, instead of averaging over several eigenstates as in a Gibbs state. The key point is that individual eigenstates can show patterns of order that may be invisible to thermodynamic averages over eigenstates. Indeed, a thermodynamic ensemble average isn't even appropriate in MBL systems since they never reach thermal equilibrium. What's more, while individual eigenstates aren't themselves experimentally accessible, order in eigenstates nevertheless has "measurable" dynamical signatures. The eigenspectrum properties change in a singular fashion as the system transitions from one type of MBL phase to another, or from an MBL phase to a thermal one, again with measurable dynamical signatures.
When considering eigenstate order in MBL systems, one generally speaks of "highly excited" eigenstates at energy densities that would correspond to high or infinite temperatures if the system were able to thermalize. In a thermalizing system, the temperature is defined via formula_0, where the entropy formula_1 is maximized near the middle of the many-body spectrum (corresponding to formula_2) and vanishes near the edges of the spectrum (corresponding to formula_3). Thus, "infinite temperature eigenstates" are those drawn from near the middle of the spectrum, and it is more correct to refer to energy densities rather than temperatures, since temperature is only defined in equilibrium. In MBL systems, the suppression of thermal fluctuations means that the properties of highly excited eigenstates are similar, in many respects, to those of ground states of gapped local Hamiltonians. This enables various forms of ground-state order to be promoted to finite energy densities.
We note that in thermalizing many-body systems, the notion of eigenstate order is congruent with the usual definition of phases. This is because the eigenstate thermalization hypothesis (ETH) implies that local observables (such as order parameters) computed in individual eigenstates agree with those computed in the Gibbs state at a temperature appropriate to the energy density of the eigenstate. On the other hand, MBL systems do not obey the ETH, and nearby many-body eigenstates have very different local properties. This is what enables individual MBL eigenstates to display order even if thermodynamic averages are forbidden from doing so.
Localization-protected symmetry-breaking order.
Localization enables symmetry-breaking orders at finite energy densities that are forbidden in equilibrium by the Peierls-Mermin-Wagner theorems.
Let us illustrate this with the concrete example of a disordered transverse field Ising chain in one dimension:
formula_4
where formula_5 are Pauli spin-1/2 operators in a chain of length formula_6, all the couplings formula_7 are positive random numbers drawn from distributions with means formula_8, and the system has Ising symmetry formula_9 corresponding to flipping all spins in the formula_10 basis. The formula_11 term introduces interactions, and the system is mappable to a free fermion model (the Kitaev chain) when formula_12.
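For concreteness, a minimal exact-diagonalization sketch (my own construction, using dense NumPy matrices and dropping the formula_11 term) builds this Hamiltonian for a short chain and checks the Ising symmetry, i.e. that formula_9 commutes with the Hamiltonian:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli x
sz = np.array([[1.0, 0.0], [0.0, -1.0]])  # Pauli z
I2 = np.eye(2)

def embed(single, site, L):
    """Tensor a single-site operator into site `site` of an L-spin chain."""
    out = np.array([[1.0]])
    for i in range(L):
        out = np.kron(out, single if i == site else I2)
    return out

def ising_chain(J, h, L):
    """Open chain H = sum_i J_i sz_i sz_{i+1} + sum_i h_i sx_i (J_int = 0)."""
    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L - 1):
        H += J[i] * embed(sz, i, L) @ embed(sz, i + 1, L)
    for i in range(L):
        H += h[i] * embed(sx, i, L)
    return H

rng = np.random.default_rng(0)
L = 4
H = ising_chain(rng.uniform(0.5, 1.5, L - 1), rng.uniform(0.1, 0.3, L), L)
P = embed(sx, 0, L)
for i in range(1, L):
    P = P @ embed(sx, i, L)          # P = product of sx over all sites
assert np.allclose(H @ P, P @ H)     # Ising symmetry: [H, P] = 0
```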
Non-interacting Ising chain – no disorder.
Let us first consider the clean, non-interacting system: formula_13. In equilibrium, the ground state is ferromagnetically ordered with spins aligned along the formula_14 axis for formula_15, but is a paramagnet for formula_16 and at any finite temperature (Fig 1a). Deep in the ordered phase, the system has two degenerate Ising-symmetric ground states which look like "Schrödinger cat" or superposition states: formula_17. These display long-range order:
formula_18
At any finite temperature, thermal fluctuations lead to a finite density of delocalized domain walls since the entropic gain from creating these domain walls wins over the energy cost in one dimension. These fluctuations destroy long-range order since the presence of fluctuating domain walls destroys the correlation between distant spins.
Disordered non-interacting Ising chain.
Upon turning on disorder, the excitations in the non-interacting model (formula_12) localize due to Anderson localization. In other words, the domain walls get pinned by the disorder, so that a generic highly excited eigenstate for formula_19 looks like formula_20, where formula_21 refers to the formula_22 eigenstate and the pattern of spins is eigenstate dependent. Note that a spin-spin correlation function evaluated in this state is non-zero for arbitrarily distant spins, but has a fluctuating sign depending on whether an even or odd number of domain walls is crossed between the two sites. Hence, we say that the system has long-range spin-"glass" (SG) order. Indeed, for formula_23, localization promotes the ground-state ferromagnetic order to spin-glass order in highly excited states at all energy densities (Fig 1b). If one averages over eigenstates as in the thermal Gibbs state, the fluctuating signs cause the correlations to average out, as required by the Peierls theorem forbidding the breaking of discrete symmetries at finite temperatures in 1D. For formula_24, the system is paramagnetic (PM), and the eigenstates deep in the PM look like product states in the formula_25 basis and do not show long-range Ising order: formula_26. The transition between the localized PM and the localized SG at formula_27 belongs to the infinite-randomness universality class.
Disordered interacting Ising chain.
Upon turning on weak interactions formula_28, the Anderson insulator remains many-body localized and order persists deep in the PM/SG phases. Strong enough interactions destroy MBL and the system transitions to a thermalizing phase. The fate of the MBL PM to MBL SG transition in the presence of interactions is presently unsettled, and it is likely this transition proceeds via an intervening thermal phase (Fig 1c).
Detecting eigenstate order – measurable signatures.
While the discussion above pertains to sharp diagnostics of LPQO obtained by evaluating order parameters and correlation functions in individual highly excited many-body eigenstates, such quantities are nearly impossible to measure experimentally. Nevertheless, even though individual eigenstates aren't themselves experimentally accessible, order in eigenstates has measurable dynamical signatures. In other words, measuring a local physically accessible observable in time starting from a physically preparable initial state still contains sharp signatures of eigenstate order.
For example, for the disordered Ising chain discussed above, one can prepare random symmetry-broken initial states which are product states in the formula_10 basis: formula_29. These randomly chosen states are at infinite temperature. Then, one can measure the local magnetization formula_30 in time, which acts as an order parameter for symmetry breaking. It is straightforward to show that formula_31 saturates to a non-zero value even at infinitely late times in the symmetry-broken spin-glass phase, while it decays to zero in the paramagnet. The singularity in the eigenspectrum properties at the transition between the localized SG and PM phases translates into a sharp dynamical phase transition which is measurable. Indeed, a nice example of this is furnished by recent experiments detecting time-crystals in Floquet MBL systems, where the time-crystal phase spontaneously breaks both time-translation symmetry and spatial Ising symmetry, showing correlated spatiotemporal eigenstate order.
Localization-protected topological order.
Similar to the case of symmetry-breaking order, thermal fluctuations at finite temperatures can reduce or destroy the quantum correlations necessary for topological order. Once again, localization can enable such orders in regimes forbidden by equilibrium. This happens both for the so-called long-range entangled topological phases and for "symmetry protected" or short-range entangled topological phases. The toric-code/formula_32 gauge theory in 2D is an example of the former, and the topological order in this phase can be diagnosed by Wilson loop operators. The topological order is destroyed in equilibrium at any finite temperature due to fluctuating vortices; however, these can get localized by disorder, enabling "glassy" localization-protected topological order at finite energy densities. On the other hand, symmetry protected topological (SPT) phases do not have any bulk long-range order, and are distinguished from trivial paramagnets by the presence of coherent gapless edge modes as long as the protecting symmetry is present. In equilibrium, these edge modes are typically destroyed at finite temperatures as they decohere due to interactions with delocalized bulk excitations. Once again, localization protects the coherence of these modes even at finite energy densities. The presence of localization-protected topological order could potentially have far-reaching consequences for developing new quantum technologies by allowing for quantum coherent phenomena at high energies.
Floquet systems.
It has been shown that periodically driven or Floquet systems can also be many-body localized under suitable drive conditions. This is remarkable because one generically expects a driven many-body system to simply heat up to a trivial infinite temperature state (the maximum entropy state without energy conservation). However, with MBL, this heating can be evaded and one can again get non-trivial quantum orders in the eigenstates of the Floquet unitary, which is the time-evolution operator for one period. The most striking example of this is the time-crystal, a phase with long-range spatiotemporal order and spontaneous breaking of time translation symmetry. This phase is disallowed in thermal equilibrium, but can be realized in a Floquet MBL setting.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "T = \\left ( \\frac{dS}{dE} \\right )^{-1}"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "T=\\infty"
},
{
"math_id": 3,
"text": "T=0^{\\pm}"
},
{
"math_id": 4,
"text": "H = \\sum_{i=1}^L J_i \\sigma_i^z \\sigma_{i+1}^z + h_i \\sigma_i^x + J_{\\rm int} ( \\sigma_i^z \\sigma_{i+2}^z + \\sigma_i^z \\sigma_{i+1}^z)"
},
{
"math_id": 5,
"text": "\\sigma_i^{x/y/z}"
},
{
"math_id": 6,
"text": "L"
},
{
"math_id": 7,
"text": "\\{J_i, h_i\\}"
},
{
"math_id": 8,
"text": "\\overline{J}, \\overline{h}"
},
{
"math_id": 9,
"text": "P = \\prod_i \\sigma_i^x"
},
{
"math_id": 10,
"text": "z"
},
{
"math_id": 11,
"text": "J_{\\rm int}"
},
{
"math_id": 12,
"text": "J_{\\rm int}=0"
},
{
"math_id": 13,
"text": " J_i = J, \\;h_i = h, \\; J_{\\rm int}=0 "
},
{
"math_id": 14,
"text": " z "
},
{
"math_id": 15,
"text": " J>h "
},
{
"math_id": 16,
"text": " J < h "
},
{
"math_id": 17,
"text": " |\\psi_0^\\pm\\rangle = \\frac{1}{\\sqrt{2}}(|\\uparrow\\uparrow \\cdots \\uparrow\\rangle \\pm |\\downarrow\\downarrow \\cdots \\downarrow\\rangle) "
},
{
"math_id": 18,
"text": " \\lim_{|i-j| \\rightarrow \\infty} \\lim_{L \\rightarrow \\infty} \\langle \\psi_0^\\pm| \\sigma_i^z \\sigma_j^z|\\psi_0^\\pm\\rangle - \\langle \\psi_0^\\pm| \\sigma_i^z|\\psi_0^\\pm\\rangle\\langle \\psi_0^\\pm| \\sigma_j^z|\\psi_0^\\pm\\rangle > 0. "
},
{
"math_id": 19,
"text": "\\overline{J} \\gg \\overline{h}"
},
{
"math_id": 20,
"text": "|\\psi_{\\rm SG}^{n,\\pm}\\rangle = \\frac{1}{\\sqrt{2}}(|\\uparrow\\uparrow \\downarrow \\downarrow \\downarrow \\uparrow \\uparrow \\cdots \\rangle \\pm |\\downarrow\\downarrow \\uparrow \\uparrow \\uparrow \\downarrow \\downarrow \\cdots \\rangle "
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "n^\\text{th}"
},
{
"math_id": 23,
"text": "\\overline{J} > \\overline{h}"
},
{
"math_id": 24,
"text": "\\overline{J} < \\overline{h}"
},
{
"math_id": 25,
"text": "x"
},
{
"math_id": 26,
"text": "|\\psi_{\\rm PM}^n\\rangle = |\\rightarrow \\rightarrow \\leftarrow \\leftarrow\\leftarrow \\rightarrow \\cdots \\rangle"
},
{
"math_id": 27,
"text": "\\overline{J} = \\overline{h}"
},
{
"math_id": 28,
"text": "J_{\\rm int} \\neq 0"
},
{
"math_id": 29,
"text": "|\\psi_0\\rangle = |\\uparrow \\downarrow \\downarrow \\uparrow \\cdots \\uparrow \\uparrow \\downarrow\\rangle"
},
{
"math_id": 30,
"text": "\\langle \\sigma_i^z \\rangle"
},
{
"math_id": 31,
"text": "\\langle \\psi_0(t)| \\sigma_i^z |\\psi_0(t)\\rangle"
},
{
"math_id": 32,
"text": "Z_2"
}
] |
https://en.wikipedia.org/wiki?curid=59188974
|
591931
|
Dependency ratio
|
Age-population ratio of those in the labor force to those not in the labor force
The dependency ratio is an age-population ratio of those typically not in the labor force (the "dependent" part ages 0 to 14 and 65+) and those typically in the labor force (the "productive" part ages 15 to 64). It is used to measure the pressure on the productive population.
Consideration of the dependency ratio is essential for governments, economists, bankers, business, industry, universities and all other major economic segments which can benefit from understanding the impacts of changes in population structure. A low dependency ratio means that there are sufficient people working who can support the dependent population.
A lower ratio could allow for better pensions and better health care for citizens. A higher ratio indicates more financial stress on working people and possible political instability. While the strategies of increasing fertility and of allowing immigration especially of younger working age people have been formulas for lowering dependency ratios, future job reductions through automation may impact the effectiveness of those strategies.
Formula.
In published international statistics, the dependent part usually includes those under the age of 15 and over the age of 64. The productive part makes up the population in between, ages 15 – 64. It is normally expressed as a percentage:
formula_0
As the ratio increases there may be an increased burden on the productive part of the population to maintain the upbringing and pensions of the economically dependent. This results in direct impacts on financial expenditures on things like social security, as well as many indirect consequences.
The (total) dependency ratio can be decomposed into the child dependency ratio and the aged dependency ratio:
formula_1
formula_2
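The decomposition above is straightforward to compute; a small Python helper (illustrative, with made-up population numbers) shows it:

```python
def dependency_ratios(pop_0_14, pop_15_64, pop_65_plus):
    """Return child, aged and total dependency ratios, in percent."""
    child = 100.0 * pop_0_14 / pop_15_64
    aged = 100.0 * pop_65_plus / pop_15_64
    return {"child": child, "aged": aged, "total": child + aged}

# Made-up example: 30 children, 60 working-age people, 10 people aged 65+
r = dependency_ratios(30, 60, 10)
assert abs(r["child"] - 50.0) < 1e-9                       # 100 * 30/60
assert abs(r["total"] - (r["child"] + r["aged"])) < 1e-12  # decomposition
```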
Total dependency ratio by regions.
Projections.
Below is a table constructed from data provided by the UN Population Division. It shows a historical ratio for the regions shown for the period 1950 - 2010. Columns to the right show projections of the ratio. Each number in the table shows the total number of dependents (people aged 0–14 plus people aged over 65) per hundred people in the workforce (number of people aged 15–64). The number can also be expressed as a percent. So, the total dependency ratio for the world in 1950 was 64.8% of the workforce.
As of 2010, Japan and Europe had high aged dependency ratios (population over 65 as a percentage of the workforce) compared to other parts of the world. In Europe in 2010, for every adult aged 65 and older there were approximately four working-age adults (15–64); by 2050 this is expected to fall to only two working-age adults per older adult, i.e. the aged dependency ratio is expected to rise from about 25% to about 50%. An aging population is caused by a decline in fertility and longer life expectancy. The average life expectancy of males and females is expected to increase from 79 years in 1990 to 82 years in 2025. The dependency of Japanese residents aged 65 and older is expected to increase, which will have a major impact on Japan's economy.
Inverse.
The inverse of the dependency ratio, the inverse dependency ratio, can be interpreted as how many independent workers have to provide for one dependent person (pensions and expenditure on children).
Measures of dependency.
Old age dependency ratio.
A high dependency ratio can cause serious problems for a country if a large proportion of a government's expenditure is on health, social security and education, which are most used by the youngest and the oldest in a population. The fewer people of working age, the fewer the people who can support schools, retirement pensions, disability pensions and other assistance for the youngest and oldest members of a population, often considered the most vulnerable members of society. The ratio of old (usually retired) people to young working people is called the old age dependency ratio (OADR) or simply the dependency ratio.
Nevertheless, the dependency ratio ignores the fact that those aged 65+ are not necessarily dependent (an increasing proportion of them are working) and that many of those of 'working age' are actually not working. Alternatives have been developed, such as the 'economic dependency ratio', but they still ignore factors such as increases in productivity and in working hours. Worries about the increasing (demographic) dependency ratio should thus be taken with caution.
Labor force dependency ratio.
The "labor force dependency ratio" (LFDR) is a more specific metric than the "old age dependency ratio" because it measures the ratio of the older "retired" population to the "employed" population at "all ages" (or the ratio of the inactive population to the active population at all ages).
Productivity weighted labor force dependency ratio.
While OADRs or LFDRs provide reasonable measures of dependency, they do not account for the fact that middle-aged and educated workers are usually the most productive. Hence the productivity weighted labor force dependency ratio (PWLFDR) may be a better metric for determining dependency. The PWLFDR is the ratio of the inactive population (all ages) to the active population (all ages), weighted by productivity for each education level. Interestingly, while OADRs or LFDRs can change substantially, the PWLFDR is predicted to remain relatively constant in countries like China for the next couple of decades. PWLFDR assessments recommend investing in education, life-long learning, and child health to maintain social stability even as populations age.
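The weighting described above can be sketched as follows; the worker counts and per-education-level productivity weights are hypothetical illustrations, not published values:

```python
def pwlfdr(inactive, active_counts, productivity_weights):
    """Inactive population divided by the productivity-weighted active population."""
    weighted_active = sum(n * w for n, w in zip(active_counts, productivity_weights))
    return inactive / weighted_active

# Hypothetical: 40 inactive people; workers split across three education levels,
# with illustrative productivity weights (low, medium, high)
print(pwlfdr(40, [30, 40, 30], [0.8, 1.0, 1.5]))
```

Raising the weights of the more educated groups lowers the ratio, which is why the PWLFDR can stay roughly constant even as the unweighted dependency ratio rises.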
Migrant labor dependency ratio.
Migrant labor dependency ratio (MLDR) is used to describe the extent to which the domestic population is dependent upon migrant labor.
Impact on savings and housing markets.
High dependency ratios can lead to long-term economic changes in saving rates, investment rates, housing markets, and consumption patterns. Typically, workers increase their savings as they approach retirement age, but as the retired population grows and fertility rates decline, aggregate savings eventually fall while long-term interest rates rise. Decreasing saving rates in turn constrain economic growth, because less funding is available for investment projects. There is also a correlation between the labor force and housing markets: when a country has a high age-dependency ratio, investment in housing markets decreases as the labor force shrinks relative to the dependent population.
Solutions.
Low dependency ratios promote economic growth, while high dependency ratios hamper it because of the large number of dependents who pay little to no tax. One solution for decreasing a country's dependency ratio is to promote the immigration of younger people: the working-age population grows as more young adults migrate into the country, stimulating higher economic growth.
The increased involvement of women in the work force has added to the working-age population, which improves a country's dependency ratio. Encouraging women to work thus helps decrease the dependency ratio, although because more women are obtaining higher education they are less likely to have children, causing fertility rates to decrease as well.
Using productivity weighted labor force dependency ratio (PWLFDR) suggests that even an aging or decreasing population can maintain a stable support for the dependent (primarily ageing) population by increasing its productivity. A consequence from PWLFDR assessments is the recommendation to invest in education and life-long learning, child health, and to support disabled workers.
Dependency ratios based on the demographic transition model.
The age-dependency ratio can indicate which stage of the Demographic Transition Model a country is in, rising and falling as the country moves through the stages. During stages 1 and 2, the dependency ratio is high because significantly high crude birth rates put pressure on a smaller working-age population to support the dependents. In stage 3, the dependency ratio starts to decrease as fertility and mortality rates fall, so the proportion of adults relative to the young and elderly is much larger in this stage.
In stages 4 and 5, the dependency ratio starts to increase once again as the working-age population retires. Because fertility rates caused the younger population to decrease, once they grow up and start working, there will be more pressure for them to take care of the previous working-age population that just retired since there will be more young and elderly people than working-age adults during that time period.
The population structure of a country is an important factor in determining its economic status. Japan is a clear example of an aging population, with roughly one person aged 65 or older for every four people of working age; this causes trouble because there are not enough people in the working-age population to support all of the elderly. Rwanda is an example of the opposite problem, a population dominated by the young (also known as the "youth bulge"). Both countries struggle with high dependency ratios even though they are at opposite stages of the Demographic Transition Model.
Criticism.
The dependency ratio has been criticized for ignoring that many older adults are employed, and many younger adults are not, and obscuring other trends such as improving health for older people that might make older people less economically dependent. For this reason, the Office of the United Nations High Commissioner for Human Rights has characterized the metric as ageist, and recommends avoiding its use. Alternative metrics, such as the economic dependency ratio (defined as the number of unemployed and retired people divided by the number of workers) do address this oversimplification, but ignore the effects of productivity and work hours.
See also.
Case studies:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{(Total)\\ Dependency\\ ratio} = \\frac{(\\mathrm{number\\ of\\ people\\ aged\\ 0\\ to\\ 14}) + (\\mathrm{number\\ of\\ people\\ aged\\ 65\\ and\\ over)}}{\\mathrm{number\\ of\\ people\\ aged\\ 15\\ to\\ 64}} \\times 100 "
},
{
"math_id": 1,
"text": "\\mathrm{Child\\ dependency\\ ratio} = \\frac{\\mathrm{number\\ of\\ people\\ aged\\ 0\\ to\\ 14}}{\\mathrm{number\\ of\\ people\\ aged\\ 15\\ to\\ 64}} \\times 100 "
},
{
"math_id": 2,
"text": "\\mathrm{Aged\\ dependency\\ ratio} = \\frac{\\mathrm{number\\ of\\ people\\ aged\\ 65\\ and\\ over}}{\\mathrm{number\\ of\\ people\\ aged\\ 15\\ to\\ 64}} \\times 100 "
}
] |
https://en.wikipedia.org/wiki?curid=591931
|
591994
|
Cryptographic protocol
|
Aspect of cryptography
A cryptographic protocol is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used and includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program.
Cryptographic protocols are widely used for secure application-level data transport. A cryptographic protocol usually incorporates at least some of these aspects:
For example, Transport Layer Security (TLS) is a cryptographic protocol that is used to secure web (HTTPS) connections. It has an entity authentication mechanism, based on the X.509 system; a key setup phase, where a symmetric encryption key is formed by employing public-key cryptography; and an application-level data transport function. These three aspects have important interconnections. Standard TLS does not have non-repudiation support.
There are other types of cryptographic protocols as well, and even the term itself has various readings. Cryptographic "application" protocols often use one or more underlying key agreement methods, which are also sometimes themselves referred to as "cryptographic protocols". For instance, TLS employs the Diffie–Hellman key exchange; although it is only a part of TLS "per se", Diffie–Hellman may be seen as a complete cryptographic protocol in itself for other applications.
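As a sketch of how such a key agreement works, here is a toy Diffie–Hellman exchange with deliberately tiny, insecure parameters; the prime, generator, and secret exponents are hypothetical toy values (real TLS deployments use large prime or elliptic-curve groups):

```python
# Toy Diffie-Hellman key exchange (illustration only, not secure)
p, g = 23, 5            # public prime modulus and generator (toy values)

a = 6                   # Alice's secret exponent
b = 15                  # Bob's secret exponent

A = pow(g, a, p)        # Alice sends g^a mod p
B = pow(g, b, p)        # Bob sends g^b mod p

shared_alice = pow(B, a, p)    # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)      # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob  # both sides derive the same secret
print(shared_alice)
```

The exchanged values A and B are public; only the secret exponents let each party derive the shared key, which can then seed a symmetric cipher for the data-transport phase.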
Advanced cryptographic protocols.
A wide variety of cryptographic protocols go beyond the traditional goals of data confidentiality, integrity, and authentication to also secure a variety of other desired characteristics of computer-mediated collaboration. Blind signatures can be used for digital cash and digital credentials to prove that a person holds an attribute or right without revealing that person's identity or the identities of parties that person transacted with. Secure digital timestamping can be used to prove that data (even if confidential) existed at a certain time. Secure multiparty computation can be used to compute answers (such as determining the highest bid in an auction) based on confidential data (such as private bids), so that when the protocol is complete the participants know only their own input and the answer. End-to-end auditable voting systems provide sets of desirable privacy and auditability properties for conducting e-voting. Undeniable signatures include interactive protocols that allow the signer to prove a forgery and limit who can verify the signature. Deniable encryption augments standard encryption by making it impossible for an attacker to mathematically prove the existence of a plain text message. Digital mixes create hard-to-trace communications.
Formal verification.
Cryptographic protocols can sometimes be verified formally on an abstract level. When this is done, it is necessary to formalize the environment in which the protocol operates in order to identify threats. This is frequently done through the Dolev–Yao model.
Logics, concepts and calculi used for formal reasoning of security protocols:
Research projects and tools used for formal verification of security protocols:
Notion of abstract protocol.
To formally verify a protocol it is often abstracted and modelled using Alice & Bob notation. A simple example is the following:
formula_0
This states that Alice formula_1 intends a message for Bob formula_2 consisting of a message formula_3 encrypted under shared key formula_4.
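The notation can be made concrete with a toy sketch: the shared-key encryption below is a simple XOR stream chosen only for illustration (real protocols use authenticated ciphers such as AES-GCM), and the key and message values are hypothetical:

```python
import itertools

def toy_encrypt(key: bytes, message: bytes) -> bytes:
    """XOR the message with a repeating key; decryption is the same operation."""
    return bytes(m ^ k for m, k in zip(message, itertools.cycle(key)))

k_ab = b"shared-key"                       # K_{A,B}: key known to both A and B
x = b"hello Bob"                           # the message X
ciphertext = toy_encrypt(k_ab, x)          # A -> B : {X}_{K_{A,B}}
assert toy_encrypt(k_ab, ciphertext) == x  # B recovers X with the shared key
```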
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A\\rightarrow B:\\{X\\}_{K_{A,B}}"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "K_{A,B}"
}
] |
https://en.wikipedia.org/wiki?curid=591994
|
5920062
|
Crooks fluctuation theorem
|
The Crooks fluctuation theorem (CFT), sometimes known as the Crooks equation, is an equation in statistical mechanics that relates the work done on a system during a non-equilibrium transformation to the free energy difference between the final and the initial state of the transformation. During the non-equilibrium transformation the system is at constant volume and in contact with a heat reservoir. The CFT is named after the chemist Gavin E. Crooks (then at University of California, Berkeley) who discovered it in 1998.
The most general statement of the CFT relates the probability of a space-time trajectory formula_0 to the time-reversal of the trajectory formula_1. The theorem says if the dynamics of the system satisfies microscopic reversibility, then the forward time trajectory is exponentially more likely than the reverse, given that it produces entropy,
formula_2
If one defines a generic reaction coordinate of the system as a function of the Cartesian coordinates of the constituent particles (e.g., a distance between two particles), one can characterize every point along the reaction coordinate path by a parameter formula_3, such that formula_4 and formula_5 correspond to two ensembles of microstates for which the reaction coordinate is constrained to different values. A dynamical process where formula_3 is externally driven from zero to one, according to an arbitrary time scheduling, will be referred to as the "forward transformation", while the time-reversed path will be indicated as the "backward transformation". Given these definitions, the CFT sets a relation between the following five quantities:
The CFT equation reads as follows:
formula_15
In the previous equation the difference formula_16 corresponds to the work dissipated in the forward transformation, formula_17. The probabilities formula_6 and formula_9 become identical when the transformation is performed at infinitely slow speed, i.e. for equilibrium transformations. In such cases, formula_18 and formula_19
Using the time reversal relation formula_20, and grouping together all the trajectories yielding the same work (in the forward and backward transformation), i.e. determining the probability distribution (or density) formula_21 of an amount of work formula_22 being exerted by a random system trajectory from formula_7 to formula_8, we can write the above equation in terms of the work distribution functions as follows
formula_23
Note that for the backward transformation, the work distribution function must be evaluated by taking the work with the opposite sign. The two work distributions for the forward and backward processes cross at formula_24. This phenomenon has been experimentally verified using optical tweezers for the process of unfolding and refolding of a small RNA hairpin and an RNA three-helix junction.
The CFT implies the Jarzynski equality.
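For Gaussian work distributions the fluctuation relation can be checked numerically. The sketch below assumes β = 1 and equal variances; in that case the two Gaussians satisfy the CFT exactly when μ_F + μ_B = βσ² and ΔF = (μ_F − μ_B)/2 (the parameter values are illustrative assumptions, not from the source):

```python
import math

beta = 1.0
var = 2.0                  # common variance of both work distributions
mu_f, mu_b = 3.0, -1.0     # chosen so that mu_f + mu_b = beta * var
dF = (mu_f - mu_b) / 2.0   # free energy difference implied by these Gaussians

def gauss(w, mu):
    """Normal density with mean mu and variance var."""
    return math.exp(-(w - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Crooks relation: P_F(W) = P_B(-W) * exp(beta * (W - dF)) at every W
for w in (-1.0, 0.0, 2.0, 4.5):
    lhs = gauss(w, mu_f)
    rhs = gauss(-w, mu_b) * math.exp(beta * (w - dF))
    assert abs(lhs - rhs) < 1e-12
```

Note that the two densities cross exactly at W = ΔF = 2, where the exponential factor equals one.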
|
[
{
"math_id": 0,
"text": "x(t)"
},
{
"math_id": 1,
"text": "\\tilde{x}(t)"
},
{
"math_id": 2,
"text": " \\frac{P[x(t)]}{\\tilde{P}[\\tilde{x}(t)]} = e^{\\sigma[x(t)]}. "
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "\\lambda = 0"
},
{
"math_id": 5,
"text": "\\lambda = 1"
},
{
"math_id": 6,
"text": "P(A \\rightarrow B)"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "B"
},
{
"math_id": 9,
"text": "P(A \\leftarrow B)"
},
{
"math_id": 10,
"text": "\\beta = (k_B T)^{-1}"
},
{
"math_id": 11,
"text": "k_B"
},
{
"math_id": 12,
"text": "T"
},
{
"math_id": 13,
"text": "W_{A \\rightarrow B}"
},
{
"math_id": 14,
"text": "\\Delta F = F(B) - F(A)"
},
{
"math_id": 15,
"text": "\n\\frac{P(A \\rightarrow B)}{P( A \\leftarrow B)} = \\exp [ \\beta ( W_{A \\rightarrow B} - \\Delta F)].\n"
},
{
"math_id": 16,
"text": "W_{A \\rightarrow B} - \\Delta F"
},
{
"math_id": 17,
"text": "W_d"
},
{
"math_id": 18,
"text": "W_{A \\rightarrow B} = \\Delta F "
},
{
"math_id": 19,
"text": "W_d = 0."
},
{
"math_id": 20,
"text": "W_{A \\rightarrow B} = -W_{A \\leftarrow B}"
},
{
"math_id": 21,
"text": "P_{A\\rightarrow B}(W)"
},
{
"math_id": 22,
"text": "W"
},
{
"math_id": 23,
"text": "\nP_{A \\rightarrow B} (W) = P_{A\\leftarrow B}(- W) ~ \\exp[\\beta (W - \\Delta F)].\n"
},
{
"math_id": 24,
"text": " W=\\Delta F "
}
] |
https://en.wikipedia.org/wiki?curid=5920062
|
5920689
|
Harmony (color)
|
Aesthetically pleasing color combination
In color theory, color harmony refers to the property that certain aesthetically pleasing color combinations have. These combinations create pleasing contrasts and consonances that are said to be harmonious. These combinations can be of complementary colors, split-complementary colors, color triads, or analogous colors. Color harmony has been a topic of extensive study throughout history, but only since the Renaissance and the Scientific Revolution has it seen extensive codification. Artists and designers make use of these harmonies in order to achieve certain moods or aesthetics.
Types.
Several patterns have been suggested for predicting which sets of colors will be perceived as harmonious. One difficulty with codifying such patterns is the variety of color spaces and color models that have been developed. Different models yield different pairs of complementary colors and so forth, and the degree of harmony of sets derived from each color space is largely subjective. Despite the development of color models based on the physics of color production, such as RGB and CMY, and those based on human perception, such as Munsell and CIE L*a*b*, the traditional RYB color model (common to most early attempts at codifying color) has persisted among many artists and designers for selecting harmonious colors.
Complementary colors.
Complementary colors exist opposite each other on the color wheel. They create the most contrast and therefore greatest visual tension by virtue of how dissimilar they are.
Split-complementary colors.
Split-complementary colors are like complementary colors, except one of the complements is split into two nearby analogous colors. This maintains the tension of complementary colors while simultaneously introducing more visual interest with more variety.
Triads.
Similarly to the split-complementary colors mentioned above, color triads involve three colors in a geometric relationship. Unlike split-complementary colors, however, all three colors are equidistant from one another on the color wheel, forming an equilateral triangle. The most common triad is that of the primary colors; from these primary colors the secondary colors are obtained.
Analogous colors.
The simplest and most stable harmony is that of analogous colors. It is composed of a root color and two or more nearby colors. It forms the basis for a color scheme, and in practice many color schemes combine analogous and complementary harmonies in order to achieve visual interest through variety, chromatic stability, and tension through contrast.
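The harmony types above reduce to simple hue arithmetic on a 360-degree color wheel. A sketch in Python; the 30-degree offset used for analogous and split-complementary hues is a common convention rather than a fixed rule:

```python
def harmonies(hue: float) -> dict:
    """Harmonious hue sets (in degrees on a color wheel) for a base hue."""
    comp = (hue + 180) % 360
    return {
        "complementary": [hue, comp],
        "split_complementary": [hue, (comp - 30) % 360, (comp + 30) % 360],
        "triadic": [hue, (hue + 120) % 360, (hue + 240) % 360],
        "analogous": [(hue - 30) % 360, hue, (hue + 30) % 360],
    }

# Base hue 0; its complement sits directly opposite at 180 degrees
print(harmonies(0))
```

Which physical colors these hue angles name depends on the wheel in use (RYB, RGB, etc.), which is exactly the model-dependence discussed above.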
Relationship.
It has been suggested that "Colors seen together to produce a pleasing affective response are said to be in harmony". However, color harmony is a complex notion because human responses to color are both affective and cognitive, involving emotional response and judgement. Hence, our responses to color and the notion of color harmony are open to the influence of a range of different factors. These factors include individual differences (such as age, gender, personal preference, affective state, etc.) as well as cultural, sub-cultural and socially-based differences which give rise to conditioning and learned responses about color. In addition, context always has an influence on responses about color and the notion of color harmony, and this concept is also influenced by temporal factors (such as changing trends) and perceptual factors (such as simultaneous contrast) which may impinge on human response to color. The following conceptual model illustrates this 21st-century approach to color harmony:
formula_0
Wherein color harmony is a function ("f") of the interaction between color/s (Col 1, 2, 3, …, "n") and the factors that influence positive aesthetic response to color: individual differences ("ID") such as age, gender, personality and affective state; cultural experiences ("CE"); contextual effects ("CX") which include setting and ambient lighting; intervening perceptual effects ("P"); and temporal effects ("T") in terms of prevailing social trends.
In addition, given that humans can perceive over 2.8 million different colors, it has been suggested that the number of possible color combinations is virtually infinite, thereby implying that predictive color harmony formulae are fundamentally unsound. Despite this, many color theorists have devised formulae, principles or guidelines for color combination with the aim of predicting or specifying positive aesthetic response or "color harmony". Color wheel models have often been used as a basis for color combination principles or guidelines and for defining relationships between colors. Some theorists and artists believe juxtapositions of complementary colors will produce strong contrast and a sense of visual tension as well as "color harmony", while others believe juxtapositions of analogous colors will elicit positive aesthetic response. Color combination guidelines suggest that colors next to each other on the color wheel model (analogous colors) tend to produce a single-hued or monochromatic color experience, and some theorists also refer to these as "simple harmonies". In addition, split complementary color schemes usually depict a modified complementary pair in which, instead of the "true" second color, a range of analogous hues around it is chosen; for example, the split complements of red are blue-green and yellow-green. A triadic color scheme adopts any three colors approximately equidistant around a color wheel model. Feisner and Mahnke are among a number of authors who provide color combination guidelines in greater detail.
Color combination formulae and principles may provide some guidance but have limited practical application. This is because of the influence of contextual, perceptual and temporal factors which will influence how color/s are perceived in any given situation, setting or context. Such formulae and principles may be useful in fashion, interior and graphic design, but much depends on the tastes, lifestyle and cultural norms of the viewer or consumer.
Since the time of the ancient Greek philosophers, many theorists have devised color associations and linked particular connotative meanings to specific colors. However, connotative color associations and color symbolism tend to be culture-bound and may also vary across different contexts and circumstances. For example, red has many different connotative and symbolic meanings, ranging from exciting, arousing, sensual, romantic and feminine, to a symbol of good luck, to a signal of danger. Such color associations tend to be learned and do not necessarily hold irrespective of individual and cultural differences or contextual, temporal or perceptual factors. It is important to note that while color symbolism and color associations exist, their existence does not provide evidential support for color psychology or claims that color has therapeutic properties.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{Color harmony} = f(\\text{Col} 1, 2, 3, \\dots, n) \\cdot (ID + CE + CX + P + T)"
}
] |
https://en.wikipedia.org/wiki?curid=5920689
|
59208283
|
M equilibrium
|
Mathematical concept
M equilibrium is a set-valued solution concept in game theory that relaxes the rational choice assumptions of perfect maximization ("no mistakes") and perfect beliefs ("no surprises"). The concept can be applied to any normal-form game with finite and discrete strategies. M equilibrium was first introduced by Jacob K. Goeree and Philippos Louis.
Background.
A large body of work in experimental game theory has documented systematic departures from Nash equilibrium, the cornerstone of classic game theory. The lack of empirical support for Nash equilibrium led Nash himself to return to doing research in pure mathematics. Selten, who shared the 1994 Nobel Prize with Nash, likewise concluded that "game theory is for proving theorems, not for playing games". M equilibrium is motivated by the desire for an empirically relevant game theory.
M equilibrium accomplishes this by replacing the two main assumptions underlying classical game theory, perfect maximization and rational expectations, with the weaker notions of ordinal monotonicity – players' choice probabilities are ranked the same as the expected payoffs based on their beliefs – and ordinal consistency – players' beliefs yield the same ranking of expected payoffs as their choices.
M equilibria do not follow from the fixed points, obtained by imposing rational expectations, that have long dominated economics. Instead, the mathematical machinery used to characterize M equilibria is semi-algebraic geometry. Interestingly, some of this machinery was developed by Nash himself. The characterization of M equilibria as semi-algebraic sets allows for mathematically precise and empirically testable predictions.
Definition.
M equilibrium is based on the following two conditions:
Let formula_0 and formula_1 denote the concatenations of players’ choice and belief profiles respectively, and let formula_2 and formula_3 denote the concatenations of players’ rank correspondences and profit functions. We write formula_4 for the profile of expected payoffs based on players’ beliefs and formula_5 for the profile of expected payoffs when beliefs are correct, i.e. formula_6 for formula_7. The set of possible choice profiles is formula_8 and the set of possible belief profiles is formula_9.
Definition: We say formula_10 form an "M Equilibrium" if they are the closures of the largest non-empty sets formula_11 and formula_12 that satisfy:
formula_13
for all formula_14, formula_15.
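The ordinal conditions can be checked mechanically for one player: the ranking of choice probabilities must agree with the ranking of expected payoffs under the player's beliefs. A sketch with toy payoffs and beliefs (hypothetical values, not from the paper); for simplicity it tests rank equality rather than the weak containment that handles ties:

```python
def ranks(values):
    """Rank positions (0 = highest), with ties sharing a rank."""
    order = sorted(set(values), reverse=True)
    return [order.index(v) for v in values]

# Toy game for the row player: rows = own actions, columns = opponent actions
payoffs = [[3, 0],
           [1, 2]]
belief = [0.7, 0.3]          # believed opponent mixed strategy
choice_probs = [0.6, 0.4]    # candidate choice probabilities

# Expected payoff of each own action under the belief
expected = [sum(p * q for p, q in zip(row, belief)) for row in payoffs]

# Ordinal monotonicity: choices ranked the same as belief-based payoffs
assert ranks(choice_probs) == ranks(expected)
```

Here action 0 has the higher expected payoff under the belief and also the higher choice probability, so the monotonicity condition holds for this profile.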
Properties.
It can be shown that, generically, M equilibria satisfy the following properties:
The number of M equilibria can generically be even or odd, and may be less than, equal, or greater than the number of Nash equilibria. Also, any M equilibrium may contain zero, one, or multiple Nash equilibria. Importantly, the measure of any M equilibrium choice set is bounded and decreases exponentially with the number of players and the number of possible choices.
Meta Theory.
Surprisingly, M equilibrium "minimally envelops" various parametric models based on fixed points, including Quantal Response Equilibrium (QRE). Unlike QRE, however, M equilibrium is parameter-free, easy to compute, and does not impose the rational-expectations condition of homogeneous and correct beliefs.
Behavioral stability.
The interior of a colored M equilibrium set consists of choices and beliefs that are behaviorally stable: a profile is behaviorally stable when small perturbations of the game do not destroy its equilibrium nature, i.e. when it remains an M equilibrium even after the game is perturbed. Behavioral stability is a strengthening of the concept of strategic stability.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sigma^c"
},
{
"math_id": 1,
"text": "\\sigma^b"
},
{
"math_id": 2,
"text": "rank"
},
{
"math_id": 3,
"text": "\\pi"
},
{
"math_id": 4,
"text": "\\pi(\\sigma^b)"
},
{
"math_id": 5,
"text": "\\pi(\\sigma^c)"
},
{
"math_id": 6,
"text": "\\sigma_i^b = \\sigma^c"
},
{
"math_id": 7,
"text": "i\\in N"
},
{
"math_id": 8,
"text": "\\Sigma = \\Pi_{i\\in N}\\Sigma_i"
},
{
"math_id": 9,
"text": "\\Sigma^n"
},
{
"math_id": 10,
"text": "(\\overline{M^{c}},\\overline{M^{b}})\\subseteq \\Sigma \\times \\Sigma^n"
},
{
"math_id": 11,
"text": "M^c"
},
{
"math_id": 12,
"text": "M^b"
},
{
"math_id": 13,
"text": "rank(\\sigma^c) \\subseteq rank(\\pi(\\sigma^b)) = rank(\\pi(\\sigma^c))"
},
{
"math_id": 14,
"text": "\\sigma^c\\in M^c"
},
{
"math_id": 15,
"text": "\\sigma^b \\in M^b"
},
{
"math_id": 16,
"text": "\\Sigma \\times \\Sigma^n"
}
] |
https://en.wikipedia.org/wiki?curid=59208283
|
59211511
|
Terminal investment hypothesis
|
The terminal investment hypothesis is the idea in life history theory that as an organism's residual reproductive value (the total reproductive value minus the reproductive value of the current breeding attempt) decreases, its reproductive effort will increase. Thus, as an organism's prospects for survival decrease (through age or an immune challenge, for example), it will invest more in reproduction. This hypothesis is generally supported in animals, although results contrary to it do exist.
Definition.
The terminal investment hypothesis posits that as residual reproductive value (measured as the total reproductive value minus the reproductive value of the current breeding attempt) decreases, reproductive effort increases. This is based on the cost of reproduction hypothesis, which says that an increase in resources dedicated to current reproduction decreases the potential for future reproduction. But, as the residual reproductive value decreases, the importance of this trade-off decreases, leading to increased investment in the current reproductive attempt. This terminal investment hypothesis can be illustrated by the equation
formula_0,
where formula_1 is the total reproductive value, formula_2 the reproductive value of the current breeding attempt, formula_3 the proportionate increase in formula_2 resulting from a positive decision (where a "yes-no" decision must be made about whether to increase reproductive effort), and formula_5 the proportionate loss in formula_2 from a negative decision. The variable formula_4 is the cost of a positive decision at which there is no selective pressure for either a positive or a negative decision (also known as the "barely-justified cost"); it is thus inversely proportional to the residual reproductive value. When the level of reproductive investment has not reached the point where the equation above holds, more positive decisions about reproductive effort will be made. Thus, as the residual reproductive value decreases, more positive decisions are favored.
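The equation can be evaluated directly; a sketch with hypothetical reproductive values showing that the barely-justified cost rises as residual reproductive value shrinks:

```python
def barely_justified_cost(total_rv, current_rv, a, b):
    """c-hat = (a + b) * phi / (Phi - phi), with Phi = total_rv, phi = current_rv,
    and a, b the proportionate gain/loss from a positive/negative decision."""
    return (a + b) * current_rv / (total_rv - current_rv)

# Hypothetical values: the justifiable cost grows as Phi - phi shrinks
print(barely_justified_cost(total_rv=10, current_rv=2, a=0.1, b=0.1))  # 0.05
print(barely_justified_cost(total_rv=3, current_rv=2, a=0.1, b=0.1))   # 0.4
```

With the same current breeding attempt, the organism with little residual reproductive value (Phi − phi = 1) can justify an eightfold larger cost than the one with ample future prospects (Phi − phi = 8).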
In animals.
In animals, most tests of the terminal investment hypothesis are correlations of age and reproductive effort, immune challenges at all age stages, or immune challenges on older versus younger individuals. The last type of test is considered a more reliable measure of senescence's effect on reproductive effort: younger individuals should reduce reproductive effort, limiting their risk of death, because of their high future reproductive prospects, while older animals should increase effort because of their low future prospects. Overall, the terminal investment hypothesis is generally supported across a variety of animals.
In birds.
A study on blue tits published in 2000 found that individuals injected with a human diphtheria–tetanus vaccine fed their nestlings less than those injected with a control solution. In a study published in 2004, house sparrows that were injected with a Newcastle disease vaccine were more likely to lay a replacement clutch after their first clutch had been artificially removed than those that were injected with a control solution. In a study published in 2006, old blue-footed boobies injected with lipopolysaccharides (to challenge the immune system) before laying fledged more young than normal, whereas young individuals fledged less than normal. An increase in maternal effort in immune challenged birds may be mediated by the hormone corticosterone; a study published in 2015 found that house wrens injected with lipopolysaccharides increased foraging, and that measurements of corticosterone from eggs laid after injection found a positive correlation of this hormone with maternal foraging rates.
In insects.
A study published in 2009 supported the cost of reproduction and terminal investment hypotheses in the burying beetle. It found that beetles manipulated to overproduce young (by replacing a mouse carcass with a carcass) had shorter lifespans than those that bred on just carcasses, followed by those that had a carcass. In turn, non-breeding beetles had a significantly longer lifespan than those that bred. This supports the cost of reproduction hypothesis. Another experiment from the same study found beetles that first bred at 65 days had a larger brood size before dispersal (before the larvae start to pupate in the soil) than those that initially bred at 28 days. This supports the terminal investment hypothesis, and prevents the effect of an increased average brood size in older animals due to differential survival of quality individuals.
In flatworms.
A study published in 2004 on the flatworm "Diplostomum spathaceum" found that as its intermediate host, a snail, aged, production of cercariae (which are passed on to the final host, a fish) decreased. This is in line with the bet hedging hypothesis, which, in this case, says that the flatworm should attempt to keep its host alive longer so that more young can be produced; it does not support the terminal investment hypothesis.
In mammals.
A study published in 2002 found results contrary to the terminal investment hypothesis in reindeer. Calf weight peaked at the mother's seventh year of age, and declined thereafter. However, this would only be opposed to the hypothesis if reproductive costs did not increase with age. An alternative hypothesis, the senescence hypothesis, positing that reproductive output declines with age-related loss of function, was supported by the study. These two hypotheses are not necessarily mutually exclusive; a study on rhesus macaques published in 2010 strongly supported the senescence hypothesis and weakly supported the terminal investment hypothesis. It found that older mothers were lighter, less active, and had lighter infants with reduced survival rates compared to younger mothers (supporting the senescence hypothesis), but that older individuals spent more time in contact with their young (supporting the terminal investment hypothesis). Additionally, a study published in 1982 on red deer on the island of Rhum found that while older mothers produced fewer offspring (and lighter offspring, when they did) than expected for a given body weight, they had longer suckling bouts (which had previously been correlated with milk yield, calf body condition in early winter, and calf survival to spring) compared to younger mothers.
In reptiles.
A study on spotted turtles published in 2008 found that individuals in very poor condition sometimes did not breed. This is consistent with the bet hedging hypothesis, and indicates decision making on a large temporal scale (as spotted turtles may live for 65 to 110 years). However, individuals in poor condition generally produced a relatively large number of small eggs, consistent with the terminal investment hypothesis.
In plants.
Although the terminal investment hypothesis has been relatively widely studied in animals, there have been few studies of the hypothesis' application to plants. One study on members of the long-lived oak genus "Quercus" found that trees declined in condition towards the end of their lifespan, and did not invest an increasing proportion of their decreasing resources in reproduction.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\hat{c}=\\frac{(a+b)\\phi}{(\\Phi-\\phi)}"
},
{
"math_id": 1,
"text": "\\Phi"
},
{
"math_id": 2,
"text": "\\phi"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "\\hat{c}"
},
{
"math_id": 5,
"text": "b"
}
] |
https://en.wikipedia.org/wiki?curid=59211511
|
592151
|
Even and odd functions
|
Functions such that f(–x) equals f(x) or –f(x)
In mathematics, an even function is a real function such that formula_0 for every formula_1 in its domain. Similarly, an odd function is a function such that formula_2 for every formula_1 in its domain.
They are named for the parity of the powers of the power functions which satisfy each condition: the function formula_3 is even if "n" is an even integer, and it is odd if "n" is an odd integer.
Even functions are those real functions whose graph is self-symmetric with respect to the "y"-axis, and odd functions are those whose graph is self-symmetric with respect to the origin.
If the domain of a real function is self-symmetric with respect to the origin, then the function can be uniquely decomposed as the sum of an even function and an odd function.
Definition and examples.
Evenness and oddness are generally considered for real functions, that is real-valued functions of a real variable. However, the concepts may be more generally defined for functions whose domain and codomain both have a notion of additive inverse. This includes abelian groups, all rings, all fields, and all vector spaces. Thus, for example, a real function could be odd or even (or neither), as could a complex-valued function of a vector variable, and so on.
The given examples are real functions, to illustrate the symmetry of their graphs.
Even functions.
A real function "f" is even if, for every x in its domain, −"x" is also in its domain and
formula_4
or equivalently
formula_5
Geometrically, the graph of an even function is symmetric with respect to the "y"-axis, meaning that its graph remains unchanged after reflection about the "y"-axis.
Examples of even functions are the absolute value formula_6 the even powers formula_7 formula_8 the cosine formula_9 the hyperbolic cosine formula_10 and the Gaussian function formula_11
Odd functions.
A real function "f" is odd if, for every x in its domain, −"x" is also in its domain and
formula_12
or equivalently
formula_13
Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin.
If formula_14 is in the domain of an odd function formula_15, then formula_16.
Examples of odd functions are the sign function formula_17 the identity formula_18 the cube formula_19 the sine formula_20 the hyperbolic sine formula_21 and the error function formula_22
Even–odd decomposition.
If a real function has a domain that is self-symmetric with respect to the origin, it may be uniquely decomposed as the sum of an even and an odd function, which are called respectively the even part and the odd part of the function, and are defined by
formula_23
and
formula_24
It is straightforward to verify that formula_25 is even, formula_26 is odd, and formula_27
This decomposition is unique since, if
formula_28
where g is even and h is odd, then formula_29 and formula_30 since
formula_31
For example, the hyperbolic cosine and the hyperbolic sine may be regarded as the even and odd parts of the exponential function, as the first one is an even function, the second one is odd, and
formula_32.
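The decomposition is easy to check numerically. The sketch below (function names are our own) computes the even and odd parts of a function and verifies that, for the exponential, they coincide with the hyperbolic cosine and sine:

```python
import math

def even_part(f):
    # f_even(x) = (f(x) + f(-x)) / 2
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    # f_odd(x) = (f(x) - f(-x)) / 2
    return lambda x: (f(x) - f(-x)) / 2

exp_even = even_part(math.exp)
exp_odd = odd_part(math.exp)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    # the two parts sum back to the original function ...
    assert math.isclose(exp_even(x) + exp_odd(x), math.exp(x))
    # ... and agree with cosh and sinh respectively
    assert math.isclose(exp_even(x), math.cosh(x))
    assert math.isclose(exp_odd(x), math.sinh(x), abs_tol=1e-12)
```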
Analytic properties.
A function's being odd or even does not imply differentiability, or even continuity. For example, the Dirichlet function is even, but is nowhere continuous.
In the following, properties involving derivatives, Fourier series, and Taylor series are considered; these concepts are assumed to be defined for the functions under consideration. For example, if formula_15 is an odd function that is integrable over a symmetric interval formula_33 then formula_34 while if it is even, then formula_35
Harmonics.
In signal processing, harmonic distortion occurs when a sine wave signal is sent through a memoryless nonlinear system, that is, a system whose output at time "t" only depends on the input at time "t" and does not depend on the input at any previous times. Such a system is described by a response function formula_36. The type of harmonics produced depends on the response function "f": if "f" is even, the resulting signal will contain only even harmonics of the input sine wave, formula_37 (the formula_38 component represents a DC offset); if "f" is odd, the resulting signal will contain only odd harmonics, formula_39; and if "f" is neither even nor odd, the output may contain all harmonics, formula_40
This does not hold true for more complex waveforms. A sawtooth wave contains both even and odd harmonics, for instance. After even-symmetric full-wave rectification, it becomes a triangle wave, which, other than the DC offset, contains only odd harmonics.
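A small numerical experiment illustrates the even/odd harmonic split (a sketch using only the standard library; the discrete Fourier transform is written out directly rather than via a library FFT). Passing a sampled sine wave through the even response v² leaves only even harmonics, while the odd response v³ leaves only odd ones:

```python
import cmath
import math

def dft_magnitudes(samples):
    # naive discrete Fourier transform; returns |X_k| for each frequency bin k
    n = len(samples)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(samples)))
            for k in range(n)]

N = 64
sine = [math.sin(2 * math.pi * t / N) for t in range(N)]  # fundamental at bin 1

even_out = dft_magnitudes([v ** 2 for v in sine])  # even response function
odd_out = dft_magnitudes([v ** 3 for v in sine])   # odd response function

# even response: energy only at even bins (0 is the DC offset, 2 the 2nd harmonic)
assert even_out[1] < 1e-6 and even_out[3] < 1e-6
assert even_out[0] > 1.0 and even_out[2] > 1.0
# odd response: energy only at odd bins; DC and even harmonics vanish
assert odd_out[0] < 1e-6 and odd_out[2] < 1e-6
assert odd_out[1] > 1.0 and odd_out[3] > 1.0
```

Here sin²θ = (1 − cos 2θ)/2 and sin³θ = (3 sin θ − sin 3θ)/4, so the surviving bins match the trigonometric identities exactly.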
Generalizations.
Multivariate functions.
Even symmetry:
A function formula_41 is called "even symmetric" if:
formula_42
Odd symmetry:
A function formula_41 is called "odd symmetric" if:
formula_43
Complex-valued functions.
The definitions for even and odd symmetry for complex-valued functions of a real argument are similar to the real case. In signal processing, a similar symmetry is sometimes considered, which involves complex conjugation.
Conjugate symmetry:
A complex-valued function of a real argument formula_44 is called "conjugate symmetric" if
formula_45
A complex-valued function is conjugate symmetric if and only if its real part is an even function and its imaginary part is an odd function.
A typical example of a conjugate symmetric function is the cis function
formula_46
Conjugate antisymmetry:
A complex-valued function of a real argument formula_44 is called "conjugate antisymmetric" if:
formula_47
A complex-valued function is conjugate antisymmetric if and only if its real part is an odd function and its imaginary part is an even function.
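These symmetries are easy to verify for the cis function in a few lines of Python (a sketch; `cmath.exp` supplies e^{ix}, and the helper name is our own):

```python
import cmath

def is_conjugate_symmetric(f, xs, tol=1e-12):
    # f(x) should equal the complex conjugate of f(-x)
    return all(abs(f(x) - f(-x).conjugate()) < tol for x in xs)

cis = lambda x: cmath.exp(1j * x)
xs = [0.0, 0.3, 1.0, 2.5, -1.7]

assert is_conjugate_symmetric(cis, xs)
# real part (cos) is even, imaginary part (sin) is odd
assert all(abs(cis(x).real - cis(-x).real) < 1e-12 for x in xs)
assert all(abs(cis(x).imag + cis(-x).imag) < 1e-12 for x in xs)
```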
Finite length sequences.
The definitions of odd and even symmetry are extended to "N"-point sequences (i.e. functions of the form formula_48) as follows:
Even symmetry:
An "N"-point sequence is called "even symmetric" if
formula_49
Such a sequence is often called a palindromic sequence; see also Palindromic polynomial.
Odd symmetry:
An "N"-point sequence is called "odd symmetric" if
formula_50
Such a sequence is sometimes called an anti-palindromic sequence; see also Antipalindromic polynomial.
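For real sequences these conditions are simple index checks; a sketch (helper names are our own), where index 0 is unconstrained by either definition:

```python
def is_palindromic(seq):
    # even symmetry: f(n) == f(N - n) for n = 1, ..., N-1
    n = len(seq)
    return all(seq[i] == seq[n - i] for i in range(1, n))

def is_antipalindromic(seq):
    # odd symmetry: f(n) == -f(N - n) for n = 1, ..., N-1
    n = len(seq)
    return all(seq[i] == -seq[n - i] for i in range(1, n))

assert is_palindromic([5, 1, 2, 2, 1])        # f(1)=f(4), f(2)=f(3); f(0) is free
assert is_antipalindromic([5, 1, 2, -2, -1])  # f(1)=-f(4), f(2)=-f(3)
```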
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "f(-x)=f(x)"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "f(-x)=-f(x)"
},
{
"math_id": 3,
"text": "f(x) = x^n"
},
{
"math_id": 4,
"text": "f(-x) = f(x)"
},
{
"math_id": 5,
"text": "f(x) - f(-x) = 0."
},
{
"math_id": 6,
"text": "x \\mapsto |x|,"
},
{
"math_id": 7,
"text": "x \\mapsto x^2,"
},
{
"math_id": 8,
"text": "x \\mapsto x^4,"
},
{
"math_id": 9,
"text": "\\cos,"
},
{
"math_id": 10,
"text": "\\cosh,"
},
{
"math_id": 11,
"text": "x \\mapsto \\exp (-x^2). "
},
{
"math_id": 12,
"text": "f(-x) = -f(x)"
},
{
"math_id": 13,
"text": "f(x) + f(-x) = 0."
},
{
"math_id": 14,
"text": "x=0"
},
{
"math_id": 15,
"text": "f(x)"
},
{
"math_id": 16,
"text": "f(0)=0"
},
{
"math_id": 17,
"text": "x \\mapsto \\sgn(x),"
},
{
"math_id": 18,
"text": "x \\mapsto x,"
},
{
"math_id": 19,
"text": "x \\mapsto x^3,"
},
{
"math_id": 20,
"text": "\\sin,"
},
{
"math_id": 21,
"text": "\\sinh,"
},
{
"math_id": 22,
"text": "\\operatorname{erf}."
},
{
"math_id": 23,
"text": "f_\\text{even}(x) = \\frac {f(x)+f(-x)}{2},"
},
{
"math_id": 24,
"text": "f_\\text{odd}(x) = \\frac {f(x)-f(-x)}{2}."
},
{
"math_id": 25,
"text": "f_\\text{even}"
},
{
"math_id": 26,
"text": "f_\\text{odd}"
},
{
"math_id": 27,
"text": "f=f_\\text{even}+f_\\text{odd}."
},
{
"math_id": 28,
"text": "f(x)=g(x)+h(x),"
},
{
"math_id": 29,
"text": "g=f_\\text{even}"
},
{
"math_id": 30,
"text": "h=f_\\text{odd},"
},
{
"math_id": 31,
"text": "\\begin{align}\n2f_\\text{e}(x) &=f(x)+f(-x)= g(x) + g(-x) +h(x) +h(-x) = 2g(x),\\\\\n2f_\\text{o}(x) &=f(x)-f(-x)= g(x) - g(-x) +h(x) -h(-x) = 2h(x).\n\\end{align}"
},
{
"math_id": 32,
"text": "e^x=\\underbrace{\\cosh (x)}_{f_\\text{even}(x)} + \\underbrace{\\sinh (x)}_{f_\\text{odd}(x)}"
},
{
"math_id": 33,
"text": "[-A,A]"
},
{
"math_id": 34,
"text": "\\int_{-A}^{A} f(x)\\,dx = 0"
},
{
"math_id": 35,
"text": "\\int_{-A}^{A} f(x)\\,dx = 2\\int_{0}^{A} f(x)\\,dx"
},
{
"math_id": 36,
"text": "V_\\text{out}(t) = f(V_\\text{in}(t))"
},
{
"math_id": 37,
"text": "0f, 2f, 4f, 6f, \\dots "
},
{
"math_id": 38,
"text": "0f"
},
{
"math_id": 39,
"text": "1f, 3f, 5f, \\dots "
},
{
"math_id": 40,
"text": "1f, 2f, 3f, \\dots "
},
{
"math_id": 41,
"text": "f: \\mathbb{R}^n \\to \\mathbb{R} "
},
{
"math_id": 42,
"text": "f(x_1,x_2,\\ldots,x_n)=f(-x_1,-x_2,\\ldots,-x_n) \\quad \\text{for all } x_1,\\ldots,x_n \\in \\mathbb{R}"
},
{
"math_id": 43,
"text": "f(x_1,x_2,\\ldots,x_n)=-f(-x_1,-x_2,\\ldots,-x_n) \\quad \\text{for all } x_1,\\ldots,x_n \\in \\mathbb{R}"
},
{
"math_id": 44,
"text": "f: \\mathbb{R} \\to \\mathbb{C}"
},
{
"math_id": 45,
"text": "f(x)=\\overline{f(-x)} \\quad \\text{for all } x \\in \\mathbb{R}"
},
{
"math_id": 46,
"text": "x \\to e^{ix}=\\cos x + i\\sin x"
},
{
"math_id": 47,
"text": "f(x)=-\\overline{f(-x)} \\quad \\text{for all } x \\in \\mathbb{R}"
},
{
"math_id": 48,
"text": "f: \\left\\{0,1,\\ldots,N-1\\right\\} \\to \\mathbb{R}"
},
{
"math_id": 49,
"text": "f(n) = f(N-n) \\quad \\text{for all } n \\in \\left\\{ 1,\\ldots,N-1 \\right\\}."
},
{
"math_id": 50,
"text": "f(n) = -f(N-n) \\quad \\text{for all } n \\in \\left\\{1,\\ldots,N-1\\right\\}. "
}
] |
https://en.wikipedia.org/wiki?curid=592151
|
59217
|
Quadratic formula
|
Formula that provides the solutions to a quadratic equation
In elementary algebra, the quadratic formula is a closed-form expression describing the solutions of a quadratic equation. Other ways of solving quadratic equations, such as completing the square, yield the same solutions.
Given a general quadratic equation of the form ax^2 + bx + c = 0, with x representing an unknown, and coefficients a, b, and c representing known real or complex numbers with a ≠ 0, the values of x satisfying the equation, called the "roots" or "zeros", can be found using the quadratic formula,
formula_0
where the plus–minus symbol "±" indicates that the equation has two roots. Written separately, these are:
formula_1
The quantity b^2 - 4ac is known as the discriminant of the quadratic equation. If the coefficients a, b, and c are real numbers then when b^2 - 4ac > 0, the equation has two distinct real roots; when b^2 - 4ac = 0, the equation has one repeated real root; and when b^2 - 4ac < 0, the equation has "no" real roots but has two distinct complex roots, which are complex conjugates of each other.
Geometrically, the roots represent the x values at which the graph of the quadratic function y = ax^2 + bx + c, a parabola, crosses the x-axis: the graph's x-intercepts. The quadratic formula can also be used to identify the parabola's axis of symmetry.
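A direct transcription of the formula into code is short. The sketch below (the function name is our own) uses complex arithmetic so that the negative-discriminant case needs no special handling:

```python
import cmath
import math

def quadratic_roots(a, b, c):
    # x = (-b ± sqrt(b^2 - 4ac)) / (2a), valid for complex discriminants too
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# two distinct real roots: x^2 - 3x + 2 = (x - 1)(x - 2)
x1, x2 = quadratic_roots(1, -3, 2)
assert math.isclose(x1.real, 2) and math.isclose(x2.real, 1)

# negative discriminant: x^2 + 1 = 0 has conjugate roots ±i
x1, x2 = quadratic_roots(1, 0, 1)
assert x1 == 1j and x2 == -1j
```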
Derivation by completing the square.
The standard way to derive the quadratic formula is to apply the method of completing the square to the generic quadratic equation ax^2 + bx + c = 0. The idea is to manipulate the equation into the form (x + k)^2 = s for some expressions k and s written in terms of the coefficients; take the square root of both sides; and then isolate x.
We start by dividing the equation by the quadratic coefficient a, which is allowed because a is non-zero. Afterwards, we subtract the constant term c/a to isolate it on the right-hand side:
formula_2
The left-hand side is now of the form x^2 + 2kx, and we can "complete the square" by adding a constant k^2 to obtain a squared binomial x^2 + 2kx + k^2 = (x + k)^2. In this example we add (b/2a)^2 to both sides so that the left-hand side can be factored:
formula_3
Because the left-hand side is now a perfect square, we can easily take the square root of both sides:
formula_4
Finally, subtracting b/2a from both sides to isolate x produces the quadratic formula:
formula_5
Equivalent formulations.
The quadratic formula can equivalently be written using various alternative expressions, for instance
formula_6
which can be derived by first dividing a quadratic equation by a, resulting in x^2 + (b/a)x + c/a = 0, then substituting the new coefficients into the standard quadratic formula. Because this variant allows re-use of the intermediately calculated quantity b/2a, it can slightly reduce the arithmetic involved.
Square root in the denominator.
A lesser known quadratic formula, first mentioned by Giulio Fagnano, describes the same roots via an equation with the square root in the denominator (assuming c ≠ 0):
formula_7
Here the minus–plus symbol "∓" indicates that the two roots of the quadratic equation, in the same order as the standard quadratic formula, are
formula_8
This variant has been jokingly called the "citardauq" formula ("quadratic" spelled backwards).
When -b has the opposite sign from ±√(b^2 - 4ac), subtraction can cause catastrophic cancellation, resulting in poor accuracy in numerical calculations; choosing between the version of the quadratic formula with the square root in the numerator or denominator depending on the sign of b can avoid this problem. See below.
This version of the quadratic formula is used in Muller's method for finding the roots of general functions. It can be derived from the standard formula from the identity x_1 x_2 = c/a, one of Vieta's formulas. Alternately, it can be derived by dividing each side of the equation ax^2 + bx + c = 0 by x^2 to get c(1/x)^2 + b(1/x) + a = 0, applying the standard formula to find the two roots 1/x, and then taking the reciprocal to find the roots x of the original equation.
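The agreement of the two formulas, and the Vieta identity x_1 x_2 = c/a behind it, can be checked numerically; a sketch with illustrative helper names:

```python
import math

def roots_standard(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_citardauq(a, b, c):
    # same roots, in the same order, with the square root in the denominator
    d = math.sqrt(b * b - 4 * a * c)
    return 2 * c / (-b - d), 2 * c / (-b + d)

a, b, c = 2.0, -7.0, 3.0           # 2x^2 - 7x + 3 = (2x - 1)(x - 3)
s1, s2 = roots_standard(a, b, c)
c1, c2 = roots_citardauq(a, b, c)

assert math.isclose(s1, c1) and math.isclose(s2, c2)
assert math.isclose(s1 * s2, c / a)  # Vieta: product of the roots is c/a
```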
Other derivations.
Any generic method or algorithm for solving quadratic equations can be applied to an equation with symbolic coefficients and used to derive some closed-form expression equivalent to the quadratic formula. Alternative methods are sometimes simpler than completing the square, and may offer interesting insight into other areas of mathematics.
Completing the square by Śrīdhara's method.
Instead of dividing by a to isolate x^2, it can be slightly simpler to multiply by 4a instead to produce 4a^2x^2 + 4abx + 4ac = 0, which allows us to complete the square without need for fractions. Then the steps of the derivation are:
Applying this method to a generic quadratic equation with symbolic coefficients yields the quadratic formula:
formula_9
This method for completing the square is ancient and was known to the 8th–9th century Indian mathematician Śrīdhara. Compared with the modern standard method for completing the square, this alternate method avoids fractions until the last step and hence does not require a rearrangement after step 3 to obtain a common denominator in the right side.
By substitution.
Another derivation uses a change of variables to eliminate the linear term. Then the equation takes the form u^2 = s in terms of a new variable u and some constant expression s, whose roots are then u = ±√s.
By substituting x = u - b/2a into ax^2 + bx + c = 0, expanding the products and combining like terms, and then solving for u^2, we have:
formula_10
Finally, after taking a square root of both sides and substituting the resulting expression for u back into x = u - b/2a, the familiar quadratic formula emerges:
formula_11
By using algebraic identities.
The following method was used by many historical mathematicians:
Let the roots of the quadratic equation ax^2 + bx + c = 0 be α and β. The derivation starts from an identity for the square of a difference (valid for any two complex numbers), of which we can take the square root on both sides:
formula_12
Since the coefficient a ≠ 0, we can divide the quadratic equation by a to obtain a monic polynomial with the same roots. Namely,
formula_13
This implies that the sum α + β = -b/a and the product αβ = c/a. Thus the identity can be rewritten:
formula_14
Therefore,
formula_15
The two possibilities for each of α and β are the same two roots in opposite order, so we can combine them into the standard quadratic formula:
formula_16
By Lagrange resolvents.
An alternative way of deriving the quadratic formula is via the method of Lagrange resolvents, which is an early part of Galois theory.
This method can be generalized to give the roots of cubic polynomials and quartic polynomials, and leads to Galois theory, which allows one to understand the solution of algebraic equations of any degree in terms of the symmetry group of their roots, the Galois group.
This approach focuses on the roots themselves rather than algebraically rearranging the original equation. Given a monic quadratic polynomial x^2 + px + q, assume that α and β are the two roots. So the polynomial factors as
formula_17
which implies p = -(α + β) and q = αβ.
Since multiplication and addition are both commutative, exchanging the roots α and β will not change the coefficients p and q: one can say that p and q are symmetric polynomials in α and β. Specifically, they are the elementary symmetric polynomials; any symmetric polynomial in α and β can be expressed in terms of α + β and αβ instead.
The Galois theory approach to analyzing and solving polynomials is to ask whether, given coefficients of a polynomial each of which is a symmetric function in the roots, one can "break" the symmetry and thereby recover the roots. Using this approach, solving a polynomial of degree n is related to the ways of rearranging ("permuting") n terms, called the symmetric group on n letters and denoted S_n. For the quadratic polynomial, the only ways to rearrange two roots are to either leave them be or to transpose them, so solving a quadratic polynomial is simple.
To find the roots α and β, consider their sum and difference:
formula_18
These are called the "Lagrange resolvents" of the polynomial, from which the roots can be recovered as
formula_19
Because r_1 = α + β is a symmetric function in α and β, it can be expressed in terms of p and q, specifically r_1 = -p as described above. However, r_2 = α - β is not symmetric, since exchanging α and β yields the additive inverse -r_2. So r_2 cannot be expressed in terms of the symmetric polynomials. However, its square r_2^2 "is" symmetric in the roots, expressible in terms of p and q. Specifically r_2^2 = (α - β)^2 = (α + β)^2 - 4αβ = p^2 - 4q, which implies r_2 = ±√(p^2 - 4q). Taking the positive root "breaks" the symmetry, resulting in
formula_20
from which the roots α and β are recovered as
formula_21
which is the quadratic formula for a monic polynomial.
Substituting p = b/a, q = c/a yields the usual expression for an arbitrary quadratic polynomial. The resolvents can be recognized as
formula_22
respectively the vertex and the discriminant of the monic polynomial.
A similar but more complicated method works for cubic equations, which have three resolvents and a quadratic equation (the "resolving polynomial") relating the resolvents, which one can solve by the quadratic formula, and similarly for a quartic equation (degree 4), whose resolving polynomial is a cubic, which can in turn be solved. The same method for a quintic equation yields a polynomial of degree 24, which does not simplify the problem, and, in fact, solutions to quintic equations in general cannot be expressed using only roots.
Numerical calculation.
The quadratic formula is exactly correct when performed using the idealized arithmetic of real numbers, but when approximate arithmetic is used instead, for example pen-and-paper arithmetic carried out to a fixed number of decimal places or the floating-point binary arithmetic available on computers, the limitations of the number representation can lead to substantially inaccurate results unless great care is taken in the implementation. Specific difficulties include catastrophic cancellation in computing the sum -b ± √(b^2 - 4ac) when b^2 is much larger than |4ac|; catastrophic cancellation in computing the discriminant b^2 - 4ac itself in cases where b^2 ≈ 4ac; degeneration of the formula when a, b, or c is represented as zero or infinite; and possible overflow or underflow when multiplying or dividing extremely large or small numbers, even in cases where the roots can be accurately represented.
Catastrophic cancellation occurs when two numbers which are approximately equal are subtracted. While each of the numbers may independently be representable to a certain number of digits of precision, the identical leading digits of each number cancel, resulting in a difference of lower relative precision. When b > 0, evaluation of -b + √(b^2 - 4ac) causes catastrophic cancellation, as does the evaluation of -b - √(b^2 - 4ac) when b < 0. When using the standard quadratic formula, calculating one of the two roots always involves addition, which preserves the working precision of the intermediate calculations, while calculating the other root involves subtraction, which compromises it. Therefore, naïvely following the standard quadratic formula often yields one result with less relative precision than expected. Unfortunately, introductory algebra textbooks typically do not address this problem, even though it causes students to obtain inaccurate results in other school subjects such as introductory chemistry.
For example, if trying to solve the equation x^2 - 1634x + 2 = 0 using a pocket calculator, the result of the quadratic formula x = 817 ± √(817^2 - 2) might be approximately calculated as:
formula_23
Even though the calculator used ten decimal digits of precision for each step, calculating the difference between two approximately equal numbers has yielded a result for x_2 with only four correct digits.
One way to recover an accurate result is to use the identity x_1 x_2 = c/a. In this example x_2 can be calculated as x_2 = 2/x_1 = 2/1633.998776, which is correct to the full ten digits. Another more or less equivalent approach is to use the version of the quadratic formula with the square root in the denominator to calculate one of the roots (see above).
Practical computer implementations of the solution of quadratic equations commonly choose which formula to use for each root depending on the sign of b.
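A minimal sketch of such an implementation (names are our own; it assumes real roots and non-zero a and c), applied to the equation x^2 - 1634x + 2 = 0, whose naive small root 817 - √(817^2 - 2) cancels badly:

```python
import math

def stable_real_roots(a, b, c):
    # assumes b^2 - 4ac >= 0, a != 0, c != 0
    d = math.sqrt(b * b - 4 * a * c)
    # q adds two quantities of the same sign, so no cancellation occurs
    q = -0.5 * (b + math.copysign(d, b))
    return q / a, c / q        # the second root uses Vieta: x1 * x2 = c/a

naive_small = 817 - math.sqrt(817 ** 2 - 2)   # loses digits to cancellation
x_big, x_small = stable_real_roots(1, -1634, 2)

assert math.isclose(x_big * x_small, 2.0)     # product of roots is c/a
assert math.isclose(x_big + x_small, 1634.0)  # sum of roots is -b/a
# the naive value agrees with the stable one only in its leading digits
assert math.isclose(naive_small, x_small, rel_tol=1e-6)
```

The `copysign` trick selects whichever of ±√(b^2 - 4ac) has the same sign as b, so the addition-based root is computed first and the cancellation-prone root is obtained by division instead.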
These methods do not prevent possible overflow or underflow of the floating-point exponent in computing b^2 or 4ac, which can lead to numerically representable roots not being computed accurately. A more robust but computationally expensive strategy is to start with a variable substitution that rescales the equation by the magnitudes of its coefficients, turning the quadratic equation into
formula_24
where sgn denotes the sign function. Because the roots of this rescaled equation multiply to ±1, one has magnitude at least 1 and the other at most 1, and each can be computed without overflow; the roots of the original equation are then recovered by undoing the substitution.
With additional complication the expense and extra rounding of the square roots can be avoided by approximating them as powers of two, while still avoiding exponent overflow for representable roots.
Historical development.
The earliest methods for solving quadratic equations were geometric. Babylonian cuneiform tablets contain problems reducible to solving quadratic equations. The Egyptian Berlin Papyrus, dating back to the Middle Kingdom (2050 BC to 1650 BC), contains the solution to a two-term quadratic equation.
The Greek mathematician Euclid (circa 300 BC) used geometric methods to solve quadratic equations in Book 2 of his "Elements", an influential mathematical treatise. Rules for quadratic equations appear in the Chinese "The Nine Chapters on the Mathematical Art" circa 200 BC. In his work "Arithmetica", the Greek mathematician Diophantus (circa 250 AD) solved quadratic equations with a method more recognizably algebraic than the geometric algebra of Euclid. His solution gives only one root, even when both roots are positive.
The Indian mathematician Brahmagupta included a generic method for finding one root of a quadratic equation in his treatise "Brāhmasphuṭasiddhānta" (circa 628 AD), written out in words in the style of the time but more or less equivalent to the modern symbolic formula. His solution of the quadratic equation ax^2 + bx = c was as follows: "To the absolute number multiplied by four times the [coefficient of the] square, add the square of the [coefficient of the] middle term; the square root of the same, less the [coefficient of the] middle term, being divided by twice the [coefficient of the] square is the value."
In modern notation, this can be written x = (√(4ac + b^2) - b) / 2a. The Indian mathematician Śrīdhara (8th–9th century) came up with a similar algorithm for solving quadratic equations in a now-lost work on algebra quoted by Bhāskara II. The modern quadratic formula is sometimes called "Sridharacharya's formula" in India.
The 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī solved quadratic equations algebraically. The quadratic formula covering all cases was first obtained by Simon Stevin in 1594. In 1637 René Descartes published "La Géométrie" containing special cases of the quadratic formula in the form we know today.
Geometric significance.
In terms of coordinate geometry, an axis-aligned parabola is a curve whose (x, y)-coordinates are the graph of a second-degree polynomial, of the form y = ax^2 + bx + c, where a, b, and c are real-valued constant coefficients with a ≠ 0.
Geometrically, the quadratic formula defines the points (x, 0) on the graph, where the parabola crosses the x-axis. Furthermore, it can be separated into two terms,
formula_25
The first term describes the axis of symmetry, the line x = -b/2a. The second term, √(b^2 - 4ac)/2a, gives the distance the roots are away from the axis of symmetry.
If the parabola's vertex is on the x-axis, then the corresponding equation has a single repeated root on the line of symmetry, and this distance term is zero; algebraically, the discriminant b^2 - 4ac = 0.
If the discriminant is positive, then the vertex is not on the x-axis but the parabola opens in the direction of the x-axis, crossing it twice, so the corresponding equation has two real roots. If the discriminant is negative, then the parabola opens in the opposite direction, never crossing the x-axis, and the equation has no real roots; in this case the two complex-valued roots will be complex conjugates whose real part is the x value of the axis of symmetry.
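The split into axis and offset is easy to confirm numerically (a sketch; the helper name is our own):

```python
import math

def axis_and_offset(a, b, c):
    # x = -b/(2a) ± sqrt(b^2 - 4ac)/(2a): axis of symmetry plus an offset
    axis = -b / (2 * a)
    offset = math.sqrt(b * b - 4 * a * c) / (2 * a)
    return axis, offset

a, b, c = 1.0, -6.0, 5.0         # x^2 - 6x + 5 = (x - 1)(x - 5)
axis, offset = axis_and_offset(a, b, c)

assert axis == 3.0               # the parabola is symmetric about x = 3
assert offset == 2.0             # roots lie 2 units either side of the axis
assert {axis - offset, axis + offset} == {1.0, 5.0}
```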
Dimensional analysis.
If the constants a, b, and/or c are not unitless then the quantities x and b/a must have the same units, because the terms ax^2 and bx agree on their units. By the same logic, the coefficient c must have the same units as b^2/a, irrespective of the units of x. This can be a powerful tool for verifying that a quadratic expression of physical quantities has been set up correctly.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\nx = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a},\n"
},
{
"math_id": 1,
"text": "\nx_1 = \\frac{-b + \\sqrt {b^2 - 4ac}}{2a}, \\qquad\nx_2 = \\frac{-b - \\sqrt {b^2 - 4ac}}{2a}.\n"
},
{
"math_id": 2,
"text": "\\begin{align}\nax^{2\\vphantom|} + bx + c &= 0 \\\\[3mu]\nx^2 + \\frac{b}{a} x + \\frac{c}{a} &= 0 \\\\[3mu]\nx^2 + \\frac{b}{a} x &= -\\frac{c}{a}.\n\\end{align}"
},
{
"math_id": 3,
"text": "\\begin{align}\nx^2 + 2\\left(\\frac{b}{2a}\\right)x + \\left(\\frac{b}{2a}\\right)^2\n&= -\\frac{c}{a}+\\left( \\frac{b}{2a} \\right)^2 \\\\[5mu]\n\\left(x + \\frac{b}{2a}\\right)^2\n&= \\frac{b^2 - 4ac}{4a^2} .\n\\end{align}"
},
{
"math_id": 4,
"text": "\nx + \\frac{b}{2a} = \\pm\\frac{\\sqrt{b^2 - 4ac}}{2a}.\n"
},
{
"math_id": 5,
"text": "\nx = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a} .\n"
},
{
"math_id": 6,
"text": "\nx = -\\frac{b}{2a} \\pm \\sqrt{\\left(\\frac{b}{2a}\\right)^2-\\frac{c}{a}},\n"
},
{
"math_id": 7,
"text": "\nx= \\frac{2c}{-b \\mp \\sqrt {b^2 - 4ac}}.\n"
},
{
"math_id": 8,
"text": "\nx_1 = \\frac{2c}{-b - \\sqrt {b^2 - 4ac}}, \\qquad\nx_2 = \\frac{2c}{-b + \\sqrt {b^2 - 4ac}}.\n"
},
{
"math_id": 9,
"text": "\\begin{align}\nax^2 + bx + c &= 0 \\\\[3mu]\n4 a^2 x^2 + 4abx + 4ac &= 0 \\\\[3mu]\n4 a^2 x^2 + 4abx + b^2 &= b^2 - 4ac \\\\[3mu]\n(2ax + b)^2 &= b^2 - 4ac \\\\[3mu]\n2ax + b &= \\pm \\sqrt{b^2 - 4ac} \\\\[5mu]\nx &= \\dfrac{-b\\pm\\sqrt{b^2 - 4ac }}{2a}. \\vphantom\\bigg)\n\\end{align}"
},
{
"math_id": 10,
"text": "\\begin{align}\na\\left(u-\\frac{b}{2a}\\right)^2 + b\\left(u-\\frac{b}{2a}\\right) + c &=0 \\\\[5mu]\na\\left(u^2-\\frac{b}{a}u+\\frac{b^2}{4a^2}\\right) + b\\left(u-\\frac{b}{2a}\\right) + c &= 0 \\\\[5mu]\nau^2 - bu + \\frac{b^2}{4a} + bu - \\frac{b^2}{2a}+c &= 0 \\\\[5mu]\nau^2 + \\frac{4ac - b^2}{4a} &= 0 \\\\[5mu]\nu^2 &= \\frac{b^2 - 4ac}{4a^2}.\n\\end{align}"
},
{
"math_id": 11,
"text": "\nx = \\frac{-b\\pm \\sqrt{b^2 - 4ac}}{2a}.\n"
},
{
"math_id": 12,
"text": "\\begin{align}\n(\\alpha - \\beta)^2 &= (\\alpha + \\beta)^2 - 4 \\alpha\\beta \\\\[3mu]\n\\alpha - \\beta &= \\pm\\sqrt{(\\alpha + \\beta)^2 - 4 \\alpha\\beta} .\n\\end{align}"
},
{
"math_id": 13,
"text": "\nx^2 + \\frac{b}{a}x + \\frac{c}{a}\n= (x - \\alpha)(x - \\beta)\n= x^2 - (\\alpha + \\beta)x + \\alpha\\beta .\n"
},
{
"math_id": 14,
"text": "\n\\alpha - \\beta\n= \\pm\\sqrt{\\left(-\\frac{b}{a}\\right)^2-4\\frac{c}{a}}\n= \\pm\\frac{\\sqrt{b^2 - 4ac}}{a} .\n"
},
{
"math_id": 15,
"text": "\\begin{align}\n\\alpha &= \\tfrac12(\\alpha + \\beta) + \\tfrac12(\\alpha - \\beta)\n = -\\frac{b}{2a} \\pm \\frac{\\sqrt{b^2 - 4ac}}{2a}, \\\\[10mu]\n\\beta &= \\tfrac12(\\alpha + \\beta) - \\tfrac12(\\alpha - \\beta)\n = -\\frac{b}{2a} \\mp \\frac{\\sqrt{b^2 - 4ac}}{2a}.\n\\end{align}"
},
{
"math_id": 16,
"text": " x = \\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a} ."
},
{
"math_id": 17,
"text": "\\begin{align}\nx^2+px+q &= (x-\\alpha)(x-\\beta) \\\\[3mu]\n &= x^2-(\\alpha+\\beta)x+\\alpha\\beta\n\\end{align}"
},
{
"math_id": 18,
"text": "\nr_1 = \\alpha + \\beta, \\quad r_2 = \\alpha - \\beta .\n"
},
{
"math_id": 19,
"text": "\n\\alpha = \\tfrac12 (r_1 + r_2), \\quad \\beta = \\tfrac12(r_1 - r_2).\n"
},
{
"math_id": 20,
"text": "\nr_1 = -p, \\qquad r_2 = {\\textstyle \\sqrt{p^2 - 4q}}\n"
},
{
"math_id": 21,
"text": "\nx = \\tfrac12(r_1 \\pm r_2)\n= \\tfrac{1}{2} \\bigl({-p} \\pm {\\textstyle \\sqrt{p^2 - 4q}}\\,\\bigr)\n"
},
{
"math_id": 22,
"text": "\n\\tfrac12 r_1 = -\\tfrac12p = -\\frac{b}{2a}, \\qquad\nr_2^2 = p_2 - 4q = \\frac{b^2 - 4ac}{a^2},\n"
},
{
"math_id": 23,
"text": "\\begin{alignat}{3}\nx_1 &= 817 + 816.998\\,776\\,0 &&= 1.633\\,998\\,776 \\times 10^3, \\\\\nx_2 &= 817 - 816.998\\,776\\,0 &&= 1.224 \\times 10^{-3}.\n\\end{alignat}"
},
{
"math_id": 24,
"text": "\nu^2 - 2 \\frac{|b|}{2\\sqrt{|a|}\\sqrt{|c|}}u + \\sgn(c) = 0,\n"
},
{
"math_id": 25,
"text": "\nx = \\frac{-b\\pm\\sqrt{b^2 - 4ac }}{2a}\n= -\\frac{b}{2a} \\pm \\frac{\\sqrt{b^2 - 4ac}}{2a}.\n"
}
] |
https://en.wikipedia.org/wiki?curid=59217
|
592198
|
Stretch rule
|
Classical mechanics rule
In classical mechanics, the stretch rule (sometimes referred to as Routh's rule) states that the moment of inertia of a rigid object is unchanged when the object is stretched parallel to an axis of rotation that is a principal axis, provided that the distribution of mass remains unchanged except in the direction parallel to the axis. This operation leaves cylinders oriented parallel to the axis unchanged in radius.
This rule can be applied with the parallel axis theorem and the perpendicular axis theorem to find moments of inertia for a variety of shapes.
Derivation.
The (scalar) moment of inertia of a rigid body around the z-axis is given by:
formula_0
where formula_1 is the distance of a point from the z-axis. We can expand as follows, since we are dealing with stretching along the "z"-axis only:
formula_2
Here, formula_3 is the body's height. Stretching the object by a factor of formula_4 along the z-axis is equivalent to dividing the mass density by formula_4 (meaning formula_5), as well as integrating over new limits formula_6 and formula_7 (the new height of the object), thus leaving the total mass unchanged. This means the new moment of inertia will be:
formula_8
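The derivation can be checked numerically. The sketch below is not part of the article; the mass, radius, heights, and grid resolution are illustrative choices. It integrates formula_0 for a uniform solid cylinder at two different heights with the same total mass and radius, and compares the result against the exact value (1/2)"MR"², which is independent of height, as the stretch rule predicts.

```python
import math

def cylinder_Iz(mass, radius, height, n=200):
    """Approximate I_z = integral of rho * r^2 dV for a uniform solid
    cylinder by midpoint integration over the circular cross-section;
    the z-integral is exact since the density is uniform along the axis."""
    rho = mass / (math.pi * radius ** 2 * height)   # uniform density
    h = 2 * radius / n                              # grid spacing in x and y
    Iz = 0.0
    for i in range(n):
        for j in range(n):
            x = -radius + (i + 0.5) * h
            y = -radius + (j + 0.5) * h
            if x * x + y * y <= radius * radius:
                Iz += rho * (x * x + y * y) * h * h * height
    return Iz

M, R, L = 2.0, 1.5, 1.0                    # illustrative mass, radius, height
a = 4.0                                    # stretch factor along z
Iz_original = cylinder_Iz(M, R, L)         # height L
Iz_stretched = cylinder_Iz(M, R, a * L)    # height a*L, same mass and radius
exact = 0.5 * M * R ** 2                   # (1/2) M R^2, independent of height
```

Because the density is rescaled by the same factor that stretches the height, the two moments of inertia agree (up to floating-point rounding), and both approximate the textbook value.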
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " I_z = \\int_V d^3 r \\, \\rho(\\mathbf{r})\\,r^2"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": " I_z = \\int_0^L dz \\int_{x,y} dx \\, dy \\, \\rho(x, y, z)\\,r^2 "
},
{
"math_id": 3,
"text": "L"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "\\rho'(x, y, z) = \\rho(x, y, z/a)/a"
},
{
"math_id": 6,
"text": "0"
},
{
"math_id": 7,
"text": "aL"
},
{
"math_id": 8,
"text": "\n\\begin{align}\nI_z' & = \\int_0^{aL} dz \\int_{x,y} dx \\, dy \\, \\rho'(x, y, z) \\,r^2 \\\\[8pt]\n& = \\int_0^L a \\, dz' \\int_{x,y} dx \\, dy \\, \\frac{\\rho(x, y, z/a)}{a} \\,r^2 \\\\[8pt]\n& = \\int_0^L dz' \\int_{x,y} dx \\, dy \\, \\rho(x, y, z') \\,r^2 = I_z\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=592198
|
59220
|
Base (topology)
|
Collection of open sets used to define a topology
In mathematics, a base (or basis; pl.: bases) for the topology τ of a topological space ("X", τ) is a family formula_0 of open subsets of "X" such that every open set of the topology is equal to the union of some sub-family of formula_0. For example, the set of all open intervals in the real number line formula_1 is a basis for the Euclidean topology on formula_1 because every open interval is an open set, and also every open subset of formula_1 can be written as a union of some family of open intervals.
Bases are ubiquitous throughout topology. The sets in a base for a topology, which are called basic open sets, are often easier to describe and use than arbitrary open sets. Many important topological definitions such as continuity and convergence can be checked using only basic open sets instead of arbitrary open sets. Some topologies have a base of open sets with specific useful properties that may make checking such topological definitions easier.
Not all families of subsets of a set formula_2 form a base for a topology on formula_2. Under some conditions detailed below, a family of subsets will form a base for a (unique) topology on formula_2, obtained by taking all possible unions of subfamilies. Such families of sets are very frequently used to define topologies. A weaker notion related to bases is that of a subbase for a topology. Bases for topologies are also closely related to neighborhood bases.
Definition and basic properties.
Given a topological space formula_3, a base (or basis) for the topology formula_4 (also called a "base for" formula_2 if the topology is understood) is a family formula_5 of open sets such that every open set of the topology can be represented as the union of some subfamily of formula_0. The elements of formula_0 are called "basic open sets".
Equivalently, a family formula_0 of subsets of formula_2 is a base for the topology formula_4 if and only if formula_5 and for every open set formula_6 in formula_2 and point formula_7 there is some basic open set formula_8 such that formula_9.
For example, the collection of all open intervals in the real line forms a base for the standard topology on the real numbers. More generally, in a metric space formula_10 the collection of all open balls about points of formula_10 forms a base for the topology.
In general, a topological space formula_3 can have many bases. The whole topology formula_4 is always a base for itself (that is, formula_4 is a base for formula_4). For the real line, the collection of all open intervals is a base for the topology. So is the collection of all open intervals with rational endpoints, or the collection of all open intervals with irrational endpoints, for example. Note that two different bases need not have any basic open set in common. One of the topological properties of a space formula_2 is the minimum cardinality of a base for its topology, called the weight of formula_2 and denoted formula_11. From the examples above, the real line has countable weight.
If formula_0 is a base for the topology formula_4 of a space formula_2, it satisfies the following properties:
(B1) The elements of formula_0 "cover" formula_2, i.e., every point formula_12 belongs to some element of formula_0.
(B2) For every formula_13 and every point formula_14, there exists some formula_15 such that formula_16.
Property (B1) corresponds to the fact that formula_2 is an open set; property (B2) corresponds to the fact that formula_17 is an open set.
Conversely, suppose formula_2 is just a set without any topology and formula_0 is a family of subsets of formula_2 satisfying properties (B1) and (B2). Then formula_0 is a base for the topology that it generates. More precisely, let formula_4 be the family of all subsets of formula_2 that are unions of subfamilies of formula_18 Then formula_4 is a topology on formula_2 and formula_0 is a base for formula_4.
(Sketch: formula_4 defines a topology because it is stable under arbitrary unions by construction, it is stable under finite intersections by (B2), it contains formula_2 by (B1), and it contains the empty set as the union of the empty subfamily of formula_0. The family formula_0 is then a base for formula_4 by construction.) Such families of sets are a very common way of defining a topology.
In general, if formula_2 is a set and formula_0 is an arbitrary collection of subsets of formula_2, there is a (unique) smallest topology formula_4 on formula_2 containing formula_0. (This topology is the intersection of all topologies on formula_2 containing formula_0.) The topology formula_4 is called the topology generated by formula_0, and formula_0 is called a subbase for formula_4. The topology formula_4 can also be characterized as the set of all arbitrary unions of finite intersections of elements of formula_0. (See the article about subbase.) Now, if formula_0 also satisfies properties (B1) and (B2), the topology generated by formula_0 can be described in a simpler way without having to take intersections: formula_4 is the set of all unions of elements of formula_0 (and formula_0 is base for formula_4 in that case).
There is often an easy way to check condition (B2). If the intersection of any two elements of formula_0 is itself an element of formula_0 or is empty, then condition (B2) is automatically satisfied (by taking formula_19). For example, the Euclidean topology on the plane admits as a base the set of all open rectangles with horizontal and vertical sides, and a nonempty intersection of two such basic open sets is also a basic open set. But another base for the same topology is the collection of all open disks; and here the full (B2) condition is necessary.
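For a finite set, conditions (B1) and (B2) and the generated topology can be checked mechanically. The following Python sketch is illustrative and not part of the article; the three-point example is a hypothetical choice. It verifies (B1) and (B2) for a family of subsets and generates the topology as the set of all unions of subfamilies.

```python
from itertools import chain, combinations

def is_base(X, B):
    """Check conditions (B1) and (B2) for a family B of subsets of X."""
    if set().union(*B) != set(X):          # (B1): the elements of B cover X
        return False
    for b1 in B:                           # (B2): for each x in b1 & b2,
        for b2 in B:                       # some b3 satisfies x in b3 <= b1 & b2
            inter = b1 & b2
            for x in inter:
                if not any(x in b3 and b3 <= inter for b3 in B):
                    return False
    return True

def generated_topology(B):
    """All unions of subfamilies of B (finite families only)."""
    fams = chain.from_iterable(combinations(list(B), r) for r in range(len(B) + 1))
    return {frozenset(set().union(*fam)) for fam in fams}

X = {1, 2, 3}
B_good = {frozenset({1}), frozenset({2, 3}), frozenset({1, 2, 3})}
B_bad = {frozenset({1, 2}), frozenset({2, 3})}  # covers X but fails (B2) at x = 2
tau = generated_topology(B_good)
```

Here `B_bad` covers X, yet no basic set fits inside the intersection {2}, so it is only a subbase, mirroring the semi-infinite-intervals example in the text.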
An example of a collection of open sets that is not a base is the set formula_20 of all semi-infinite intervals of the forms formula_21 and formula_22 with formula_23. The topology generated by formula_20 contains all open intervals formula_24, hence formula_20 generates the standard topology on the real line. But formula_20 is only a subbase for the topology, not a base: a finite open interval formula_25 does not contain any element of formula_20 (equivalently, property (B2) does not hold).
Examples.
The set Γ of all open intervals in formula_26 forms a basis for the Euclidean topology on formula_26.
A non-empty family of subsets of a set X that is closed under finite intersections of two or more sets, which is called a π-system on X, is necessarily a base for a topology on X if and only if it covers X. By definition, every σ-algebra, every filter (and so in particular, every neighborhood filter), and every topology is a covering π-system and so also a base for a topology. In fact, if Γ is a filter on X then { ∅ } ∪ Γ is a topology on X and Γ is a basis for it. A base for a topology does not have to be closed under finite intersections and many are not. But nevertheless, many topologies are defined by bases that are also closed under finite intersections. For example, each of the following families of subsets of formula_26 is closed under finite intersections and so each forms a basis for "some" topology on formula_26:
Objects defined in terms of bases.
The Zariski topology on the spectrum of a ring has a base consisting of open sets that have specific useful properties. For the usual base for this topology, every finite intersection of basic open sets is a basic open set.
Base for the closed sets.
Closed sets are equally adept at describing the topology of a space. There is, therefore, a dual notion of a base for the closed sets of a topological space. Given a topological space formula_41 a family formula_42 of closed sets forms a base for the closed sets if and only if for each closed set formula_43 and each point formula_32 not in formula_43 there exists an element of formula_42 containing formula_43 but not containing formula_44
A family formula_42 is a base for the closed sets of formula_2 if and only if its dual in formula_41 that is the family formula_45 of complements of members of formula_42, is a base for the open sets of formula_46
Let formula_42 be a base for the closed sets of formula_46 Then
Any collection of subsets of a set formula_2 satisfying these properties forms a base for the closed sets of a topology on formula_46 The closed sets of this topology are precisely the intersections of members of formula_53
In some cases it is more convenient to use a base for the closed sets rather than the open ones. For example, a space is completely regular if and only if the zero sets form a base for the closed sets. Given any topological space formula_41 the zero sets form the base for the closed sets of some topology on formula_46 This topology will be the finest completely regular topology on formula_2 coarser than the original one. In a similar vein, the Zariski topology on A"n" is defined by taking the zero sets of polynomial functions as a base for the closed sets.
Weight and character.
We shall work with notions established in .
Fix "X" a topological space. Here, a network is a family formula_54 of sets such that, for every point "x" and every open neighbourhood "U" containing "x", there exists "B" in formula_54 for which formula_55 Note that, unlike a basis, the sets in a network need not be open.
We define the weight, "w"("X"), as the minimum cardinality of a basis; we define the network weight, "nw"("X"), as the minimum cardinality of a network; the character of a point, formula_56 as the minimum cardinality of a neighbourhood basis for "x" in "X"; and the character of "X" to be
formula_57
The point of computing the character and weight is to be able to tell what sort of bases and local bases can exist. We have the following facts:
The last fact follows from "f"("X") being compact Hausdorff, and hence formula_65 (since compact metrizable spaces are necessarily second countable); as well as the fact that compact Hausdorff spaces are metrizable exactly in case they are second countable. (An application of this, for instance, is that every path in a Hausdorff space is compact metrizable.)
Increasing chains of open sets.
Using the above notation, suppose that "w"("X") ≤ "κ" for some infinite cardinal "κ". Then there does not exist a strictly increasing sequence of open sets (equivalently, a strictly decreasing sequence of closed sets) of length ≥ "κ"+.
To see this (without the axiom of choice), fix
formula_66
as a basis of open sets. Suppose, "per contra", that
formula_67
were a strictly increasing sequence of open sets. This means
formula_68
For
formula_69
we may use the basis to find some "Uγ" with "x" in "Uγ" ⊆ "Vα". In this way we may well-define a map, "f" : "κ"+ → "κ" mapping each "α" to the least "γ" for which "Uγ" ⊆ "Vα" and meets
formula_70
This map is injective, otherwise there would be "α" < "β" with "f"("α") = "f"("β") = "γ", which would further imply "Uγ" ⊆ "Vα" but also meets
formula_71
which is a contradiction. But the existence of an injective map "f" : "κ"+ → "κ" shows that "κ"+ ≤ "κ", again a contradiction.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Bibliography.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{B}"
},
{
"math_id": 1,
"text": "\\R"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "(X,\\tau)"
},
{
"math_id": 4,
"text": "\\tau"
},
{
"math_id": 5,
"text": "\\mathcal{B}\\subseteq\\tau"
},
{
"math_id": 6,
"text": "U"
},
{
"math_id": 7,
"text": "x\\in U"
},
{
"math_id": 8,
"text": "B\\in\\mathcal{B}"
},
{
"math_id": 9,
"text": "x\\in B\\subseteq U"
},
{
"math_id": 10,
"text": "M"
},
{
"math_id": 11,
"text": "w(X)"
},
{
"math_id": 12,
"text": "x\\in X"
},
{
"math_id": 13,
"text": "B_1,B_2\\in\\mathcal{B}"
},
{
"math_id": 14,
"text": "x\\in B_1\\cap B_2"
},
{
"math_id": 15,
"text": "B_3\\in\\mathcal{B}"
},
{
"math_id": 16,
"text": "x\\in B_3\\subseteq B_1\\cap B_2"
},
{
"math_id": 17,
"text": "B_1\\cap B_2"
},
{
"math_id": 18,
"text": "\\mathcal{B}."
},
{
"math_id": 19,
"text": "B_3=B_1\\cap B_2"
},
{
"math_id": 20,
"text": "S"
},
{
"math_id": 21,
"text": "(-\\infty,a)"
},
{
"math_id": 22,
"text": "(a,\\infty)"
},
{
"math_id": 23,
"text": "a\\in\\mathbb{R}"
},
{
"math_id": 24,
"text": "(a,b)=(-\\infty,b)\\cap(a,\\infty)"
},
{
"math_id": 25,
"text": "(a,b)"
},
{
"math_id": 26,
"text": "\\mathbb{R}"
},
{
"math_id": 27,
"text": "\\mathbb{Q}"
},
{
"math_id": 28,
"text": "\\C^n"
},
{
"math_id": 29,
"text": "\\tau_2"
},
{
"math_id": 30,
"text": "\\tau_1"
},
{
"math_id": 31,
"text": "B"
},
{
"math_id": 32,
"text": "x"
},
{
"math_id": 33,
"text": "\\mathcal{B}_1, \\ldots, \\mathcal{B}_n"
},
{
"math_id": 34,
"text": "\\tau_1, \\ldots, \\tau_n"
},
{
"math_id": 35,
"text": "B_1 \\times \\cdots \\times B_n"
},
{
"math_id": 36,
"text": "B_i\\in\\mathcal{B}_i"
},
{
"math_id": 37,
"text": "\\tau_1 \\times \\cdots \\times \\tau_n."
},
{
"math_id": 38,
"text": "Y"
},
{
"math_id": 39,
"text": "f : X \\to Y"
},
{
"math_id": 40,
"text": "f"
},
{
"math_id": 41,
"text": "X,"
},
{
"math_id": 42,
"text": "\\mathcal{C}"
},
{
"math_id": 43,
"text": "A"
},
{
"math_id": 44,
"text": "x."
},
{
"math_id": 45,
"text": "\\{X\\setminus C: C\\in \\mathcal{C}\\}"
},
{
"math_id": 46,
"text": "X."
},
{
"math_id": 47,
"text": "\\bigcap \\mathcal{C} = \\varnothing"
},
{
"math_id": 48,
"text": "C_1, C_2 \\in \\mathcal{C}"
},
{
"math_id": 49,
"text": "C_1 \\cup C_2"
},
{
"math_id": 50,
"text": "x \\in X"
},
{
"math_id": 51,
"text": "C_1 \\text{ or } C_2"
},
{
"math_id": 52,
"text": "C_3 \\in \\mathcal{C}"
},
{
"math_id": 53,
"text": "\\mathcal{C}."
},
{
"math_id": 54,
"text": "\\mathcal{N}"
},
{
"math_id": 55,
"text": "x \\in B \\subseteq U."
},
{
"math_id": 56,
"text": "\\chi(x,X),"
},
{
"math_id": 57,
"text": "\\chi(X)\\triangleq\\sup\\{\\chi(x,X):x\\in X\\}."
},
{
"math_id": 58,
"text": "B'\\subseteq B"
},
{
"math_id": 59,
"text": "|B'|\\leq w(X)."
},
{
"math_id": 60,
"text": "N'\\subseteq N"
},
{
"math_id": 61,
"text": "|N'|\\leq \\chi(x,X)."
},
{
"math_id": 62,
"text": "f'''B \\triangleq \\{f''U : U\\in B\\}"
},
{
"math_id": 63,
"text": "(X,\\tau')"
},
{
"math_id": 64,
"text": "w(X,\\tau')\\leq nw(X,\\tau)."
},
{
"math_id": 65,
"text": "nw(f(X))=w(f(X))\\leq w(X)\\leq\\aleph_0"
},
{
"math_id": 66,
"text": "\\left \\{ U_{\\xi} \\right \\}_{\\xi\\in\\kappa},"
},
{
"math_id": 67,
"text": "\\left \\{ V_{\\xi}\\right \\}_{\\xi\\in\\kappa^{+}}"
},
{
"math_id": 68,
"text": "\\forall \\alpha<\\kappa^+: \\qquad V_{\\alpha}\\setminus\\bigcup_{\\xi<\\alpha} V_{\\xi} \\neq \\varnothing."
},
{
"math_id": 69,
"text": "x\\in V_{\\alpha}\\setminus\\bigcup_{\\xi<\\alpha}V_{\\xi},"
},
{
"math_id": 70,
"text": "V_{\\alpha} \\setminus \\bigcup_{\\xi<\\alpha} V_{\\xi}."
},
{
"math_id": 71,
"text": "V_{\\beta} \\setminus \\bigcup_{\\xi<\\alpha} V_{\\xi} \\subseteq V_{\\beta} \\setminus V_{\\alpha},"
}
] |
https://en.wikipedia.org/wiki?curid=59220
|
59230603
|
Persistent array
|
Computer science data structure
In computer science, and more precisely regarding data structures, a persistent array is a persistent data structure with properties similar to a (non-persistent) array. That is, after a value's update in a persistent array, there exist two persistent arrays: one persistent array in which the update is taken into account, and one which is equal to the array before the update.
Difference between persistent arrays and arrays.
An array
formula_0 is a data structure,
with a fixed number "n" of elements formula_1. It is expected that, given the array "ar" and an
index formula_2, the value formula_3 can be
retrieved quickly. This operation is called a
lookup. Furthermore, given the array "ar", an index
formula_2 and a new value "v", a new array "ar2" with
content formula_4 can
be created quickly. This operation is called an update. The
main difference between persistent and non-persistent arrays is
that, in non-persistent arrays, the array "ar" is destroyed during
the creation of "ar2".
For example, consider the following pseudocode.
"array" = [0, 0, 0]
"updated_array" = "array".update(0, 8)
"other_array" = "array".update(1, 3)
"last_array" = "updated_array".update(2, 5)
At the end of execution, the value of "array" is still [0, 0, 0], the
value of "updated_array" is [8, 0, 0], the value of "other_array"
is [0, 3, 0], and the value of "last_array" is [8, 0, 5].
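The pseudocode above can be mirrored with immutable data. The following is a naive sketch (every update copies the whole array, so it is fully persistent but takes linear time per update; the class and names are illustrative and follow the pseudocode):

```python
class PersistentArray:
    """Naive fully persistent array: update copies the whole tuple,
    so every version remains readable after any number of updates."""

    def __init__(self, items):
        self._items = tuple(items)

    def lookup(self, i):
        return self._items[i]

    def update(self, i, v):
        # Build a brand-new version; self is left untouched.
        return PersistentArray(self._items[:i] + (v,) + self._items[i + 1:])

    def to_list(self):
        return list(self._items)

array = PersistentArray([0, 0, 0])
updated_array = array.update(0, 8)
other_array = array.update(1, 3)
last_array = updated_array.update(2, 5)
```

All four versions coexist, exactly as in the pseudocode; the implementations below avoid the O("n") copy per update.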
There exist two kinds of persistent arrays. A persistent array may be
either partially or fully persistent. A fully persistent
array may be updated an arbitrary number of times while a partially
persistent array may be updated at most once. In our previous example,
if "array" were only partially persistent, the creation of
"other_array" would be forbidden; however, the creation of
"last_array" would still be valid. Indeed, "updated_array" is an array
distinct from "array" and has never been updated before the creation
of "last_array".
Lower Bound on Persistent Array Lookup Time.
Given that non-persistent arrays support both updates and lookups in constant time, it is natural to ask whether the same is possible with persistent arrays. The following theorem shows that under mild assumptions about the space complexity of the array, lookups must take formula_5 time in the worst case, regardless of update time, in the cell-probe model.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Consider a partially persistent array with formula_6 elements and formula_7 modifications, where formula_8 is a constant fulfilling formula_9.
Assuming the space complexity of the array is formula_10 for a constant formula_11,
the lower bound on the lookup complexity in this partially persistent
array is formula_5.
Implementations.
In this section, formula_6 is the number of elements of the array, and formula_12 is the number of updates.
Worst case log-time.
The most straightforward implementation of a fully persistent array uses an arbitrary persistent map, whose keys are the numbers from "0" to "n" − 1. A persistent map may be implemented using a persistent balanced tree, in which case both updates and lookups would take formula_13 time. This implementation is optimal for the pointer machine model.
Shallow binding.
A fully persistent array may be implemented using an array and the
so-called Baker's trick. This implementation is used in the OCaml module parray.ml by Jean-Christophe Filliâtre.
In order to define this implementation, a few other definitions must
be given. An initial array is an array that is not generated by
an update on another array. A child of an array "ar" is an
array of the form "ar.update(i,v)", and "ar" is the parent
of "ar.update(i,v)". A descendant of an array "ar" is either
"ar" or the descendant of a child of "ar". The initial array
of an array "ar" is either "ar" if "ar" is initial, or it is the
initial array of the parent of "ar". That is, the initial array of
"ar" is the unique initial array "init" such that formula_14, where
formula_15 is an arbitrary sequence of indexes and
formula_16 an arbitrary sequence of values. A
"family" of arrays is thus a set of arrays containing an initial
array and all of its descendants. Finally, the tree of a family of
arrays is the tree whose nodes are the
arrays, and with an edge "e" from "ar" to each of its children
"ar.update(i,v)".
A persistent array using Baker's trick consists of a pair with
an actual array called "array" and the tree of arrays. This tree
admits an arbitrary root, not necessarily the initial array. The
root may be moved to an arbitrary node of the tree. Changing the root
from "root" to an arbitrary node "ar" takes time proportional to
the depth of "ar", that is, to the distance between "root" and
"ar". Similarly, looking up a value takes time proportional to the
distance between the array and the root of its family. Thus, if the
same array "ar" is to be looked up multiple times, it is more efficient
to move the root to "ar" before doing the lookup. Finally, updating
an array only takes constant time.
Technically, given two adjacent arrays "ar1" and "ar2", with
"ar1" closer to the root than "ar2", the edge from "ar1" to
"ar2" is labelled by "(i,ar2[i])", where "i" is the only position
whose value differs between "ar1" and "ar2".
Accessing an element "i" of an array "ar" is done as follows. If
"ar" is the root, then "ar[i]" equals "root[i]". Otherwise, let
"e" be the edge leaving "ar" toward the root. If the label of "e"
is "(i,v)" then "ar[i]" equals "v". Otherwise, let "ar2" be
the other node of the edge "e". Then "ar[i]" equals
"ar2[i]". The computation of "ar2[i]" is done recursively using
the same definition.
The creation of "ar.update(i,v)" consists in adding a new node
"ar2" to the tree, and an edge "e" from "ar" to "ar2" labelled
by "(i,v)".
Finally, moving the root to a node "ar" is done as follows. If
"ar" is already the root, there is nothing to do. Otherwise, let
"e" be the edge leaving "ar" toward the current root, "(i,v)" its
label and "ar2" the other end of "e". Moving the root to "ar" is
done by first moving the root to "ar2", changing the label of "e"
to "(i, ar2[i])", and changing "array[i]" to "v".
Updates take formula_17 time. Lookups take formula_17 time if the root is the array being looked up, but formula_18 time in the worst case.
Expected amortized log-log-time.
In 1989, Dietz
gave an implementation of fully persistent arrays using formula_19 space such that lookups can be done in formula_20 worst-case time, and updates can be done in
formula_21 expected amortized time. By the lower bound from the previous section, this time complexity for lookup is optimal when formula_22 for formula_23. This implementation is related to the order-maintenance problem and involves vEB trees, one for the entire array and one for each index.
Straka showed that the times for both operations can be (slightly) improved to formula_24.
Worst case log-log-time.
Straka showed how to achieve formula_25 worst-case time and linear (formula_19) space, or formula_21 worst-case time and super-linear space. It remains open whether it is possible to achieve worst-case time formula_21 subject to linear space.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{ar}=[e_0,\\dots,e_{n-1}]"
},
{
"math_id": 1,
"text": "e_0, \\dots,\ne_{n-1}"
},
{
"math_id": 2,
"text": "0\\le i<n"
},
{
"math_id": 3,
"text": "e_i"
},
{
"math_id": 4,
"text": "[e_0,\\dots,e_{i-1},v,e_{i+1},\\dots,e_{n-1}]"
},
{
"math_id": 5,
"text": "\\Omega(\\log \\log n)"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "m = n^{\\gamma}"
},
{
"math_id": 8,
"text": "\\gamma"
},
{
"math_id": 9,
"text": "1 < \\gamma \\le 2"
},
{
"math_id": 10,
"text": "O(m\\log^k m)"
},
{
"math_id": 11,
"text": "k"
},
{
"math_id": 12,
"text": "m"
},
{
"math_id": 13,
"text": "O(\\log n)"
},
{
"math_id": 14,
"text": "\\mathrm{ar} =\ninit.update(i_0,v_0).\\dots.update(i_m,v_m)"
},
{
"math_id": 15,
"text": "i_0,\\dots,i_m"
},
{
"math_id": 16,
"text": "v_0,\\dots,v_m"
},
{
"math_id": 17,
"text": "O(1)"
},
{
"math_id": 18,
"text": "\\Theta(m)"
},
{
"math_id": 19,
"text": "O(m+n)"
},
{
"math_id": 20,
"text": "O(\\log \\log m)"
},
{
"math_id": 21,
"text": "O(\\log\\log m)"
},
{
"math_id": 22,
"text": "m=n^{\\gamma}"
},
{
"math_id": 23,
"text": "\\gamma\\in (1,2]"
},
{
"math_id": 24,
"text": "O(\\log\\log \\min(m,n))"
},
{
"math_id": 25,
"text": "O((\\log \\log m)^2/\\log\\log\\log m)"
}
] |
https://en.wikipedia.org/wiki?curid=59230603
|
59232947
|
Poisson-type random measure
|
Family of three random counting measures
Poisson-type random measures are a family of three random counting measures which are closed under restriction to a subspace, i.e. closed under thinning. They are the only distributions in the canonical non-negative power series family of distributions to possess this property and include the Poisson distribution, negative binomial distribution, and binomial distribution. The PT family of distributions is also known as the Katz family of distributions, the Panjer or (a,b,0) class of distributions and may be retrieved through the Conway–Maxwell–Poisson distribution.
Throwing stones.
Let formula_0 be a non-negative integer-valued random variable (formula_1) with law formula_2, mean formula_3 and, when it exists, variance formula_4. Let formula_5 be a probability measure on the measurable space formula_6. Let formula_7 be a collection of iid random variables (stones) taking values in formula_6 with law formula_5.
The random counting measure formula_8 on formula_6 depends on the pair of deterministic probability measures formula_9 through the stone throwing construction (STC)
formula_10
where formula_0 has law formula_2 and the iid formula_11 have law formula_5. Thus formula_8 is a mixed binomial process.
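The stone throwing construction is straightforward to simulate. The sketch below is illustrative and not part of the article: formula_0 is drawn as a Poisson variable via Knuth's multiplication method, the stones are uniform on [0, 1), and the set A = [0, a) is an arbitrary choice. Averaging over trials estimates the mean 𝔼N(A) = c·ν(A).

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method: multiply uniforms until the product drops below e^-lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def stone_throwing(lam, a, rng):
    """One realization of (N(A), K) for A = [0, a) and nu = Uniform[0, 1)."""
    K = sample_poisson(lam, rng)
    stones = [rng.random() for _ in range(K)]       # iid stones with law nu
    return sum(1 for x in stones if x < a), K

rng = random.Random(0)
lam, a, trials = 10.0, 0.3, 20000
counts = [stone_throwing(lam, a, rng) for _ in range(trials)]
mean_NA = sum(n for n, _ in counts) / trials        # should be near lam * a
```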
Let formula_12 be the collection of positive formula_13-measurable functions. The probability law of formula_8 is encoded in the Laplace functional
formula_14
where formula_15 is the generating function of formula_0. The mean and variance are given by
formula_16
and
formula_17
The covariance for arbitrary formula_18 is given by
formula_19
When formula_0 is Poisson, negative binomial, or binomial, it is said to be Poisson-type (PT). The joint distribution of the collection formula_20 is, for formula_21 with formula_22,
formula_23
The following result extends the construction of a random measure formula_24 to the case in which the collection formula_25 is expanded to formula_26, where formula_27 is a random transformation of formula_28. Heuristically, formula_27 represents some properties (marks) of formula_28. We assume that the conditional law of formula_29 follows some transition kernel according to formula_30.
Theorem: Marked STC.
Consider random measure formula_24 and the transition probability kernel formula_31 from formula_32 into formula_33. Assume that given the collection formula_25 the variables formula_34 are conditionally independent with formula_35. Then formula_36 is a random measure on formula_37. Here formula_38 is understood as formula_39. Moreover, for any formula_40 we have that formula_41 where formula_42 is pgf of formula_0 and formula_43 is defined as formula_44
The following corollary is an immediate consequence.
Corollary: Restricted STC.
The quantity formula_45 is a well-defined random measure on the measurable subspace formula_46 where formula_47 and formula_48. Moreover, for any formula_49, we have that formula_50 where formula_51.
Note formula_52 where we use formula_53.
Collecting Bones.
The probability law of the random measure is determined by its Laplace functional and hence generating function.
Definition: Bone.
Let formula_54 be the counting variable of formula_0 restricted to formula_55. When formula_56 and formula_57 share the same family of laws subject to a rescaling formula_58 of the parameter formula_59, formula_0 is called a bone distribution. The bone condition for the pgf is given by
formula_60.
Equipped with the notion of a bone distribution and condition, the main result for the existence and uniqueness of Poisson-type (PT) random counting measures is given as follows.
Theorem: existence and uniqueness of PT random measures.
Assume that formula_61 with pgf formula_62 belongs to the canonical non-negative power series (NNPS) family of distributions and formula_63. Consider the random measure formula_64 on the space formula_6 and assume that formula_5 is diffuse. Then for any formula_55 with formula_65 there exists a mapping formula_66 such that the restricted random measure is formula_67, that is,
formula_68
iff formula_0 is Poisson, negative binomial, or binomial (Poisson-type).
The proof for this theorem is based on a generalized additive Cauchy equation and its solutions. The theorem states that out of all NNPS distributions, only PT have the property that their restrictions formula_69 share the same family of distribution as formula_0, that is, they are closed under thinning. The PT random measures are the Poisson random measure, negative binomial random measure, and binomial random measure. Poisson is additive with independence on disjoint sets, whereas negative binomial has positive covariance and binomial has negative covariance. The binomial process is a limiting case of binomial random measure where formula_70.
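Closure under thinning can be illustrated numerically: thinning a Poisson count (keeping each stone independently with probability p) yields again a Poisson count with rescaled mean λp, so in particular the sample mean and sample variance of the thinned counts should approximately coincide, the Poisson signature. A hedged sketch with illustrative parameters:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method: multiply uniforms until the product drops below e^-lam."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def thin(k, p, rng):
    """Keep each of k stones independently with probability p."""
    return sum(1 for _ in range(k) if rng.random() < p)

rng = random.Random(1)
lam, p, trials = 10.0, 0.3, 20000
thinned = [thin(sample_poisson(lam, rng), p, rng) for _ in range(trials)]
m = sum(thinned) / trials
v = sum((x - m) ** 2 for x in thinned) / trials
# If thinning preserves the Poisson family, the thinned counts are
# Poisson(lam * p), so both m and v should be near lam * p.
```

Running the analogous experiment with a non-PT count distribution breaks the mean-equals-variance relationship, which is one way to see why the bone condition singles out the PT family.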
Distributional self-similarity applications.
The "bone" condition on the pgf formula_62 of formula_0 encodes a distributional self-similarity property whereby all counts in restrictions (thinnings) to subspaces (encoded by pgf formula_71) are in the same family as formula_62 of formula_0 through rescaling of the canonical parameter. These ideas appear closely connected to those of self-decomposability and stability of discrete random variables. Binomial thinning is a foundational model to count time-series. The Poisson random measure has the well-known splitting property, is prototypical to the class of additive (completely random) random measures, and is related to the structure of Lévy processes, the jumps of Kolmogorov equations (Markov jump process), and the excursions of Brownian motion. Hence the self-similarity property of the PT family is fundamental to multiple areas. The PT family members are "primitives" or prototypical random measures by which many random measures and processes can be constructed.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "K\\in\\mathbb{N}_{\\ge0}=\\mathbb{N}_{>0}\\cup\\{0\\}"
},
{
"math_id": 2,
"text": "\\kappa"
},
{
"math_id": 3,
"text": "c\\in(0,\\infty)"
},
{
"math_id": 4,
"text": "\\delta^2>0"
},
{
"math_id": 5,
"text": "\\nu"
},
{
"math_id": 6,
"text": "(E,\\mathcal{E})"
},
{
"math_id": 7,
"text": "\\mathbf{X}=\\{X_i\\}"
},
{
"math_id": 8,
"text": "N"
},
{
"math_id": 9,
"text": "(\\kappa,\\nu)"
},
{
"math_id": 10,
"text": "\\quad N_\\omega(A) = N(\\omega,A) = \\sum_{i=1}^{K(\\omega)}\\mathbb{I}_A(X_i(\\omega))\\quad \\text{for} \\quad\\omega\\in\\Omega,\\,\\,\\,A\\in\\mathcal{E}"
},
{
"math_id": 11,
"text": "X_1,X_2,\\dotsb"
},
{
"math_id": 12,
"text": "\\mathcal{E}_+=\\{f: E\\mapsto\\mathbb{R}_+\\}"
},
{
"math_id": 13,
"text": "\\mathcal{E}"
},
{
"math_id": 14,
"text": "\\quad\\mathbb{E} e^{-N f} =\\mathbb{E} (\\mathbb{E} e^{-f(X)})^K =\\mathbb{E} (\\nu e^{-f})^K=\\psi(\\nu e^{-f})\\quad\\text{for}\\quad f\\in\\mathcal{E}_+"
},
{
"math_id": 15,
"text": "\\psi(\\cdot)"
},
{
"math_id": 16,
"text": "\\quad\\mathbb{E} Nf =c\\nu f"
},
{
"math_id": 17,
"text": "\\quad\\mathbb{V}\\text{ar} Nf = c\\nu f^2 + (\\delta^2-c) (\\nu f)^2"
},
{
"math_id": 18,
"text": "f,g\\in\\mathcal{E}_+"
},
{
"math_id": 19,
"text": "\\quad\\mathbb{C}\\text{ov}(Nf,Ng) = c\\nu(fg) + (\\delta^2-c)\\nu f \\nu g"
},
{
"math_id": 20,
"text": "N(A),\\ldots,N(B)"
},
{
"math_id": 21,
"text": "i,\\ldots, j \\in \\N"
},
{
"math_id": 22,
"text": "i+\\cdots+j =k"
},
{
"math_id": 23,
"text": "\n\\mathbb{P}(N(A)=i,\\ldots, N(B)=j)=\\mathbb{P}(N(A)=i,\\ldots, N(B)=j|K=k)\\,\\mathbb{P}(K=k)=\\frac{k!}{i!\\cdots j!}\\,\\nu(A)^i\\cdots \\nu(B)^j\\, \\mathbb{P}(K=k)"
},
{
"math_id": 24,
"text": "N=(\\kappa,\\nu)"
},
{
"math_id": 25,
"text": "\\mathbf{X}"
},
{
"math_id": 26,
"text": "(\\mathbf{X},\\mathbf{Y})=\\{(X_i,Y_i)\\}"
},
{
"math_id": 27,
"text": "Y_i"
},
{
"math_id": 28,
"text": "X_i"
},
{
"math_id": 29,
"text": "Y"
},
{
"math_id": 30,
"text": "\\mathbb{P}(Y\\in B|X=x)=Q(x,B)"
},
{
"math_id": 31,
"text": "Q"
},
{
"math_id": 32,
"text": "(E, \\cal E)"
},
{
"math_id": 33,
"text": "(F, \\cal F)"
},
{
"math_id": 34,
"text": "\\mathbf{Y}=\\{Y_i\\}"
},
{
"math_id": 35,
"text": "Y_i\\sim Q(X_i,\\cdot)"
},
{
"math_id": 36,
"text": "M=(\\kappa, \\nu\\times Q)"
},
{
"math_id": 37,
"text": "(E\\times F, \\cal E\\otimes F)"
},
{
"math_id": 38,
"text": "\\mu=\\nu\\times Q"
},
{
"math_id": 39,
"text": "\\mu(dx,dy)=\\nu(dx)Q(x,dy)"
},
{
"math_id": 40,
"text": "f\\in ({\\cal E}\\otimes {\\cal F})_+"
},
{
"math_id": 41,
"text": "\\mathbb{E} e^{-M f}=\\psi(\\nu e^{-g})"
},
{
"math_id": 42,
"text": "\\psi(\\cdot )"
},
{
"math_id": 43,
"text": "g\\in \\mathcal{E}_+"
},
{
"math_id": 44,
"text": "e^{-g(x)}= \\int_F Q(x,dy)e^{-f(x,y)}."
},
{
"math_id": 45,
"text": "N_A=(N\\mathbb{I}_A,\\nu_A)"
},
{
"math_id": 46,
"text": "(E\\cap A, \\mathcal{E}_A)"
},
{
"math_id": 47,
"text": "\\mathcal{E}_A=\\{A\\cap B: B\\in\\mathcal{E}\\}"
},
{
"math_id": 48,
"text": "\\nu_A(B)=\\nu(A\\cap B)/\\nu(A)"
},
{
"math_id": 49,
"text": "f\\in\\mathcal{E}_+"
},
{
"math_id": 50,
"text": "\\mathbb{E} e^{-N_A f} = \\psi(\\nu e^{-f}\\mathbb{I}_A+b)"
},
{
"math_id": 51,
"text": "b=1-\\nu(A)"
},
{
"math_id": 52,
"text": "\\psi(\\nu e^{-f}\\mathbb{I}_A+1-a)=\\psi_A(\\nu_A e^{-f})"
},
{
"math_id": 53,
"text": "\\nu e^{-f}\\mathbb{I}_A=a\\nu_A e^{-f}"
},
{
"math_id": 54,
"text": "K_A = N\\mathbb{I}_A"
},
{
"math_id": 55,
"text": "A\\subset E"
},
{
"math_id": 56,
"text": "\\{N\\mathbb{I}_A: A\\subset E\\}"
},
{
"math_id": 57,
"text": "K=N\\mathbb{I}_E"
},
{
"math_id": 58,
"text": "h_a(\\theta)"
},
{
"math_id": 59,
"text": "\\theta"
},
{
"math_id": 60,
"text": "\\psi_{\\theta}(at+1-a)=\\psi_{h_a(\\theta)}(t)"
},
{
"math_id": 61,
"text": "K\\sim \\kappa_\\theta"
},
{
"math_id": 62,
"text": "\\psi_\\theta"
},
{
"math_id": 63,
"text": "\\{0,1\\}\\subset\\text{supp}(K)"
},
{
"math_id": 64,
"text": "N=(\\kappa_\\theta,\\nu)"
},
{
"math_id": 65,
"text": "\\nu(A)=a>0"
},
{
"math_id": 66,
"text": "h_a:\\Theta\\rightarrow\\Theta"
},
{
"math_id": 67,
"text": "N_A=(\\kappa_{h_a(\\theta)},\\nu_A)"
},
{
"math_id": 68,
"text": "\\quad \\mathbb{E} e^{-N_A f} = \\psi_{h_a(\\theta)}(\\nu_A e^{-f})\\quad \\text{for}\\quad f\\in\\mathcal{E}_+"
},
{
"math_id": 69,
"text": "N\\mathbb{I}_A"
},
{
"math_id": 70,
"text": "p\\rightarrow 1, n\\rightarrow c"
},
{
"math_id": 71,
"text": "\\psi_A"
}
] |
https://en.wikipedia.org/wiki?curid=59232947
|
5924217
|
Hilbert symbol
|
In mathematics, the Hilbert symbol or norm-residue symbol is a function (–, –) from "K"× × "K"× to the group of "n"th roots of unity in a local field "K" such as the fields of reals or p-adic numbers. It is related to reciprocity laws, and can be defined in terms of the Artin symbol of local class field theory. The Hilbert symbol was introduced by David Hilbert (1897, sections 64, 131, 1998, English translation) in his Zahlbericht, with the slight difference that he defined it for elements of global fields rather than for the larger local fields.
The Hilbert symbol has been generalized to higher local fields.
Quadratic Hilbert symbol.
Over a local field "K" whose multiplicative group of non-zero elements is "K"×,
the quadratic Hilbert symbol is the function (–, –) from "K"× × "K"× to {−1,1} defined by
formula_0
Equivalently, formula_1 if and only if formula_2 is equal to the norm of an element of the quadratic extension formula_3.
Properties.
The following three properties follow directly from the definition, by choosing suitable solutions of the Diophantine equation above:
The (bi)multiplicativity, i.e.,
("a", "b"1"b"2) = ("a", "b"1)·("a", "b"2)
for any "a", "b"1 and "b"2 in "K"× is, however, more difficult to prove, and requires the development of local class field theory.
The third property shows that the Hilbert symbol is an example of a Steinberg symbol and thus factors over the second Milnor K-group formula_4, which is by definition
"K"× ⊗ "K"× / ("a" ⊗ (1−"a"), "a" ∈ "K"× \ {1})
By the first property it even factors over formula_5. This is the first step towards the Milnor conjecture.
Interpretation as an algebra.
The Hilbert symbol can also be used to denote the central simple algebra over "K" with basis 1,"i","j","k" and multiplication rules formula_6, formula_7, formula_8. In this case the algebra represents an element of order 2 in the Brauer group of "K", which is identified with -1 if it is a division algebra and +1 if it is isomorphic to the algebra of 2 by 2 matrices.
Hilbert symbols over the rationals.
For a place "v" of the rational number field and rational numbers "a", "b" we let ("a", "b")"v" denote the value of the Hilbert symbol in the corresponding completion Q"v". As usual, if "v" is the valuation attached to a prime number "p" then the corresponding completion is the p-adic field and if "v" is the infinite place then the completion is the real number field.
Over the reals, ("a", "b")∞ is +1 if at least one of "a" or "b" is positive, and −1 if both are negative.
Over the p-adics with "p" odd, writing formula_9 and formula_10, where "u" and "v" are integers coprime to "p", we have
formula_11, where formula_12
and the expression involves two Legendre symbols.
Over the 2-adics, again writing formula_13 and formula_14, where "u" and "v" are odd numbers, we have
formula_15, where formula_16
It is known that if "v" ranges over all places, ("a", "b")"v" is 1 for almost all places. Therefore, the following product formula
formula_17
makes sense. It is equivalent to the law of quadratic reciprocity.
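As an illustration (not part of the original article), the formulas above for the real place, the odd p-adics, and the 2-adics can be combined into a short Python sketch; the function names are hypothetical.

```python
def val(p, n):
    """p-adic valuation and unit part of a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v, n

def legendre(u, p):
    """Legendre symbol (u/p) for an odd prime p and u coprime to p."""
    s = pow(u % p, (p - 1) // 2, p)   # Euler's criterion
    return -1 if s == p - 1 else s

def hilbert(a, b, p=None):
    """Quadratic Hilbert symbol (a, b)_v for nonzero integers a, b;
    p=None denotes the real place."""
    if p is None:
        return -1 if (a < 0 and b < 0) else 1
    alpha, u = val(p, a)
    beta, v = val(p, b)
    if p == 2:
        eps = lambda x: ((x - 1) // 2) % 2        # epsilon(x) = (x-1)/2 mod 2
        omega = lambda x: ((x * x - 1) // 8) % 2  # omega(x) = (x^2-1)/8 mod 2
        return (-1) ** (eps(u) * eps(v) + alpha * omega(v) + beta * omega(u))
    return ((-1) ** (alpha * beta * ((p - 1) // 2))
            * legendre(u, p) ** beta * legendre(v, p) ** alpha)

def product_over_places(a, b):
    """Product of (a, b)_v over the real place and all primes dividing 2ab."""
    n, places = abs(a * b), {2}
    while n % 2 == 0:
        n //= 2
    d = 3
    while d * d <= n:                 # trial division for the odd prime factors
        if n % d == 0:
            places.add(d)
            while n % d == 0:
                n //= d
        d += 2
    if n > 1:
        places.add(n)
    result = hilbert(a, b)            # real place
    for q in sorted(places):
        result *= hilbert(a, b, q)
    return result
```

For any pair of nonzero integers, `product_over_places` returns 1, in accordance with the product formula (and hence with quadratic reciprocity); symbols at primes not dividing 2ab are automatically +1.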
Kaplansky radical.
The Hilbert symbol on a field "F" defines a map
formula_18
where Br("F") is the Brauer group of "F". The kernel of this mapping, the elements "a" such that ("a","b")=1 for all "b", is the Kaplansky radical of "F".
The radical is a subgroup of F*/F*2, identified with a subgroup of F*. The radical is equal to F* if and only if "F" has "u"-invariant at most 2. In the opposite direction, a field with radical F*2 is termed a Hilbert field.
The general Hilbert symbol.
If "K" is a local field containing the group of "n"th roots of unity for some positive integer "n" prime to the characteristic of "K", then the Hilbert symbol (,) is a function from "K"*×"K"* to μ"n". In terms of the Artin symbol it can be defined by
formula_19
Hilbert originally defined the Hilbert symbol before the Artin symbol was discovered, and his definition (for "n" prime) used the power residue symbol when "K" has residue characteristic coprime to "n", and was rather complicated when "K" has residue characteristic dividing "n".
Properties.
The Hilbert symbol is (multiplicatively) bilinear:
("ab","c") = ("a","c")("b","c")
("a","bc") = ("a","b")("a","c")
skew symmetric:
("a","b") = ("b","a")−1
nondegenerate:
("a","b")=1 for all "b" if and only if "a" is in "K"*"n"
It detects norms (hence the name norm residue symbol):
("a","b")=1 if and only if "a" is a norm of an element in "K"("n"√"b")
It has the "symbol" properties:
("a",1−"a")=1, ("a",−"a")=1.
Hilbert's reciprocity law.
Hilbert's reciprocity law states that if "a" and "b" are in an algebraic number field containing the "n"th roots of unity then
formula_20
where the product is over the finite and infinite primes "p" of the number field, and where (,)"p" is the Hilbert symbol of the completion at "p". Hilbert's reciprocity law follows from the Artin reciprocity law and the definition of the Hilbert symbol in terms of the Artin symbol.
Power residue symbol.
If "K" is a number field containing the "n"th roots of unity, "p" is a prime ideal not dividing "n", π is a prime element of the local field of "p", and "a" is coprime to "p", then the power residue symbol () is related to the Hilbert symbol by
formula_21
The power residue symbol is extended to fractional ideals by multiplicativity, and defined for elements of the number field
by putting ()=() where ("b") is the principal ideal generated by "b".
Hilbert's reciprocity law then implies the following reciprocity law for the residue symbol, for "a" and "b" prime to each other and to "n":
formula_22
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(a,b)=\\begin{cases}+1,&\\mbox{ if }z^2=ax^2+by^2\\mbox{ has a non-zero solution }(x,y,z)\\in K^3;\\\\-1,&\\mbox{ otherwise.}\\end{cases}"
},
{
"math_id": 1,
"text": "(a, b) = 1"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "K[\\sqrt{a}]"
},
{
"math_id": 4,
"text": "K^M_2 (K)"
},
{
"math_id": 5,
"text": "K^M_2 (K) / 2"
},
{
"math_id": 6,
"text": "i^2=a"
},
{
"math_id": 7,
"text": "j^2=b"
},
{
"math_id": 8,
"text": "ij=-ji=k"
},
{
"math_id": 9,
"text": "a = p^{\\alpha} u"
},
{
"math_id": 10,
"text": "b = p^{\\beta} v"
},
{
"math_id": 11,
"text": "(a,b)_p = (-1)^{\\alpha\\beta\\epsilon(p)} \\left(\\frac{u}{p}\\right)^\\beta \\left(\\frac{v}{p}\\right)^\\alpha"
},
{
"math_id": 12,
"text": "\\epsilon(p) = (p-1)/2"
},
{
"math_id": 13,
"text": "a = 2^\\alpha u"
},
{
"math_id": 14,
"text": "b = 2^\\beta v"
},
{
"math_id": 15,
"text": "(a,b)_2 = (-1)^{\\epsilon(u)\\epsilon(v) + \\alpha\\omega(v) + \\beta\\omega(u)}"
},
{
"math_id": 16,
"text": "\\omega(x) = (x^2-1)/8."
},
{
"math_id": 17,
"text": "\\prod_v (a,b)_v = 1"
},
{
"math_id": 18,
"text": " (\\cdot,\\cdot) : F^*/F^{*2} \\times F^*/F^{*2} \\rightarrow \\mathop{Br}(F) "
},
{
"math_id": 19,
"text": " (a,b)\\sqrt[n]{b} = (a,K(\\sqrt[n]{b})/K)\\sqrt[n]{b}"
},
{
"math_id": 20,
"text": "\\prod_p (a,b)_p=1"
},
{
"math_id": 21,
"text": "\\binom{a}{p} = (\\pi,a)_p"
},
{
"math_id": 22,
"text": "\\binom{a}{b}=\\binom{b}{a}\\prod_{p|n,\\infty}(a,b)_p"
}
] |
https://en.wikipedia.org/wiki?curid=5924217
|
59249680
|
Polyhedral complex
|
Math concept
In mathematics, a polyhedral complex is a set of polyhedra in a real vector space that fit together in a specific way. Polyhedral complexes generalize simplicial complexes and arise in various areas of polyhedral geometry, such as tropical geometry, splines and hyperplane arrangements.
Definition.
A polyhedral complex formula_0 is a set of polyhedra that satisfies the following conditions:
1. Every face of a polyhedron from formula_0 is also in formula_0.
2. The intersection of any two polyhedra formula_1 is a face of both formula_2 and formula_3.
Note that the empty set is a face of every polyhedron, and so the intersection of two polyhedra in formula_0 may be empty.
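As a hypothetical illustration, the two conditions can be checked mechanically for one-dimensional complexes, where every polyhedron is a closed interval [a, b] whose faces are itself, its endpoints, and the empty set (represented here by None).

```python
from itertools import combinations

def faces(iv):
    """Faces of a closed interval (a, b): itself, its endpoints, and None."""
    a, b = iv
    return {iv, (a, a), (b, b), None}

def intersect(p, q):
    """Intersection of two closed intervals, or None if empty."""
    if p is None or q is None:
        return None
    lo, hi = max(p[0], q[0]), min(p[1], q[1])
    return (lo, hi) if lo <= hi else None

def is_polyhedral_complex(K):
    K = set(K) | {None}                     # the empty set is a face of everything
    # Condition 1: closed under taking faces
    for iv in K:
        if iv is not None and not faces(iv) <= K:
            return False
    # Condition 2: pairwise intersections are faces of both polyhedra
    for p, q in combinations(K - {None}, 2):
        r = intersect(p, q)
        if r not in faces(p) or r not in faces(q):
            return False
    return True
```

For instance, the segments [0, 1] and [1, 2] together with their endpoints form a complex, while the overlapping segments [0, 2] and [1, 3] do not, since their intersection [1, 2] is a face of neither.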
Fans.
A fan is a polyhedral complex in which every polyhedron is a cone from the origin. Examples of fans include:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{K}"
},
{
"math_id": 1,
"text": "\\sigma_1, \\sigma_2 \\in \\mathcal{K}"
},
{
"math_id": 2,
"text": "\\sigma_1"
},
{
"math_id": 3,
"text": "\\sigma_2"
}
] |
https://en.wikipedia.org/wiki?curid=59249680
|
5926
|
Computation
|
Any type of calculation
A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computation are mathematical equation solving and the execution of computer algorithms.
Mechanical or electronic devices (or, historically, people) that perform computations are known as "computers". Computer science is a field that involves the study of computation.
Introduction.
The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing machine. Other (mathematically equivalent) definitions include Alonzo Church's "lambda-definability", Herbrand-Gödel-Kleene's "general recursiveness" and Emil Post's "1-definability".
Today, any formal statement or calculation that exhibits this quality of well-definedness is termed computable, while the statement or calculation itself is referred to as a computation.
Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages.
Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements.
Some examples of mathematical statements that are computable include:
Some examples of mathematical statements that are "not" computable include:
The physical process of computation.
Computation can be seen as a purely physical process occurring inside a closed physical system called a computer. Turing's 1937 proof, "On Computable Numbers, with an Application to the Entscheidungsproblem", demonstrated that there is a formal equivalence between computable statements and particular physical systems, commonly called computers. Examples of such physical systems are: Turing machines, human mathematicians following strict rules, digital computers, mechanical computers, analog computers and others.
Alternative accounts of computation.
The mapping account.
An alternative account of computation is found throughout the works of Hilary Putnam and others. Peter Godfrey-Smith has dubbed this the "simple mapping account." Gualtiero Piccinini's summary of this account states that a physical system can be said to perform a specific computation when there is a mapping between the state of that system and the computation such that the "microphysical states [of the system] mirror the state transitions between the computational states."
The semantic account.
Philosophers such as Jerry Fodor have suggested various accounts of computation with the restriction that semantic content be a necessary condition for computation (that is, what differentiates an arbitrary physical system from a computing system is that the operands of the computation represent something). This notion attempts to prevent the logical abstraction of the mapping account of pancomputationalism, the idea that everything can be said to be computing everything.
The mechanistic account.
Gualtiero Piccinini proposes an account of computation based on mechanical philosophy. It states that physical computing systems are types of mechanisms that, by design, perform physical computation, or the manipulation (by a functional mechanism) of a "medium-independent" vehicle according to a rule. "Medium-independence" requires that the property can be instantiated by multiple realizers and multiple mechanisms, and that the inputs and outputs of the mechanism also be multiply realizable. In short, medium-independence allows for the use of physical variables with properties other than voltage (as in typical digital computers); this is imperative in considering other types of computation, such as that which occurs in the brain or in a quantum computer. A rule, in this sense, provides a mapping among inputs, outputs, and internal states of the physical computing system.
Mathematical models.
In the theory of computation, a diversity of mathematical models of computation has been developed.
Typical mathematical models of computers are the following:
Giunti calls the models studied by computation theory "computational systems," and he argues that all of them are mathematical dynamical systems with discrete time and discrete state space. He maintains that a computational system is a complex object which consists of three parts. First, a mathematical dynamical system formula_0 with discrete time and discrete state space; second, a computational setup formula_1, which is made up of a theoretical part formula_2, and a real part formula_3; third, an interpretation formula_4, which links the dynamical system formula_0 with the setup formula_5.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "DS"
},
{
"math_id": 1,
"text": "H=\\left(F, B_F\\right)"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "B_F"
},
{
"math_id": 4,
"text": "I_{DS,H}"
},
{
"math_id": 5,
"text": "H"
}
] |
https://en.wikipedia.org/wiki?curid=5926
|
592613
|
YCbCr
|
Family of digital colour spaces
YCbCr, Y′CbCr, or Y Pb/Cb Pr/Cr, also written as YCBCR or Y′CBCR, is a family of color spaces used as a part of the color image pipeline in video and digital photography systems. Y′ is the luma component and CB and CR are the blue-difference and red-difference chroma components. Y′ (with prime) is distinguished from Y, which is luminance, meaning that light intensity is nonlinearly encoded based on gamma-corrected RGB primaries.
Y′CbCr color spaces are defined by a mathematical coordinate transformation from an associated set of RGB primaries and a white point. If the underlying RGB color space is absolute, the Y′CbCr color space is an absolute color space as well; conversely, if the RGB space is ill-defined, so is Y′CbCr. The transformation is defined in equations 32 and 33 of ITU-T H.273. That rule does not hold for the P3-D65 primaries that Netflix uses with the BT.2020-NCL matrix, meaning that the matrix was not derived from the primaries; since 2021, however, Netflix also allows BT.2020 primaries. The same happens with JPEG: it uses the BT.601 matrix, which is derived from the System M primaries, yet the primaries of most images are BT.709.
Rationale.
Cathode ray tube displays are driven by red, green, and blue voltage signals, but these RGB signals are not efficient as a representation for storage and transmission, since they have a lot of redundancy.
YCbCr and Y′CbCr are a practical approximation to color processing and perceptual uniformity, where the primary colors corresponding roughly to red, green and blue are processed into perceptually meaningful information. By doing this, subsequent image/video processing, transmission and storage can do operations and introduce errors in perceptually meaningful ways. Y′CbCr is used to separate out a luma signal (Y′) that can be stored with high resolution or transmitted at high bandwidth, and two chroma components (CB and CR) that can be bandwidth-reduced, subsampled, compressed, or otherwise treated separately for improved system efficiency.
One practical example would be decreasing the bandwidth or resolution allocated to "color" compared to "black and white", since humans are more sensitive to the black-and-white information (see image example to the right). This is called chroma subsampling.
CbCr.
YCbCr is sometimes abbreviated to YCC.
Typically the terms Y′CbCr, YCbCr, YPbPr and YUV are used interchangeably, leading to some confusion. The main difference is that YPbPr is used with analog images and YCbCr with digital images, leading to different scaling values for Umax and Vmax (in YCbCr both are formula_0) when converting to/from YUV. Y′CbCr and YCbCr differ due to the values being gamma corrected or not.
The equations below give a better picture of the common principles and general differences between these formats.
RGB conversion.
R'G'B' to Y′PbPr.
Y′CbCr signals (prior to scaling and offsets to place the signals into digital form) are called YPbPr, and are created from the corresponding gamma-adjusted RGB (red, green and blue) source using three defined constants KR, KG, and KB as follows:
formula_1
where KR, KG, and KB are ordinarily derived from the definition of the corresponding RGB space, and required to satisfy formula_2.
The equivalent matrix manipulation is often referred to as the "color matrix":
formula_3
And its inverse:
formula_4
Here, the prime (′) symbols mean gamma correction is being used; thus R′, G′ and B′ nominally range from 0 to 1, with 0 representing the minimum intensity (e.g., for display of the color black) and 1 the maximum (e.g., for display of the color white). The resulting luma (Y) value will then have a nominal range from 0 to 1, and the chroma (PB and PR) values will have a nominal range from -0.5 to +0.5. The reverse conversion process can be readily derived by inverting the above equations.
Y′PbPr to Y′CbCr.
When representing the signals in digital form, the results are scaled and rounded, and offsets are typically added. For example, the scaling and offset applied to the Y′ component per specification (e.g. MPEG-2) results in the value of 16 for black and the value of 235 for white when using an 8-bit representation. The standard has 8-bit digitized versions of CB and CR scaled to a different range of 16 to 240. Consequently, rescaling by the fraction (235-16)/(240-16) = 219/224 is sometimes required when doing color matrixing or processing in YCbCr space, resulting in quantization distortions when the subsequent processing is not performed using higher bit depths.
The scaling that results in the use of a smaller range of digital values than what might appear to be desirable for representation of the nominal range of the input data allows for some "overshoot" and "undershoot" during processing without necessitating undesirable clipping. This "headroom" and "toeroom" can also be used for extension of the nominal color gamut, as specified by xvYCC.
The value 235 accommodates a maximum overshoot of (255 - 235) / (235 - 16) = 9.1%, which is slightly larger than the theoretical maximum overshoot (Gibbs phenomenon) of about 8.9% of the maximum (black-to-white) step. The toeroom is smaller, allowing only 16 / 219 = 7.3% undershoot, which is less than the theoretical maximum undershoot of 8.9%. In addition, because values 0 and 255 are reserved in HDMI, the available room is actually slightly smaller.
Y′CbCr to xvYCC.
Since the equations defining Y′CbCr are formed in a way that rotates the entire nominal RGB color cube and scales it to fit within a (larger) YCbCr color cube, there are some points within the Y′CbCr color cube that "cannot" be represented in the corresponding RGB domain (at least not within the nominal RGB range). This causes some difficulty in determining how to correctly interpret and display some Y′CbCr signals. These out-of-range Y′CbCr values are used by xvYCC to encode colors outside the BT.709 gamut.
ITU-R BT.601 conversion.
The form of Y′CbCr that was defined for standard-definition television use in the ITU-R BT.601 (formerly CCIR 601) standard for use with digital component video is derived from the corresponding RGB space (ITU-R BT.470-6 System M primaries) as follows:
formula_5
From the above constants and formulas, the following can be derived for ITU-R BT.601.
Analog YPbPr from analog R'G'B' is derived as follows:
formula_6
Digital Y′CbCr (8 bits per sample) is derived from analog R'G'B' as follows:
formula_7
or simply componentwise
formula_8
The resultant signals range from 16 to 235 for Y′ (Cb and Cr range from 16 to 240); the values from 0 to 15 are called "footroom", while the values from 236 to 255 are called "headroom". The same quantization ranges (which differ between Y′ and Cb, Cr) also apply to BT.2020 and BT.709.
Alternatively, digital Y′CbCr can be derived from digital R'dG'dB'd (8 bits per sample, each using the full range with zero representing black and 255 representing white) according to the following equations:
formula_9
In the formula below, the scaling factors are multiplied by formula_10. This allows for the value 256 in the denominator, which can be calculated by a single bitshift.
formula_11
If the R'd G'd B'd digital source includes footroom and headroom, the footroom offset 16 needs to be subtracted first from each signal, and a scale factor of formula_12 needs to be included in the equations.
The inverse transform is:
formula_13
The inverse transform without any roundings (using values coming directly from ITU-R BT.601 recommendation) is:
formula_14
This form of Y′CbCr is used primarily for older standard-definition television systems, as it uses an RGB model that fits the phosphor emission characteristics of older CRTs.
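The derivation above can be sketched as a small Python helper. This is an illustrative sketch, not reference code; it assumes full-range 8-bit R′G′B′ input and the BT.601 constants KR = 0.299, KB = 0.114, and produces studio-swing 8-bit Y′CbCr.

```python
def rgb_to_ycbcr_bt601(r, g, b):
    """Studio-swing 8-bit Y'CbCr (BT.601) from full-range 8-bit R'G'B'."""
    kr, kb = 0.299, 0.114
    kg = 1.0 - kr - kb                          # 0.587
    r, g, b = r / 255.0, g / 255.0, b / 255.0   # normalize to [0, 1]
    y = kr * r + kg * g + kb * b                # analog luma in [0, 1]
    pb = 0.5 * (b - y) / (1.0 - kb)             # analog chroma in [-0.5, 0.5]
    pr = 0.5 * (r - y) / (1.0 - kr)
    return (round(16 + 219 * y),                # footroom offset + 219 excursion
            round(128 + 224 * pb),              # chroma centered on 128
            round(128 + 224 * pr))
```

For example, white (255, 255, 255) maps to (235, 128, 128) and black to (16, 128, 128), matching the quantization ranges described above.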
ITU-R BT.709 conversion.
A different form of Y′CbCr is specified in the ITU-R BT.709 standard, primarily for HDTV use. The newer form is also used in some computer-display oriented applications, as sRGB (though the matrix used for sRGB form of YCbCr, sYCC, is still BT.601). In this case, the values of Kb and Kr differ, but the formulas for using them are the same. For ITU-R BT.709, the constants are:
formula_15
This form of Y′CbCr is based on an RGB model that more closely fits the phosphor emission characteristics of newer CRTs and other modern display equipment.
The conversion matrices for BT.709 are these:
formula_16
The definitions of the R', G', and B' signals also differ between BT.709 and BT.601, and differ within BT.601 depending on the type of TV system in use (625-line as in PAL and SECAM or 525-line as in NTSC), and differ further in other specifications. In different designs there are differences in the definitions of the R, G, and B chromaticity coordinates, the reference white point, the supported gamut range, the exact gamma pre-compensation functions for deriving R', G' and B' from R, G, and B, and in the scaling and offsets to be applied during conversion from R'G'B' to Y′CbCr. So proper conversion of Y′CbCr from one form to the other is not just a matter of inverting one matrix and applying the other. In fact, when Y′CbCr is designed ideally, the values of KB and KR are derived from the precise specification of the RGB color primary signals, so that the luma (Y′) signal corresponds as closely as possible to a gamma-adjusted measurement of luminance (typically based on the CIE 1931 measurements of the response of the human visual system to color stimuli).
ITU-R BT.2020 conversion.
The ITU-R BT.2020 standard uses the same gamma function as BT.709. It defines:
For both, the coefficients derived from the primaries are:
formula_17
For NCL, the definition is classical: Y' = 0.2627R' + 0.6780 G' + 0.0593 B'; Cb = (B' - Y') / 1.8814; Cr = (R' - Y') / 1.4746. The encoding conversion can, as usual, be written as a matrix. The decoding matrix for BT.2020-NCL is this with 14 decimal places:
formula_18
The smaller values in the matrix are not rounded; they are precise values. For systems with limited precision (8 or 10 bit, for example) a lower-precision version of the above matrix could be used, for example, retaining only 6 digits after the decimal point.
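The chroma divisors 1.8814 and 1.4746 in the NCL definition are not independent constants; they equal 2(1 − KB) and 2(1 − KR), which fixes the nominal range of Cb and Cr at [−0.5, 0.5]. A short sketch (illustrative only) verifies this and derives the two non-trivial decoding coefficients for G′:

```python
# BT.2020 luma coefficients, derived from the primaries as given above
kr, kb = 0.2627, 0.0593
kg = 1.0 - kr - kb                  # 0.6780

# Requiring Cb and Cr to span [-0.5, 0.5] fixes the chroma divisors:
cb_div = 2.0 * (1.0 - kb)           # 1.8814, as in Cb = (B' - Y') / 1.8814
cr_div = 2.0 * (1.0 - kr)           # 1.4746, as in Cr = (R' - Y') / 1.4746

# Decoding (NCL): R' = Y' + cr_div*Cr and B' = Y' + cb_div*Cb; substituting
# into Y' = kr*R' + kg*G' + kb*B' and solving for G' gives
#   G' = Y' - (kb*cb_div/kg)*Cb - (kr*cr_div/kg)*Cr
g_cb = kb * cb_div / kg             # ~0.164553, the Cb coefficient for G'
g_cr = kr * cr_div / kg             # ~0.571353, the Cr coefficient for G'
```

These are exactly the unrounded "smaller values" that appear in the decoding matrix.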
The CL version, YcCbcCrc, codes:
The CL function can be used when preservation of luminance is of primary importance, or when "there is an expectation of improved coding efficiency for delivery." The specification refers to Report ITU-R BT.2246 on this matter. BT.2246 states that CL has improved compression efficiency and luminance preservation, but NCL will be more familiar to a staff that has previously handled color mixing and other production practices in HDTV YCbCr.
BT.2020 does not define PQ and thus HDR; these are defined in SMPTE ST 2084 and BT.2100. BT.2100 introduces the use of ICTCP, a semi-perceptual color space derived from linear RGB with good hue linearity. It is "near-constant luminance".
SMPTE 240M conversion.
The SMPTE 240M standard (used on the MUSE analog HD television system) defines YCC with these coefficients:
formula_19
The coefficients are derived from SMPTE 170M primaries and white point, as used in 240M standard.
JPEG conversion.
JFIF usage of JPEG supports a modified Rec. 601 Y′CbCr where Y′, CB and CR have the full 8-bit range of [0...255]. Below are the conversion equations expressed to six decimal digits of precision. (For ideal equations, see ITU-T T.871.)
Note that for the following formulae, the range of each input (R,G,B) is also the full 8-bit range of [0...255].
formula_20
And back:
formula_21
The above conversion is identical to sYCC when the input is given as sRGB, except that IEC 61966-2-1:1999/Amd1:2003 only gives four decimal digits.
JPEG also defines a "YCCK" format from Adobe for CMYK input. In this format, the "K" value is passed as-is, while CMY are used to derive YCbCr with the above matrix by assuming R=1-C, G=1-M, and B=1-Y. As a result, a similar set of subsampling techniques can be used.
formula_22
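The full-range JFIF conversion and its inverse can be sketched using the six-decimal constants above (illustrative helper functions, not reference code):

```python
def rgb_to_ycbcr_jfif(r, g, b):
    """Full-range JFIF Y'CbCr from full-range 8-bit RGB (six-decimal constants)."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def ycbcr_to_rgb_jfif(y, cb, cr):
    """Inverse of the above, also to six decimal digits."""
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b
```

With these truncated constants a round trip introduces an error well below one code value, so rounding to 8 bits normally recovers the original pixel.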
Coefficients for BT.470-6 System B, G primaries.
These coefficients have never been in practical use.
Chromaticity-derived luminance systems.
H.273 also describes constant and non-constant luminance systems which are derived strictly from primaries and white point, so that situations like sRGB/BT.709 default primaries of JPEG that use BT.601 matrix (that is derived from BT.470-6 System M) do not happen.
Numerical approximations.
Prior to the development of fast SIMD floating-point processors, most digital implementations of RGB → Y′UV used integer math, in particular fixed-point approximations. Approximation means that the precision of the numbers used (input data, output data and constant values) is limited, so a precision loss, typically of about the last binary digit, is accepted as a trade-off for improved computation speed.
Y′ values are conventionally shifted and scaled to the range [16, 235] (referred to as studio swing or "TV levels") rather than using the full range of [0, 255] (referred to as full swing or "PC levels"). This practice was standardized in SMPTE-125M in order to accommodate signal overshoots ("ringing") due to filtering. U and V values, which may be positive or negative, are summed with 128 to make them always positive, giving a studio range of 16–240 for U and V. (These ranges are important in video editing and production, since using the wrong range will result either in an image with "clipped" blacks and whites, or a low-contrast image.)
Approximate 8-bit matrices for BT.601.
These matrices round all factors to the closest 1/256 unit. As a result, only one 16-bit intermediate value is formed for each component, and a simple right-shift with rounding can take care of the division.
For studio-swing:
formula_23
For full-swing:
formula_24
Google's Skia used to use the above 8-bit full-range matrix, resulting in a slight greening effect on JPEG images encoded by Android devices, more noticeable on repeated saving. The issue was fixed in 2016, when the more accurate version was used instead. Due to SIMD optimizations in libjpeg-turbo, the accurate version is actually faster.
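A minimal fixed-point sketch of the studio-swing matrix above; the added 128 implements round-to-nearest before the shift (note that Python's `>>` is an arithmetic floor shift, as on most CPUs):

```python
def rgb_to_ycbcr_studio(r, g, b):
    # BT.601 approximation with coefficients rounded to the nearest 1/256;
    # one 16-bit intermediate per component, then >> 8 with rounding.
    y  = (( 66 * r + 129 * g +  25 * b + 128) >> 8) + 16
    cb = ((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128
    cr = ((112 * r -  94 * g -  18 * b + 128) >> 8) + 128
    return y, cb, cr
```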
Packed pixel formats and conversion.
RGB files are typically encoded in 8, 12, 16 or 24 bits per pixel. In these examples, we will assume 24 bits per pixel, which is written as RGB888. The standard byte format is simply codice_0.
YCbCr packed pixel formats are often referred to as "YUV". Such files can be encoded at 12, 16 or 24 bits per pixel. Depending on subsampling, the formats can largely be described as 4:4:4, 4:2:2, and 4:2:0p. The apostrophe after the Y is often omitted, as is the "p" (for planar) after YUV420p. In terms of actual file formats, 4:2:0 is the most common, as the data is the most reduced, and the file extension is usually ".YUV". The relation between data rate and subsampling (A:B:C) is determined by the ratio of Y samples to U and V samples. The notation "YUV" followed by three numbers is ambiguous: the numbers may refer to the subsampling (as in "YUV420") or to the bit depth of each channel (as in "YUV565"). The unambiguous way to refer to these formats is via their FourCC code.
To convert from RGB to YUV or back, it is simplest to use RGB888 and 4:4:4. For 4:1:1, 4:2:2 and 4:2:0, the bytes need to be converted to 4:4:4 first.
4:4:4.
4:4:4 is straightforward, as no pixel-grouping is done: the difference lies solely in how many bits each channel is given, and their arrangement. The basic scheme uses 3 bytes per pixel, with the order codice_1 (using "u" for Cb and "v" for Cr; the same applies to content below). In computers, it is more common to see a format that adds an alpha channel and goes codice_2, because groups of 32 bits are easier to deal with.
4:2:2.
4:2:2 groups 2 pixels together horizontally in each conceptual "container". Two main arrangements are:
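As an illustrative sketch (assuming the widely used YUY2 byte order Y0 U Y1 V, one such 4:2:2 packing), converting a packed 4:2:2 buffer back to 4:4:4 simply duplicates the shared chroma for both pixels of each container:

```python
def yuy2_to_yuv444(data):
    # Each 4-byte group holds two pixels: Y0 U Y1 V.
    # The shared U and V are reused for both pixels.
    pixels = []
    for i in range(0, len(data), 4):
        y0, u, y1, v = data[i:i + 4]
        pixels.append((y0, u, v))
        pixels.append((y1, u, v))
    return pixels
```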
4:1:1.
4:1:1 is rarely used. Pixels are in horizontal groups of 4.
4:2:0.
4:2:0 is very commonly used. The main formats are IMC2, IMC4, YV12, and NV12. All four of these formats are "planar", meaning that the Y, U, and V values are grouped together instead of interspersed. They all occupy 12 bits per pixel, assuming an 8-bit channel.
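As a sketch of how the 12 bits per pixel come about, the NV12 variant stores a full-resolution Y plane followed by a single interleaved UV plane at half resolution in both dimensions (the helper name is hypothetical):

```python
def nv12_layout(width, height):
    # Y plane: one byte per pixel; UV plane: one U and one V byte
    # per 2x2 block of pixels (half resolution in both axes).
    y_size = width * height
    uv_size = (width // 2) * (height // 2) * 2
    return y_size, y_size + uv_size  # UV plane offset, total bytes
```

For a 4×4 image this gives 24 bytes for 16 pixels, i.e. 12 bits per pixel.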
There are also "tiled" variants of planar formats.
References.
<templatestyles src="Reflist/styles.css" />
External links.
Software resources for packed pixels:
|
[
{
"math_id": 0,
"text": "\\tfrac{1}{2}"
},
{
"math_id": 1,
"text": "\\begin{align}\n Y' &= K_R \\cdot R' + K_G \\cdot G' + K_B \\cdot B'\\\\\n P_B &=\\frac12 \\cdot \\frac{B' - Y'}{1 - K_B}\\\\ \n P_R &=\\frac12 \\cdot \\frac{R' - Y'}{1 - K_R}\n\\end{align}"
},
{
"math_id": 2,
"text": "K_R + K_G + K_B = 1"
},
{
"math_id": 3,
"text": "\n\\begin{bmatrix}\nY' \\\\ P_B \\\\ P_R\n\\end{bmatrix}\n=\n\\begin{bmatrix}\nK_R & K_G & K_B \\\\\n-\\frac12 \\cdot \\frac{K_R}{1 - K_B} &-\\frac12 \\cdot \\frac{K_G}{1 - K_B} &\\frac12 \\\\\n\\frac12 & -\\frac12 \\cdot \\frac{K_G}{1 - K_R} & -\\frac12 \\cdot \\frac{K_B}{1 - K_R}\n\\end{bmatrix}\n\\begin{bmatrix}\nR' \\\\ G' \\\\ B'\n\\end{bmatrix}\n"
},
{
"math_id": 4,
"text": "\n\\begin{bmatrix}\nR' \\\\ G' \\\\ B'\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1 & 0 & 2-2 \\cdot K_R \\\\\n1 & -\\frac{K_B}{K_G} \\cdot (2-2 \\cdot K_B) & -\\frac{K_R}{K_G} \\cdot (2-2 \\cdot K_R) \\\\\n1 & 2-2 \\cdot K_B & 0\n\\end{bmatrix}\n\\begin{bmatrix}\nY' \\\\ P_B \\\\ P_R\n\\end{bmatrix}\n"
},
{
"math_id": 5,
"text": "\\begin{align}\n K_R &= 0.299\\\\\n K_G &= 0.587\\\\\n K_B &= 0.114\n\\end{align}"
},
{
"math_id": 6,
"text": "\\begin{align}\n Y' &= & 0.299 \\cdot R' &+& 0.587 \\cdot G' &+& 0.114 \\cdot B'\\\\\n P_B &= -& 0.168736 \\cdot R' &-& 0.331264 \\cdot G' &+& 0.5 \\cdot B'\\\\\n P_R &= & 0.5 \\cdot R' &-& 0.418688 \\cdot G' &-& 0.081312 \\cdot B'\n\\end{align}"
},
{
"math_id": 7,
"text": "\\begin{align}\n Y' &=& 16 &+& ( 65.481 \\cdot R' &+& 128.553 \\cdot G' &+& 24.966 \\cdot B')\\\\\n C_B &=& 128 &+& (-37.797 \\cdot R' &-& 74.203 \\cdot G' &+& 112.0 \\cdot B')\\\\\n C_R &=& 128 &+& (112.0 \\cdot R' &-& 93.786 \\cdot G' &-& 18.214 \\cdot B')\n\\end{align}"
},
{
"math_id": 8,
"text": "\\begin{align}\n (Y', C_B, C_R) &=& ( 16, 128, 128 ) + ( 219 \\cdot Y, 224 \\cdot P_B, 224 \\cdot P_R)\\\\\n\\end{align}"
},
{
"math_id": 9,
"text": "\\begin{align}\n Y' &=& 16 &+& \\frac{ 65.481 \\cdot R'_D}{255} &+& \\frac{128.553 \\cdot G'_D}{255} &+& \\frac{ 24.966 \\cdot B'_D}{255}\\\\\n C_B &=& 128 &-& \\frac{ 37.797 \\cdot R'_D}{255} &-& \\frac{ 74.203 \\cdot G'_D}{255} &+& \\frac{112.0 \\cdot B'_D}{255}\\\\\n C_R &=& 128 &+& \\frac{112.0 \\cdot R'_D}{255} &-& \\frac{ 93.786 \\cdot G'_D}{255} &-& \\frac{ 18.214 \\cdot B'_D}{255}\n\\end{align}"
},
{
"math_id": 10,
"text": "\\frac{256}{255}"
},
{
"math_id": 11,
"text": "\\begin{align}\n Y' &=& 16 &+& \\frac{ 65.738 \\cdot R'_D}{256} &+& \\frac{129.057 \\cdot G'_D}{256} &+& \\frac{ 25.064 \\cdot B'_D}{256}\\\\\n C_B &=& 128 &-& \\frac{ 37.945 \\cdot R'_D}{256} &-& \\frac{ 74.494 \\cdot G'_D}{256} &+& \\frac{112.439 \\cdot B'_D}{256}\\\\\n C_R &=& 128 &+& \\frac{112.439 \\cdot R'_D}{256} &-& \\frac{ 94.154 \\cdot G'_D}{256} &-& \\frac{ 18.285 \\cdot B'_D}{256}\n\\end{align}"
},
{
"math_id": 12,
"text": "\\frac{255}{219}"
},
{
"math_id": 13,
"text": "\\begin{align}\n R'_D &=& \\frac{298.082 \\cdot Y'}{256} &&&+& \\frac{408.583 \\cdot C_R}{256} &-& 222.921\\\\\n G'_D &=& \\frac{298.082 \\cdot Y'}{256} &-& \\frac{100.291 \\cdot C_B}{256} &-& \\frac{208.120 \\cdot C_R}{256} &+& 135.576\\\\\n B'_D &=& \\frac{298.082 \\cdot Y'}{256} &+& \\frac{516.412 \\cdot C_B}{256} &&&-& 276.836\n\\end{align}"
},
{
"math_id": 14,
"text": "\\begin{align}\n R'_D = \\frac{255}{219}\\cdot(Y'-16) && && && &+ \\frac{255}{224}\\cdot1.402 \\cdot(C_R-128)\\\\\n G'_D = \\frac{255}{219}\\cdot(Y'-16) &-& \\frac{255}{224}\\cdot1.772 && \\cdot\\frac{0.114}{0.587} &&\\cdot(C_B-128) &- \\frac{255}{224}\\cdot1.402 \\cdot\\frac{0.299}{0.587}\\cdot(C_R-128)\\\\\n B'_D = \\frac{255}{219}\\cdot(Y'-16) &+& \\frac{255}{224}\\cdot1.772 && &&\\cdot(C_B-128)\n\\end{align}"
},
{
"math_id": 15,
"text": "\\begin{align}\n K_B &= 0.0722 \\\\\n K_R &= 0.2126 \\\\\n (K_G &= 1-K_B-K_R = 0.7152)\n\\end{align}"
},
{
"math_id": 16,
"text": "\\begin{align}\n \\begin{bmatrix} Y' \\\\ C_B \\\\ C_R \\end{bmatrix}\n &=\n \\begin{bmatrix}\n 0.2126 & 0.7152 & 0.0722 \\\\\n -0.1146 & -0.3854 & 0.5 \\\\\n 0.5 & -0.4542 & -0.0458\n \\end{bmatrix}\n \\begin{bmatrix} R' \\\\ G' \\\\ B' \\end{bmatrix} \\\\\n\n \\begin{bmatrix} R' \\\\ G' \\\\ B' \\end{bmatrix}\n &=\n \\begin{bmatrix}\n 1 & 0 & 1.5748 \\\\\n 1 & -0.1873 & -0.4681 \\\\\n 1 & 1.8556 & 0\n \\end{bmatrix}\n \\begin{bmatrix} Y' \\\\ C_B \\\\ C_R \\end{bmatrix}\n\\end{align}"
},
{
"math_id": 17,
"text": "\\begin{align}\n K_B &= 0.0593 \\\\\n K_R &= 0.2627 \\\\\n(K_G &= 1-K_B-K_R = 0.6780)\n\\end{align}"
},
{
"math_id": 18,
"text": "\\begin{align}\n \\begin{bmatrix} R \\\\ G \\\\ B \\end{bmatrix}\n &=\n \\begin{bmatrix}\n 1 & 0 & 1.4746 \\\\\n 1 & -0.16455312684366 & -0.57135312684366 \\\\\n 1 & 1.8814 & 0\n \\end{bmatrix}\n \\begin{bmatrix} Y' \\\\ C_B \\\\ C_R \\end{bmatrix}\n\\end{align}"
},
{
"math_id": 19,
"text": "\\begin{align}\n K_B &= 0.087 \\\\\n K_R &= 0.212\n\\end{align}"
},
{
"math_id": 20,
"text": "\\begin{align}\n Y' &=& 0 &+ (0.299 & \\cdot R'_D) &+ (0.587 & \\cdot G'_D) &+ (0.114 & \\cdot B'_D)\\\\\n C_B &=& 128 & - (0.168736 & \\cdot R'_D) &- (0.331264 & \\cdot G'_D) &+ (0.5 & \\cdot B'_D)\\\\\n C_R &=& 128 &+ (0.5 & \\cdot R'_D) &- (0.418688 & \\cdot G'_D) &- (0.081312 & \\cdot B'_D)\n\\end{align}"
},
{
"math_id": 21,
"text": "\\begin{align}\n R'_D &=& Y' &&& + 1.402 & \\cdot (C_R-128) \\\\\n G'_D &=& Y' & - 0.344136 & \\cdot (C_B-128)& - 0.714136 & \\cdot (C_R-128) \\\\\n B'_D &=& Y' & + 1.772 & \\cdot (C_B-128)&\n\\end{align}"
},
{
"math_id": 22,
"text": "\\begin{align}\n K_B &= 0.0713\\\\\n K_R &= 0.2220\n\\end{align}"
},
{
"math_id": 23,
"text": "\n \\begin{bmatrix} Y' \\\\ C_B \\\\ C_R \\end{bmatrix}\n =\n \\frac{1}{256}\n \\begin{bmatrix}\n 66 & 129 & 25 \\\\\n -38 & -74 & 112 \\\\\n 112 & -94 & -18\n \\end{bmatrix}\n \\begin{bmatrix} R' \\\\ G' \\\\ B' \\end{bmatrix}\n +\n \\begin{bmatrix} 16 \\\\ 128 \\\\ 128 \\end{bmatrix}\n"
},
{
"math_id": 24,
"text": "\n \\begin{bmatrix} Y' \\\\ C_B \\\\ C_R \\end{bmatrix}\n =\n \\frac{1}{256}\n \\begin{bmatrix}\n 77 & 150 & 29 \\\\\n -43 & -84 & 127 \\\\\n 127 & -106 & -21\n \\end{bmatrix}\n \\begin{bmatrix} R' \\\\ G' \\\\ B' \\end{bmatrix}\n +\n \\begin{bmatrix} 0 \\\\ 128 \\\\ 128 \\end{bmatrix}\n"
}
] |
https://en.wikipedia.org/wiki?curid=592613
|
59263836
|
Three-photon microscopy
|
Three-photon microscopy (3PEF) is a high-resolution fluorescence microscopy technique based on nonlinear excitation. Unlike two-photon excitation microscopy, it uses three excitation photons. It typically uses lasers of 1300 nm or longer wavelength to excite fluorescent dyes by three simultaneously absorbed photons. The fluorescent dyes then emit one photon whose energy is slightly smaller than three times the energy of each incident photon. Compared to two-photon microscopy, three-photon microscopy suppresses fluorescence away from the focal plane as formula_0, which falls off much faster than the formula_1 of two-photon microscopy. In addition, three-photon microscopy employs near-infrared light, which is scattered less by tissue. These properties give three-photon microscopy higher resolution at depth than conventional microscopy.
Concept.
Three-photon excited fluorescence was first observed by Singh and Bradley in 1964 when they estimated the three-photon absorption cross section of naphthalene crystals. In 1996, Stefan W. Hell designed experiments to validate the feasibility of applying three-photon excitation to scanning fluorescence microscopy, which further proved the concept of three-photon excited fluorescence.
Three-photon microscopy shares a few similarities with two-photon excitation microscopy. Both employ the point-scanning method; both can image 3D samples by adjusting the position of the focus along the axial and lateral directions; and neither requires a pinhole to block out-of-focus light. However, three-photon microscopy differs from two-photon excitation microscopy in its point spread function, resolution, penetration depth, resistance to out-of-focus excitation, and degree of photobleaching.
In three-photon excitation, the fluorophore absorbs three photons almost simultaneously. The wavelength of the excitation laser is about 1200 nm or more in three-photon microscopy, with the emission wavelength slightly longer than one-third of the excitation wavelength. Three-photon microscopy achieves deeper tissue penetration because of the longer excitation wavelength and the higher-order nonlinear excitation. However, a three-photon microscope needs a higher-power laser because of the relatively small three-photon excitation cross-sections of the dyes, which are on the order of formula_2 — much smaller than typical two-photon excitation cross-sections of formula_3. The ultrashort pulses used are usually around 100 fs.
Resolution.
For three photon fluorescence scanning microscopy, the three dimensional intensity point-spread function (IPSF) can be denoted as,
formula_4,
where formula_5 denotes the 3-D convolution operation, formula_6 denotes the intensity sensitivity of an incoherent detector, and formula_7 and formula_8 denote the 3-D IPSFs of the objective lens and the collector lens in single-photon fluorescence, respectively. The 3-D IPSF formula_7 can be expressed as
formula_9,
where formula_10 is a Bessel function of the first kind of order zero. The axial and radial coordinates formula_11 and formula_12 are defined by
formula_13and
formula_14,
where formula_15 is the semi-aperture angle of the objective lens, formula_16 is the real defocus, and formula_17 is the radial coordinate.
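The defocus integral can be evaluated numerically. The sketch below (an illustration, not from the source) uses the standard paraxial form of the integral, with defocus phase exp(iuρ²/2), and SciPy's Bessel function:

```python
import numpy as np
from scipy.special import j0

def ipsf_1p(nu, u, n=4000):
    # I1(nu, u) = |integral_0^1 2 J0(nu*rho) exp(i u rho^2 / 2) rho d rho|^2,
    # evaluated with the midpoint rule.
    rho = (np.arange(n) + 0.5) / n
    vals = 2.0 * j0(nu * rho) * np.exp(1j * u * rho**2 / 2) * rho
    return abs(vals.sum() / n) ** 2

# At focus I1(0, 0) = 1; the three-photon IPSF scales as |I1(nu/3, u/3)|^3,
# so its falloff with defocus is much steeper than the single-photon case.
```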
Coupling with other multiphoton techniques.
Correlative images can be obtained using different multiphoton schemes such as 2PEF, 3PEF, and third-harmonic generation (THG), in parallel (since the corresponding wavelengths are different, they can be easily separated onto different detectors). A multichannel image is then constructed.
Compared with 2PEF, 3PEF generally shows less degradation of the signal-to-background ratio (SBR) with depth, even though the emitted signal is weaker than with 2PEF.
Development.
After three-photon excited fluorescence was observed by Singh and Bradley and further validated by Hell, Chris Xu and Watt W. Webb reported measurements of the excitation cross sections of several native chromophores and biological indicators, and implemented three-photon excited fluorescence in laser scanning microscopy of living cells. In November 1996, David Wokosin applied three-photon excitation fluorescence to imaging of fixed and in vivo biological specimens.
In the 2010s, three-photon microscopy was applied to deep-tissue imaging using excitation wavelengths beyond 1060 nm. In January 2013, Horton, Wang, Kobat and Xu demonstrated in vivo deep imaging of an intact mouse brain by applying the point-scanning method to a three-photon microscope in the long-wavelength window of 1700 nm. In February 2017, Dimitre Ouzounov, Tianyu Wang, and Chris Xu demonstrated deep activity imaging of GCaMP6-labeled neurons in the hippocampus of an intact, adult mouse brain using three-photon microscopy in the 1300 nm wavelength window. In May 2017, Rowlands applied wide-field three-photon excitation to the three-photon microscope for larger penetration depth. In October 2018, T. Wang, D. Ouzounov, and C. Xu imaged vasculature and GCaMP6 calcium activity with a three-photon microscope through the intact mouse skull.
Applications.
Three-photon microscopy has similar application fields to two-photon excitation microscopy, including neuroscience and oncology. However, compared to standard single-photon or two-photon excitation, three-photon excitation has several benefits: the longer wavelengths reduce light scattering and increase the penetration depth of the illumination beam into the sample, and the nonlinear nature of three-photon excitation confines the excitation to a smaller volume, reducing out-of-focus light and minimizing photobleaching of the biological sample. These advantages give three-photon microscopy an edge in visualizing in vivo and ex vivo tissue morphology and physiology at a cellular level deep within scattering tissue, and in rapid volumetric imaging. In a recent study, Xu demonstrated the potential of three-photon imaging for noninvasive studies of live biological systems: using three-photon fluorescence microscopy in a spectral excitation window of 1,320 nm, mouse brain structure and function were imaged through the intact skull with high spatial and temporal resolution (lateral and axial FWHM of 0.96 μm and 4.6 μm), large fields of view (hundreds of micrometers), and substantial depth (>500 μm). This work demonstrates the advantage of higher-order nonlinear excitation for imaging through a highly scattering layer, in addition to the previously reported advantage of 3PM for deep imaging of densely labeled samples. Localized isomerization of photoswitchable drugs "in vivo" using three-photon excitation at 1560 nm has also been reported and used to control neuronal activity in a pharmacologically specific way.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "1/z^4"
},
{
"math_id": 1,
"text": "1/z^2"
},
{
"math_id": 2,
"text": "10^{-82}\\text{cm}^6(s/\\text{photon})^2"
},
{
"math_id": 3,
"text": "10^{-49}\\text{cm}^4s/\\text{photon}"
},
{
"math_id": 4,
"text": "\nh_i(\\nu,u) = \\left|I_1(\\nu/3,u/3)\\right|^3I_2(\\nu,u) \\otimes_3 D\n"
},
{
"math_id": 5,
"text": "\\otimes_3"
},
{
"math_id": 6,
"text": "D"
},
{
"math_id": 7,
"text": " I_1(\\nu,u) "
},
{
"math_id": 8,
"text": " I_2(\\nu,u) "
},
{
"math_id": 9,
"text": "\nI_1(\\nu,u) = \\left|\\int_{0}^{1}2J_0(\\nu\\rho)\\exp(iu\\rho^2/2)\\rho d\\rho\\right|^2\n"
},
{
"math_id": 10,
"text": "J_0"
},
{
"math_id": 11,
"text": "u"
},
{
"math_id": 12,
"text": "\\nu"
},
{
"math_id": 13,
"text": "\nu = (8\\pi/\\lambda_f)z \\sin^2(\\alpha_0/2)\n"
},
{
"math_id": 14,
"text": "\n\\nu = (2\\pi/\\lambda_f)r\\ \\sin\\ \\alpha_0\n"
},
{
"math_id": 15,
"text": "\\alpha_0 "
},
{
"math_id": 16,
"text": "z "
},
{
"math_id": 17,
"text": "r"
}
] |
https://en.wikipedia.org/wiki?curid=59263836
|
592735
|
Hypocycloid
|
Curve traced by a point on a circle rolling within another circle
In geometry, a hypocycloid is a special plane curve generated by the trace of a fixed point on a small circle that rolls within a larger circle. As the radius of the larger circle is increased, the hypocycloid becomes more like the cycloid created by rolling a circle on a line.
History.
The 2-cusped hypocycloid, called the Tusi couple, was first described by the 13th-century Persian astronomer and mathematician Nasir al-Din al-Tusi in "Tahrir al-Majisti" ("Commentary on the Almagest"). The German painter and Renaissance theorist Albrecht Dürer described epitrochoids in 1525, and later Roemer and Bernoulli concentrated on specific hypocycloids, such as the astroid, in 1674 and 1691, respectively.
Properties.
If the smaller circle has radius r, and the larger circle has radius "R" = "kr", then the
parametric equations for the curve can be given by either:
formula_0
or:
formula_1
If k is an integer, then the curve is closed, and has k cusps (i.e., sharp corners, where the curve is not differentiable). Specifically, for "k" = 2 the curve is a straight line, and the circles are called the Tusi couple. Nasir al-Din al-Tusi was the first to describe these hypocycloids and their applications to high-speed printing.
If k is a rational number, say "k" = "p"/"q" expressed in simplest terms, then the curve has p cusps.
If k is an irrational number, then the curve never closes, and fills the space between the larger circle and a circle of radius "R" − 2"r".
Each hypocycloid (for any value of r) is a brachistochrone for the gravitational potential inside a homogeneous sphere of radius R.
The area enclosed by a hypocycloid is given by:
formula_2
The arc length of a hypocycloid is given by:
formula_3
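The closed-form area and arc length can be checked numerically against the first parametric form (a sketch, using the deltoid "k" = 3, "R" = 1 as an example):

```python
import math

def hypocycloid_point(R, k, theta):
    # First parametric form, with small-circle radius r = R/k
    r = R / k
    x = (R - r) * math.cos(theta) + r * math.cos((k - 1) * theta)
    y = (R - r) * math.sin(theta) - r * math.sin((k - 1) * theta)
    return x, y

R, k, n = 1.0, 3, 100_000
area = arc = 0.0
prev = hypocycloid_point(R, k, 0.0)
for i in range(1, n + 1):
    p = hypocycloid_point(R, k, 2 * math.pi * i / n)
    area += prev[0] * p[1] - p[0] * prev[1]  # shoelace formula
    arc += math.hypot(p[0] - prev[0], p[1] - prev[1])
    prev = p
area = abs(area) / 2
# Expected: area = (k-1)(k-2)/k^2 * pi R^2 = 2*pi/9, arc = 8(k-1)R/k = 16/3
```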
Examples.
The hypocycloid is a special kind of hypotrochoid, which is a particular kind of roulette.
A hypocycloid with three cusps is known as a deltoid.
A hypocycloid curve with four cusps is known as an astroid.
The hypocycloid with two "cusps" is a degenerate but still very interesting case, known as the Tusi couple.
Relationship to group theory.
Any hypocycloid with an integral value of "k", and thus "k" cusps, can move snugly inside another hypocycloid with "k"+1 cusps, such that the points of the smaller hypocycloid will always be in contact with the larger. This motion looks like 'rolling', though it is not technically rolling in the sense of classical mechanics, since it involves slipping.
Hypocycloid shapes can be related to special unitary groups, denoted SU("k"), which consist of "k" × "k" unitary matrices with determinant 1. For example, the allowed values of the sum of diagonal entries for a matrix in SU(3), are precisely the points in the complex plane lying inside a hypocycloid of three cusps (a deltoid). Likewise, summing the diagonal entries of SU(4) matrices gives points inside an astroid, and so on.
Thanks to this result, one can use the fact that SU("k") fits inside SU("k"+1) as a subgroup to prove that a hypocycloid with "k" cusps moves snugly inside one with "k"+1 cusps.
Derived curves.
The evolute of a hypocycloid is an enlarged version of the hypocycloid itself, while
the involute of a hypocycloid is a reduced copy of itself.
The pedal of a hypocycloid with pole at the center of the hypocycloid is a rose curve.
The isoptic of a hypocycloid is a hypocycloid.
Hypocycloids in popular culture.
Curves similar to hypocycloids can be drawn with the Spirograph toy. Specifically, the Spirograph can draw hypotrochoids and epitrochoids.
The Pittsburgh Steelers' logo, which is based on the Steelmark, includes three astroids (hypocycloids of four cusps). In his weekly NFL.com column "Tuesday Morning Quarterback," Gregg Easterbrook often refers to the Steelers as the Hypocycloids. Chilean soccer team CD Huachipato based their crest on the Steelers' logo, and as such features hypocycloids.
The first Drew Carey season of "The Price Is Right"'s set features astroids on the three main doors, giant price tag, and the turntable area. The astroids on the doors and turntable were removed when the show switched to high definition broadcasts starting in 2008, and only the giant price tag prop still features them today.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\begin{align}\n& x (\\theta) = (R - r) \\cos \\theta + r \\cos \\left(\\frac{R-r}{r} \\theta \\right) \\\\\n& y (\\theta) = (R - r) \\sin \\theta - r \\sin \\left( \\frac{R - r}{r} \\theta \\right)\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}\n& x (\\theta) = r (k - 1) \\cos \\theta + r \\cos \\left( (k - 1) \\theta \\right) \\\\\n& y (\\theta) = r (k - 1) \\sin \\theta - r \\sin \\left( (k - 1) \\theta \\right)\n\\end{align}"
},
{
"math_id": 2,
"text": "A = \\frac {(k - 1)(k - 2)} {k^2} \\pi R^2 = (k - 1)(k - 2) \\pi r^2 "
},
{
"math_id": 3,
"text": "s = \\frac {8(k - 1)} {k} R = 8(k - 1) r "
}
] |
https://en.wikipedia.org/wiki?curid=592735
|
59274182
|
Chirgwin–Coulson weights
|
In modern valence bond (VB) theory calculations, Chirgwin–Coulson weights (also called Mulliken weights) are the relative weights of a set of possible VB structures of a molecule. Related methods of finding the relative weights of valence bond structures are the Löwdin and the inverse weights.
Background.
For a wave function formula_0, where formula_1 are a linearly independent, orthonormal set of basis orbitals, the weight of a constituent orbital formula_2 would be formula_3, since the overlap integral formula_4 between two wave functions formula_5 would be 1 for formula_6 and 0 for formula_7. In valence bond theory, however, the generated structures are not necessarily orthogonal to each other, and oftentimes have substantial overlap. As such, when considering non-orthogonal constituent orbitals (i.e. orbitals with non-zero overlap), the off-diagonal terms in the overlap matrix are non-zero and must be included in determining the weight of a constituent orbital. A method of computing the weight of a constituent orbital formula_8 proposed by Chirgwin and Coulson is:
Chirgwin-Coulson Formula
formula_9
Application of the Chirgwin-Coulson formula to a molecular orbital yields the Mulliken population of the molecular orbital.
Rigorous formulation.
Determination of VB Structures.
Rumer's method.
A method of creating a linearly independent, complete set of valence bond structures for a molecule was proposed by Yuri Rumer. For a system with n electrons and n orbitals, Rumer's method involves arranging the orbitals in a circle and connecting the orbitals together with lines that do not intersect one another. Covalent, or uncharged, structures can be created by connecting all of the orbitals with one another. Ionic, or charged, structures for a given atom can be determined by assigning a charge to the molecule and then following Rumer's method. For the case of butadiene, the 20 possible Rumer structures are shown, where 1 and 2 are the covalent structures, 3-14 are the monoionic structures, and 15-20 are the diionic structures. The resulting VB structures can be represented by a linear combination of determinants formula_10, where a letter without an overline indicates an electron with formula_11 spin, while a letter with an overline indicates an electron with formula_12 spin. The VB structure for 1, for example, would be a linear combination of the determinants formula_13, formula_14, formula_15, and formula_16. For a monoionic species, the VB structure for 11 would be a linear combination of formula_17 and formula_18, namely:
formula_19
Matrix representation of VB structures.
An arbitrary VB structure formula_20 containing formula_21 electrons, represented by the electron indices formula_22, and formula_21 orbitals, represented by formula_23, can be represented by the following Slater determinant:
formula_24
where formula_25 and formula_26 represent formula_11 or formula_12 spin on the formula_27 electron, respectively. For the case of a two-electron system with orbitals formula_28 and formula_29, the VB structure formula_30 can be represented as: formula_31
Evaluating the determinant yields:
formula_32
Definition of Chirgwin–Coulson weights.
Given a wave function formula_0 where formula_33 is a complete, linearly independent set of VB structures and formula_34 is the coefficient of each structure, the Chirgwin-Coulson weight formula_35 of a VB structure formula_36 can be computed in the following manner:
formula_37
where formula_38 is the overlap matrix, satisfying formula_39.
Other methods of computing the weights of VB structures include Löwdin weights, where formula_40, and inverse weights, where formula_41, with formula_42 being a normalization factor defined by formula_43. The use of Löwdin and inverse weights is appropriate when the Chirgwin–Coulson weights either exceed 1 or are negative.
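A minimal numerical sketch of the Chirgwin–Coulson definition (the structure coefficients and overlap matrix below are made-up illustrative values, not from any actual calculation):

```python
import numpy as np

def chirgwin_coulson_weights(C, M):
    # W_i = C_i * sum_j M_ij C_j, renormalized so the weights sum to 1
    # (a Mulliken-style partition of <Psi|Psi>).
    C = np.asarray(C, dtype=float)
    M = np.asarray(M, dtype=float)
    W = C * (M @ C)
    return W / W.sum()

# Two non-orthogonal structures with overlap 0.5:
C = [0.9, 0.3]
M = [[1.0, 0.5],
     [0.5, 1.0]]
weights = chirgwin_coulson_weights(C, M)
```

Note how the shared overlap term is split between the two structures, so the first structure carries most, but not all, of the weight.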
Half determinant decomposition of molecular orbitals.
Given a set of molecular orbitals, formula_44, for a molecule, consider the determinant of a given orbital population, represented by formula_45. The determinant can be written as the following Slater determinant:
formula_46
Computing the determinant explicitly by multiplying out this expression can be a computationally difficult task, given that each molecular orbital is composed of a combination of atomic orbitals. On the other hand, because the determinant of a product of matrices is equal to the product of determinants, the determinant can be regrouped into half-determinants, one of which contains only electrons with formula_11 spin and the other only electrons with formula_47 spin, that is: formula_48 where formula_49 and formula_50.
Note that any given molecular orbital formula_51 can be written as a linear combination of atomic orbitals formula_52, that is for each formula_2, there exist formula_53 such that formula_54. As such, the half determinant formula_55 can be further decomposed into the half determinants for an ordering of atomic orbitals formula_56 corresponding to a VB structure formula_57. As such, the molecular orbital formula_2 can be represented as a combination of the half determinants of the atomic orbitals, formula_58. The coefficient formula_59 can be determined by evaluating the following matrix:
formula_60
The same method can be used to evaluate the half determinant for the formula_12 electrons, formula_61. As such, the determinant formula_62 can be expressed as formula_63, where formula_64 index across all possible VB structures.
Sample computations for simple molecules.
Computations for the hydrogen molecule.
The hydrogen molecule can be considered to be a linear combination of two <chem>H</chem> formula_65 orbitals, indicated as formula_66 and formula_67. The possible VB structures for <chem>H_2</chem> are the two covalent structures, formula_68 and formula_69, indicated as 1 and 2 respectively, as well as the ionic structures formula_70 and formula_71, indicated as 3 and 4 respectively, shown below.
Because structures 1 and 2 both represent covalent bonding in the hydrogen molecule and exchanging the electrons of structure 1 yields structure 2, the two covalent structures can be combined into one wave function. As such, the Heitler-London model for bonding in <chem>H_2</chem>, formula_72, can be used in place of the VB structures formula_68 and formula_73:
formula_74
Where the negative sign arises from the antisymmetry of electron exchange. As such, the wave function for the <chem>H_2</chem> molecule, formula_75, can be considered to be a linear combination of the Heitler-London structure and the two ionic valence bond structures.
formula_76
The overlap matrix between the three valence bond configurations formula_72, formula_70, and formula_71 is given in the output of valence bond calculations. A sample output is given below:
formula_77
Finding the eigenvectors of the matrix formula_78, where formula_79 is the Hamiltonian and formula_80 is the energy due to orbital overlap, yields the VB-vector formula_81, which satisfies:
formula_82
Solving for the VB-vector formula_81 using density functional theory yields the coefficients formula_83 and formula_84. Thus, the Coulson-Chrigwin weights can be computed:
formula_85
formula_86
To check for consistency, the inverse weights can be computed by first determining the inverse of the overlap matrix:
formula_87
Next, the normalization constant formula_88 can be determined:
formula_89
The final weights are: formula_90, and formula_91.
Informally, the computed weights indicate that the wave function for the <chem>H_2</chem> molecule has a minor contribution from an ionic species not predicted from a strictly MO model for bonding.
Computations for ozone.
Determining the relative weights of each resonance structure of ozone requires, first, the determination of the possible VB structures for <chem>O_3</chem>. Considering only the formula_92 orbitals of oxygen, and labeling the formula_92 orbital on the formula_93 oxygen as formula_94, <chem>O_3</chem> has 6 possible VB structures by Rumer's method. Assuming no atomic orbital overlap, the formula_95 structure can be represented by the determinants formula_96:
formula_97
formula_98
formula_99
formula_100
formula_101
formula_102
<chem>O_3</chem> has the following three molecular orbitals: one where all of the oxygen formula_92 orbitals are in phase, one with a node on the central oxygen, and one where all of the oxygen formula_92 orbitals are out of phase, shown below:
The wave functions for each of the molecular orbitals formula_104 can be written as a linear combination of each of the oxygen formula_92 orbitals as follows:
formula_105
where formula_53 indicates the coefficient of formula_106 in a molecular orbital formula_104. Consider the VB contributions for the ground state of <chem>O_3</chem>, formula_107. Using the method of half determinants, the half determinants for the ground state are:
formula_108
formula_109
formula_110
By the method of half determinant expansion, the coefficient, formula_111, for a structure formula_112 is:
formula_113
Which implies that the ground state has the following coefficients:
formula_114
Given the following overlap matrix for the half determinants:
formula_115
The overlap between two VB structures represented by the product of two VB determinants formula_116 can be evaluated by finding the product of the overlaps between the two half determinants, that is:
formula_117
For example, the overlap between the orbitals formula_118 and formula_119 would be:
formula_120
The weights of the standard Lewis structures for <chem>O_3</chem> would be formula_121 and formula_122. The weights can be found by first computing the Chirgwin–Coulson weights for their constituent determinants:
formula_123
formula_124
The weights for the standard Lewis structures are the sums of the weights of their constituent determinants. As such:
formula_125
formula_126
This compares well with reported Chirgwin–Coulson weights of 0.226 for the standard Lewis structure of ozone in the ground state.
For the diradical state, formula_127, the weight is:
formula_128
formula_129
formula_130
This also compares favorably with reported Chirgwin–Coulson weights of 0.213 for the diradical state of ozone in the ground state.
Applications to main group compounds.
Borazine.
Borazine, (chemical formula <chem>B_3N_3H_6</chem>) is a cyclic, planar compound that is isoelectronic with benzene. Given the lone pair in the nitrogen p orbital out of the plane and the empty p orbital of boron, the following resonance structure is possible:
However, VB calculations using a double-zeta D95 basis set indicate that the predominant resonance structures are the structure with all three lone pairs on the nitrogen (labeled 1 below) and the six resonance structures with one double bond between boron and nitrogen (labeled 2 below). The relative weights of the two structures are 0.17 and 0.08 respectively.
By contrast, the dominant resonance structures of benzene are the two Kekule structures, with weight 0.15, and 12 monozwitterionic structures with weight 0.03. The data, together, indicate that, despite the similarity in appearance and structure, the electrons on borazine are less delocalized than those on benzene.
S2N2.
Disulfur dinitride is a square planar compound that contains a 6-electron conjugated formula_103 system. The primary diradical resonance structures (1 and 2) and a secondary zwitterionic structure (3) are shown below:
Valence bond calculations using the Dunning's D95 full double-zeta basis set indicate that the dominant resonance structure is the singlet diradical with a long nitrogen-nitrogen bond (structure 1), with Chirgwin–Coulson weight 0.47. This value is substantially higher than the weight for the singlet diradical centered on the sulfurs (structure 2), which has a Chirgwin–Coulson weight of 0.06. This result corresponds nicely with the general rules regarding Lewis structures, namely that formal charges ought to be minimized, and contrasts with earlier computational results indicating that 1 is the dominant structure.
|
[
{
"math_id": 0,
"text": "\\Psi=\\sum\\limits_{i}C_i\\Phi_i"
},
{
"math_id": 1,
"text": "\\Phi_1, \\Phi_2, \\dots, \\Phi_n"
},
{
"math_id": 2,
"text": "\\Psi_i"
},
{
"math_id": 3,
"text": "C_i^2"
},
{
"math_id": 4,
"text": "S_{ij}"
},
{
"math_id": 5,
"text": "\\Psi_i, \\Psi_j"
},
{
"math_id": 6,
"text": "i=j"
},
{
"math_id": 7,
"text": "i\\neq j"
},
{
"math_id": 8,
"text": "\\Phi_i"
},
{
"math_id": 9,
"text": " \\begin{align}\nW_i &=C_i\\langle\\Phi_i \\vert\\Psi\\rangle=C_i\\sum\\limits_{j}C_j\\langle\\Psi_i \\vert\\Psi_j\\rangle\\\\\n & =\\sum\\limits_{j}C_iC_jS_{ij}\n\\end{align}\n"
},
{
"math_id": 10,
"text": "|a\\overline{b}c\\overline{d}|"
},
{
"math_id": 11,
"text": "\\alpha"
},
{
"math_id": 12,
"text": "\\beta"
},
{
"math_id": 13,
"text": "|1\\overline{2}3\\overline{4}|"
},
{
"math_id": 14,
"text": "|2\\overline{1}3\\overline{4}|"
},
{
"math_id": 15,
"text": "|1\\overline{2}4\\overline{3}|"
},
{
"math_id": 16,
"text": "|2\\overline{1}4\\overline{3}|"
},
{
"math_id": 17,
"text": "|1\\overline{2}4\\overline{4}|"
},
{
"math_id": 18,
"text": "|2\\overline{1}4\\overline{4}|"
},
{
"math_id": 19,
"text": "\\phi_{11}=\\frac{1}{\\sqrt{2}}(|1\\overline{2}4\\overline{4}|+|2\\overline{1}4\\overline{4}|)"
},
{
"math_id": 20,
"text": "|\\varphi_1\\overline{\\varphi_2}\\varphi_3\\overline{\\varphi_4}\\dots|"
},
{
"math_id": 21,
"text": "n"
},
{
"math_id": 22,
"text": "1,2,\\dots,n"
},
{
"math_id": 23,
"text": "\\varphi_1,\\varphi_2,\\dots, \\varphi_n"
},
{
"math_id": 24,
"text": "|\\varphi_1\\overline{\\varphi_2}\\varphi_3\\overline{\\varphi_4}\\dots|=\\frac{1}{\\sqrt{n!}}\n\\begin{vmatrix}\n\\varphi_1(1)\\alpha(1) & \\varphi_1(2)\\alpha(2) & \\dots & \\varphi_1(n)\\alpha(n)\\\\\n\\varphi_2(1)\\beta(1) & \\varphi_2(2)\\beta(2) & \\dots & \\varphi_2(n)\\beta(n)\\\\\n\\vdots & \\vdots & \\ddots & \\vdots\n\\end{vmatrix}\n"
},
{
"math_id": 25,
"text": "\\alpha(k)"
},
{
"math_id": 26,
"text": "\\beta(k)"
},
{
"math_id": 27,
"text": "k^\\text{th}"
},
{
"math_id": 28,
"text": "a"
},
{
"math_id": 29,
"text": "b"
},
{
"math_id": 30,
"text": "|a\\overline{b}|"
},
{
"math_id": 31,
"text": "|a\\overline{b}|=\\frac{1}{\\sqrt{2}}\n\\begin{vmatrix}\na(1)\\alpha(1) & a(2)\\alpha(2)\\\\\nb(1)\\beta(1) & b(2)\\beta(2)\n\\end{vmatrix}\n\n"
},
{
"math_id": 32,
"text": "|a\\overline{b}|=\\frac{1}{\\sqrt{2}}(a(1)b(2)[\\alpha(1)\\beta(2)]-a(2)b(1)[\\alpha(2)\\beta(1)])"
},
{
"math_id": 33,
"text": "\\Phi_1,\\Phi_2,\\dots,\\Phi_N"
},
{
"math_id": 34,
"text": "C_k"
},
{
"math_id": 35,
"text": "W_K"
},
{
"math_id": 36,
"text": "\\Phi_K"
},
{
"math_id": 37,
"text": "W_i=\\sum\\limits_{j}C_iC_j\\langle\\Phi_i|\\Phi_j\\rangle=\\sum\\limits_{j}C_iC_jS_{ij}"
},
{
"math_id": 38,
"text": "S"
},
{
"math_id": 39,
"text": "\\langle\\Phi_i|\\Phi_j\\rangle=S_{ij}"
},
{
"math_id": 40,
"text": "W_i^{\\text{Lowdin}}=\\sum\\limits_{j,k}S_{ij}^{1/2}C_jS_{ik}^{1/2}C_k\n"
},
{
"math_id": 41,
"text": "W_i^{\\text{inverse}}=\\frac{1}{N}\\bigg(\\frac{C^2_i}{(S^{-1})_{ii}}\\bigg)\n"
},
{
"math_id": 42,
"text": "N\n"
},
{
"math_id": 43,
"text": "N=\\sum\\limits_i\\frac{C_i^2}{(S^{-1})_{ii}}\n"
},
{
"math_id": 44,
"text": "\\Psi_1,\\Psi_2,\\dots,\\Psi_m"
},
{
"math_id": 45,
"text": "D_{\\text{MO}}"
},
{
"math_id": 46,
"text": "D_{\\text{MO}}=|\\Psi_1\\overline\\Psi_1\\Psi_2\\overline\\Psi_2\\dots|"
},
{
"math_id": 47,
"text": "\\beta "
},
{
"math_id": 48,
"text": "D_{\\text{MO}}=h^\\alpha_{\\text{MO}}h^\\beta_{\\text{MO}}"
},
{
"math_id": 49,
"text": "h^\\alpha_{\\text{MO}}=|\\phi_1\\phi_2\\dots|"
},
{
"math_id": 50,
"text": "h^\\beta_{\\text{MO}}=|\\overline\\phi_1\\overline\\phi_2\\dots|"
},
{
"math_id": 51,
"text": "\\Psi_{\\text{MO}}"
},
{
"math_id": 52,
"text": "\\phi_1,\\phi_2,\\dots,\\phi_n"
},
{
"math_id": 53,
"text": "C_{ij}"
},
{
"math_id": 54,
"text": "\\Psi_i=\\sum\\limits_{j}C_{ij}\\phi_j"
},
{
"math_id": 55,
"text": "h^\\alpha_\\text{MO}"
},
{
"math_id": 56,
"text": "h^\\alpha_r=|\\phi_1,\\phi_2,\\dots,\\phi_n|"
},
{
"math_id": 57,
"text": "r"
},
{
"math_id": 58,
"text": "h^\\alpha_\\text{MO}=\\sum\\limits_rC^\\alpha_rh^\\alpha_r"
},
{
"math_id": 59,
"text": "C_r^\\alpha"
},
{
"math_id": 60,
"text": "C_r^\\alpha=\n\\begin{vmatrix}\nC_{11} & C_{21} & \\dots C_{n1}\\\\\nC_{12} & C_{22} & \\dots C_{n2}\\\\\n\\vdots & \\vdots & \\ddots\\\\\nC_{1n} & C_{2n} & \\dots C_{nn}\\\\\n\\end{vmatrix}"
},
{
"math_id": 61,
"text": "h^\\beta_\\text{MO}"
},
{
"math_id": 62,
"text": "D_\\text{MO}"
},
{
"math_id": 63,
"text": "D_\\text{MO}=\\sum\\limits_{r,s}C^\\alpha_rC^\\beta_rh^\\alpha_rh^\\beta_s"
},
{
"math_id": 64,
"text": "r, s"
},
{
"math_id": 65,
"text": "1s"
},
{
"math_id": 66,
"text": "\\varphi_1"
},
{
"math_id": 67,
"text": "\\varphi_2"
},
{
"math_id": 68,
"text": "|\\varphi_1\\overline{\\varphi_2}|"
},
{
"math_id": 69,
"text": "|\\varphi_2\\overline{\\varphi_1}|"
},
{
"math_id": 70,
"text": "|\\varphi_1\\overline{\\varphi_1}|"
},
{
"math_id": 71,
"text": "|\\varphi_2\\overline{\\varphi_2}|"
},
{
"math_id": 72,
"text": "\\Phi_{HL}"
},
{
"math_id": 73,
"text": "|\\overline{\\varphi_1}\\varphi_2|"
},
{
"math_id": 74,
"text": "\\Phi_{HL}=|\\varphi_1\\overline{\\varphi_2}|- |\\overline{\\varphi_1}\\varphi_2|"
},
{
"math_id": 75,
"text": "\\Psi_{\\text{H}_2}"
},
{
"math_id": 76,
"text": "\\Psi_{\\text{H}_2}=C_1\\Phi_{HL}+C_2|\\varphi_1\\overline{\\varphi_1}|+C_3|\\varphi_2\\overline{\\varphi_2}|\n"
},
{
"math_id": 77,
"text": "S=\n\\begin{vmatrix}\nS_{11}\\\\\nS_{21} & S_{22}\\\\\nS_{31} & S_{32} & S_{33}\\\\\n\\end{vmatrix}\n=\n\\begin{vmatrix}\n1 \\\\\n0.77890423 & 1\\\\\n0.77890423 & 0.43543258 & 1\\\\\n\\end{vmatrix}"
},
{
"math_id": 78,
"text": "H-ES=0"
},
{
"math_id": 79,
"text": "H"
},
{
"math_id": 80,
"text": "E"
},
{
"math_id": 81,
"text": "\\vec{c}"
},
{
"math_id": 82,
"text": "\\Psi_H=\\vec{c}\\{\\Phi_{HL},|\\varphi_1\\overline{\\varphi_1}|,|\\varphi_2\\overline{\\varphi_2}|\\}=C_1\\Phi_{HL}+C_2|\\varphi_1\\overline{\\varphi_1}|+C_3|\\varphi_2\\overline{\\varphi_2}|"
},
{
"math_id": 83,
"text": "C_1=0.787469"
},
{
"math_id": 84,
"text": "C_2=C_3=0.133870"
},
{
"math_id": 85,
"text": "W_1=C_1^2S_{11}+C_1C_2S_{12}+C_1C_3S_{13}=0.784\n"
},
{
"math_id": 86,
"text": "W_2=W_3=0.108"
},
{
"math_id": 87,
"text": "S^{-1}=\n\\begin{vmatrix}\n6.46449\\\\\n-3.5078 & 3.13739\\\\ \n-3.5078 & 1.36612 & 3.13739\\\\ \n\\end{vmatrix}"
},
{
"math_id": 88,
"text": "N"
},
{
"math_id": 89,
"text": "N=\\sum\\limits_{K}\\frac{C_K^2}{(S^{-1})_{KK}}=0.0185"
},
{
"math_id": 90,
"text": "W_1=\\frac{1}{N}\\bigg(\\frac{C_1^2}{(S^{-1})_{11}}\\bigg)=0.803"
},
{
"math_id": 91,
"text": "W_2=W_3=0.098"
},
{
"math_id": 92,
"text": "p"
},
{
"math_id": 93,
"text": "i^{\\text{th}}"
},
{
"math_id": 94,
"text": "\\phi_i"
},
{
"math_id": 95,
"text": "k^{\\text{th}}"
},
{
"math_id": 96,
"text": "\\Phi_k"
},
{
"math_id": 97,
"text": "\\Phi_1=\\frac{1}{\\sqrt{2}}(|\\phi_2\\overline{\\phi_2}\\phi_1\\overline{\\phi_3}|+|\\phi_2\\overline{\\phi_2}\\phi_3\\overline{\\phi_1}|)"
},
{
"math_id": 98,
"text": "\\Phi_2=\\frac{1}{\\sqrt{2}}(|\\phi_1\\overline{\\phi_1}\\phi_2\\overline{\\phi_3}|+|\\phi_1\\overline{\\phi_1}\\phi_3\\overline{\\phi_2}|)"
},
{
"math_id": 99,
"text": "\\Phi_3=\\frac{1}{\\sqrt{2}}(|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|+|\\phi_2\\overline{\\phi_1}\\phi_3\\overline{\\phi_3}|)"
},
{
"math_id": 100,
"text": "\\Phi_4=|\\phi_1\\overline{\\phi_1}\\phi_2\\overline{\\phi_2}|"
},
{
"math_id": 101,
"text": "\\Phi_5=|\\phi_2\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|"
},
{
"math_id": 102,
"text": "\\Phi_6=|\\phi_1\\overline{\\phi_1}\\phi_3\\overline{\\phi_3}|"
},
{
"math_id": 103,
"text": "\\pi"
},
{
"math_id": 104,
"text": "\\pi_i"
},
{
"math_id": 105,
"text": "\\begin{vmatrix}\n\\pi_1\\\\\n\\pi_2\\\\\n\\pi_3\\\\\n\\end{vmatrix}=\n\\begin{vmatrix}\nC_{11} & C_{12} & C_{13}\\\\\nC_{21} & C_{22} & C_{23} \\\\\nC_{31} & C_{32} & C_{33}\\\\\n\\end{vmatrix}\n\\begin{vmatrix}\n\\phi_1\\\\\n\\phi_2\\\\\n\\phi_3\\\\\n\\end{vmatrix}=\n\\begin{vmatrix}\n0.368 & 0.764 & 0.368\\\\\n0.710 & 0 & -0.710 \\\\\n0.614 & -0.671 & 0.614\\\\\n\\end{vmatrix}\n\\begin{vmatrix}\n\\phi_1\\\\\n\\phi_2\\\\\n\\phi_3\\\\\n\\end{vmatrix}"
},
{
"math_id": 106,
"text": "\\phi_j"
},
{
"math_id": 107,
"text": "|\\pi_1\\overline{\\pi_1}\\pi_2\\overline{\\pi_2}|"
},
{
"math_id": 108,
"text": "|\\phi_1\\phi_2|_g=\n\\begin{Vmatrix}\nC_{11} & C_{12} \\\\\nC_{21} & C_{22} \\\\\n\\end{Vmatrix}=-0.542"
},
{
"math_id": 109,
"text": "|\\phi_2\\phi_3|_g=\n\\begin{Vmatrix}\nC_{12} & C_{13} \\\\\nC_{22} & C_{23} \\\\\n\\end{Vmatrix}=-0.542"
},
{
"math_id": 110,
"text": "|\\phi_1\\phi_3|_g=\n\\begin{Vmatrix}\nC_{11} & C_{13} \\\\\nC_{21} & C_{23} \\\\\n\\end{Vmatrix}=\n-0.523"
},
{
"math_id": 111,
"text": "C_i"
},
{
"math_id": 112,
"text": "|\\phi_i\\overline{\\phi_j}\\phi_k\\overline{\\phi_l}|"
},
{
"math_id": 113,
"text": "|\\phi_i\\overline{\\phi_j}\\phi_k\\overline{\\phi_l}|=|\\phi_i\\phi_k||\\phi_j\\phi_l|"
},
{
"math_id": 114,
"text": "\\begin{align}\n\\Psi_g&=-0.416\\Phi_1+0.400\\Phi_2+0.400\\Phi_3+0.294\\Phi_4+0.294\\Phi_5+0.274\\Phi_6\\\\\n&=-0.294(|\\phi_2\\overline{\\phi_2}\\phi_1\\overline{\\phi_3}|+|\\phi_2\\overline{\\phi_2}\\phi_3\\overline{\\phi_1}|)+0.283( |\\phi_1\\overline{\\phi_1}\\phi_2\\overline{\\phi_3}|+|\\phi_1\\overline{\\phi_1}\\phi_3\\overline{\\phi_2}|)+0.283(|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|+|\\phi_2\\overline{\\phi_1}\\phi_3\\overline{\\phi_3}|)+\\\\\n&\\quad\\quad 0.294|\\phi_1\\overline{\\phi_1}\\phi_2\\overline{\\phi_2}|+0.294|\\phi_2\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|+0.274|\\phi_1\\overline{\\phi_1}\\phi_3\\overline{\\phi_3}|\n\\end{align}"
},
{
"math_id": 115,
"text": "S=\\begin{vmatrix}\n\\langle|\\phi_1\\phi_2|||\\phi_1\\phi_2|\\rangle\\\\\n\\langle|\\phi_1\\phi_2|||\\phi_1\\phi_3|\\rangle & \\langle|\\phi_1\\phi_3|||\\phi_1\\phi_3|\\rangle\\\\\n\\langle|\\phi_1\\phi_2|||\\phi_2\\phi_3|\\rangle & \\langle|\\phi_1\\phi_3|||\\phi_2\\phi_3|\\rangle &\n\\langle|\\phi_2\\phi_3|||\\phi_2\\phi_3|\\rangle\n\\end{vmatrix}=\n\\begin{vmatrix}\n0.98377\\\\\n0.12634 & 0.99993\\\\\n0.00810 & 0.12634& 0.98377\n\\end{vmatrix}"
},
{
"math_id": 116,
"text": "\\langle|\\phi_a\\overline{\\phi_b}\\phi_c\\overline{\\phi_d}|||\\phi_w\\overline{\\phi_x}\\phi_y\\overline{\\phi_z}|\\rangle"
},
{
"math_id": 117,
"text": "\\langle|\\phi_a\\overline{\\phi_b}\\phi_c\\overline{\\phi_d}|||\\phi_w\\overline{\\phi_x}\\phi_y\\overline{\\phi_z}|\\rangle=(\\langle|\\phi_a\\phi_c|||\\phi_w\\phi_y|\\rangle)(\\langle|\\phi_b\\phi_d|||\\phi_x\\phi_z|\\rangle)"
},
{
"math_id": 118,
"text": "|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|"
},
{
"math_id": 119,
"text": "|\\phi_1\\overline{\\phi_2}\\phi_2\\overline{\\phi_3}| "
},
{
"math_id": 120,
"text": "\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_1\\overline{\\phi_2}\\phi_2\\overline{\\phi_3}|\\rangle=(\\langle|\\phi_1\\phi_3|||\\phi_1\\phi_2|\\rangle)(\\langle|\\phi_2\\phi_3|||\\phi_2\\phi_3|\\rangle)=(0.12634)(0.98377)=0.12429"
},
{
"math_id": 121,
"text": "W(\\Psi_2) "
},
{
"math_id": 122,
"text": "W(\\Psi_3) "
},
{
"math_id": 123,
"text": "\\begin{align}\nW(|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|)&=\\sum\\limits_k0.283C_k\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\Phi_k|\\rangle\\\\\n\n&=0.283[-0.294(\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_2\\overline{\\phi_2}\\phi_1\\overline{\\phi_3}|\\rangle+\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_2\\overline{\\phi_2}\\phi_3\\overline{\\phi_1}|\\rangle)+0.283(\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_1\\overline{\\phi_1}\\phi_2\\overline{\\phi_3}|\\rangle+\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_1\\overline{\\phi_1}\\phi_3\\overline{\\phi_2}|\\rangle)\\\\\n& \\quad\\quad +0.283(\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|\\rangle+\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_2\\overline{\\phi_1}\\phi_3\\overline{\\phi_3}|\\rangle)+0.294\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_1\\overline{\\phi_1}\\phi_2\\overline{\\phi_2}|\\rangle+0.294\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_2\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|\\rangle\\\\\n&\\quad\\quad + 0.274\\langle|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|||\\phi_1\\overline{\\phi_1}\\phi_3\\overline{\\phi_3}|\\rangle]\\\\\n& =0.111\n\\end{align}"
},
{
"math_id": 124,
"text": "W(|\\phi_2\\overline{\\phi_1}\\phi_3\\overline{\\phi_3}|)=W(|\\phi_1\\overline{\\phi_1}\\phi_2\\overline{\\phi_3}|)=W(|\\phi_1\\overline{\\phi_1}\\phi_3\\overline{\\phi_2}|)=0.111 "
},
{
"math_id": 125,
"text": "W(\\Psi_2)=W(|\\phi_1\\overline{\\phi_1}\\phi_2\\overline{\\phi_3}|)+W(|\\phi_1\\overline{\\phi_1}\\phi_3\\overline{\\phi_2}|)=0.222 "
},
{
"math_id": 126,
"text": "W(\\Psi_3)=W(|\\phi_1\\overline{\\phi_2}\\phi_3\\overline{\\phi_3}|)+W(|\\phi_2\\overline{\\phi_1}\\phi_3\\overline{\\phi_3}|)=0.222 "
},
{
"math_id": 127,
"text": "\\Psi_1 "
},
{
"math_id": 128,
"text": "W(|\\phi_2\\overline{\\phi_2}\\phi_1\\overline{\\phi_3}|)=\\sum\\limits_k-0.294C_k|\\phi_2\\overline{\\phi_2}\\phi_1\\overline{\\phi_3}||\\Phi_k|=0.106"
},
{
"math_id": 129,
"text": "W(|\\phi_2\\overline{\\phi_2}\\phi_3\\overline{\\phi_1}|)=0.106"
},
{
"math_id": 130,
"text": "W(\\Psi_1)=W(|\\phi_2\\overline\\phi_2\\phi_1\\overline\\phi_3|)+W(|\\phi_2\\overline\\phi_2\\phi_1\\overline\\phi_3|)=0.106+0.106=0.212 "
}
] |
https://en.wikipedia.org/wiki?curid=59274182
|
592897
|
Hellinger–Toeplitz theorem
|
Theorem on boundedness of symmetric operators
In functional analysis, a branch of mathematics, the Hellinger–Toeplitz theorem states that an everywhere-defined symmetric operator on a Hilbert space with inner product formula_0 is bounded. By definition, an operator "A" is "symmetric" if
formula_1
for all "x", "y" in the domain of "A". Note that symmetric "everywhere-defined" operators are necessarily self-adjoint, so this theorem can also be stated as follows: an everywhere-defined self-adjoint operator is bounded. The theorem is named after Ernst David Hellinger and Otto Toeplitz.
This theorem can be viewed as an immediate corollary of the closed graph theorem, as self-adjoint operators are closed. Alternatively, it can be argued using the uniform boundedness principle. The proof relies on the symmetry assumption, and hence on the inner product structure. Also crucial is the fact that the given operator "A" is defined everywhere (and, in turn, the completeness of Hilbert spaces).
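The closed-graph argument is short enough to sketch (a standard textbook computation, reproduced here for convenience):

```latex
% It suffices to show A is closed.  Suppose x_n -> x and A x_n -> z.
% For every y in the Hilbert space H,
\begin{align*}
\langle z \mid y \rangle
  &= \lim_{n\to\infty} \langle A x_n \mid y \rangle
   = \lim_{n\to\infty} \langle x_n \mid A y \rangle \\
  &= \langle x \mid A y \rangle
   = \langle A x \mid y \rangle ,
\end{align*}
% so z = Ax and the graph of A is closed.  Since A is defined on all of H,
% the closed graph theorem gives boundedness.
```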
The Hellinger–Toeplitz theorem reveals certain technical difficulties in the mathematical formulation of quantum mechanics. Observables in quantum mechanics correspond to self-adjoint operators on some Hilbert space, but some observables (like energy) are unbounded. By Hellinger–Toeplitz, such operators cannot be everywhere defined (but they may be defined on a dense subset). Take for instance the quantum harmonic oscillator. Here the Hilbert space is L2(R), the space of square integrable functions on R, and the energy operator "H" is defined by (assuming the units are chosen such that ℏ = "m" = ω = 1)
formula_2
This operator is self-adjoint and unbounded (its eigenvalues are 1/2, 3/2, 5/2, ...), so it cannot be defined on the whole of L2(R).
|
[
{
"math_id": 0,
"text": " \\langle \\cdot | \\cdot \\rangle "
},
{
"math_id": 1,
"text": " \\langle A x | y \\rangle = \\langle x | A y\\rangle "
},
{
"math_id": 2,
"text": " [Hf](x) = - \\frac12 \\frac{\\mathrm{d}^2}{\\mathrm{d}x^2} f(x) + \\frac12 x^2 f(x). "
}
] |
https://en.wikipedia.org/wiki?curid=592897
|
592949
|
Pharmacognosy
|
Study of plants as a source of drugs
Pharmacognosy is the study of crude drugs obtained from medicinal plants, animals, fungi, and other natural sources. The American Society of Pharmacognosy defines pharmacognosy as "the study of the physical, chemical, biochemical, and biological properties of drugs, drug substances, or potential drugs or drug substances of natural origin as well as the search for new drugs from natural sources".
Description.
The word "pharmacognosy" is derived from two Greek words: "pharmakon" (drug) and "gnosis" (knowledge); compare the Latin verb "cognosco" ("con", 'with', and "gnōscō", 'know'; itself a cognate of the Greek verb "gignōskō", meaning 'I know, perceive'), meaning 'to conceptualize' or 'to recognize'.
The term "pharmacognosy" was used for the first time by the German physician Johann Adam Schmidt (1759–1809) in his published book "Lehrbuch der Materia Medica" in 1811, and by Anotheus Seydler in 1815, in his "Analecta Pharmacognostica".
Originally—during the 19th century and the beginning of the 20th century—"pharmacognosy" was used to define the branch of medicine or commodity sciences ("Warenkunde" in German) which deals with drugs in their crude, or unprepared form. Crude drugs are the dried, unprepared material of plant, animal or mineral origin, used for medicine. The study of these materials under the name "Pharmakognosie" was first developed in German-speaking areas of Europe, while other language areas often used the older term "materia medica" taken from the works of Galen and Dioscorides. In German, the term "Drogenkunde" ("science of crude drugs") is also used synonymously.
As late as the beginning of the 20th century, the subject had developed mainly on the botanical side, being particularly concerned with the description and identification of drugs both in their whole state and in powder form. Such branches of pharmacognosy are still of fundamental importance, particularly for botanical products (widely available as dietary supplements in the U.S. and Canada), quality control purposes, pharmacopoeial protocols and related health regulatory frameworks. At the same time, development in other areas of research has enormously expanded the subject. The advent of the 21st century brought a renaissance of pharmacognosy, and its conventional botanical approach has been broadened to the molecular and metabolomic levels.
In addition to the previously mentioned definition, the American Society of Pharmacognosy defines pharmacognosy as "the study of natural product molecules (typically secondary metabolites) that are useful for their medicinal, ecological, gustatory, or other functional properties." Similarly, the mission of the Pharmacognosy Institute at the University of Illinois at Chicago involves plant-based and plant-related health products for the benefit of human health. Other definitions are more encompassing, drawing on a broad spectrum of biological subjects, including botany, ethnobotany, marine biology, microbiology, herbal medicine, chemistry, biotechnology, phytochemistry, pharmacology, pharmaceutics, clinical pharmacy, and pharmacy practice.
Biological background.
All plants produce chemical compounds as part of their normal metabolic activities. These phytochemicals are divided into (1) primary metabolites such as sugars and fats, which are found in all plants; and (2) secondary metabolites—compounds which are found in a smaller range of plants, serving more specific functions. For example, some secondary metabolites are toxins used by plants to deter predation and others are pheromones used to attract insects for pollination. It is these secondary metabolites and pigments that can have therapeutic actions in humans and which can be refined to produce drugs—examples are inulin from the roots of dahlias, quinine from the cinchona, THC and CBD from the flowers of cannabis, morphine and codeine from the poppy, and digoxin from the foxglove.
Plants synthesize a variety of phytochemicals, but most are derivatives:
Natural products chemistry.
A typical protocol to isolate a pure chemical agent from natural origin is bioassay-guided fractionation, meaning step-by-step separation of extracted components based on differences in their physicochemical properties, and assessment of the biological activity, followed by the next round of separation and assaying. Typically, such work is initiated after a given crude drug formulation (typically prepared by solvent extraction of the natural material) is deemed "active" in a particular "in vitro" assay. If the end-goal of the work at hand is to identify which one(s) of the scores or hundreds of compounds are responsible for the observed "in vitro" activity, the path to that end is fairly straightforward:
"In vitro" activity does not necessarily translate to biological activity in humans or other living systems.
Herbal.
In some countries in Asia and Africa, up to 80% of the population relies on traditional medicine (including herbal medicine) for primary health care. Native American cultures have also relied on traditional medicine such as ceremonial smoking of tobacco, potlatch ceremonies, and herbalism, to name a few, prior to European colonization. Knowledge of traditional medicinal practices is disappearing in indigenous communities, particularly in the Amazon.
With worldwide research into pharmacology and medicine, traditional or ancient herbal medicines are often translated into modern remedies. A notable example is the antimalarial drug artemisinin, isolated from the herb "Artemisia annua", which was used in Chinese medicine to treat fever. The finding that its extracts had antimalarial activity led to the Nobel Prize-winning discovery of artemisinin.
Microscopical evaluation.
Microscopic evaluation is essential for the initial identification of herbs, identifying small fragments of crude or powdered herbs, identifying adulterants (such as insects, animal feces, mold, fungi, etc.), and recognizing the plant by its characteristic tissue features. Techniques such as microscopic linear measurements, determination of leaf constants, and quantitative microscopy are also utilized in this evaluation. The determination of leaf constants includes stomatal number, stomatal index, vein islet number, vein termination number, and palisade ratio.
The stomatal index is the percentage formed by the number of stomata divided by the total number of epidermal cells, with each stoma being counted as one cell.
formula_0
where:
S.I. is the stomatal index
S is the number of stomata per unit area
E is the number of epidermal cells in the same unit area.
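The calculation is trivial to carry out; a one-function Python sketch (the counts below are hypothetical, purely for illustration):

```python
def stomatal_index(stomata, epidermal_cells):
    """Stomatal index S.I. = S / (E + S) * 100, where each stoma is
    counted as one cell alongside the E ordinary epidermal cells."""
    return 100 * stomata / (epidermal_cells + stomata)

# e.g. 30 stomata among 270 ordinary epidermal cells in the same field:
si = stomatal_index(30, 270)   # 100 * 30 / 300 = 10.0
```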
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S.I.= \\frac{S}{E+S}\\times100"
}
] |
https://en.wikipedia.org/wiki?curid=592949
|
5930192
|
Bessel–Clifford function
|
In mathematical analysis, the Bessel–Clifford function, named after Friedrich Bessel and William Kingdon Clifford, is an entire function of two complex variables that can be used to provide an alternative development of the theory of Bessel functions. If
formula_0
is the entire function defined by means of the reciprocal gamma function, then the Bessel–Clifford function is defined by the series
formula_1
The ratio of successive terms is "z"/"k"("n" + "k"), which for all values of "z" and "n" tends to zero with increasing "k". By the ratio test, this series converges absolutely for all "z" and "n", and uniformly for all regions with bounded |"z"|, and hence the Bessel–Clifford function is an entire function of the two complex variables "n" and "z".
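The series is straightforward to evaluate numerically; a short Python sketch using only the standard library (function name is ours). As a sanity check, formula_19 with "n" = 0 and "z" = 1 equals the modified Bessel value I0(2):

```python
from math import gamma

def bessel_clifford(n, z, terms=40):
    """C_n(z) = sum_k z^k / (k! * Gamma(n + k + 1)), summed directly.
    The ratio of successive terms is z / (k (n + k)), so the series
    converges rapidly for moderate |z|."""
    return sum(z**k / (gamma(k + 1) * gamma(n + k + 1))
               for k in range(terms))

c0 = bessel_clifford(0, 1.0)   # = I_0(2), approx 2.2795853
```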
Differential equation of the Bessel–Clifford function.
It follows from the above series on differentiating with respect to "x" that formula_2 satisfies the linear second-order homogeneous differential equation
formula_3
This equation is of generalized hypergeometric type, and in fact the Bessel–Clifford function is up to a scaling factor a Pochhammer–Barnes hypergeometric function; we have
formula_4
Unless n is a negative integer, in which case the right-hand side is undefined, the two definitions are essentially equivalent; the hypergeometric function being normalized so that its value at "z" = 0 is one.
Relation to Bessel functions.
The Bessel function of the first kind can be defined in terms of the Bessel–Clifford function as
formula_5
when "n" is not an integer. We can see from this that the Bessel function is not entire. Similarly, the modified Bessel function of the first kind can be defined as
formula_6
The procedure can of course be reversed, so that we may define the Bessel–Clifford function as
formula_7
but from this starting point we would then need to show formula_8 was entire.
Recurrence relation.
From the defining series, it follows immediately that formula_9
Using this, we may rewrite the differential equation for formula_8 as
formula_10
which defines the recurrence relationship for the Bessel–Clifford function. This is equivalent to a similar relation for 0"F"1. We have, as a special case of Gauss's continued fraction,
formula_11
It can be shown that this continued fraction converges in all cases.
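The continued fraction can be checked numerically against the defining series (a Python sketch; the truncation depth and term count are arbitrary illustrative choices):

```python
from math import gamma

def bc(n, z, terms=40):
    """Direct series for C_n(z), for comparison."""
    return sum(z**k / (gamma(k + 1) * gamma(n + k + 1))
               for k in range(terms))

def bc_ratio_cf(n, x, depth=30):
    """C_{n+1}(x) / C_n(x) via the continued fraction
    1/(n+1 + x/(n+2 + x/(n+3 + ...))), evaluated bottom-up."""
    tail = 0.0
    for m in range(depth, 1, -1):     # m = depth, depth-1, ..., 2
        tail = x / (n + m + tail)
    return 1.0 / (n + 1 + tail)

ratio = bc_ratio_cf(0, 1.0)           # should match C_1(1)/C_0(1)
```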
The Bessel–Clifford function of the second kind.
The Bessel–Clifford differential equation
formula_12
has two linearly independent solutions. Since the origin is a regular singular point of the differential equation, and since formula_8 is entire, the second solution must be singular at the origin.
If we set
formula_13
which converges for formula_14, and analytically continue it, we obtain a second linearly independent solution to the differential equation.
The factor of 1/2 is inserted in order to make formula_15 correspond to the Bessel functions of the second kind. We have
formula_16
and
formula_17
In terms of "K", we have
formula_18
Hence, just as the Bessel function and modified Bessel function of the first kind can both be expressed in terms of formula_8, those of the second kind can both be expressed in terms of formula_15.
Generating function.
If we multiply the absolutely convergent series for exp("t") and
exp("z"/"t") together, we get (when "t" is not zero) an absolutely convergent series for exp("t" + "z"/"t"). Collecting terms in "t", we find on comparison with the power series definition for formula_19 that we have
formula_20
This generating function can then be used to obtain further formulas, in particular we may use Cauchy's integral formula and obtain formula_19 for integer "n" as
formula_21
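For integer "n" the contour-integral form can be evaluated with the trapezoidal rule, which is spectrally accurate for periodic integrands, and compared against the defining series (a sketch; function names and the sample count are ours):

```python
import cmath
from math import gamma, pi

def bc_series(n, z, terms=40):
    """Direct series for C_n(z), for comparison."""
    return sum(z**k / (gamma(k + 1) * gamma(n + k + 1))
               for k in range(terms))

def bc_integral(n, z, points=200):
    """C_n(z) for integer n via the generating-function integral
    (1/2pi) * int_0^{2pi} exp(z e^{-i theta} + e^{i theta} - n i theta) dtheta,
    approximated by the trapezoidal rule on the periodic interval."""
    total = 0.0 + 0.0j
    for j in range(points):
        theta = 2 * pi * j / points
        total += cmath.exp(z * cmath.exp(-1j * theta)
                           + cmath.exp(1j * theta)
                           - 1j * n * theta)
    return (total / points).real   # imaginary part vanishes for integer n
```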
|
[
{
"math_id": 0,
"text": "\\pi(x) = \\frac{1}{\\Pi(x)} = \\frac{1}{\\Gamma(x+1)}"
},
{
"math_id": 1,
"text": "{\\mathcal C}_n(z) = \\sum_{k=0}^\\infty \\pi(k+n) \\frac{z^k}{k!}"
},
{
"math_id": 2,
"text": "{\\mathcal C}_n(x)"
},
{
"math_id": 3,
"text": "xy'' + (n+1)y' = y. \\qquad"
},
{
"math_id": 4,
"text": "{\\mathcal C}_n(z) = \\pi(n)\\ _0F_1(;n+1; z)."
},
{
"math_id": 5,
"text": "J_n(z) = \\left(\\frac{z}{2}\\right)^n {\\mathcal C}_n\\left(-\\frac{z^2}{4}\\right);"
},
{
"math_id": 6,
"text": "I_n(z) = \\left(\\frac{z}{2}\\right)^n {\\mathcal C}_n\\left(\\frac{z^2}{4}\\right)."
},
{
"math_id": 7,
"text": "{\\mathcal C}_n(z) = z^{-n/2} I_n(2 \\sqrt{z});"
},
{
"math_id": 8,
"text": "{\\mathcal C}"
},
{
"math_id": 9,
"text": "\\frac{d}{dx}{\\mathcal C}_n(x) = {\\mathcal C}_{n+1}(x)."
},
{
"math_id": 10,
"text": "x {\\mathcal C}_{n+2}(x) + (n+1){\\mathcal C}_{n+1}(x) = {\\mathcal C}_n(x),"
},
{
"math_id": 11,
"text": "\\frac{{\\mathcal C}_{n+1}(x)}{{\\mathcal C}_n(x)} = \\cfrac{1}{n+1 + \\cfrac{x}{n+2+\\cfrac{x}{n+3+ \\cfrac{x}{\\ddots}}}}."
},
{
"math_id": 12,
"text": "xy'' + (n+1)y' = y \\qquad"
},
{
"math_id": 13,
"text": "{\\mathcal K}_n(x) = \\frac{1}{2} \\int_0^\\infty \\exp\\left(-t-\\frac{x}{t}\\right) \\frac{dt}{t^{n+1}}"
},
{
"math_id": 14,
"text": "\\Re(x) > 0"
},
{
"math_id": 15,
"text": "{\\mathcal K}"
},
{
"math_id": 16,
"text": "K_n(x) = \\left(\\frac{x}{2}\\right)^n {\\mathcal K}_n\\left(\\frac{x^2}{4}\\right)."
},
{
"math_id": 17,
"text": "Y_n(x) = \\left(\\frac{x}{2}\\right)^n {\\mathcal K}_n\\left(-\\frac{x^2}{4}\\right)."
},
{
"math_id": 18,
"text": "{\\mathcal K}_n(x) = x^{-n/2} K_n(2 \\sqrt{x})."
},
{
"math_id": 19,
"text": "{\\mathcal C}_n"
},
{
"math_id": 20,
"text": "\\exp\\left(t + \\frac{z}{t}\\right) = \\sum_{n=-\\infty}^\\infty t^n {\\mathcal C}_n(z)."
},
{
"math_id": 21,
"text": "{\\mathcal C}_n(z) = \\frac{1}{2 \\pi i} \\oint_C \\frac{\\exp(t+z/t)}{t^{n+1}}\\, dt = \\frac{1}{2 \\pi}\\int_0^{2 \\pi} \\exp(z\\exp(-i\\theta)+\\exp(i\\theta)-ni\\theta)\\,d\\theta."
}
] |
https://en.wikipedia.org/wiki?curid=5930192
|
59302425
|
Siamese neural network
|
Neural network working on two input vectors
A Siamese neural network (sometimes called a twin neural network) is an artificial neural network that uses the same weights while working in tandem on two different input vectors to compute comparable output vectors. Often one of the output vectors is precomputed, thus forming a baseline against which the other output vector is compared. This is similar to comparing fingerprints but can be described more technically as a distance function for locality-sensitive hashing.
It is possible to build an architecture that is functionally similar to a twin network but implements a slightly different function. This is typically used for comparing similar instances in different type sets.
Uses of similarity measures where a twin network might be used include recognizing handwritten checks, automatic detection of faces in camera images, and matching queries with indexed documents. Perhaps the best-known application of twin networks is face recognition, where known images of people are precomputed and compared to an image from a turnstile or similar. It is not obvious at first, but there are two slightly different problems. One is recognizing a person among a large number of other people; that is the facial recognition problem, and DeepFace is an example of such a system. In its most extreme form this is recognizing a single person at a train station or airport. The other is face verification, that is, verifying whether the photo in a pass matches the person presenting it. The twin network might be the same, but the implementation can be quite different.
Learning.
Learning in twin networks can be done with triplet loss or contrastive loss. For learning by triplet loss a baseline vector (anchor image) is compared against a positive vector (truthy image) and a negative vector (falsy image). The negative vector will force learning in the network, while the positive vector will act like a regularizer. For learning by contrastive loss there must be a weight decay to regularize the weights, or some similar operation like a normalization.
A distance metric for a loss function may have the following properties: non-negativity (formula_0), identity of indiscernibles (formula_1), symmetry (formula_2), and the triangle inequality (formula_3).
In particular, the triplet loss algorithm is often defined with the squared Euclidean distance at its core, which, unlike the Euclidean distance, does not satisfy the triangle inequality.
Predefined metrics, Euclidean distance metric.
The common learning goal is to minimize a distance metric for similar objects and maximize it for distinct ones. This gives a loss function like
formula_4
where formula_5 are indices into a set of vectors,
and formula_6 is the function implemented by the twin network.
The most common distance metric used is the Euclidean distance, in which case the loss function can be rewritten in matrix form as
formula_7
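As a concrete illustration, the matrix form above is just the squared Euclidean distance between the two embedding vectors. The following is a minimal pure-Python sketch, not tied to any particular framework; the embeddings are hypothetical outputs of the shared network:

```python
def sq_euclidean(u, v):
    """Squared Euclidean distance (x_i - x_j)^T (x_i - x_j)."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Hypothetical embeddings produced by the shared network f(.)
f_xi = [0.1, 0.9, 0.3]
f_xj = [0.1, 0.8, 0.5]
print(sq_euclidean(f_xi, f_xj))  # a small distance: the pair is judged similar
```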
Learned metrics, nonlinear distance metric.
A more general case is where the output vector from the twin network is passed through additional network layers implementing non-linear distance metrics.
formula_8
where formula_5 are indices into a set of vectors,
formula_6 is the function implemented by the twin network,
and formula_9 is the function implemented by the network joining outputs from the twin network.
In matrix form, the previous is often approximated as a Mahalanobis distance for a linear space as
formula_10
This can be further subdivided into at least unsupervised learning and supervised learning.
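The Mahalanobis approximation above can be sketched in a few lines of pure Python. The matrix M here is a hypothetical learned metric; with M equal to the identity, the expression reduces to the squared Euclidean distance:

```python
def mahalanobis_sq(u, v, M):
    """(x_i - x_j)^T M (x_i - x_j) for a square matrix M."""
    d = [a - b for a, b in zip(u, v)]
    # matrix-vector product M d, then the inner product d . (M d)
    Md = [sum(M[r][c] * d[c] for c in range(len(d))) for r in range(len(d))]
    return sum(d[r] * Md[r] for r in range(len(d)))

I = [[1, 0], [0, 1]]   # identity: reduces to squared Euclidean distance
M = [[2, 0], [0, 1]]   # hypothetical learned metric weighting the first axis
print(mahalanobis_sq([1.0, 2.0], [0.0, 0.0], I))  # 5.0
print(mahalanobis_sq([1.0, 2.0], [0.0, 0.0], M))  # 6.0
```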
Learned metrics, half-twin networks.
This form also allows the twin network to be more of a half-twin, implementing slightly different functions:
formula_11
where formula_5 are indices into a set of vectors,
formula_12 are the functions implemented by the half-twin networks,
and formula_9 is the function implemented by the network joining outputs from the twin network.
Twin networks for object tracking.
Twin networks have been used in object tracking because of their two tandem inputs and built-in similarity measurement. In object tracking, one input of the twin network is a user-preselected exemplar image and the other is a larger search image; the twin network's job is to locate the exemplar inside the search image. By measuring the similarity between the exemplar and each part of the search image, the twin network produces a map of similarity scores. Furthermore, using a fully convolutional network, the process of computing each sector's similarity score can be replaced with a single cross-correlation layer.
Since being first introduced in 2016, the twin fully convolutional network has been used in many high-performance, real-time object tracking networks, such as CFNet, StructSiam, SiamFC-tri, DSiam, SA-Siam, SiamRPN, DaSiamRPN, Cascaded SiamRPN, SiamMask, SiamRPN++, and Deeper and Wider SiamRPN.
|
[
{
"math_id": 0,
"text": "\\delta ( x, y ) \\ge 0"
},
{
"math_id": 1,
"text": "\\delta ( x, y ) = 0 \\iff x=y"
},
{
"math_id": 2,
"text": "\\delta ( x, y ) = \\delta ( y, x )"
},
{
"math_id": 3,
"text": "\\delta ( x, z ) \\le \\delta ( x, y ) + \\delta ( y, z )"
},
{
"math_id": 4,
"text": "\\begin{align}\n\n\\delta(x^{(i)}, x^{(j)})=\n\\begin {cases}\n\\min \\ \\| \\operatorname{f} \\left ( x^{(i)} \\right ) - \\operatorname{f} \\left ( x^{(j)} \\right ) \\| \\, , i = j \\\\\n\\max \\ \\| \\operatorname{f} \\left ( x^{(i)} \\right ) - \\operatorname{f} \\left ( x^{(j)} \\right ) \\| \\, , i \\neq j\n\\end{cases}\n\\end{align}"
},
{
"math_id": 5,
"text": "i,j"
},
{
"math_id": 6,
"text": "\\operatorname{f}(\\cdot)"
},
{
"math_id": 7,
"text": "\\operatorname{\\delta} ( \\mathbf{x}^{(i)}, \\mathbf{x}^{(j)} ) \\approx (\\mathbf{x}^{(i)} - \\mathbf{x}^{(j)})^{T}(\\mathbf{x}^{(i)} - \\mathbf{x}^{(j)})"
},
{
"math_id": 8,
"text": "\\begin{align}\n\\text{if} \\, i = j \\, \\text{then} & \\, \\operatorname{\\delta} \\left [ \\operatorname{f} \\left ( x^{(i)} \\right ), \\, \\operatorname{f} \\left ( x^{(j)} \\right ) \\right ] \\, \\text{is small} \\\\\n\\text{otherwise} & \\, \\operatorname{\\delta} \\left [ \\operatorname{f} \\left ( x^{(i)} \\right ), \\, \\operatorname{f} \\left ( x^{(j)} \\right ) \\right ] \\, \\text{is large}\n\\end{align}"
},
{
"math_id": 9,
"text": "\\operatorname{\\delta}(\\cdot)"
},
{
"math_id": 10,
"text": "\\operatorname{\\delta} ( \\mathbf{x}^{(i)}, \\mathbf{x}^{(j)} ) \\approx (\\mathbf{x}^{(i)} - \\mathbf{x}^{(j)})^{T}\\mathbf{M}(\\mathbf{x}^{(i)} - \\mathbf{x}^{(j)})"
},
{
"math_id": 11,
"text": "\\begin{align}\n\\text{if} \\, i = j \\, \\text{then} & \\, \\operatorname{\\delta} \\left [ \\operatorname{f} \\left ( x^{(i)} \\right ), \\, \\operatorname{g} \\left ( x^{(j)} \\right ) \\right ] \\, \\text{is small} \\\\\n\\text{otherwise} & \\, \\operatorname{\\delta} \\left [ \\operatorname{f} \\left ( x^{(i)} \\right ), \\, \\operatorname{g} \\left ( x^{(j)} \\right ) \\right ] \\, \\text{is large}\n\\end{align}"
},
{
"math_id": 12,
"text": "\\operatorname{f}(\\cdot), \\operatorname{g}(\\cdot)"
}
] |
https://en.wikipedia.org/wiki?curid=59302425
|
5930652
|
Degree of a polynomial
|
Mathematical concept
In mathematics, the degree of a polynomial is the highest of the degrees of the polynomial's monomials (individual terms) with non-zero coefficients. The degree of a term is the sum of the exponents of the variables that appear in it, and thus is a non-negative integer. For a univariate polynomial, the degree of the polynomial is simply the highest exponent occurring in the polynomial. The term order has been used as a synonym of "degree" but, nowadays, may refer to several other concepts (see Order of a polynomial (disambiguation)).
For example, the polynomial formula_0 which can also be written as formula_1 has three terms. The first term has a degree of 5 (the sum of the powers 2 and 3), the second term has a degree of 1, and the last term has a degree of 0. Therefore, the polynomial has a degree of 5, which is the highest degree of any term.
To determine the degree of a polynomial that is not in standard form, such as formula_2, one can put it in standard form by expanding the products (by distributivity) and combining the like terms; for example, formula_3 is of degree 1, even though each summand has degree 2. However, this is not needed when the polynomial is written as a product of polynomials in standard form, because the degree of a product is the sum of the degrees of the factors.
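The expand-and-combine procedure can be sketched with polynomials represented as coefficient lists. This is a minimal illustration, not a general computer-algebra system:

```python
def degree(coeffs):
    """Degree of a polynomial given as [c0, c1, ...] (c_k multiplies x**k);
    None for the zero polynomial."""
    nz = [k for k, c in enumerate(coeffs) if c != 0]
    return max(nz) if nz else None

def mul(p, q):
    """Expand a product of two polynomials by distributivity."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def sub(p, q):
    """Subtract q from p, combining like terms."""
    get = lambda r, k: r[k] if k < len(r) else 0
    return [get(p, k) - get(q, k) for k in range(max(len(p), len(q)))]

# (x+1)^2 - (x-1)^2 expands to 4x, so the degree is 1, not 2
lhs = mul([1, 1], [1, 1])    # (x+1)^2
rhs = mul([-1, 1], [-1, 1])  # (x-1)^2
print(degree(sub(lhs, rhs)))  # 1
```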
Names of polynomials by degree.
The following names are assigned to polynomials according to their degree: a polynomial of degree 0 is called constant, of degree 1 linear, of degree 2 quadratic, of degree 3 cubic, of degree 4 quartic, of degree 5 quintic, of degree 6 sextic, and of degree 7 septic.
Names for degree above three are based on Latin ordinal numbers, and end in "-ic". This should be distinguished from the names used for the number of variables, the arity, which are based on Latin distributive numbers, and end in "-ary". For example, a degree two polynomial in two variables, such as formula_4, is called a "binary quadratic": "binary" due to two variables, "quadratic" due to degree two. There are also names for the number of terms, which are also based on Latin distributive numbers, ending in "-nomial"; the common ones are "monomial", "binomial", and (less commonly) "trinomial"; thus formula_5 is a "binary quadratic binomial".
Examples.
The polynomial formula_6 is a cubic polynomial: after multiplying out and collecting terms of the same degree, it becomes formula_7, with highest exponent 3.
The polynomial formula_8 is a quintic polynomial: upon combining like terms, the two terms of degree 8 cancel, leaving formula_9, with highest exponent 5.
Behavior under polynomial operations.
The degree of the sum, the product or the composition of two polynomials is strongly related to the degree of the input polynomials.
Addition.
The degree of the sum (or difference) of two polynomials is less than or equal to the greater of their degrees; that is,
formula_10 and formula_11.
For example, the degree of formula_12 is 2, and 2 ≤ max{3, 3}.
The equality always holds when the degrees of the polynomials are different. For example, the degree of formula_13 is 3, and 3 = max{3, 2}.
Multiplication.
The degree of the product of a polynomial by a non-zero scalar is equal to the degree of the polynomial; that is,
formula_14.
For example, the degree of formula_15 is 2, which is equal to the degree of formula_16.
Thus, the set of polynomials (with coefficients from a given field "F") whose degrees are smaller than or equal to a given number "n" forms a vector space; for more, see Examples of vector spaces.
More generally, the degree of the product of two polynomials over a field or an integral domain is the sum of their degrees:
formula_17.
For example, the degree of formula_18 is 5 = 3 + 2.
For polynomials over an arbitrary ring, the above rules may not be valid, because of cancellation that can occur when multiplying two nonzero constants. For example, in the ring formula_19 of integers modulo 4, one has that formula_20, but formula_21, which is not equal to the sum of the degrees of the factors.
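A quick sketch over the ring of integers modulo 4, with polynomials as coefficient lists reduced mod 4, exhibits the degree drop described above:

```python
def deg(coeffs):
    """Degree of [c0, c1, ...]; None for the zero polynomial."""
    nz = [k for k, c in enumerate(coeffs) if c != 0]
    return max(nz) if nz else None

def mul_mod(p, q, m):
    """Product of two coefficient lists, with coefficients reduced mod m."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % m
    return out

f = [0, 2]  # 2x
g = [1, 2]  # 1 + 2x
# Over Z/4Z: 2x * (1 + 2x) = 2x + 4x^2 = 2x, so deg(fg) = 1, not 1 + 1 = 2
print(deg(mul_mod(f, g, 4)))  # 1
```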
Composition.
The degree of the composition of two non-constant polynomials formula_22 and formula_23 over a field or integral domain is the product of their degrees:
formula_24
For example, if formula_25 has degree 3 and formula_26 has degree 2, then their composition is formula_27 which has degree 6.
Note that for polynomials over an arbitrary ring, the degree of the composition may be less than the product of the degrees. For example, in formula_28 the composition of the polynomials formula_29 and formula_30 (both of degree 1) is the constant polynomial formula_31 of degree 0.
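The same coefficient-list representation shows the composition collapse over Z/4Z. This small sketch evaluates p(q(x)) by Horner's scheme, reducing coefficients mod m at each step:

```python
def compose_mod(p, q, m):
    """Coefficients of p(q(x)), reduced mod m, via Horner's scheme."""
    out = [0]
    for c in reversed(p):
        # out = out * q + c
        new = [0] * (len(out) + len(q) - 1)
        for i, a in enumerate(out):
            for j, b in enumerate(q):
                new[i + j] = (new[i + j] + a * b) % m
        new[0] = (new[0] + c) % m
        out = new
    return out

# Over Z/4Z: (2x) o (1 + 2x) = 2 + 4x = 2, a constant of degree 0
print(compose_mod([0, 2], [1, 2], 4))  # only the constant term is nonzero
```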
Degree of the zero polynomial.
The degree of the zero polynomial is either left undefined, or is defined to be negative (usually −1 or formula_32).
Like any constant value, the value 0 can be considered as a (constant) polynomial, called the zero polynomial. It has no nonzero terms, and so, strictly speaking, it has no degree either. As such, its degree is usually undefined. The propositions for the degree of sums and products of polynomials in the above section do not apply, if any of the polynomials involved is the zero polynomial.
It is convenient, however, to define the degree of the zero polynomial to be "negative infinity", formula_33 and to introduce the arithmetic rules
formula_34
and
formula_35
These examples illustrate how this extension satisfies the behavior rules above: the sum formula_36 has degree 3, satisfying formula_37; the difference formula_38 has degree formula_32, satisfying formula_39; and the product formula_40 has degree formula_32, consistent with formula_41.
Computed from the function values.
A number of formulae exist which will evaluate the degree of a polynomial function "f". One based on asymptotic analysis is
formula_42;
this is the exact counterpart of the method of estimating the slope in a log–log plot.
This formula generalizes the concept of degree to some functions that are not polynomials.
For example: the degree of formula_43 is −1; the degree of formula_44 is 1/2; the degree of formula_45 is 0; and the degree of formula_46 is formula_47
The formula also gives sensible results for many combinations of such functions, e.g., the degree of formula_48 is formula_49.
Another formula to compute the degree of "f" from its values is
formula_50;
this second formula follows from applying L'Hôpital's rule to the first formula. Intuitively though, it is more about exhibiting the degree "d" as the extra constant factor in the derivative formula_51 of formula_52.
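The first, asymptotic formula can be tried numerically. This is a rough sketch: the evaluation point 1e6 is an arbitrary "large" x, and for genuine polynomials the result only approximates the degree:

```python
import math

def degree_estimate(f, x=1e6):
    """Asymptotic estimate  log|f(x)| / log x  at a large sample point x."""
    return math.log(abs(f(x))) / math.log(x)

print(degree_estimate(lambda x: 2 * x**3 + x))   # close to 3
print(degree_estimate(lambda x: 1 / x))          # close to -1
print(degree_estimate(lambda x: math.sqrt(x)))   # close to 0.5
```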
A description of the asymptotics of a function that is more fine-grained than a single numeric degree can be obtained by using big O notation. In the analysis of algorithms, for example, it is often relevant to distinguish between the growth rates of formula_53 and formula_54, which would both come out as having the "same" degree according to the above formulae.
Extension to polynomials with two or more variables.
For polynomials in two or more variables, the degree of a term is the "sum" of the exponents of the variables in the term; the degree (sometimes called the total degree) of the polynomial is again the maximum of the degrees of all terms in the polynomial. For example, the polynomial "x"2"y"2 + 3"x"3 + 4"y" has degree 4, the same degree as the term "x"2"y"2.
However, a polynomial in variables "x" and "y" is a polynomial in "x" with coefficients that are polynomials in "y", and also a polynomial in "y" with coefficients that are polynomials in "x". The polynomial
formula_55
has degree 3 in "x" and degree 2 in "y".
Degree function in abstract algebra.
Given a ring "R", the polynomial ring "R"["x"] is the set of all polynomials in "x" that have coefficients in "R". In the special case that "R" is also a field, the polynomial ring "R"["x"] is a principal ideal domain and, more importantly to our discussion here, a Euclidean domain.
It can be shown that the degree of a polynomial over a field satisfies all of the requirements of the "norm" function in the euclidean domain. That is, given two nonzero polynomials "f"("x") and "g"("x"), the degree of the product "f"("x")"g"("x") must be at least as large as each of the degrees of "f" and "g" individually. In fact, something stronger holds:
formula_56
To see how the degree function may fail over a ring that is not a field, take the following example. Let "R" = formula_57, the ring of integers modulo 4. This ring is not a field (and is not even an integral domain) because 2 × 2 = 4 ≡ 0 (mod 4). Therefore, let "f"("x") = "g"("x") = 2"x" + 1. Then, "f"("x")"g"("x") = 4"x"2 + 4"x" + 1 = 1. Thus deg("f"⋅"g") = 0, which is not greater than the degrees of "f" and "g" (which each had degree 1).
Since the "norm" function is not defined for the zero element of the ring, we consider the degree of the polynomial "f"("x") = 0 to also be undefined so that it follows the rules of a norm in a Euclidean domain.
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "7x^2y^3 + 4x - 9,"
},
{
"math_id": 1,
"text": "7x^2y^3 + 4x^1y^0 - 9x^0y^0,"
},
{
"math_id": 2,
"text": "(x+1)^2 - (x-1)^2"
},
{
"math_id": 3,
"text": "(x+1)^2 - (x-1)^2 = 4x"
},
{
"math_id": 4,
"text": "x^2 + xy + y^2"
},
{
"math_id": 5,
"text": "x^2 + y^2"
},
{
"math_id": 6,
"text": "(y - 3)(2y + 6)(-4y - 21)"
},
{
"math_id": 7,
"text": "- 8 y^3 - 42 y^2 + 72 y + 378"
},
{
"math_id": 8,
"text": "(3 z^8 + z^5 - 4 z^2 + 6) + (-3 z^8 + 8 z^4 + 2 z^3 + 14 z)"
},
{
"math_id": 9,
"text": "z^5 + 8 z^4 + 2 z^3 - 4 z^2 + 14 z + 6"
},
{
"math_id": 10,
"text": "\\deg(P + Q) \\leq \\max\\{\\deg(P),\\deg(Q)\\}"
},
{
"math_id": 11,
"text": "\\deg(P - Q) \\leq \\max\\{\\deg(P),\\deg(Q)\\}"
},
{
"math_id": 12,
"text": "(x^3+x)-(x^3+x^2)=-x^2+x"
},
{
"math_id": 13,
"text": "(x^3+x)+(x^2+1)=x^3+x^2+x+1"
},
{
"math_id": 14,
"text": "\\deg(cP)=\\deg(P)"
},
{
"math_id": 15,
"text": "2(x^2+3x-2)=2x^2+6x-4"
},
{
"math_id": 16,
"text": "x^2+3x-2"
},
{
"math_id": 17,
"text": "\\deg(PQ) = \\deg(P) + \\deg(Q)"
},
{
"math_id": 18,
"text": "(x^3+x)(x^2+1)=x^5+2x^3+x"
},
{
"math_id": 19,
"text": "\\mathbf{Z}/4\\mathbf{Z}"
},
{
"math_id": 20,
"text": "\\deg(2x) = \\deg(1+2x) = 1"
},
{
"math_id": 21,
"text": "\\deg(2x(1+2x)) = \\deg(2x) = 1"
},
{
"math_id": 22,
"text": "P"
},
{
"math_id": 23,
"text": "Q"
},
{
"math_id": 24,
"text": "\\deg(P \\circ Q) = \\deg(P)\\deg(Q)."
},
{
"math_id": 25,
"text": "P = x^3+x"
},
{
"math_id": 26,
"text": "Q = x^2 - 1"
},
{
"math_id": 27,
"text": "P \\circ Q = P \\circ (x^2 - 1) = (x^2 - 1)^3+(x^2 - 1) = x^6 - 3x^4+4x^2 - 2,"
},
{
"math_id": 28,
"text": "\\mathbf{Z}/4\\mathbf{Z},"
},
{
"math_id": 29,
"text": "2x"
},
{
"math_id": 30,
"text": "1+2x"
},
{
"math_id": 31,
"text": "2x\\circ(1+2x) = 2+4x= 2,"
},
{
"math_id": 32,
"text": "-\\infty"
},
{
"math_id": 33,
"text": "-\\infty,"
},
{
"math_id": 34,
"text": "\\max(a,-\\infty) = a,"
},
{
"math_id": 35,
"text": "a + (-\\infty) = -\\infty."
},
{
"math_id": 36,
"text": "(x^3+x)+(0)=x^3+x"
},
{
"math_id": 37,
"text": "3 \\le \\max(3, -\\infty)"
},
{
"math_id": 38,
"text": "(x)-(x) = 0"
},
{
"math_id": 39,
"text": "-\\infty \\le \\max(1,1)"
},
{
"math_id": 40,
"text": "(0)(x^2+1)=0"
},
{
"math_id": 41,
"text": "-\\infty = -\\infty + 2"
},
{
"math_id": 42,
"text": "\\deg f = \\lim_{x\\rarr\\infty}\\frac{\\log |f(x)|}{\\log x}"
},
{
"math_id": 43,
"text": "\\ 1/x"
},
{
"math_id": 44,
"text": "\\sqrt x "
},
{
"math_id": 45,
"text": "\\ \\log x"
},
{
"math_id": 46,
"text": "\\exp x"
},
{
"math_id": 47,
"text": "\\infty."
},
{
"math_id": 48,
"text": "\\frac{1 + \\sqrt{x}}{x}"
},
{
"math_id": 49,
"text": "-1/2"
},
{
"math_id": 50,
"text": "\\deg f = \\lim_{x\\to\\infty}\\frac{x f'(x)}{f(x)}"
},
{
"math_id": 51,
"text": "d x^{d-1}"
},
{
"math_id": 52,
"text": "x^d"
},
{
"math_id": 53,
"text": " x "
},
{
"math_id": 54,
"text": " x \\log x "
},
{
"math_id": 55,
"text": "x^2y^2 + 3x^3 + 4y = (3)x^3 + (y^2)x^2 + (4y) = (x^2)y^2 + (4)y + (3x^3)"
},
{
"math_id": 56,
"text": "\\deg(f(x)g(x)) = \\deg(f(x)) + \\deg(g(x))"
},
{
"math_id": 57,
"text": "\\mathbb{Z}/4\\mathbb{Z}"
}
] |
https://en.wikipedia.org/wiki?curid=5930652
|
5930730
|
Category of relations
|
In mathematics, the category Rel has the class of sets as objects and binary relations as morphisms.
A morphism (or arrow) "R" : "A" → "B" in this category is a relation between the sets "A" and "B", so "R" ⊆ "A" × "B".
The composition of two relations "R": "A" → "B" and "S": "B" → "C" is given by
("a", "c") ∈ "S" "R" ⇔ for some "b" ∈ "B", ("a", "b") ∈ "R" and ("b", "c") ∈ "S".
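With relations represented as sets of pairs, this composition rule becomes a one-line set comprehension. A minimal sketch:

```python
def compose(R, S):
    """Composition of relations R : A -> B and S : B -> C as sets of pairs:
    (a, c) is in the result iff (a, b) in R and (b, c) in S for some b."""
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

R = {(1, 'x'), (2, 'y')}                   # R : A -> B
S = {('x', 'p'), ('x', 'q'), ('y', 'q')}   # S : B -> C
print(compose(R, S))  # the composite relation {(1, 'p'), (1, 'q'), (2, 'q')}
```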
Rel has also been called the "category of correspondences of sets".
Properties.
The category Rel has the category of sets Set as a (wide) subcategory, where the arrow "f" : "X" → "Y" in Set corresponds to the relation "F" ⊆ "X" × "Y" defined by ("x", "y") ∈ "F" ⇔ "f"("x") = "y".
A morphism in Rel is a relation, and the corresponding morphism in the opposite category to Rel has arrows reversed, so it is the converse relation. Thus Rel contains its opposite and is self-dual.
The involution represented by taking the converse relation provides the dagger to make Rel a dagger category.
The category has two functors into itself given by the hom functor: A binary relation "R" ⊆ "A" × "B" and its transpose "R"T ⊆ "B" × "A" may be composed either as "R R"T or as "R"T "R". The first composition results in a homogeneous relation on "A" and the second is on "B". Since the images of these hom functors are in Rel itself, in this case hom is an internal hom functor. With its internal hom functor, Rel is a closed category, and furthermore a dagger compact category.
The category Rel can be obtained from the category Set as the Kleisli category for the monad whose functor corresponds to power set, interpreted as a covariant functor.
Perhaps surprisingly at first sight, the product in Rel is given by the disjoint union (rather than the cartesian product, as it is in Set), and so is the coproduct.
Rel is monoidal closed, if one defines both the monoidal product "A" ⊗ "B" and the internal hom "A" ⇒ "B" by the cartesian product of sets. It is also a monoidal category if one defines the monoidal product by the disjoint union of sets.
The category Rel was the prototype for the algebraic structure called an allegory by Peter J. Freyd and Andre Scedrov in 1990. Starting with a regular category and a functor "F": "A" → "B", they note properties of the induced functor Rel("A,B") → Rel("FA, FB"). For instance, it preserves composition, conversion, and intersection. Such properties are then used to provide axioms for an allegory.
Relations as objects.
David Rydeheard and Rod Burstall consider Rel to have objects that are homogeneous relations. For example, "A" is a set and "R" ⊆ "A" × "A" is a binary relation on "A". The morphisms of this category are functions between sets that preserve a relation: Say "S" ⊆ "B" × "B" is a second relation and "f": "A" → "B" is a function such that formula_0 then "f" is a morphism.
The same idea is advanced by Adamek, Herrlich and Strecker, where they designate the objects ("A, R") and ("B, S"), set and relation.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "xRy \\implies f(x)Sf(y),"
}
] |
https://en.wikipedia.org/wiki?curid=5930730
|
5931368
|
277 (number)
|
Natural number
277 (two hundred [and] seventy-seven) is the natural number following 276 and preceding 278.
Mathematical properties.
277 is the 59th prime number, and is a regular prime.
It is the smallest prime "p" such that the sum of the inverses of the primes up to "p" is greater than two.
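This can be checked directly with a small sieve. A minimal sketch:

```python
def primes_up_to(n):
    """All primes <= n by the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, ok in enumerate(sieve) if ok]

ps = primes_up_to(277)
total = sum(1 / p for p in ps)
print(ps[-1], total > 2)               # the sum over primes up to 277 exceeds 2
print(sum(1 / p for p in ps[:-1]) > 2) # but not without the final term 1/277
```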
Since 59 is itself prime, 277 is a super-prime. 59 is also a super-prime (it is the 17th prime), as is 17 (the 7th prime). However, 7 is the fourth prime number, and 4 is not prime. Thus, 277 is a super-super-super-prime but not a super-super-super-super-prime. It is the largest prime factor of the Euclid number 510511 = 2 × 3 × 5 × 7 × 11 × 13 × 17 + 1.
As a member of the lazy caterer's sequence, 277 counts the maximum number of pieces obtained by slicing a pancake with 23 straight cuts.
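The lazy caterer's sequence has the closed form (n² + n + 2)/2, a standard formula stated here for illustration, which can be checked directly:

```python
def lazy_caterer(n):
    """Maximum number of pancake pieces after n straight cuts."""
    return (n * n + n + 2) // 2

print(lazy_caterer(23))  # 277
```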
277 is also a Perrin number, and as such counts the number of maximal independent sets in an icosagon. There are 277 ways to tile a 3 × 8 rectangle with integer-sided squares, and 277 degree-7 monic polynomials with integer coefficients and all roots in the unit disk.
On an infinite chessboard, there are 277 squares that a knight can reach from a given starting position in exactly six moves.
277 appears as the numerator of the fifth term of the Taylor series for the secant function:
formula_0
Since no number added to the sum of its digits generates 277, it is a self number. The next prime self number is not reached until 367.
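The self-number property can be verified by brute force, as in this small sketch:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def is_self(n):
    """n is a self number if no m satisfies m + digit_sum(m) == n."""
    return all(m + digit_sum(m) != n for m in range(1, n))

print(is_self(277))  # True
```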
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\sec x = 1 + \\frac{1}{2} x^2 + \\frac{5}{24} x^4 + \\frac{61}{720} x^6 + \\frac{277}{8064} x^8 + \\cdots"
}
] |
https://en.wikipedia.org/wiki?curid=5931368
|
59317908
|
Triplet loss
|
Function for machine learning algorithms
Triplet loss is a loss function for machine learning algorithms where a reference input (called anchor) is compared to a matching input (called positive) and a non-matching input (called negative). The distance from the anchor to the positive is minimized, and the distance from the anchor to the negative input is maximized.
An early formulation equivalent to triplet loss was introduced (without the idea of using anchors) for metric learning from relative comparisons by M. Schultz and T. Joachims in 2003.
By enforcing the order of distances, triplet loss models embed samples so that pairs with the same label are closer in distance than pairs with different labels. Unlike t-SNE, which preserves embedding orders via probability distributions, triplet loss works directly on embedded distances. Therefore, in its common implementation it needs soft margin treatment with a slack variable formula_0 in its hinge-loss-style formulation. It is often used for learning similarity for the purpose of learning embeddings, such as learning to rank, word embeddings, thought vectors, and metric learning.
Consider the task of training a neural network to recognize faces (e.g. for admission to a high security zone). A classifier trained to classify an instance would have to be retrained every time a new person is added to the face database. This can be avoided by posing the problem as a similarity learning problem instead of a classification problem. Here the network is trained (using a contrastive loss) to output a distance which is small if the image belongs to a known person and large if the image belongs to an unknown person. However, if we want to output the closest images to a given image, we want to learn a ranking and not just a similarity. A triplet loss is used in this case.
The loss function can be described by means of the Euclidean distance function
formula_1
where formula_2 is an "anchor input", formula_3 is a "positive input" of the same class as formula_2, formula_4 is a "negative input" of a different class from formula_2, formula_0 is a margin between positive and negative pairs, and formula_5 is an embedding.
This can then be used in a cost function, defined as the sum of all losses, which can then be minimized in the posed optimization problem
formula_6
The indices are for individual input vectors given as a triplet. The triplet is formed by drawing an anchor input, a positive input that describes the same entity as the anchor entity, and a negative input that does not describe the same entity as the anchor entity. These inputs are then run through the network, and the outputs are used in the loss function.
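The loss above can be sketched in a few lines of pure Python, using Euclidean norms and a hypothetical margin α = 0.2 (the vectors stand in for network embeddings):

```python
def triplet_loss(anchor, positive, negative, alpha=0.2):
    """max(||a - p|| - ||a - n|| + alpha, 0) with Euclidean norms."""
    dist = lambda u, v: sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    return max(dist(anchor, positive) - dist(anchor, negative) + alpha, 0.0)

a = [0.0, 0.0]
p = [0.1, 0.0]   # close to the anchor
n = [1.0, 1.0]   # far from the anchor
print(triplet_loss(a, p, n))  # 0.0: the margin is already satisfied
```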
Comparison and extensions.
In computer vision tasks such as re-identification, a prevailing belief has been that the triplet loss is inferior to using surrogate losses (i.e., typical classification losses) followed by separate metric learning steps. Recent work showed that for models trained from scratch, as well as pretrained models, a special version of triplet loss doing end-to-end deep metric learning outperforms most other published methods as of 2017.
Additionally, triplet loss has been extended to simultaneously maintain a series of distance orders by optimizing a continuous "relevance degree" with a chain (i.e., "ladder") of distance inequalities. This leads to the "Ladder Loss", which has been demonstrated to offer performance enhancements of visual-semantic embedding in learning to rank tasks.
In Natural Language Processing, triplet loss is one of the loss functions considered for BERT fine-tuning in the SBERT architecture.
Other extensions involve specifying multiple negatives (multiple negatives ranking loss).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\mathcal{L} \\left ( A, P, N \\right ) =\\operatorname{max} \\left (\n {\\| \\operatorname{f} \\left ( A \\right ) - \\operatorname{f} \\left ( P \\right ) \\|}_2\n - {\\| \\operatorname{f} \\left ( A \\right ) - \\operatorname{f} \\left ( N \\right ) \\|}_2\n + \\alpha, 0 \\right )"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "P"
},
{
"math_id": 4,
"text": "N"
},
{
"math_id": 5,
"text": "\\operatorname{f}"
},
{
"math_id": 6,
"text": "\\mathcal{J} = \\sum_{i=1}^{{}M} \\mathcal{L} \\left ( A ^{(i)}, P ^{(i)}, N ^{(i)} \\right ) "
}
] |
https://en.wikipedia.org/wiki?curid=59317908
|
59318599
|
Windisch–Kolbach unit
|
Unit of measurement
°WK or degrees Windisch-Kolbach is a unit for measuring the diastatic power of malt, named after the German brewer Wilhelm Windisch and the Luxembourg brewer Paul Kolbach. It is a common unit in beer brewing (especially in Europe) that measures the ability of enzymes in malt to reduce starch to sugar (maltose). It is defined as the amount of maltose formed by 100 g of malt in 30 min at 20 °C. Degrees Lintner is a unit used in the United States for the same purpose. The conversion is as follows:
formula_0
formula_1.
334 °WK corresponds to 3.014×10−7 katal.
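The two conversion formulas can be expressed directly. A small sketch; note that 334 °WK maps to exactly 100 °Lintner:

```python
def wk_to_lintner(wk):
    """Convert degrees Windisch-Kolbach to degrees Lintner."""
    return (wk + 16) / 3.5

def lintner_to_wk(lintner):
    """Convert degrees Lintner to degrees Windisch-Kolbach."""
    return 3.5 * lintner - 16

print(wk_to_lintner(334))    # 100.0
print(lintner_to_wk(100.0))  # 334.0
```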
|
[
{
"math_id": 0,
"text": "{}^\\circ\\mbox{Lintner} = \\frac{{}^\\circ\\mbox{WK} + 16}{3.5}"
},
{
"math_id": 1,
"text": "{}^\\circ\\mbox{WK} = \\left ( 3.5 \\cdot {}^\\circ\\mbox{Lintner} \\right ) - 16"
}
] |
https://en.wikipedia.org/wiki?curid=59318599
|
593255
|
Venturi effect
|
Reduced pressure caused by a flow restriction in a tube or pipe
The Venturi effect is the reduction in fluid pressure that results when a moving fluid speeds up as it flows through a constricted section (or choke) of a pipe. The Venturi effect is named after its discoverer, the 18th-century Italian physicist Giovanni Battista Venturi.
The effect has various engineering applications, as the reduction in pressure inside the constriction can be used both for measuring the fluid flow and for moving other fluids (e.g. in a vacuum ejector).
Background.
In inviscid fluid dynamics, an incompressible fluid's velocity must "increase" as it passes through a constriction in accord with the principle of mass continuity, while its static pressure must "decrease" in accord with the principle of conservation of mechanical energy (Bernoulli's principle) or according to the Euler equations. Thus, any gain in kinetic energy a fluid may attain by its increased velocity through a constriction is balanced by a drop in pressure because of its loss in potential energy.
By measuring pressure, the flow rate can be determined, as in various flow measurement devices such as Venturi meters, Venturi nozzles and orifice plates.
Referring to the adjacent diagram, using Bernoulli's equation in the special case of steady, incompressible, inviscid flows (such as the flow of water or other liquid, or low-speed flow of gas) along a streamline, the theoretical pressure drop at the constriction is given by
formula_0
where formula_1 is the density of the fluid, formula_2 is the (slower) fluid velocity where the pipe is wider, and formula_3 is the (faster) fluid velocity where the pipe is narrower (as seen in the figure).
Choked flow.
The limiting case of the Venturi effect is when a fluid reaches the state of choked flow, where the fluid velocity approaches the local speed of sound. When a fluid system is in a state of choked flow, a further decrease in the downstream pressure environment will not lead to an increase in velocity, unless the fluid is compressed.
The mass flow rate for a compressible fluid will increase with increased upstream pressure, which will increase the density of the fluid through the constriction (though the velocity will remain constant). This is the principle of operation of a de Laval nozzle. Increasing source temperature will also increase the local sonic velocity, thus allowing increased mass flow rate, but only if the nozzle area is also increased to compensate for the resulting decrease in density.
Expansion of the section.
The Bernoulli equation is invertible, and pressure should rise when a fluid slows down. Nevertheless, if there is an expansion of the tube section, turbulence will appear, and the theorem will not hold. In all experimental Venturi tubes, the pressure in the entrance is compared to the pressure in the middle section; the output section is never compared with them.
Experimental apparatus.
Venturi tubes.
The simplest apparatus is a tubular setup known as a Venturi tube or simply a Venturi (plural: "Venturis" or occasionally "Venturies"). Fluid flows through a length of pipe of varying diameter. To avoid undue aerodynamic drag, a Venturi tube typically has an entry cone of 30 degrees and an exit cone of 5 degrees.
Venturi tubes are often used in processes where permanent pressure loss is not tolerable and where maximum accuracy is needed in case of highly viscous liquids.
Orifice plate.
Venturi tubes are more expensive to construct than simple orifice plates, and both function on the same basic principle. However, for any given differential pressure, orifice plates cause significantly more permanent energy loss.
Instrumentation and measurement.
Both Venturi tubes and orifice plates are used in industrial applications and in scientific laboratories for measuring the flow rate of liquids.
Flow rate.
A Venturi can be used to measure the volumetric flow rate, formula_4, using Bernoulli's principle.
Since
formula_5
then
formula_6
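A small sketch of the flow-rate formula with hypothetical water-meter dimensions (all values SI and invented for illustration); the final line checks consistency with Bernoulli's equation:

```python
import math

def venturi_flow(p1, p2, A1, A2, rho):
    """Volumetric flow rate Q from a measured pressure drop (SI units)."""
    return A2 * ((2.0 / rho) * (p1 - p2) / (1.0 - (A2 / A1) ** 2)) ** 0.5

A1 = math.pi * 0.05 ** 2   # 10 cm diameter pipe cross-section, m^2
A2 = math.pi * 0.025 ** 2  # 5 cm diameter throat cross-section, m^2
rho = 1000.0               # water, kg/m^3
Q = venturi_flow(102_000.0, 100_000.0, A1, A2, rho)  # 2 kPa pressure drop
v1, v2 = Q / A1, Q / A2    # velocities from continuity: Q = v1 A1 = v2 A2
# Bernoulli check: p1 - p2 should equal rho/2 * (v2^2 - v1^2)
print(Q, rho / 2 * (v2 ** 2 - v1 ** 2))
```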
A Venturi can also be used to mix a liquid with a gas. If a pump forces the liquid through a tube connected to a system consisting of a Venturi to increase the liquid speed (the diameter decreases), a short piece of tube with a small hole in it, and lastly a Venturi that decreases speed (so the pipe gets wider again), the gas will be sucked in through the small hole because of changes in pressure. At the end of the system, a mixture of liquid and gas will appear. See aspirator and pressure head for discussion of this type of siphon.
Differential pressure.
As fluid flows through a Venturi, the expansion and compression of the fluids cause the pressure inside the Venturi to change. This principle can be used in metrology for gauges calibrated for differential pressures. This type of pressure measurement may be more convenient, for example, to measure fuel or combustion pressures in jet or rocket engines.
The first large-scale Venturi meters to measure liquid flows were developed by Clemens Herschel, who used them to measure small and large flows of water and wastewater beginning at the end of the 19th century. While working for the Holyoke Water Power Company, Herschel developed the means for measuring these flows to determine the water power consumption of different mills on the Holyoke Canal System. He began developing the device in 1886, and two years later he described his invention of the Venturi meter to William Unwin in a letter dated June 5, 1888.
Compensation for temperature, pressure, and mass.
Fundamentally, pressure-based meters measure kinetic energy density. Bernoulli's equation (used above) relates this to mass density and volumetric flow:
formula_7
where constant terms are absorbed into "k". Using the definitions of density (formula_8), molar concentration (formula_9), and molar mass (formula_10), one can also derive mass flow or molar flow (i.e. standard volume flow):
formula_11
However, measurements outside the design point must compensate for the effects of temperature, pressure, and molar mass on density and concentration. The ideal gas law is used to relate actual values to design values:
formula_12
formula_13
Substituting these two relations into the pressure-flow equations above yields the fully compensated flows:
formula_14
"Q", "m", or "n" are easily isolated by dividing and taking the square root. Note that pressure, temperature, and mass compensation is required for every flow, regardless of the end units or dimensions. The following relations also hold:
formula_15
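A minimal sketch of applying these compensation relations in code, assuming hypothetical meter constants (ΔP_max, Q_max) and reference conditions not given in the text: Q is isolated from the fully compensated pressure-flow equation as described above.

```python
import math

def compensated_volume_flow(dp, dp_max, q_max, T, P, M, T_ref, P_ref, M_ref):
    """Isolate Q from  dP = dP_max * (M/M0)(P/P0)/(T/T0) * (Q/Q_max)^2,
    the fully compensated relation above (ideal-gas assumption)."""
    correction = (T / T_ref) / ((M / M_ref) * (P / P_ref))
    return q_max * math.sqrt(dp / dp_max * correction)

# At design conditions the correction factor is 1, so dp = dp_max gives q_max:
q_design = compensated_volume_flow(dp=250.0, dp_max=250.0, q_max=100.0,
                                   T=293.15, P=101325.0, M=0.0289,
                                   T_ref=293.15, P_ref=101325.0, M_ref=0.0289)
```

At a higher actual temperature the gas is less dense, so the same differential pressure corresponds to a larger volumetric flow, which the correction factor captures.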
Examples.
The Venturi effect may be observed or used in the following:
References.
<templatestyles src="Reflist/styles.css" />
External links.
[[Category:Fluid dynamics]]
|
[
{
"math_id": 0,
"text": "p_1 - p_2 = \\frac{\\rho}{2} (v_2^2 - v_1^2),"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "v_1"
},
{
"math_id": 3,
"text": "v_2"
},
{
"math_id": 4,
"text": "\\scriptstyle Q"
},
{
"math_id": 5,
"text": "\\begin{align}\n Q &= v_1 A_1 = v_2 A_2 \\\\[3pt]\n p_1 - p_2 &= \\frac{\\rho}{2}\\left(v_2^2 - v_1^2\\right)\n\\end{align}"
},
{
"math_id": 6,
"text": "\n Q = A_1 \\sqrt{\\frac{2}{\\rho} \\cdot \\frac{p_1 - p_2}{\\left(\\frac{A_1}{A_2}\\right)^2 - 1}} =\n A_2 \\sqrt{\\frac{2}{\\rho} \\cdot \\frac{p_1 - p_2}{1 - \\left(\\frac{A_2}{A_1}\\right)^2}}\n"
},
{
"math_id": 7,
"text": "\\Delta P = \\frac{1}{2} \\rho (v_2^2 - v_1^2) = \\frac{1}{2} \\rho \\left(\\left(\\frac{A_1}{A_2}\\right)^2-1\\right) v_1^2 = \\frac{1}{2} \\rho \\left(\\frac{1}{A_2^2}-\\frac{1}{A_1^2}\\right) Q^2 = k\\, \\rho\\, Q^2"
},
{
"math_id": 8,
"text": "m=\\rho V"
},
{
"math_id": 9,
"text": "n=C V"
},
{
"math_id": 10,
"text": "m=M n"
},
{
"math_id": 11,
"text": "\\begin{align}\\Delta P &= k\\, \\rho\\, Q^2 \\\\\n &= k \\frac{1}{\\rho}\\, \\dot{m}^2 \\\\\n &= k \\frac{\\rho}{C^2}\\, \\dot{n}^2 = k \\frac{M}{C}\\, \\dot{n}^2.\n\\end{align}"
},
{
"math_id": 12,
"text": "C = \\frac{P}{RT} = \\frac{\\left(\\frac{P}{P^\\ominus}\\right)}{\\left(\\frac{T}{T^\\ominus}\\right)} C^\\ominus"
},
{
"math_id": 13,
"text": "\\rho = \\frac{MP}{RT} = \\frac{\\left(\\frac{M}{M^\\ominus} \\frac{P}{P^\\ominus}\\right)}{\\left(\\frac{T}{T^\\ominus}\\right)} \\rho^\\ominus."
},
{
"math_id": 14,
"text": "\\begin{align}\\Delta P &= k \\frac{\\left(\\frac{M}{M^\\ominus} \\frac{P}{P^\\ominus}\\right)}{\\left(\\frac{T}{T^\\ominus}\\right)} \\rho^\\ominus\\, Q^2\n &= \\Delta P_{\\max} \\frac{\\left(\\frac{M}{M^\\ominus} \\frac{P}{P^\\ominus}\\right)}{\\left(\\frac{T}{T^\\ominus}\\right)} \\left(\\frac Q{Q_{\\max}}\\right)^2\\\\\n &= k \\frac{\\left(\\frac{T}{T^\\ominus}\\right)}{\\left(\\frac{M}{M^\\ominus} \\frac{P}{P^\\ominus}\\right) \\rho^\\ominus} \\dot{m}^2\n &= \\Delta P_{\\max} \\frac{\\left(\\frac{T}{T^\\ominus}\\right)}{\\left(\\frac{M}{M^\\ominus} \\frac{P}{P^\\ominus}\\right)} \\left(\\frac{\\dot{m}}{\\dot{m}_{\\max}}\\right)^2\\\\\n &= k \\frac{M \\left(\\frac{T}{T^\\ominus}\\right)}{\\left(\\frac{P}{P^\\ominus}\\right) C^\\ominus} \\dot{n}^2\n &= \\Delta P_{\\max} \\frac{\\left(\\frac{M}{M^\\ominus}\\frac{T}{T^\\ominus}\\right)}{\\left(\\frac{P}{P^\\ominus}\\right)} \\left(\\frac{\\dot{n}}{\\dot{n}_{\\max}}\\right)^2.\n\\end{align}"
},
{
"math_id": 15,
"text": "\\begin{align}\\frac{k}{\\Delta P_{\\max}} &= \\frac{1}{\\rho^\\ominus Q_{\\max}^2}\\\\\n &= \\frac{\\rho^\\ominus}{\\dot{m}_{\\max}^2}\\\\\n &= \\frac{{C^\\ominus}^2}{\\rho^\\ominus\\dot{n}_{\\max}^2} = \\frac{C^\\ominus}{M^\\ominus\\dot{n}_{\\max}^2}.\n\\end{align}"
}
] |
https://en.wikipedia.org/wiki?curid=593255
|
59327836
|
Bray–Moss–Libby model
|
Closure model for a scalar field
In premixed turbulent combustion, the Bray–Moss–Libby (BML) model is a closure model for a scalar field, built on the assumption that the reaction sheet is infinitely thin compared with the turbulent scales, so that the scalar can be found either in the burnt-gas state or the unburnt-gas state. The model is named after Kenneth Bray, J. B. Moss and Paul A. Libby.
Mathematical description.
Let us define a non-dimensional scalar variable or progress variable formula_0 such that formula_1 at the unburnt mixture and formula_2 at the burnt gas side. For example, if formula_3 is the unburnt gas temperature and formula_4 is the burnt gas temperature, then the non-dimensional temperature can be defined as
formula_5
The progress variable could be any scalar, i.e., we could have chosen the concentration of a reactant as a progress variable. Since the reaction sheet is infinitely thin, at any point in the flow field, we can find the value of formula_0 to be either unity or zero. The transition from zero to unity occurs instantaneously at the reaction sheet. Therefore, the probability density function for the progress variable is given by
formula_6
where formula_7 and formula_8 are the probability of finding unburnt and burnt mixture, respectively and formula_9 is the Dirac delta function. By definition, the normalization condition leads to
formula_10
It can be seen that the mean progress variable,
formula_11
is nothing but the probability of finding burnt gas at location formula_12 and at the time formula_13. The density function is completely described by the mean progress variable, as we can write (suppressing the variables formula_14)
formula_15
Assuming constant pressure and constant molecular weight, ideal gas law can be shown to reduce to
formula_16
where formula_17 is the heat release parameter. Using the above relation, the mean density can be calculated as follows
formula_18
The Favre averaging of the progress variable is given by
formula_19
Combining the two expressions, we find
formula_20
and hence
formula_21
The density average is
formula_22
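The algebra above can be checked numerically. This short Python sketch (the values of the heat release parameter τ and the Favre-averaged progress variable are assumed example values) recovers α and β from the Favre average and verifies the normalization and mean-density relations:

```python
def bml_weights(c_favre, tau):
    """Probabilities of unburnt (alpha) and burnt (beta) gas from the
    Favre-averaged progress variable, per the BML relations above."""
    beta = (1.0 + tau) * c_favre / (1.0 + tau * c_favre)
    alpha = (1.0 - c_favre) / (1.0 + tau * c_favre)
    return alpha, beta

tau, c_favre = 5.0, 0.3                  # example heat-release parameter and Favre average
alpha, beta = bml_weights(c_favre, tau)
rho_ratio = 1.0 / (1.0 + tau * c_favre)  # mean density / unburnt density
```

The checks below confirm α + β = 1, that the mean density equals 1 − β + β/(1 + τ) times the unburnt density, and that the Favre-averaging relation returns the original value of the progress variable.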
General density function.
If the reaction sheet is not assumed to be thin, then there is a chance of finding a value of formula_0 between zero and unity, although in reality the reaction sheet is usually thin compared with the turbulent scales. Nevertheless, the general form of the density function can be written as
formula_23
where formula_24 is the probability of finding the progress variable which is undergoing reaction (where transition from zero to unity is effected). Here, we have
formula_25
where formula_26 is negligible in most regions.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "c"
},
{
"math_id": 1,
"text": "c=0"
},
{
"math_id": 2,
"text": "c=1"
},
{
"math_id": 3,
"text": "T_u"
},
{
"math_id": 4,
"text": "T_b"
},
{
"math_id": 5,
"text": "c=\\frac{T-T_u}{T_b-T_u}."
},
{
"math_id": 6,
"text": "P(c,\\mathbf{x},t) = \\alpha(\\mathbf{x},t)\\delta(c) + \\beta(\\mathbf{x},t)\\delta(1-c)"
},
{
"math_id": 7,
"text": "\\alpha(\\mathbf{x},t)"
},
{
"math_id": 8,
"text": "\\beta(\\mathbf{x},t)"
},
{
"math_id": 9,
"text": "\\delta"
},
{
"math_id": 10,
"text": "\\alpha(\\mathbf{x},t)+\\beta(\\mathbf{x},t)=1."
},
{
"math_id": 11,
"text": "\\bar{c}(\\mathbf{x},t) = \\int_0^1 c P(c,\\mathbf{x},t)\\, dc = \\beta(\\mathbf{x},t)"
},
{
"math_id": 12,
"text": "\\mathbf{x}"
},
{
"math_id": 13,
"text": "t"
},
{
"math_id": 14,
"text": "\\mathbf{x},t"
},
{
"math_id": 15,
"text": "P(c) = (1-\\bar c)\\delta(c) + \\bar c\\delta(1-c)."
},
{
"math_id": 16,
"text": "\\frac{\\rho}{\\rho_u}=\\frac{T_u}{T}=\\frac{1}{1+\\tau c}"
},
{
"math_id": 17,
"text": "\\tau"
},
{
"math_id": 18,
"text": "\\frac{\\bar{\\rho}}{\\rho_u}=1-\\beta + \\frac{\\beta}{1+\\tau}."
},
{
"math_id": 19,
"text": "\\tilde c \\equiv \\frac{\\overline{\\rho c}}{\\bar\\rho} = \\frac{\\rho_u}{\\bar\\rho}\\frac{\\beta}{1+\\tau}."
},
{
"math_id": 20,
"text": "\\bar{c}=\\beta = \\frac{(1+\\tau )\\tilde c}{1+\\tau \\tilde c}"
},
{
"math_id": 21,
"text": "\\alpha = \\frac{1-\\tilde c}{1+\\tau \\tilde c}."
},
{
"math_id": 22,
"text": "\\bar\\rho = \\frac{\\rho_u}{1+\\tau \\tilde c}."
},
{
"math_id": 23,
"text": "P(c,\\mathbf{x},t) = \\alpha(\\mathbf{x},t)\\delta(c) + \\beta(\\mathbf{x},t)\\delta(1-c) + \\gamma(\\mathbf{x},t) f(c,\\mathbf{x},t)"
},
{
"math_id": 24,
"text": "\\gamma(\\mathbf{x},t)"
},
{
"math_id": 25,
"text": "\\alpha(\\mathbf{x},t)+\\beta(\\mathbf{x},t)+\\gamma(\\mathbf{x},t) = 1"
},
{
"math_id": 26,
"text": "\\gamma"
}
] |
https://en.wikipedia.org/wiki?curid=59327836
|
59333989
|
Polynomial creativity
|
In computational complexity theory, polynomial creativity is a theory analogous to the theory of creative sets in recursion theory and mathematical logic. The formula_0-creative sets are a family of formal languages in the complexity class NP whose complements certifiably do not have formula_1-time nondeterministic recognition algorithms. It is generally believed that NP is unequal to co-NP (the class of complements of languages in NP), which would imply more strongly that the complements of all NP-complete languages do not have polynomial-time nondeterministic recognition algorithms. However, for the formula_0-creative sets, the lack of a (more restricted) recognition algorithm can be proven, whereas a proof that NP ≠ co-NP remains elusive.
The formula_0-creative sets are conjectured to form counterexamples to the Berman–Hartmanis conjecture on isomorphism of NP-complete sets. It is NP-complete to test whether an input string belongs to any one of these languages, but no polynomial time isomorphisms between all such languages and other NP-complete languages are known. Polynomial creativity and the formula_0-creative sets were introduced in 1985 by Deborah Joseph and Paul Young, following earlier attempts to define polynomial analogues for creative sets by Ko and Moore.
Definition.
Intuitively, a set is creative when there is a polynomial-time algorithm that creates a counterexample for any candidate fast nondeterministic recognition algorithm for its complement.
The classes of fast nondeterministic recognition algorithms are formalized by Joseph and Young as the sets formula_2 of nondeterministic Turing machine programs formula_3 that, for the inputs formula_4 they accept, have an accepting path with a number of steps at most formula_5. This notation should be distinguished from that for the complexity class NP. The complexity class NP is a set of formal languages, while formula_2 is instead a set of programs that accept some of these languages. Every language in NP is recognized by a program in one of the sets formula_2, with a parameter formula_0 that is (up to the factor formula_6 in the bound on the number of steps) the exponent in the polynomial running time of the program.
According to Joseph and Young's theory, a language formula_7 in NP is formula_0-creative if it is possible to find a witness showing that the complement of formula_7 is not recognized by any program in formula_2.
More formally, there should exist a polynomially computable function formula_8 that maps programs in this class to inputs on which they fail. When given a nondeterministic program formula_3 in formula_2, the function formula_8 should produce an input string formula_9 that either belongs to formula_7 and causes the program to accept formula_4, or does not belong to formula_7 and causes the program to reject formula_4. The function formula_8 is called a "productive function" for formula_7. If this productive function exists, the given program does not produce the behavior on input formula_4 that would be expected of a program for recognizing the complement of formula_7.
Existence.
Joseph and Young construct creative languages by reversing the definitions of these languages: rather than starting with a language and trying to find a productive function for it, they start with a function and construct a language for which it is the productive function. They define a polynomial-time function formula_8 to be "polynomially honest" if its running time is at most a polynomial function of its output length. This disallows, for instance, functions that take polynomial time but produce outputs of less than polynomial length. As they show, every one-to-one polynomially-honest function formula_8 is the productive function for a formula_0-creative language formula_10.
Given formula_8, Joseph and Young define formula_11 to be the set of values formula_12 for nondeterministic programs formula_3 that have an accepting path for formula_12 using at most formula_13 steps. This number of steps (on that input) would be consistent with formula_3 belonging to formula_14. Then formula_10 belongs to NP: given an input formula_12 one can nondeterministically guess both formula_3 and its accepting path, and then verify that the input equals formula_12 and that the path is valid for formula_3.
Language formula_11 is formula_0-creative, with formula_8 as its productive function, because every program formula_3 in formula_2 is mapped by formula_8 to a value formula_12 that is either accepted by formula_3 (and therefore also belongs to formula_10) or rejected by formula_3 (and therefore also does not belong to formula_10).
Completeness.
Every formula_0-creative set with a polynomially honest productive function is NP-complete. For any other language formula_15 in NP, by the definition of NP, one can translate any input formula_4 for formula_15 into a nondeterministic program formula_16 that ignores its own input and instead searches for a witness for formula_4, accepting its input if it finds one and rejecting otherwise. The length of formula_16 is polynomial in the size of formula_4 and a padding argument can be used to make formula_16 long enough (but still polynomial) for its running time to qualify for membership in formula_2. Let formula_8 be the productive function used to define a given formula_0-creative set formula_7, and let formula_17 be the translation from formula_4 to formula_16. Then the composition of formula_17 with formula_8 maps inputs of formula_15 into counterexamples for the algorithms that test those inputs. This composition maps inputs that belong to formula_15 into strings that belong to formula_7, and inputs that do not belong to formula_15 into strings that do not belong to formula_7. Thus, it is a polynomial-time many-one reduction from formula_15 to formula_7. Since formula_7 is (by definition) in NP, and every other language in NP has a reduction to it, it must be NP-complete.
It is also possible to prove more strongly that there exists an invertible parsimonious reduction to the formula_0-creative set.
Application to the Berman–Hartmanis conjecture.
The Berman–Hartmanis conjecture states that there exists a polynomial-time isomorphism between any two NP-complete sets: a function that maps yes-instances of one such set one-to-one into yes-instances of the other, takes polynomial time, and whose inverse function can also be computed in polynomial time. It was formulated by Leonard C. Berman and Juris Hartmanis in 1977, based on the observation that all NP-complete sets known at that time were isomorphic.
An equivalent formulation of the conjecture is that every NP-complete set is "paddable". This means that there exists a polynomial-time and polynomial-time-invertible one-to-one transformation formula_18 from yes-instances formula_4 to larger yes-instances that encode the "irrelevant" information formula_19.
However, it is unknown how to find such a padding transformation for a formula_0-creative language whose productive function is not polynomial-time-invertible. Therefore, if one-way permutations exist, the formula_0-creative languages having these permutations as their productive functions provide candidate counterexamples to the Berman–Hartmanis conjecture.
The (unproven) Joseph–Young conjecture formalizes this reasoning. The conjecture states that there exists a one-way length-increasing function formula_8 such that formula_10 is not paddable. Alan Selman observed that this would imply a simpler conjecture, the "encrypted complete set conjecture": there exists a one-way function formula_8 such that formula_20 (the set of yes-instances for the satisfiability problem) and formula_21 are non-isomorphic.
There exists an oracle relative to which one-way functions exist, both of these conjectures are false, and the Berman–Hartmanis conjecture is true.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "O(n^k)"
},
{
"math_id": 2,
"text": "\\mathrm{NP}^{(k)}"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "|p|(|x|^k+1)"
},
{
"math_id": 6,
"text": "|p|"
},
{
"math_id": 7,
"text": "L"
},
{
"math_id": 8,
"text": "f"
},
{
"math_id": 9,
"text": "x=f(p)"
},
{
"math_id": 10,
"text": "K_f^k"
},
{
"math_id": 11,
"text": "K_f^k"
},
{
"math_id": 12,
"text": "f(p)"
},
{
"math_id": 13,
"text": "|p|(|f(p)|^k+1)"
},
{
"math_id": 14,
"text": "\\mathrm{NP}^{(k)}"
},
{
"math_id": 15,
"text": "X"
},
{
"math_id": 16,
"text": "p_x"
},
{
"math_id": 17,
"text": "g"
},
{
"math_id": 18,
"text": "h(x,y)"
},
{
"math_id": 19,
"text": "y"
},
{
"math_id": 20,
"text": "\\mathrm{SAT}"
},
{
"math_id": 21,
"text": "f(\\mathrm{SAT})"
}
] |
https://en.wikipedia.org/wiki?curid=59333989
|
5933798
|
Hyper-Kamiokande
|
Neutrino observatory in Japan
Hyper-Kamiokande (also called Hyper-K or HK) is a neutrino observatory and experiment under construction in Hida, Gifu and in Tokai, Ibaraki in Japan. It is conducted by the University of Tokyo and the High Energy Accelerator Research Organization (KEK), in collaboration with institutes from over 20 countries across six continents. As a successor of the Super-Kamiokande (also Super-K or SK) and T2K experiments, it is designed to search for proton decay and detect neutrinos from natural sources such as the Earth, the atmosphere, the Sun and the cosmos, as well as to study neutrino oscillations of the man-made accelerator neutrino beam. The beginning of data-taking is planned for 2027.
The Hyper-Kamiokande experiment facility will be located in two places:
Physics program.
Accelerator and atmospheric neutrino oscillations.
Neutrino oscillations are a quantum mechanical phenomenon in which neutrinos change their flavour (between the flavour states ν_e, ν_μ and ν_τ) while propagating, caused by the fact that the neutrino flavour states are mixtures of the neutrino mass states (ν₁, ν₂, ν₃, with masses m₁, m₂, m₃, respectively). The oscillation probabilities depend on six theoretical parameters:
and two parameters which are chosen for a particular experiment:
Continuing the studies done by the T2K experiment, the HK far detector will measure the energy spectra of electron and muon neutrinos in the beam (produced at J-PARC as an almost pure muon neutrino beam) and compare them with the expectation in the case of no oscillations, which is initially calculated based on neutrino flux and interaction models and improved by measurements performed by the near and intermediate detectors. For the HK/T2K neutrino beam peak energy (600 MeV) and the J-PARC – HK/SK detector distance (295 km), this corresponds to the first oscillation maximum for oscillations driven by Δm²₃₂. The J-PARC neutrino beam will run in neutrino- and antineutrino-enhanced modes separately, meaning that measurements in each beam mode will provide information about the muon (anti)neutrino survival probabilities P(ν_μ → ν_μ) and P(ν̄_μ → ν̄_μ) and the electron (anti)neutrino appearance probabilities P(ν_μ → ν_e) and P(ν̄_μ → ν̄_e), where P(ν_α → ν_β) is the probability that a neutrino originally of flavour α will be observed later as having flavour β.
Comparison of the appearance probabilities for neutrinos and antineutrinos (P(ν_μ → ν_e) versus P(ν̄_μ → ν̄_e)) allows measurement of the δCP phase. δCP ranges from −π to +π (from −180° to +180°), and 0 and ±π correspond to CP symmetry conservation. After 10 years of data taking, HK is expected to confirm at the 5σ confidence level or better whether CP symmetry is violated in neutrino oscillations, for 57% of possible δCP values. CP violation is one of the conditions necessary to produce the excess of matter over antimatter in the early universe, which now forms our matter-built universe. Accelerator neutrinos will also be used to enhance the precision of the other oscillation parameters, |Δm²₃₂|, θ₂₃ and θ₁₃, as well as for neutrino interaction studies.
In order to determine the neutrino mass ordering (whether the ν₃ mass eigenstate is lighter or heavier than both ν₁ and ν₂), or equivalently the unknown sign of the Δm²₃₂ parameter, neutrino oscillations must be observed in matter. With HK beam neutrinos (295 km, 600 MeV), the matter effect is small. In addition to beam neutrinos, the HK experiment studies atmospheric neutrinos, created by cosmic rays colliding with the Earth's atmosphere, producing neutrinos and other byproducts. These neutrinos are produced at all points on the globe, meaning that HK has access to neutrinos that have travelled through a wide range of distances through matter (from a few hundred metres up to the diameter of the Earth). These samples of neutrinos can be used to determine the neutrino mass ordering.
Ultimately, a combined beam neutrino and atmospheric neutrino analysis will provide the most sensitivity to the oscillation parameters δCP, |Δm²₃₂|, sgn Δm²₃₂, θ₂₃ and θ₁₃.
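To illustrate why the 295 km baseline and 600 MeV beam peak energy sit near the first oscillation maximum, here is a minimal two-flavour sketch in Python. It uses the standard approximate survival-probability formula with illustrative parameter values (Δm² ≈ 2.5×10⁻³ eV², maximal mixing); this is a simplification, not HK's full three-flavour analysis:

```python
import math

def muon_survival(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavour survival probability P(nu_mu -> nu_mu).

    Standard approximation; the 1.27 factor comes from unit
    conversion for eV^2 * km / GeV.
    """
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# HK/T2K baseline and beam peak energy
p_surv = muon_survival(L_km=295.0, E_GeV=0.6, sin2_2theta=1.0, dm2_eV2=2.5e-3)
phase = 1.27 * 2.5e-3 * 295.0 / 0.6  # close to pi/2: first oscillation maximum
```

With these values the oscillation phase is within about 1% of π/2, so the muon neutrino survival probability is near its minimum, which is why this baseline-to-energy ratio is chosen.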
Neutrino Astronomy and Geoneutrinos.
Core-collapse supernova explosions produce great quantities of neutrinos. For a supernova in the Andromeda galaxy, 10 to 16 neutrino events are expected in the HK far detector. For a galactic supernova at a distance of 10 kpc, about 50,000 to 94,000 neutrino interactions are expected during a few tens of seconds. For Betelgeuse at a distance of 0.2 kpc, this rate could reach up to 10⁸ interactions per second; such a high event rate was taken into account in the design of the detector electronics and data acquisition (DAQ) system, so that no data would be lost. Time profiles of the number of events registered in HK and their mean energy would enable testing models of the explosion. Neutrino directional information in the HK far detector can provide an early warning for the electromagnetic supernova observation and can be used in other multi-messenger observations.
Neutrinos cumulatively produced by supernova explosions throughout the history of the universe are called supernova relic neutrinos (SRN) or the diffuse supernova neutrino background (DSNB), and they carry information about the star formation history. Because of their low flux (a few tens per cm² per second), they have not yet been discovered. With ten years of data taking, HK is expected to detect about 40 SRN events in the energy range 16–30 MeV.
For solar neutrinos, the HK experiment goals are:
Geoneutrinos are produced in decays of radionuclides inside the Earth. Hyper-Kamiokande geoneutrino studies will help assess the Earth's core chemical composition, which is connected with the generation of the geomagnetic field.
Proton Decay.
The decay of a free proton into lighter subatomic particles has never been observed, but it is predicted by some grand unified theories (GUTs) and results from baryon number (B) violation. B violation is one of the conditions needed to explain the predominance of matter over antimatter in the universe. The main channels studied by HK are p → e⁺ + π⁰, which is favoured by many GUT models, and p → ν̄ + K⁺, predicted by theories including supersymmetry.
After ten years of data taking (in case no decay is observed), HK is expected to increase the lower limit of the proton mean lifetime from 1.6×10³⁴ to 6.3×10³⁴ years for its most sensitive decay channel (p → e⁺ + π⁰) and from 0.7×10³⁴ to 2.0×10³⁴ years for the p → ν̄ + K⁺ channel.
Dark Matter.
Dark matter is a hypothetical, non-luminous form of matter proposed to explain numerous astronomical observations suggesting the existence of additional invisible mass in galaxies. If the dark matter particles interact weakly, they may produce neutrinos through annihilation or decay. Those neutrinos could be visible in the HK detector as an excess of neutrinos from the direction of large gravitational potentials such as the galactic centre, the Sun or the Earth, over an isotropic atmospheric neutrino background.
Experiment Description.
The Hyper-Kamiokande experiment consists of an accelerator neutrino beamline, a set of near detectors, the intermediate detector and the far detector (also called Hyper-Kamiokande).
The far detector by itself will be used for proton decay searches and studies of neutrinos from natural sources. All the above elements will serve for the accelerator neutrino oscillation studies. Before launching the HK experiment, the T2K experiment will finish data taking and HK will take over its neutrino beamline and set of near detectors, while the intermediate and the far detectors have to be constructed anew.
Intermediate Water Cherenkov Detector.
The Intermediate Water Cherenkov Detector (IWCD) will be located at a distance of around from the neutrino production place. It will be a cylinder filled with water of diameter and height with a tall structure instrumented with around 400 multi-PMT modules (mPMTs), each consisting of nineteen diameter PhotoMultiplier Tubes (PMTs) encapsulated in a water-proof vessel. The structure will be moved in a vertical direction by a crane system, providing measurements of neutrino interactions at different off-axis angles (angles to the neutrino beam centre), spanning from 1° at the bottom to 4° at the top, and thus for different neutrino energy spectra.
By combining the results from different off-axis angles, it is possible to extract results for a nearly monoenergetic neutrino spectrum without relying on theoretical models of neutrino interactions to reconstruct the neutrino energy. Using the same type of detector as the far detector, with almost the same angular and momentum acceptance, allows comparison of results from these two detectors without relying on detector response simulations. These two facts, independence from the neutrino interaction and detector response models, will enable HK to minimise systematic error in the oscillation analysis. Additional advantages of this detector design are the possibility of searching for sterile oscillation patterns at different off-axis angles and of obtaining a cleaner sample of electron neutrino interactions, whose fraction is larger at larger off-axis angles.
Hyper-Kamiokande Far Detector.
The Hyper-Kamiokande detector will be built under the peak of Nijuugo Mountain in the Tochibora mine, south of the Super-Kamiokande (SK) detector. Both detectors will be at the same off-axis angle (2.5°) to the neutrino beam centre and at the same distance from the beam production place in J-PARC.
HK will be a water Cherenkov detector, 5 times larger (258 kton of water) than the SK detector. It will be a cylindrical tank of diameter and height. The tank volume will be divided into the Inner Detector (ID) and the Outer Detector (OD) by a 60 cm-wide inactive cylindrical structure, with its outer edge positioned 1 meter away from vertical and 2 meters away from horizontal tank walls. The structure will optically separate ID from OD and will hold PhotoMultiplier Tubes (PMTs) looking both inwards to the ID and outwards to the OD. In the ID, there will be at least 20000 diameter PhotoMultiplier Tubes (PMT) of R12860 type by Hamamatsu Photonics and approximately 800 multi-PMT modules (mPMTs). Each mPMT module consists of nineteen diameter photomultiplier tubes encapsulated in a water-proof vessel. The OD will be instrumented with at least 3600 diameter PMTs coupled with 0.6x30x30 cm3 wavelength shifting (WLS) plates (plates will collect incident photons and transport them to their coupled PMT) and will serve as a veto to distinguish interactions occurring inside from particles entering from the outside of the detector (mainly cosmic-ray muons).
HK detector construction began in 2020 and the start of data collection is expected in 2027. Studies have also been undertaken on the feasibility and physics benefits of building a second, identical water-Cherenkov tank in South Korea around 1100 km from J-PARC, which would be operational 6 years after the first tank.
History and schedule.
A history of large water Cherenkov detectors in Japan, and long-baseline neutrino oscillation experiments associated with them, excluding HK:
A history of the Hyper-Kamiokande experiment:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "{^3}\\text{He} + \\text{p} \\to {^4}\\text{He} + \\text{e}^{+} + \\operatorname{\\nu}_\\text{e} "
}
] |
https://en.wikipedia.org/wiki?curid=5933798
|
59338
|
Bracket
|
Punctuation mark
A bracket is either of two tall fore- or back-facing punctuation marks commonly used to isolate a segment of text or data from its surroundings. They come in four main pairs of shapes, as given in the box to the right, which also gives their names, which vary between British and American English. "Brackets", without further qualification, are in British English the (...) marks and in American English the [...] marks.
Other minor bracket shapes exist, such as (for example) "slash" or "diagonal" brackets used by linguists to enclose phonemes.
Brackets are typically deployed in symmetric pairs, and an individual bracket may be identified as a 'left' or 'right' bracket or, alternatively, an "opening bracket" or "closing bracket", respectively, depending on the directionality of the context.
In casual writing and in technical fields such as computing or linguistic analysis of grammar, brackets may nest, with bracketed segments containing further bracketed sub-segments embedded within them. In such cases the number of opening brackets matches the number of closing brackets.
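The matching rule described above is exactly what a simple stack-based checker implements in computing. This illustrative Python function (not from the article) verifies that every opening bracket has a matching close in properly nested order:

```python
def brackets_balanced(text):
    """Return True if (), [] and {} are properly nested in text."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in text:
        if ch in '([{':
            stack.append(ch)          # remember the opener
        elif ch in pairs:
            # a closer must match the most recent unclosed opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                  # every opener was closed
```

For example, `brackets_balanced("a (b [c] {d}) e")` is true, while `brackets_balanced("(]")` fails because the closer does not match the most recent opener.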
Various forms of brackets are used in mathematics, with specific mathematical meanings, often for denoting specific mathematical functions and subformulas.
History.
Angle brackets or ⟨ ⟩ were the earliest type of bracket to appear in written English. Erasmus coined the term "lunula" to refer to the round brackets or ( ), recalling the shape of the crescent moon (Latin "luna").
Most typewriters only had the left and right parentheses. Square brackets appeared with some teleprinters.
Braces (curly brackets) first became part of a character set with the 8-bit code of the IBM 7030 Stretch.
In 1961, ASCII contained parentheses, square, and curly brackets, and also less-than and greater-than signs that could be used as angle brackets.
Typography.
In English, typographers mostly prefer not to set brackets in italics, even when the enclosed text is italic. However, in other languages like German, if brackets enclose text in italics, they are usually also set in italics.
Parentheses or round brackets.
( and ) are "parentheses" (singular "parenthesis" ) in American English, and either "round brackets" or simply "brackets" in British English.
They are also known as "parens" , "circle brackets", or "smooth brackets".
In formal writing, "parentheses" is also used in British English.
Uses of ( ).
Parentheses contain adjunctive material that serves to clarify (in the manner of a gloss) or is aside from the main point.
A comma before or after the material can also be used, though if the sentence contains commas for other purposes, visual confusion may result. A dash before and after the material is also sometimes used.
Parentheses may be used in formal writing to add supplementary information, such as "Senator John McCain (R - Arizona) spoke at length". They can also indicate shorthand for "either singular or plural" for nouns, e.g. "the claim(s)". They can also be used for gender-neutral language, especially in languages with grammatical gender, e.g. "(s)he agreed with his/her physician" (the slash in the second instance because one alternative replaces the other rather than adding to it).
Parenthetical phrases have been used extensively in informal writing and stream of consciousness literature. Examples include the southern American author William Faulkner (see "Absalom, Absalom!" and ) as well as poet E. E. Cummings.
Parentheses have historically been used where the em dash is currently used in alternatives, such as "parenthesis)(parentheses". Examples of this usage can be seen in editions of "Fowler's Dictionary of Modern English Usage".
Parentheses may be nested (generally with one set (such as this) inside another set). This is not commonly used in formal writing (though sometimes other brackets [especially square brackets] will be used for one or more inner set of parentheses [in other words, secondary {or even tertiary} phrases can be found within the main parenthetical sentence]).
Language.
A parenthesis in rhetoric and linguistics refers to the entire bracketed text, not just to the enclosing marks used (so all the text in this set of round brackets may be described as "a parenthesis"). Taking as an example the sentence "Mrs. Pennyfarthing (What? Yes, that was her name!) was my landlady.", the explanatory phrase between the parentheses is itself called a parenthesis. Again, the parenthesis implies that the meaning and flow of the bracketed phrase is supplemental to the rest of the text and the whole would be unchanged were the parenthesized sentences removed. The term refers to the syntax rather than the enclosure method: the same clause in the form "Mrs. Pennyfarthing – What? Yes, that was her name! – was my landlady" is also a parenthesis. (In non-specialist usage, the term "parenthetical phrase" is more widely understood.)
In phonetics, parentheses are used for indistinguishable or unidentified utterances. They are also seen for silent articulation (mouthing), where the expected phonetic transcription is derived from lip-reading, and with periods to indicate silent pauses, for example or .
Enumerations.
An unpaired right parenthesis is often used as part of a label in an ordered list, such as this one:
<templatestyles src="Template:Blockquote/styles.css" /><poem>
a) educational testing,
b) technical writing and diagrams,
c) market research, and
d) elections.</poem>
Accounting.
Traditionally in accounting, contra amounts are placed in parentheses. A debit balance account in a series of credit balances will have parentheses, and vice versa.
Parentheses in mathematics.
Parentheses are used in mathematical notation to indicate grouping, often inducing a different order of operations. For example: in the usual order of algebraic operations, 4 × 3 + 2 equals 14, since the multiplication is done before the addition. However, 4 × (3 + 2) equals 20, because the parentheses override normal precedence, causing the addition to be done first. Some authors follow the convention in mathematical equations that, when parentheses have one level of nesting, the inner pair are parentheses and the outer pair are square brackets. Example:
<templatestyles src="Block indent/styles.css"/>formula_0
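The precedence effect described above can be checked directly; a minimal Python sketch (variable names are illustrative):

```python
# Grouping with parentheses changes evaluation order.
no_parens = 4 * 3 + 2        # multiplication binds tighter: 12 + 2
with_parens = 4 * (3 + 2)    # parentheses force the addition first: 4 * 5
nested = (4 * (3 + 2)) ** 2  # one level of nesting: inner group first

print(no_parens)    # 14
print(with_parens)  # 20
print(nested)       # 400
```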
Parentheses in programming languages.
Parentheses are included in the syntaxes of many programming languages. They typically enclose the arguments passed to a function or method, telling the compiler which values, and in typed languages which data types, the call supplies. In some cases, such as in LISP, parentheses are a fundamental construct of the language. They are also often used for scoping functions and operators and for arrays. In syntax diagrams they are used for grouping, such as in extended Backus–Naur form.
In Mathematica and the Wolfram language, parentheses are used to indicate grouping – for example, with pure anonymous functions.
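A small Python sketch (names are illustrative, not from the text) of the common roles parentheses play in a programming language:

```python
# Three common roles of parentheses, illustrated in Python.

# 1. Enclosing the arguments of a function call.
def area(width, height):
    return width * height

# 2. Grouping subexpressions to override operator precedence.
perimeter = 2 * (3 + 4)

# 3. Delimiting tuples (the comma does the real work; the
#    parentheses disambiguate and group).
point = (3, 4)

print(area(*point))  # 12
print(perimeter)     # 14
```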
Taxonomy.
If it is desired to include the subgenus when giving the scientific name of an animal species or subspecies, the subgenus's name is provided in parentheses between the genus name and the specific epithet. For instance, "Polyphylla" ("Xerasiobia") "alba" is a way to cite the species "Polyphylla alba" while also mentioning that it is in the subgenus "Xerasiobia". There is also a convention of citing a subgenus by enclosing it in parentheses after its genus, e.g., "Polyphylla" ("Xerasiobia") is a way to refer to the subgenus "Xerasiobia" within the genus "Polyphylla". Parentheses are similarly used to cite a subgenus with the name of a prokaryotic species, although the International Code of Nomenclature of Prokaryotes (ICNP) requires the use of the abbreviation "subgen". as well, e.g., "Acetobacter" (subgen. "Gluconoacetobacter") "liquefaciens".
Chemistry.
Parentheses are used in chemistry to denote a repeated substructure within a molecule, e.g. HC(CH3)3 (isobutane) or, similarly, to indicate the stoichiometry of ionic compounds with such substructures: e.g. Ca(NO3)2 (calcium nitrate).
This notation was pioneered by Berzelius, who wanted chemical formulae to resemble algebraic notation more closely, with brackets enclosing groups that could be multiplied (e.g. in 3(AlO2 + 2SO3) the 3 multiplies everything within the parentheses).
In chemical nomenclature, parentheses are used to distinguish structural features and multipliers for clarity, for example in the polymer poly(methyl methacrylate).
Square brackets.
[ and ] are "square brackets" in both British and American English, but are also more simply "brackets" in the latter.
An older name for these brackets is "crotchets".
Uses of [ ].
Square brackets are often used to insert explanatory material or to mark where a [word or] passage was omitted from an original material by someone other than the original author, or to mark modifications in quotations. In transcribed interviews, sounds, responses and reactions that are not words but that can be described are set off in square brackets — "... [laughs] ...".
When quoted material is in any way altered, the alterations are enclosed in square brackets within the quotation to show that the quotation is not exactly as given, or to add an annotation. For example: "The Plaintiff asserted his cause is just, stating,"
<templatestyles src="Template:Blockquote/styles.css" />[m]y causes is ["sic"] just.
In the original quoted sentence, the word "my" was capitalized: it has been modified in the quotation given and the change signalled with brackets. Similarly, where the quotation contained a grammatical error (is/are), the quoting author signalled that the error was in the original with "["sic"]" (Latin for 'thus').
A bracketed ellipsis, [...], is often used to indicate omitted material: "I'd like to thank [several unimportant people] for their tolerance [...]"
Bracketed comments inserted into a quote indicate where the original has been modified for clarity: "I appreciate it [the honor], but I must refuse", and "the future of psionics [see definition] is in doubt". Or one can quote the original statement "I hate to do laundry" with a (sometimes grammatical) modification inserted: He "hate[s] to do laundry".
Additionally, a small letter can be replaced by a capital one, when the beginning of the original printed text is being quoted in another piece of text or when the original text has been omitted for succinctness— for example, when referring to a verbose original: "To the extent that policymakers and elite opinion in general have made use of economic analysis at all, they have, as the saying goes, done so the way a drunkard uses a lamppost: for support, not illumination", can be quoted succinctly as: "[P]olicymakers [...] have made use of economic analysis [...] the way a drunkard uses a lamppost: for support, not illumination." When nested parentheses are needed, brackets are sometimes used as a substitute for the inner pair of parentheses within the outer pair. When deeper levels of nesting are needed, convention is to alternate between parentheses and brackets at each level.
Alternatively, empty square brackets can also indicate omitted material, usually single letter only. The original, "Reading is also a process and it also changes you." can be rewritten in a quote as: It has been suggested that reading can "also change[] you".
In translated works, brackets are used to signify the same word or phrase in the original language to avoid ambiguity.
For example: "He is trained in the way of the open hand [karate]."
Style and usage guides originating in the news industry of the twentieth century, such as the "AP Stylebook", recommend against the use of square brackets because "They cannot be transmitted over news wires." However, this guidance has little relevance outside of the technological constraints of the industry and era.
In linguistics, phonetic transcriptions are generally enclosed within square brackets, whereas phonemic transcriptions typically use paired slashes, according to International Phonetic Alphabet rules. Pipes (| |) are often used to indicate a morphophonemic rather than phonemic representation. Other conventions are double slashes (⫽ ⫽), double pipes (‖ ‖) and curly brackets ({ }).
In lexicography, square brackets usually surround the section of a dictionary entry which contains the etymology of the word the entry defines.
Proofreading.
Brackets (called "move-left symbols" or "move-right symbols") are added to the sides of text in proofreading to indicate changes in indentation.
Square brackets are used to denote parts of the text that need to be checked when preparing drafts prior to finalizing a document.
Law.
Square brackets are used in some countries in the citation of law reports to identify parallel citations to non-official reporters. For example:
<templatestyles src="Template:Blockquote/styles.css" />"Chronicle Pub. Co. v Superior Court" (1998) 54 Cal.2d 548, [7 Cal.Rptr. 109]
In some other countries (such as England and Wales), square brackets are used to indicate that the year is part of the citation and parentheses are used to indicate the year the judgment was given. For example:
<templatestyles src="Template:Blockquote/styles.css" />"National Coal Board v England" [1954] AC 403
This case is in the 1954 volume of the Appeal Cases reports, although the decision may have been given in 1953 or earlier. Compare with:
<templatestyles src="Template:Blockquote/styles.css" />(1954) 98 Sol Jo 176
This citation reports a decision from 1954, in volume 98 of the "Solicitors Journal" which may be published in 1955 or later.
They often denote points that have not yet been agreed to in legal drafts and the year in which a report was made for certain case law decisions.
Square brackets in mathematics.
Brackets are used in mathematics in a variety of notations, including standard notations for commutators, the floor function, the Lie bracket, equivalence classes, the Iverson bracket, and matrices.
Square brackets may be used exclusively or in combination with parentheses to represent intervals as "interval notation". For example, [0,5] represents the set of real numbers from 0 to 5 inclusive. Both parentheses and brackets are used to denote a "half-open" interval; [5, 12) would be the set of all real numbers between 5 and 12, including 5 but not 12. The numbers may come as close as they like to 12, including 11.999 and so forth, but 12.0 is not included. In some European countries, the notation [5, 12[ is also used. The endpoint adjoining the square bracket is known as "closed", whereas the endpoint adjoining the parenthesis is known as "open".
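The half-open interval [5, 12) can be modelled as a membership test; a minimal Python sketch (function name is illustrative):

```python
# Membership test for the half-open interval [5, 12):
# the bracketed endpoint 5 is included ("closed"),
# the parenthesised endpoint 12 is excluded ("open").
def in_half_open(x, lo=5.0, hi=12.0):
    return lo <= x < hi

print(in_half_open(5))       # True  (closed endpoint)
print(in_half_open(11.999))  # True  (arbitrarily close to 12)
print(in_half_open(12))      # False (open endpoint)
```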
In group theory and ring theory, brackets denote the commutator. In group theory, the commutator ["g", "h"] is commonly defined as "g"⁻¹"h"⁻¹"gh". In ring theory, the commutator ["a", "b"] is defined as "ab" − "ba".
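The ring-theory commutator can be computed concretely for 2×2 matrices; a minimal Python sketch using plain nested lists (helper names are illustrative; a real program would use a linear-algebra library):

```python
# Ring-theory commutator [A, B] = AB - BA for 2x2 matrices.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]
# Nonzero result: A and B do not commute.
print(commutator(A, B))  # [[1, 0], [0, -1]]
```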
Chemistry.
Square brackets can also be used in chemistry to represent the concentration of a chemical substance in solution, to denote the charge of an ion in a Lewis structure (particularly a distributed charge in a complex ion), and to indicate repeating chemical units (particularly in polymers) and transition state structures, among other uses.
Square brackets in programming languages.
Brackets are used in many computer programming languages, primarily for array indexing. But they are also used to denote general tuples, sets and other structures, just as in mathematics. There may be several other uses as well, depending on the language at hand. In syntax diagrams they are used for optional portions, such as in extended Backus–Naur form.
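A minimal Python sketch of bracket-based indexing (variable names are illustrative):

```python
# Square brackets in Python: list literals, indexing, and slicing.
primes = [2, 3, 5, 7, 11]   # list literal
print(primes[0])            # 2   -- first element
print(primes[-1])           # 11  -- last element
print(primes[1:3])          # [3, 5] -- half-open slice

# Brackets also index mappings by key:
ages = {"alice": 30}
print(ages["alice"])        # 30
```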
Double brackets ⟦ ⟧.
Double brackets (or white square brackets or Scott brackets), ⟦ ⟧, are used to indicate the "semantic evaluation function" in formal semantics for natural language and denotational semantics for programming languages. In the Wolfram Language, double brackets, either as iterated single brackets ([[) or ligatures (〚) are used for [[Array index|list indexing]].
The brackets stand for a function that maps a linguistic expression to its "denotation" or semantic value. In mathematics, double brackets may also be used to denote [[Interval (mathematics)#Integer intervals|intervals of integers]] or, less often, the [[Floor and ceiling functions|floor function]]. In papyrology, following the [[Leiden Conventions]], they are used to enclose text that has been deleted in antiquity.
Brackets with quills ⁅ ⁆.
These brackets are known as "spike parentheses" and are used in Swedish [[bilingual dictionary|bilingual dictionaries]] to enclose supplemental constructions.
Curly brackets.
{ and } are "braces" in both American and British English, and also "curly brackets" in the latter.
Uses of { }.
[[File:Curly Bracket Notation.png|thumb|upright=0.5|left|An example of curly brackets used to group sentences together]]
Curly brackets are used by text editors to mark editorial insertions or interpolations.
Braces used to be used to connect multiple lines of poetry, such as triplets in a poem of rhyming couplets, although this usage had gone out of fashion by the 19th century.
Another older use in prose was to eliminate duplication in lists and tables.
Two examples here from [[Charles Hutton]]'s 19th-century table of weights and measures in his "A Course of Mathematics":
As an extension to the [[International Phonetic Alphabet]] (IPA), [[International Phonetic Alphabet#Brackets and transcription delimiters|braces are used for prosodic notation]].
Music.
In music, they are known as "[[Accolade (notation)|accolades]]" or "[[Brace (music)|braces]]", and connect two or more lines (staves) of music that are played simultaneously.
Chemistry.
The use of braces in chemistry is an old notation that has long since been superseded by subscripted numbers.
The chemical formula for water, H2O, was represented as formula_1.
Curly brackets in programming languages.
In many programming languages, curly brackets enclose groups of [[Statement (programming)|statement]]s and create a local [[Scope (computer science)|scope]]. Such languages ([[C (programming language)|C]], C#, C++ and many others) are therefore called [[curly bracket language]]s. They are also used to define structures and [[enumerated type]] in these languages.
In various [[Unix shell]]s, they enclose a group of strings that are used in a process known as "brace expansion", where each successive string in the group is interpolated at that point in the command line to generate the command-line's final form.
The mechanism originated in the [[C shell]] and the string generation mechanism is a simple interpolation that can occur anywhere in a command line and takes no account of existing filenames.
In [[syntax diagram]]s they are used for repetition, such as in [[extended Backus–Naur form]].
In the [[Z notation|Z]] [[formal specification]] language, braces define a set.
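Brace usage varies widely across languages. As a contrast with curly bracket languages, a small Python sketch (names are illustrative) of where braces appear in a language that does not use them to delimit blocks:

```python
# Python does not delimit blocks with braces, but braces still appear:
# in dict literals, and as substitution fields inside format strings.
config = {"host": "localhost", "port": 8080}  # dict literal

# Braces mark substitution fields in f-strings; doubled braces
# produce literal brace characters.
print(f"connect to {config['host']}:{config['port']}")
print(f"{{escaped}}")  # prints: {escaped}
```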
Curly brackets in mathematics.
In [[mathematics]] they delimit [[Set (mathematics)|set]]s, in what is called "set notation".
Braces enclose either a literal list of set elements, or a rule that defines the set elements.
For example: {1, 2, 3} denotes the set containing the elements 1, 2 and 3, while {"x" : "x" > 0} denotes the set of all "x" satisfying the rule "x" > 0.
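Set notation carries over almost verbatim into Python's set literals and comprehensions; a minimal sketch (names are illustrative):

```python
# Braces delimit sets: a literal list of elements,
# or a rule (comprehension) defining them.
literal = {1, 2, 3}                     # explicit elements
by_rule = {x * x for x in range(1, 4)}  # squares of 1..3

print(literal == {3, 2, 1})  # True -- sets are unordered
print(sorted(by_rule))       # [1, 4, 9]
```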
They are often also used to denote the [[Poisson bracket]] between two quantities.
In [[ring theory]], braces denote the [[anticommutator]], where {"a", "b"} is defined as "ab" + "ba".
Angle brackets.
〈 and 〉 are "angle brackets" in both American and British English. In computer slang, they are known as "brokets".
Strictly speaking they are distinct from V-shaped "chevrons", as they have (where the typography permits it) a broader span than chevrons, although when printed often no visual distinction is made.
The ASCII less-than and greater-than characters <> are often used for angle brackets. In most cases only those characters are accepted by computer programs, and the Unicode angle brackets are not recognized (for instance, in [[HTML tag]]s). The characters for "single" [[guillemet]]s ‹› are also often used, and sometimes normal guillemets «» when nested angle brackets are needed.
The angle brackets or chevrons at U+27E8 and U+27E9 are for mathematical use and Western languages, whereas U+3008 and U+3009 are for East Asian languages. The chevrons at U+2329 and U+232A are deprecated in favour of the U+3008 and U+3009 East Asian angle brackets. Unicode discourages their use for mathematics and in Western texts, because they are canonically equivalent to the CJK code points U+300x and thus likely to render as double-width symbols. The "less-than" and "greater-than" symbols are often used as replacements for chevrons.
Shape.
Angle brackets are larger than [[less-than sign|less-than]] and [[greater-than sign]]s, which in turn are larger than [[guillemet]]s.
[[File:Angle brackets and less+greater signs and half guillemets in different fonts.svg|thumb|left|upright=3|Angle brackets, less-than/greater-than signs and single [[guillemet]]s in fonts [[Cambria (typeface)|Cambria]], [[DejaVu fonts|DejaVu]] Serif, [[Andron (typeface)|Andron]] Mega Corpus, [[Andika (typeface)|Andika]] and [[Everson Mono]]]]
Uses of ⟨ ⟩.
Angle brackets are infrequently used to denote [[Intrapersonal communication|words that are thought]] instead of spoken, such as:
⟨What an unusual flower!⟩
In [[textual criticism]], and hence in many editions of pre-modern works, chevrons denote sections of the text which are illegible or otherwise lost; the editor will often insert their own reconstruction where possible within them.
In [[comic book]]s, chevrons are often used to mark dialogue that has been translated notionally from another language; in other words, if a character is speaking another language, instead of writing in the other language and providing a translation, one writes the translated text within chevrons. Since no foreign language is actually written, this is only "notionally" translated.
In [[linguistics]], angle brackets identify [[grapheme]]s (e.g., letters of an alphabet) or [[orthography]], as in "The English word is spelled ⟨cat⟩." <templatestyles src="Crossreference/styles.css" />
In [[epigraphy]], they may be used for mechanical transliterations of a text into the Latin script.
In [[Quotation mark#Chinese, Japanese and Korean quotation marks|East Asian punctuation]], angle brackets are used as [[quotation mark]]s. Chevron-like symbols are part of standard [[Chinese language|Chinese]], [[Japanese language|Japanese]] and – less frequently – [[Korean language|Korean]] punctuation, where they generally enclose the titles of books, as 〈 ... 〉 or 《 ... 》, in both traditional [[tategaki|vertical printing]] – written in vertical lines – and [[yokogaki|horizontal]] printing.
Angle brackets in mathematics.
Angle brackets (or 'chevrons') are used in [[group theory]] to write [[group presentation]]s, and to denote the [[group generators|subgroup generated]] by a collection of elements. In [[set theory]], chevrons or parentheses are used to denote [[ordered pair]]s and other [[tuple]]s, whereas curly brackets are used for unordered sets.
Physics and mechanics.
In physical sciences and statistical mechanics, angle brackets are used to denote an average ("[[Expected value#Notations|expected value]]") over time or over another continuous parameter. For example:
formula_2
In mathematical physics, especially [[quantum mechanics]], it is common to write the [[inner product]] between elements as ⟨"a"|"b"⟩, or as ⟨"a"|"Ô"|"b"⟩, where "Ô" is an [[Operator (physics)|operator]]. This is known as "Dirac notation" or "[[bra–ket notation]]": the bra ⟨"a"| denotes a vector from the [[dual space]], and the ket |"b"⟩ a vector from the original space. But there are [[Inner product space#Alternative definitions, notations and remarks|other notations]] used.
In [[continuum mechanics]], chevrons may be used as [[Macaulay brackets]].
Angle brackets in programming languages.
In [[C++]] chevrons (actually less-than and greater-than) are used to surround arguments to [[template (C++)|template]]s. They are also used to surround the names of [[header file#C/C++|header files]]; this usage was inherited from and is also found in [[C (programming language)|C]].
In the [[Z notation|Z]] [[formal specification]] language, chevrons define a sequence.
In [[HTML]], chevrons (actually 'greater than' and 'less than' symbols) are used to bracket meta text. For example, codice_0 denotes that the following text should be displayed as bold. Pairs of meta text tags are required – much as brackets themselves usually come in pairs – and the end of the bold text segment is indicated by a matching closing tag. This use is sometimes extended as an informal mechanism for communicating mood or tone in digital formats such as messaging, for example adding "<sighs>" at the end of a sentence.
Other brackets.
Lenticular brackets【】.
Some [[East Asia]]n languages use lenticular brackets 【 】, a combination of square brackets and round brackets called [[wikt:方頭括號#|方頭括號]] ("fāngtóu kuòhào") in [[Chinese language|Chinese]] and ("sumitsuki kakko") in [[Japanese language|Japanese]]. They are used in titles and headings in both Chinese and Japanese. On the Internet, they are used to emphasize a text. In Japanese, they are most frequently seen in dictionaries for quoting Chinese characters and Sino-Japanese loanwords.
Floor ⌊ ⌋ and ceiling ⌈ ⌉ corner brackets.
The floor corner brackets ⌊ and ⌋, the ceiling corner brackets ⌈ and ⌉ (U+2308, U+2309) are used to denote the integer [[floor and ceiling functions]].
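The floor and ceiling functions are available in Python's standard library; a minimal sketch:

```python
import math

# The corner brackets denote the floor and ceiling functions:
# floor(x) is the greatest integer <= x, ceil(x) the least integer >= x.
print(math.floor(2.7))   # 2
print(math.ceil(2.1))    # 3
print(math.floor(-2.7))  # -3  -- floor rounds toward negative infinity
print(math.ceil(-2.1))   # -2
```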
Quine corners ⌜⌝ and half brackets ⸤ ⸥ or ⸢ ⸣.
The Quine corners ⌜ and ⌝ have at least two uses in [[mathematical logic]]: either as [[quasi-quotation]], a generalization of quotation marks, or to denote the [[Gödel number]] of the enclosed [[Expression (mathematics)|expression]].
Half brackets are used in English to mark added text, such as in translations: "Bill saw ⸤her⸥".
In editions of [[papyrology|papyrological]] texts, half brackets, ⸤ and ⸥ or ⸢ and ⸣, enclose text which is lacking in the papyrus due to damage, but can be restored by virtue of another source, such as an ancient quotation of the text transmitted by the papyrus. For example, [[Callimachus]] "Iambus" 1.2 reads: ἐκ τῶν ὅκου βοῦν κολλύ⸤βου π⸥ιπρήσκουσιν. A hole in the papyrus has obliterated βου π, but these letters are supplied by an ancient commentary on the poem. Second intermittent sources can be between ⸢ and ⸣. Quine corners are sometimes used instead of half brackets.
Unicode.
Representations of various kinds of brackets in [[Unicode]] and their respective [[List of XML and HTML character entity references|HTML entities]] that are not in the infoboxes in preceding sections are given below.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
External links.
[[Category:Punctuation]]
[[Category:Mathematical notation]]
|
[
{
"math_id": 0,
"text": "[4 \\times (3 + 2)]^2 = 400."
},
{
"math_id": 1,
"text": "\\left . {{H}\\atop{H}} \\right \\} O"
},
{
"math_id": 2,
"text": "\\left\\langle V(t)^2 \\right\\rangle = \\lim_{T\\to\\infty} \\frac{1}{T}\\int_{-\\frac{T}{2}}^{\\frac{T}{2}} V(t)^2\\,{\\rm{d}}t. "
}
] |
https://en.wikipedia.org/wiki?curid=59338
|
59339981
|
Bowl Prechamber Ignition
|
Combustion process designed for Otto cycle engines
Bowl Prechamber Ignition, abbreviated BPI, is a combustion process designed for Otto cycle engines running on an air-fuel mixture leaner than stoichiometric formula_0. Its distinguishing feature is a special type of spark plug, capable of reliably igniting very lean air-fuel mixtures. This spark plug is called a "prechamber spark plug". The ignition electrodes of this spark plug are housed in a perforated enclosure, the "prechamber". During the engine's compression stroke, some fuel (usually less than 5% of the total injected fuel) is injected into the piston bowl; this fuel is then forced through the small holes into the prechamber by the high pressure in the cylinder near top dead centre. Inside the prechamber spark plug, the air-fuel mixture is ignitable by the ignition spark. Flame jets emerging from the small holes in the prechamber then ignite the air-fuel mixture in the main combustion chamber, which would not catch fire using a regular spark plug.
|
[
{
"math_id": 0,
"text": "(\\lambda > 1)"
}
] |
https://en.wikipedia.org/wiki?curid=59339981
|
593419
|
Mean corpuscular volume
|
Average volume of a red blood cell, which sometimes helps in diagnosis
The mean corpuscular volume, or mean cell volume (MCV), is a measure of the average volume of a red blood corpuscle (or red blood cell). The measure is obtained by multiplying a volume of blood by the proportion of blood that is cellular (the hematocrit), and dividing that product by the number of erythrocytes (red blood cells) in that volume. The mean corpuscular volume is a part of a standard complete blood count.
In patients with anemia, it is the MCV measurement that allows classification as either a microcytic anemia (MCV below normal range), normocytic anemia (MCV within normal range) or macrocytic anemia (MCV above normal range). Normocytic anemia is usually deemed so because the bone marrow has not yet responded with a change in cell volume. It occurs occasionally in acute conditions, namely blood loss and hemolysis.
If the MCV was determined by automated equipment, the result can be compared to RBC morphology on a peripheral blood smear, where a normal RBC is about the size of a normal lymphocyte nucleus. Any deviation would usually be indicative of either faulty equipment or technician error, although there are some conditions that present with high MCV without megaloblastic cells.
For further specification, it can be used to calculate red blood cell distribution width (RDW). The RDW is a statistical calculation made by automated analyzers that reflects the variability in size and shape of the RBCs.
Calculation.
To calculate MCV, the hematocrit (Hct) is divided by the concentration of RBCs ([RBC])
formula_0
Normally, MCV is expressed in femtoliters (fL, or 10−15 L), and [RBC] in millions per microliter (106 / μL). The normal range for MCV is 80–100 fL.
If the hematocrit is expressed as a percentage, the red blood cell concentration as millions per microliter, and the MCV in femtoliters, the formula becomes
formula_1
formula_2
For example, if the Hct = 42.5% and [RBC] = 4.58 million per microliter (4,580,000/μL), then
formula_3
Using implied units,
formula_4
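The worked example above can be reproduced programmatically; a minimal Python sketch (the function name is illustrative):

```python
# MCV in fL from hematocrit (%) and RBC count (millions per microliter),
# using the shortcut MCV = Hct% * 10 / [RBC] derived above.
def mcv_fl(hct_percent, rbc_millions_per_ul):
    return hct_percent * 10 / rbc_millions_per_ul

# Hct = 42.5%, [RBC] = 4.58 million/uL
print(round(mcv_fl(42.5, 4.58), 1))  # 92.8
```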
The MCV can be determined in a number of ways by automatic analyzers. In volume-sensitive automated blood cell counters, such as the Coulter counter, the red cells pass one-by-one through a small aperture and generate a signal directly proportional to their volume.
Other automated counters measure red blood cell volume by means of techniques that measure refracted, diffracted, or scattered light.
Interpretation.
The normal reference range is typically 80-100 fL.
High.
In pernicious anemia (macrocytic), MCV can range up to 150 femtolitres. An elevated MCV is also associated with alcoholism (as are an elevated GGT and an AST/ALT ratio of 2:1). Vitamin B12 and/or folic acid deficiency has also been associated with macrocytic anemia (high MCV numbers).
Low.
The most common causes of microcytic anemia are iron deficiency (due to inadequate dietary intake, gastrointestinal blood loss, or menstrual blood loss), thalassemia, sideroblastic anemia or chronic disease. In iron deficiency anemia (microcytic anemia), it can be as low as 60 to 70 femtolitres. In some cases of thalassemia, the MCV may be low even though the patient is not iron deficient.
Derivation.
The MCV can be conceptualized as the total volume of a group of cells divided by the number of cells. As a concrete example, imagine you had 10 small jellybeans with a combined volume of 10 μL. The mean volume of a jellybean in this group would be 10 μL / 10 jellybeans = 1 μL / jellybean. A similar calculation works for MCV.
1. Measure the RBC index in cells/μL. Take the reciprocal (1/RBC index) to convert it to μL/cell.
formula_5
2. The 1 μL is only made of a proportion of red cells (e.g. 40%) with the rest of the volume composed of plasma. Multiply by the hematocrit (a unitless quantity) to take this into account.
formula_6
3. Finally, convert the units of μL to fL by multiplying by formula_7. The result would look like this:
formula_8
Note: the shortcut proposed above just makes the units work out: formula_9
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\textit{MCV} = \\frac{\\textit{Hct}}{[\\text{RBC}]} "
},
{
"math_id": 1,
"text": " \\textit{MCV} / \\mathrm{L} = \\frac{\\mathit{Hct\\%}/100}{[\\text{RBCmmL}]\\times (10^6/10^{-6})/\\mathrm{L}^{-1}} "
},
{
"math_id": 2,
"text": " \\textit{MCV} / \\mathrm{fL} = \\textit{MCV} / (10^{-15}\\,\\mathrm{L}) = 10^{15} \\frac{\\mathit{Hct\\%}/100}{[\\text{RBCmmL}]\\times 10^{12}} = \\frac{\\mathit{Hct\\%}\\times 10}{[\\text{RBCmmL}]} "
},
{
"math_id": 3,
"text": " \\textit{MCV} = \\frac{0.425}{4.58 \\cdot 10^6/(10^{-6} \\, \\mathrm{L})} = 92.8 \\cdot 10^{-15} \\, \\mathrm{L} = 92.8 \\, \\mathrm{fL} "
},
{
"math_id": 4,
"text": " \\textit{MCV}/\\textrm{fL} = \\frac{42.5 \\times 10}{4.58} = 92.8 "
},
{
"math_id": 5,
"text": " \\frac{1}{5 \\times 10^{6}}\\ \\mathrm{\\mu L/ cell} = 2 \\times 10^{-7}\\ \\mathrm{\\mu L/cell} "
},
{
"math_id": 6,
"text": " 2 \\times 10^{-7}\\ \\mathrm{\\mu L/cell} \\times 0.4 = 8 \\times 10^{-8}\\ \\mathrm{\\mu L/cell} "
},
{
"math_id": 7,
"text": "10^9"
},
{
"math_id": 8,
"text": " 8 \\times 10^{-8}\\ \\mathrm{\\mu L/ cell} \\times \\frac{10^9\\ \\mathrm{fL}}{1\\ \\mathrm{\\mu L}} = 80\\ \\frac{\\mathrm{fL}}{\\mathrm{cell}} "
},
{
"math_id": 9,
"text": " 10 \\times 40 \\div 5 = 80 "
}
] |
https://en.wikipedia.org/wiki?curid=593419
|
5934489
|
Lumazine synthase
|
Class of enzymes
Lumazine synthase (EC 2.5.1.78, "6,7-dimethyl-8-ribityllumazine synthase", "6,7-dimethyl-8-ribityllumazine synthase 2", "6,7-dimethyl-8-ribityllumazine synthase 1", "lumazine synthase 2", "lumazine synthase 1", "type I lumazine synthase", "type II lumazine synthase", "RIB4", "MJ0303", "RibH", "Pbls", "MbtLS", "RibH1 protein", "RibH2 protein", "RibH1", "RibH2") is an enzyme with systematic name "5-amino-6-(D-ribitylamino)uracil butanedionetransferase". This enzyme catalyses the following chemical reaction
1-deoxy-L-glycero-tetrulose 4-phosphate + 5-amino-6-(D-ribitylamino)uracil formula_0 6,7-dimethyl-8-(D-ribityl)lumazine + 2 H2O + phosphate
This reaction is part of the biosynthesis of riboflavin (vitamin B2). Lumazine synthase is thus found in those organisms (plants, fungi and most microorganisms) which produce riboflavin.
Depending on the species, 5, 10 or 60 copies of the enzyme bind together to form homomers. In the case of 60 copies, the enzyme units form an icosahedral hollow cage. In some bacteria, this cage contains another enzyme involved in riboflavin synthesis, riboflavin synthase.
These icosahedral cages have been investigated for use in drug delivery or as vaccines, delivering antigens. Using directed evolution, lumazine synthase has been modified so that it forms larger cages that preferentially package RNA molecules coding for the protein, akin to a virus capsid.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] |
https://en.wikipedia.org/wiki?curid=5934489
|
59345867
|
NP/poly
|
In computational complexity theory, NP/poly is a complexity class, a non-uniform analogue of the class NP of problems solvable in polynomial time by a non-deterministic Turing machine. It is the non-deterministic complexity class corresponding to the deterministic class P/poly.
Definition.
NP/poly is defined as the class of problems solvable in polynomial time by a non-deterministic Turing machine that has access to a polynomial-bounded advice function.
It may equivalently be defined as the class of problems such that, for each instance size formula_0, there is a Boolean circuit of size polynomial in formula_0 that implements a verifier for the problem. That is, the circuit computes a function formula_1 such that an input formula_2 of length formula_0 is a yes-instance for the problem if and only if there exists formula_3 for which formula_1 is true.
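As a toy illustration (the problem, names, and advice string here are invented for the sketch, not part of the formal definition), the verifier characterization can be played out for small inputs, with the existential witness found by brute force:

```python
from itertools import product

# Toy problem: x (a bit string of length n) is a yes-instance iff it
# contains the advice string for length n as a substring.  The circuit
# f(x, y) is modeled by a function with the advice wired in; y encodes
# a claimed match position as a bit string.
def verifier(x, y, advice):
    pos = int(y, 2) if y else 0
    return x[pos:pos + len(advice)] == advice

def accepts(x, advice):
    """x is a yes-instance iff there EXISTS a witness y with f(x, y) true."""
    witness_len = max(1, len(x).bit_length())
    return any(
        verifier(x, "".join(bits), advice)
        for bits in product("01", repeat=witness_len)
    )

advice_for_len_6 = "101"          # one fixed advice string per input length
print(accepts("010110", advice_for_len_6))   # True  ("101" occurs at position 1)
print(accepts("000000", advice_for_len_6))   # False
```

Here the brute-force search over witnesses stands in for nondeterminism; only the verifier itself is meant to be of polynomial size.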
Applications.
NP/poly is used in a variation of Mahaney's theorem on the non-existence of sparse NP-complete languages. Mahaney's theorem itself states that the number of yes-instances of length formula_0 of an NP-complete problem cannot be polynomially bounded unless P = NP. According to the variation, the number of yes-instances must be at least formula_4 for some formula_5 and for infinitely many formula_0, unless co-NP is a subset of NP/poly, which (by the Karp–Lipton theorem) would cause the collapse of the polynomial hierarchy.
The same computational hardness assumption that co-NP is not a subset of NP/poly also implies several other results in complexity such as the optimality of certain kernelization techniques.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "f(x,y)"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "2^{n^\\epsilon}"
},
{
"math_id": 5,
"text": "\\epsilon>0"
}
] |
https://en.wikipedia.org/wiki?curid=59345867
|
59351
|
Semicolon
|
Punctuation mark (;)
The semicolon ; (or semi-colon) is a symbol commonly used as orthographic punctuation. In the English language, a semicolon is most commonly used to link (in a single sentence) two independent clauses that are closely related in thought, such as when restating the preceding idea with a different expression. When a semicolon joins two or more ideas in one sentence, those ideas are then given equal rank. Semicolons can also be used in place of commas to separate items in a list, particularly when the elements of the list themselves have embedded commas.
The semicolon is one of the least understood of the standard punctuation marks, and many English speakers use it infrequently.
In the QWERTY keyboard layout, the semicolon sits on the unshifted home row under the little finger of the right hand; it has become widely used in programming languages as a statement separator or terminator.
History.
In 1496, the semicolon ; is attested in Pietro Bembo's book "De Aetna" printed by Aldo Manuzio. The punctuation also appears in later writings of Bembo. Moreover, it is used in 1507 by Bartolomeo Sanvito, who was close to Manuzio's circle.
In 1561, Manuzio's grandson, also called Aldo Manuzio, explains the semicolon's use with several examples in "Orthographiae ratio". In particular, Manuzio motivates the need for punctuation ("interpungō") to divide ("distinguō") sentences and thereby make them understandable. The comma, semicolon, colon, and period are seen as steps, ascending from low to high; the semicolon thereby being an intermediate value between the comma , and colon :. Here are four examples used in the book to illustrate this:
"Publica, privata; sacra, profana; tua, aliena."
Public, private; sacred, profane; thine, another's.
"Ratio docet, si adversa fortuna sit, nimium dolendum non esse; si secunda, moderate laetandum."
Reason teaches, if fortune is adverse, not to complain too much; if favorable, to rejoice in moderation.
"Tu, quid divitiae valeant, libenter spectas; quid virtus, non item."
You, what riches are worth, gladly consider; what virtue (is worth), not so much.
"Etsi ea perturbatio est omnium rerum, ut suae quemque fortunae maxime paeniteat; nemoque sit, quin ubivis, quam ibi, ubi est, esse malit: tamen mihi dubium non est, quin hoc tempore bono viro, Romae esse, miserrimum sit."
Although it is a universal confusion of affairs(,) such that everyone regrets their own fate above all others; and there is no one, who would not rather anywhere else in the world, than there, where he is, prefer to be: yet I have no doubt, at the present time for an honest man, to be in Rome, is the worst form of misery.
Around 1580, Henry Denham starts using the semicolon "with propriety" for English texts, and more widespread usage picks up in the next decades.
Around 1640, in Ben Jonson's book "The English Grammar", the character ; is described as "somewhat a longer breath" compared to the comma. The aim of the breathing, according to Jonson, is to aid understanding.
In 1644, in Richard Hodges' "The English Primrose", it is written:
At a comma, stop a little;
At a semi-colon, somewhat more;
At a colon, a little more than the former;
At a period, make a full stop;
In 1762, in Robert Lowth's "A Short Introduction to English Grammar", a parallel is drawn between punctuation marks and rest in music:
The Period is a pause in quantity or duration double of the Colon; the Colon is double of the Semicolon; and the Semicolon is double of the Comma. So that they are in the same proportion to one another as the Sembrief, the Minim, the Crotchet, and the Quaver, in Music.
In 1798, in Lindley Murray's "English Grammar", the semicolon is introduced as follows:
The Semicolon is used for dividing a compound sentence into two or more parts, not so closely connected as those which are separated by a comma, nor yet so little dependent on each other, as those which are distinguished by a colon.
The semicolon is sometimes used, when the preceding member of the sentence does not of itself give a complete sense, but depends on the following clause; and sometimes when the sense of that member would be complete without the concluding one; as in the following instances.
Natural languages.
English.
Although terminal marks (i.e. full stops, exclamation marks, and question marks) indicate the end of a sentence, the comma, semicolon, and colon are normally sentence-internal, making them secondary boundary marks. In modern English orthography, the semicolon falls between terminal marks and the comma; its strength is equal to that of the colon.
The most common use of the semicolon is to join two independent clauses without using a conjunction like "and". Semicolons are followed by a lower case letter, unless that letter would ordinarily be capitalised mid-sentence (e.g., the word "I", acronyms/initialisms, or proper nouns). In older English printed texts, colons and semicolons are offset from the preceding word by a non-breaking space, a convention still current in present-day continental French texts. Ideally, the space is less wide than the inter-word spaces. Some guides recommend separation by a hair space. Modern style guides recommend no space before them and one space after. They also typically recommend placing semicolons outside ending quotation marks, although this was not always the case. For example, the first edition of "The Chicago Manual of Style" (1906) recommended placing the semicolon inside ending quotation marks.
Applications of the semicolon in English include:
In a list or sequence, if even one item needs its own internal comma, use of the semicolon as the separator throughout that list is justified, as shown by this example from the California Penal Code:<templatestyles src="Template:Blockquote/styles.css" />A crime or public offense is an act committed or omitted in violation of a law forbidding or commanding it, and to which is annexed, upon conviction, either of the following punishments:
Arabic.
In Arabic, the semicolon is called "fasila manqoota", which literally means "a dotted comma", and is written inverted ؛. In Arabic, the semicolon has several uses:
Greek and Church Slavonic.
In Greek and Church Slavonic, the question mark looks exactly the way a semicolon looks in English, similar to the question mark used in Latin. To indicate a long pause or to separate sections that already contain commas (the semicolon's purposes in English), Greek uses the interpunct ·, though extremely rarely.
Church Slavonic with a question mark: гдѣ єсть рождeйсѧ царь їудeйскій; (Where is the one who is born king of the Jews?)
Greek with a question mark: Τι είναι μια διασύνδεση; (What is an interpunct?)
French.
In French, a semicolon ("point-virgule", literally "dot-comma") is a separation between two full sentences, used where neither a colon nor a comma would be appropriate. The phrase following a semicolon has to be an independent clause, related to the previous one but not explaining it. (When the second clause explains the first one, French consistently uses a colon.)
The dash character is used in French writing too, but not as widely as the semicolon. Usage of these devices (semicolon and dash) varies from author to author.
Literature.
<templatestyles src="Template:Quote_box/styles.css" />
Just as there are writers who worship the semicolon, there are other high stylists who dismiss it — who label it, if you please, middle-class.
Lynne Truss, "Eats, Shoots, and Leaves"
Some authors have avoided and rejected the use of the semicolon throughout their works. Lynne Truss stated:
<templatestyles src="Template:Blockquote/styles.css" />Samuel Beckett spliced his way merrily through such novels as "Molloy" and "Malone Dies", thumbing his nose at the semicolon all the way. James Joyce preferred the colon, as he thought it was more authentically classical. P. G. Wodehouse did an effortlessly marvelous job without it, George Orwell tried to avoid the semicolon completely in "Coming Up for Air" (1939), Martin Amis included just one semicolon in "Money" (1984), and Umberto Eco was congratulated by an academic reader for using zero semicolons in "The Name of the Rose" (1983).
In response to Truss, Ben Macintyre, a columnist in "The Times", wrote:
<templatestyles src="Template:Blockquote/styles.css" />Americans have long regarded the semi-colon with suspicion, as a genteel, self-conscious, neither-one-thing-nor-the other sort of punctuation mark, with neither the butchness of a full colon nor the flighty promiscuity of the comma. Hemingway, Chandler, and Stephen King wouldn't be seen dead in a ditch with a semi-colon (though Truman Capote might). Real men, goes the unwritten rule of American punctuation, don't use semi-colons.
Semicolon use in British fiction has declined by 25% from 1991 to 2021.
Character encoding.
In Unicode, the semicolon is encoded at U+003B (;); this is the same value as it had in ASCII and ISO 8859-1.
Unicode contains encoding for several other semicolon or semicolon-like characters:
Computing.
Programming.
In computer programming, the semicolon is often used to separate multiple statements (for example, in Perl, Pascal, and SQL; see Pascal: Semicolons as statement separators). In other languages the semicolon is a "terminator" and is required after every statement (such as in PL/I, Java, and the C family). Today the semicolon as terminator has largely won out, but this was a divisive issue in programming languages from the 1960s into the 1980s. An influential and frequently cited study in this debate concluded strongly in favor of the semicolon as a terminator: "The most important [result] was that having a semicolon as a statement terminator was better than having a semicolon as a statement separator." The study has been criticized as flawed by proponents of the semicolon as a separator, because participants were familiar with a semicolon-as-terminator language and the grammar used was unrealistically strict. Nevertheless, the debate ended in favor of the semicolon as terminator; in either role, the semicolon gives structure to program text.
Semicolons are optional in a number of languages, including BCPL, Python, R, Eiffel, and Go, meaning that they are part of the formal grammar for the language but can be inferred in many or all contexts (e.g., by end of line that ends a statement, as in Go and R). As languages can be designed without them, semicolons are considered an unnecessary nuisance by some.
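The optional role of the semicolon in Python, one of the languages just mentioned, can be shown in a short sketch:

```python
# In Python, a newline normally ends a statement, so semicolons are
# optional; they become necessary only to put several statements on
# one line.
x = 1; y = 2; total = x + y      # three statements, one line

# The same statements without semicolons, one per line:
a = 1
b = 2
total2 = a + b

print(total, total2)             # 3 3
```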
The use of semicolons in control-flow structures and blocks of code is varied – semicolons are generally omitted after a closing brace, but included for a single statement branch of a control structure (the "then" clause), except in Pascal, where a semicolon terminates the entire if...then...else clause (to avoid dangling else) and thus is not allowed between a "then" and the corresponding "else", as this causes unnesting.
This use originates with ALGOL 60 and falls between the comma , – used as a list separator – and the period/full stop . – used to mark the end of the program. The semicolon, as a mark separating statements, corresponds to the ordinary English usage of separating independent clauses and gives the entire program the gross syntax of a single ordinary sentence. Of these other characters, whereas commas have continued to be widely used in programming for lists (and rare other uses, such as the comma operator that separates expressions in C), they are rarely used otherwise, and the period as the end of the program has fallen out of use. The last major use of the comma, semicolon, and period hierarchy is in Erlang (1986), where commas separate expressions; semicolons separate clauses, both for control flow and for function clauses; and periods terminate statements, such as function definitions or module attributes, not the entire program. Drawbacks of having multiple different separators or terminators (compared to a single terminator and single grouping, as in semicolon-and-braces) include mental overhead in selecting punctuation, and overhead in rearranging code, as this requires not only moving lines around, but also updating the punctuation.
In some cases the distinction between a separator and a terminator is strong, such as early versions of Pascal, where a final semicolon yields a syntax error. In other cases a final semicolon is treated either as optional syntax or as being followed by a null statement, which is either ignored or treated as a NOP (no operation or null command); compare trailing commas in lists. In some cases a blank statement is allowed, allowing a sequence of semicolons or the use of a semicolon by itself as the body of a control-flow structure. For example, a blank statement (a semicolon by itself) stands for a NOP in C/C++, which is useful in busy waiting synchronization loops.
APL uses semicolons to separate declarations of local variables and to separate axes when indexing multidimensional arrays, for example, codice_0.
Other languages (for instance, some assembly languages and LISP dialects, CONFIG.SYS and INI files) use semicolons to mark the beginning of comments.
Example C code:
#include <stdio.h>

int main() {
    int x, y;
    x = 1; y = 2;
    printf("X + Y = %d", x + y);
    return 0;
}
Or in JavaScript:
var x = 1; var y = 2;
alert("X + Y = " + (x + y));
Conventionally, in many languages, each statement is written on a separate line, but this is not typically a requirement of the language. In the above examples, two statements are placed on the same line; this is legal because the semicolon separates the two statements. Languages such as Java, the C family, and JavaScript thus rely on the semicolon to structure programs.
Data.
The semicolon is often used to separate elements of a string of text. For example, multiple e-mail addresses in the "To" field in some e-mail clients have to be delimited by a semicolon.
In Microsoft Excel, the semicolon is used as a list separator, especially in cases where the decimal separator is a comma, such as codice_1, instead of codice_2.
In Lua, semicolons or commas can be used to separate table elements.
In MATLAB and GNU Octave, the semicolon can be used as a row separator when defining a vector or matrix (whereas a comma separates the columns within a row of a vector or matrix) or to execute a command silently, without displaying the resulting output value in the console.
In HTML, a semicolon is used to terminate a character entity reference, either named or numeric. The declarations of a style attribute in Cascading Style Sheets (CSS) are separated and terminated with semicolons.
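The terminating role of the semicolon in character references can be demonstrated with Python's standard-library `html` module (a minimal sketch):

```python
import html

# Both named and numeric character references end at the semicolon;
# &#59; is the numeric reference for the semicolon itself.
amp = html.unescape("&amp;")     # "&"
semi = html.unescape("&#59;")    # ";"
print(amp, semi)
```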
The file system of RSX-11 and OpenVMS, Files-11, uses semicolons to indicate a file's version number. The semicolon is permitted in long filenames in the Microsoft Windows file systems NTFS and VFAT, but not in its short names.
In some delimiter-separated values file formats, the semicolon is used as the separator character, as an alternative to comma-separated values.
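With Python's standard `csv` module, for example, reading such semicolon-separated data only requires naming the delimiter (the data below is invented for illustration):

```python
import csv
import io

# Semicolon-delimited data from a locale where the comma is the
# decimal separator: the embedded comma needs no escaping.
data = "name;price\nwidget;3,50\ngadget;7,25\n"
rows = list(csv.reader(io.StringIO(data), delimiter=";"))
header, first = rows[0], rows[1]   # ["name", "price"], ["widget", "3,50"]
```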
Mathematics.
In mathematical derivations, a semicolon is used to separate expressions in a sequence, similar to its use in spoken English, and may be considered either punctuation for the mathematical expressions, or as punctuation for the words spoken when reading the expressions. For example, completing the square:
formula_0
formula_1
formula_2
formula_3
In the argument list of a mathematical function formula_4 a semicolon may be used to separate variables from fixed parameters.
In differential geometry and tensor analysis a semicolon preceding an index is used to indicate the covariant derivative of a function with respect to the coordinate associated with that index.
In the calculus of relations, the semicolon is used in infix notation for the composition of relations: formula_5
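For finite relations represented as sets of pairs, this definition translates directly into code (an illustrative sketch):

```python
# A;B = {(x, z) : there exists y with (x, y) in A and (y, z) in B}
def compose(A, B):
    return {(x, z) for (x, y1) in A for (y2, z) in B if y1 == y2}

A = {(1, "a"), (2, "b")}
B = {("a", "X"), ("b", "Y"), ("c", "Z")}
result = compose(A, B)           # {(1, "X"), (2, "Y")}
```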
In piecewise functions, a semicolon or comma may follow the subfunction or subdomain; the formula_6 or formula_7 can be omitted, its role taken over by the semicolon or comma.
The ; "Humphrey point" is sometimes used as the "decimal point" in duodecimal numbers: 54;612 equals 64.510.
Other uses.
The semicolon is commonly used as part of emoticons to indicate winking or crying, as in codice_3 and codice_4.
Project Semicolon is the name of an anti-suicide initiative (since the semicolon continues a sentence rather than ending it) which has led to the punctuation mark becoming a highly symbolic and popular tattoo (most commonly done on the wrist). While some consider this to be faith-based, the movement is in general faith-neutral and is inclusive for all people.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources
|
[
{
"math_id": 0,
"text": "~ \\mathrm{~ Given ~}\\; a x^2 + b x + c = 0 \\quad \\mathrm{~ and ~} \\quad a \\ne 0 \\ ; "
},
{
"math_id": 1,
"text": "\\ \\left[ x^2 + 2\\ \\frac{b}{\\ 2a\\ } x + \\left( \\frac{b}{\\ 2a\\ } \\right)^2 \\right] - \\left( \\frac{b}{\\ 2a\\ } \\right)^2 + \\frac{\\ c\\ }{a} = 0 \\qquad \\mathrm{~ for\\ all ~}\\; a \\ne 0, ~~ \\mathrm{~ and\\ any ~}\\; b, c \\ ; "
},
{
"math_id": 2,
"text": "\\ \\left[ x + \\frac{b}{\\ 2a\\ } \\right]^2 = \\left( \\frac{b}{\\ 2a\\ } \\right)^2 - \\frac{\\ c\\ }{a} \\ ; "
},
{
"math_id": 3,
"text": "\\ \\Biggl| x + \\frac{b}{\\ 2a\\ } \\Biggr| = \\sqrt{\\ \\left( \\frac{b}{\\ 2a\\ } \\right)^2 - \\frac{\\ c\\ }{a} ~~} \\qquad \\mathrm{~ if ~} \\quad x + \\frac{b}{\\ 2a\\ } \\in \\mathbb{R} \\quad \\mathrm{~ and ~} \\quad \\left( \\frac{b}{\\ 2a\\ } \\right)^2 - \\frac{\\ c\\ }{a} \\ge 0 ~."
},
{
"math_id": 4,
"text": "\\ f(x_1,\\ x_2,\\ \\dots\\ ;\\ a_1,\\ a_2,\\ \\dots) \\; ,"
},
{
"math_id": 5,
"text": "A;B \\ =\\ \\{(x,z): \\exists y \\ \\ xAy \\ \\land\\ yBz \\} ~."
},
{
"math_id": 6,
"text": "\\text{if}"
},
{
"math_id": 7,
"text": "\\text{for}"
}
] |
https://en.wikipedia.org/wiki?curid=59351
|
5935150
|
Warnier/Orr diagram
|
A Warnier/Orr diagram (also known as a logical construction of a program/system) is a kind of hierarchical flowchart that describes the organization of data and procedures. Warnier/Orr diagrams were initially developed in 1976, in France by Jean-Dominique Warnier and in the United States by Kenneth Orr, on the foundation of Boolean algebra. This method aids the design of program structures by identifying the output and processing results and then working backwards to determine the steps and combinations of input needed to produce them. The simple graphic method used in Warnier/Orr diagrams makes the levels in the system evident and the movement of data between them clear.
Basic elements.
formula_0
Sample Warnier/Orr data diagram illustrating the structure of a Wikipedia page.
Warnier/Orr diagrams show the processes and the sequences in which they are performed. Each process is defined hierarchically, i.e. it consists of sets of subprocesses that define it. At each level, the process is shown in a bracket that groups its components.
Since a process can have many different subprocesses, a Warnier/Orr diagram uses a set of brackets to show each level of the system. Critical factors in software definition and development are iteration (repetition) and alternation, and Warnier/Orr diagrams show these very well.
Using Warnier/Orr diagrams.
To develop a Warnier/Orr diagram, the analyst works backwards, starting with the system's output and using output-oriented analysis. On paper, the development moves from the set to the element (from left to right). First, the intended output or results of the processing are defined. At the next level, shown by inclusion with a bracket, the steps needed to produce the output are defined. Each step in turn is further defined. Additional brackets group the processes required to produce the result on the next level.
Warnier/Orr diagrams offer some distinct advantages to systems experts. They are simple in appearance and easy to understand, yet they are powerful design tools. They have the advantage of showing groupings of processes and the data that must be passed from level to level. In addition, the sequence of working backwards ensures that the system will be result-oriented. This method is useful for both data and process definition; it can be used for each independently, or both can be combined on the same diagram.
Constructs in Warnier/Orr diagrams.
There are four basic constructs used on Warnier/Orr diagrams: hierarchy, sequence, repetition, and alternation. There are also two slightly more advanced concepts that are occasionally needed: concurrency and recursion.
Hierarchy.
Hierarchy is the most fundamental of all of the Warnier/Orr constructs. It is simply a nested group of sets and subsets shown as a set of nested brackets. Each bracket on the diagram (depending on how it is drawn, the character is usually more like a brace "{" than a bracket "[", but they are called "brackets") represents one level of hierarchy. The hierarchy or structure represented on the diagram can show the organization of data or of processing; however, data and processing are never shown on the same diagram.
Sequence.
Sequence is the simplest structure to show on a Warnier/Orr diagram. Within one level of hierarchy, the features listed are shown in the order in which they occur. In other words, the step listed first is the first that will be executed (if the diagram reflects a process), while the step listed last is the last that will be executed. Similarly with data, the data field listed first is the first that is encountered when looking at the data, the data field listed last is the final one encountered.
Repetition.
Repetition is the representation of a classic "loop" in programming terms. It occurs whenever the same set of data occurs over and over again (for a data structure) or whenever the same group of actions is to occur over and over again (for a processing structure). Repetition is indicated by placing a set of numbers inside parentheses beneath the repeating set.
Typically two numbers are listed in the parentheses, representing the minimum and maximum number of times the set will repeat. By convention, the first letter of the repeating set is chosen as the letter that represents the maximum.
While the minimum bound and maximum bound can technically be anything, they are most often either "(1,n)" as in the example, or "(0,n)." When used to depict processing, the "(1,n)" repetition is classically known as a "DoUntil" loop, while the "(0,n)" repetition is called a "DoWhile" loop. On the Warnier/Orr diagram, however, there is no distinction between the two different types of repetition, other than the minimum bound value.
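The difference between the two bounds corresponds to familiar loop shapes; a minimal sketch (function names invented for illustration):

```python
# (0,n) repetition ("DoWhile"): the test comes first, so the body may
# execute zero times.
def count_do_while(items):
    n, i = 0, 0
    while i < len(items):
        n += 1
        i += 1
    return n

# (1,n) repetition ("DoUntil"): the body runs before the test, so it
# always executes at least once.
def count_do_until(items):
    n, i = 0, 0
    while True:
        n += 1
        i += 1
        if i >= max(1, len(items)):
            break
    return n

print(count_do_while([]), count_do_until([]))   # 0 1
```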
On occasion, the minimum and maximum bound are predefined and not likely to change: for instance the set "Day" occurs within the set "Month" from 28 to 31 times (since the smallest month has 28 days, the largest months, 31). This is not likely to change. And on occasion, the minimum and maximum are fixed at the same number.
In general, though, it is a bad idea to "hard code" a constant other than "0" or "1" in a number of times clause—the design should be flexible enough to allow for changes in the number of times without changes to the design. For instance, if a company has 38 employees at the time a design is done, hard coding a "38" as the "number of employees" within company would certainly not be as flexible as designing "(1,n)".
The number of times clause is always an operator attached to some set (i.e., the name of some bracket), and is never attached to an element (a diagram feature which does not decompose into smaller features). The reason for this will become more apparent as we continue to work with the diagrams. For now, you will have to accept this as a formation rule for a correct diagram.
Alternation.
Alternation, or selection, is the traditional "decision" process whereby a determination is made to execute one process or another. The Exclusive OR symbol (the plus sign inside the circle) indicates that the sets immediately above and below it are mutually exclusive (if one is present the other is not). This diagram indicates that an Employee is either Management or Non-Management, one Employee cannot be both. It is also permissible to use a "negation bar" above an alternative in a manner similar to engineering notation. The bar is read by simply using the word "not".
Alternatives do not have to be binary as in the previous examples, but may be many-way alternatives.
Concurrency.
Concurrency is one of the two advanced constructs used in the methodology. It is used whenever sequence is unimportant. For instance, years and weeks operate concurrently (or at the same time) within our calendar. The concurrency operator is rarely used in program design (since most languages do not support true concurrent processing anyway), but does come into play when resolving logical and physical data structure clashes.
Recursion.
Recursion is the least used of the constructs. It is used to indicate that a set contains an earlier or a less ordered version of itself. In the classic "bill of materials" problem components contain parts and other sub-components. Sub-components also contain sub-sub-components, and so on. The doubled bracket indicates that the set is recursive. Data structures that are truly recursive are rather rare.
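The bill-of-materials case reads naturally as a recursive traversal; a small sketch with invented data:

```python
# A component contains parts and sub-components; sub-components have
# the same structure, which is what the doubled bracket expresses.
def count_parts(component):
    total = len(component.get("parts", []))
    for sub in component.get("subcomponents", []):
        total += count_parts(sub)   # recurse into the same structure
    return total

bicycle = {
    "parts": ["frame"],
    "subcomponents": [
        {"parts": ["rim", "tire"], "subcomponents": []},
        {"parts": ["rim", "tire"], "subcomponents": []},
    ],
}
result = count_parts(bicycle)    # 1 + 2 + 2 = 5
```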
Applications.
One source mentions Warnier/Orr diagrams (along with Booch diagrams and object diagrams) as an inferior method for model-driven software architecture design.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\text{Wikipedia page} \n\\begin{cases} \n\\text{Top Section} & \\begin{cases} \\text{Introduction} \\\\ \\text{Table of Contents} \\end {cases} \\\\\n\\text{Body} & \\begin{cases} \\underset{(1,n)}\\text{Body Section} & \\begin{cases} \\text{Heading} \\\\ \\text{Text} \\end{cases} \\end{cases} \\\\\n\\text{End Section} & \\begin{cases} \\text{See Also} \\\\ \\text{References} \\\\ \\text{External Links} \\end{cases}\n\n\\end{cases}\n"
}
] |
https://en.wikipedia.org/wiki?curid=5935150
|
59352
|
Slash (punctuation)
|
Slanting line punctuation mark (/)
The slash is a slanting line punctuation mark /. It is also known as a stroke, a solidus, a forward slash and several other historical or technical names. Once used to mark periods and commas, the slash is now used to represent division and fractions, exclusive 'or' and inclusive 'or', and as a date separator.
A slash in the reverse direction \ is known as a backslash.
History.
Slashes may be found in early writing as a variant form of dashes, vertical strokes, etc. The present use of the slash, distinguished from such other marks, derives from the medieval European virgule, which was used as a period, scratch comma, and caesura mark. (The first sense was eventually lost to the low dot, while the other two developed separately into the comma , and the caesura mark ||.) Its use as a comma became especially widespread in France, where it was also used to mark the continuation of a word onto the next line of a page, a sense later taken on by the hyphen -. The Fraktur script used throughout Central Europe in the early modern period used a single slash as a scratch comma and a double slash // as a dash. The double slash developed into the double oblique hyphen ⸗ and the double hyphen ⹀ or ゠ before usually being simplified into various single dashes.
In the 18th century, the mark was generally known in English as the "oblique", a name applied to slanted strokes generally but particularly to the less vertical fraction slash. The variant "oblique stroke" was increasingly shortened to "stroke", which became the common British name for the character, although printers and publishing professionals often instead referred to it as an "oblique". In the 19th and early 20th century, it was also widely known as the "shilling mark" or "solidus", from its use as a notation or abbreviation for the shilling. The name "slash" is a recent development, not appearing in Webster's Dictionary until the Third Edition (1961), but it has gained wide currency through its use in computing, a context where it is sometimes used in British English in preference to "stroke". Clarifying terms such as "forward slash" have been coined owing to the widespread use of Microsoft's DOS and Windows operating systems, which use the backslash extensively.
Usage.
Disjunction and conjunction.
Connecting alternatives.
The slash is commonly used in many languages as a shorter substitute for the conjunction "or", typically with the sense of exclusive or (e.g., Y/N permits yes or no but not both). Its use in this sense is somewhat informal, although it is used in philology to note variants (e.g., "virgula/uirgula") and etymologies (e.g., F. /LL. /L. /PIE. "*wirgā").
Such slashes may be used to avoid taking a position in naming disputes. One example is the Syriac naming dispute, which prompted the US and Swedish censuses to use the respective official designations "Assyrian/Chaldean/Syriac" and "Assyrier/Syrianer" for the ethnic group.
In particular, since the late 20th century, the slash has been used to permit more gender-neutral language in place of the traditional masculine or plural gender neutrals. In the case of English, this is usually restricted to degendered pronouns such as "he/she" or "s/he". Most other Indo-European languages make more far-reaching use of grammatical gender. In these, the separate gendered desinences (grammatical suffixes) of the words may be given divided by slashes or set off with parentheses. For example, in Spanish, "hijo" is a son and "hija" is a daughter; some proponents of gender-neutral language advocate the use of "hijo/a" when writing for a general audience or addressing a listener of unknown gender. Less commonly, the at sign ⟨@⟩ is used instead: "hij@". Similarly, in German and some Scandinavian and Baltic languages, "Sekretär" refers to any secretary and "Sekretärin" to an explicitly female secretary; some advocates of gender neutrality support forms such as "Sekretär/in" for general use. This does not always work smoothly, however: problems arise in the case of words like "Arzt" ('doctor'), where the explicitly female form "Ärztin" is umlauted, and words like "Chinese" ('Chinese person'), where the explicitly female form "Chinesin" loses the terminal "-e".
Connecting non-contrasting items.
The slash is also used as a shorter substitute for the conjunction "and" or inclusive or (i.e., A or B or both), typically in situations where it fills the role of a hyphen or en dash. For example, the "Hemingway/Faulkner generation" might be used to discuss the era of the Lost Generation inclusive of the people around and affected by both Hemingway and Faulkner. This use is sometimes proscribed, as by "New Hart's Rules", the style guide for the Oxford University Press.
Presenting routes.
The slash, as a form of inclusive or, is also used to punctuate the stages of a route (e.g., Shanghai/Nanjing/Wuhan/Chongqing as stops on a tour of the Yangtze).
Introducing topic shifts.
The word "slash" is also developing as a way to introduce topic shifts or follow-up statements. "Slash" can introduce a follow-up statement, such as, "I really love that hot dog place on Liberty Street. Slash can we go there tomorrow?" It can also indicate a shift to an unrelated topic, as in "JUST SAW ALEX! Slash I just chubbed on oatmeal raisin cookies at north quad and i miss you." The new usage of "slash" appears most frequently in spoken conversation, though it can also appear in writing.
In speech.
Sometimes the word "slash" is used in speech as a conjunction to represent the written role of the character (as if a written slash were being read aloud from text), e.g. "bee slash mosquito protection" for a beekeeper's net hood, "There's a little bit of nectar slash honey over here, but really it's not a lot" (said by a beekeeper examining a beehive), and ""Gastornis" slash "Diatryma"" for two supposed genera of prehistoric birds which are now thought to be one genus.
Mathematics.
Fractions.
The fraction slash ⟨⁄⟩ is used between two numbers to indicate a fraction or ratio. Such formatting developed as a way to write the horizontal fraction bar on a single line of text. It is first attested in England and Mexico in the 18th century. This notation is known as an online, solidus, or shilling fraction. Nowadays fractions, unlike inline division, are often given using smaller numbers, superscript, and subscript (e.g., 23⁄43). This notation is responsible for the current form of the percent ⟨%⟩, permille ⟨‰⟩, and permyriad ⟨‱⟩ signs, developed from the horizontal form which represented an early modern corruption of an Italian abbreviation of "per cento".
Many fonts draw the fraction slash (and the division slash) less vertical than the slash. The separate encoding is also intended to permit automatic formatting of the preceding and succeeding digits by glyph substitution with numerator and denominator glyphs (e.g., display of "1, fraction slash, 2" as "½"), though this is not yet supported in many environments or fonts. Because of this lack of support, some authors still use Unicode subscripts and superscripts to compose fractions, and many fonts design these characters for this purpose. In addition, all of the multiples less than 1 of 1⁄n for 2 ≤ n ≤ 6 and n = 8 (e.g. 2⁄3 and 5⁄8), as well as 1⁄7, 1⁄9, and 1⁄10, are in the Unicode Number Forms or Latin-1 Supplement block as precomposed characters.
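The precomposed fraction characters mentioned above carry their numeric values in the Unicode character database, which can be inspected with Python's standard unicodedata module (an illustrative sketch, not part of the original text):

```python
import unicodedata

# Precomposed vulgar fractions carry a numeric value in the Unicode database.
for ch in "½⅓⅔¼¾⅕⅛":
    print(ch, unicodedata.name(ch), unicodedata.numeric(ch))

# The fraction slash (U+2044) is a distinct character from the ASCII slash.
assert ord("⁄") == 0x2044
assert ord("/") == 0x2F
assert unicodedata.numeric("½") == 0.5
```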
This notation can also be used when the concept of fractions is extended from numbers to arbitrary rings by the method of localization of a ring.
Division.
The division slash ⟨∕⟩, equivalent to the division sign ⟨÷⟩, may be used between two numbers to indicate division. For example, 23 ÷ 43 can also be written as 23 ∕ 43. This use developed from the fraction slash in the late 18th or early 19th century. The formatting was advocated by De Morgan in the mid-19th century.
Quotient of set.
A "quotient of a set" is informally a new set obtained by identifying some elements of the original set. This is denoted as a fraction formula_0 (sometimes even as a built fraction), where the numerator formula_1 is the original set (often equipped with some algebraic structure). What is appropriate as denominator depends on the context.
In the most general case, the denominator is an equivalence relation formula_2 on the original set formula_1, and elements are to be identified in the quotient formula_3 if they are equivalent according to formula_2; this is technically achieved by making formula_3 the set of all equivalence classes of formula_2.
In group theory, the slash is used to mark quotient groups. The general form is formula_4, where formula_5 is the original group and formula_6 is the normal subgroup; this is read "formula_5 mod formula_6", where "mod" is short for "modulo". Formally this is a special case of quotient by an equivalence relation, where formula_7 iff formula_8 for some formula_9. Since many algebraic structures (rings, vector spaces, etc.) in particular are groups, the same style of quotients extend also to these, although the denominator may need to satisfy additional closure properties for the quotient to preserve the full algebraic structure of the original (e.g. for the quotient of a ring to be a ring, the denominator must be an ideal).
When the original set is the set of integers formula_10, the denominator may alternatively be just an integer: formula_11. This is an alternative notation for the set formula_12 of integers modulo "n" (needed because formula_12 is also notation for the very different ring of "n"-adic integers). formula_11 is an abbreviation of formula_13 or formula_14, which both are ways of writing the set in question as a quotient of groups.
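As an illustrative sketch (not part of the original text), the quotient ℤ/nℤ described above can be modeled by partitioning integers into residue classes:

```python
# Sketch: model the quotient Z/nZ by grouping integers into equivalence
# classes, identifying x and y whenever n divides x - y.
def quotient_classes(elements, n):
    classes = {}
    for x in elements:
        classes.setdefault(x % n, []).append(x)
    return classes

classes = quotient_classes(range(-8, 9), 4)
# Z/4 has exactly the four classes represented by 0, 1, 2, 3 ...
assert sorted(classes) == [0, 1, 2, 3]
# ... and any two members of one class are congruent modulo 4.
assert all((a - b) % 4 == 0
           for members in classes.values()
           for a in members for b in members)
```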
Combining slash.
Slashes may also be used as a combining character in mathematical formulae. The most important use of this is that combining a slash with a relation negates it, producing e.g. 'not equal' formula_15 as negation of formula_16 or 'not in' formula_17 as negation of formula_18; these slashed relation symbols are always implicitly defined in terms of the non-slashed base symbol. The graphical form of the negation slash is mostly the same as for a division slash, except in some cases where that would look odd; the negation formula_19 of formula_20 (divides) and negation formula_21 of formula_2 (various meanings) customarily both have their negations slashes less steep and in particular shorter than the usual one.
The Feynman slash notation is an unrelated use of combining slashes, mostly seen in quantum field theory. This kind of combining slash takes a vector base symbol and converts it to a matrix quantity. Technically this notation is a shorthand for contracting the vector with the Dirac gamma matrices, so formula_22; what one gains is not only a more compact formula, but also not having to allocate a letter as the contracted index.
Computing.
The slash, sometimes distinguished as "forward slash", is used in computing in a number of ways, primarily as a separator among levels in a given hierarchy, for example in the path of a filesystem.
File paths.
The slash is used as the path component separator in many computer operating systems (e.g., Unix's pictures/image.png). In Unix and Unix-like systems, such as macOS and Linux, the slash is also used for the volume root directory (e.g., the initial slash in /usr/john/pictures). Confusion of the slash with the backslash ⟨\⟩ largely arises from the use of the latter as the path component separator in the widely used MS-DOS and Microsoft Windows systems.
Networking.
The slash is used in a similar fashion in internet URLs (e.g., http://en.wikipedia.org/wiki/Slash_(punctuation)). Often this portion of such URLs corresponds with files on a Unix server with the same name, and this is where this convention for internet URLs comes from.
The slash in an IP address (e.g., 192.0.2.0/29) indicates the prefix size in CIDR notation. The number of addresses of a subnet may be calculated as 2^(address size − prefix size), in which the address size is 128 for IPv6 and 32 for IPv4. For example, in IPv4, the prefix size /29 gives 2^(32 − 29) = 2^3 = 8 addresses.
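The address-count arithmetic can be checked with Python's standard ipaddress module (an illustrative sketch):

```python
import ipaddress

# The /29 example from the text: 32 - 29 = 3 host bits, so 2**3 = 8 addresses.
net = ipaddress.ip_network("192.0.2.0/29")
assert net.prefixlen == 29
assert net.num_addresses == 2 ** (32 - net.prefixlen) == 8

# The same formula with a 128-bit address size applies to IPv6.
net6 = ipaddress.ip_network("2001:db8::/64")
assert net6.num_addresses == 2 ** (128 - 64)
```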
Programming.
The slash is used as a division operator in most programming languages while APL uses it for reduction (fold) and compression (filter). The double slash is used by Rexx as a modulo operator, and Python (starting in version 2.2) uses a double slash for division which rounds (using floor) to an integer. In Raku the double slash is used as a "defined-or" alternative to ||. A dot and slash ⟨./⟩ is used in MATLAB and GNU Octave to indicate an element-by-element division of matrices.
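The Python behavior described above can be demonstrated directly: a single slash performs true division, while the double slash performs floor division.

```python
# Single slash: true division, returning a float.
assert 7 / 2 == 3.5

# Double slash: floor division, rounding toward negative infinity.
assert 7 // 2 == 3
assert -7 // 2 == -4   # floor, not truncation toward zero

# Floor division is also defined for floats, still flooring the quotient.
assert 7.0 // 2 == 3.0
```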
Comments that begin with /* (a slash and an asterisk) and end with */ were introduced in PL/I and subsequently adopted by SAS, C, Rexx, C++, Java, JavaScript, PHP, CSS, and C#. A double slash // is also used by C99, C++, C#, PHP, Java, Swift, and JavaScript to start a single line comment.
In SGML and derived languages such as HTML and XML, a slash is used in closing tags. For example, in HTML, <b> begins a section of bold text and </b> closes it. In XHTML, slashes are also necessary for "self-closing" elements such as the line-break element, written <br /> where HTML has simply <br>.
In a style originating in the Digital Equipment Corporation line of operating systems (OS/8, RT-11, TOPS-10, et cetera), Windows, DOS, some CP/M programs, OpenVMS, and OS/2 all use the slash to indicate command-line options. For example, the command dir/w is understood as using the command dir ("directory") with the "wide" option. No space is required between the command and the switch; this was the reason for the choice to use backslashes as the path separator since one would otherwise be unable to run a program in a different directory.
Slashes are used as the standard delimiters for regular expressions, although other characters can be used instead.
IBM JCL uses a double slash to start each line in a batch job stream except for /* and /&.
Programs.
IRC and many in-game chat clients use the slash to mark commands, such as joining and leaving a chat room or sending private messages. For example, in IRC, /join #services is a command to join the channel "services" and /me is a command to format the following message as though it were an action instead of a spoken message. In "Minecraft"'s chat function, the slash is used for executing console and plugin commands. In "Second Life"'s chat function, the slash is used to select the "communications channel", allowing users to direct commands to virtual objects "listening" on different channels. For example, if a virtual house's lights were set to use channel 42, the command "/42 on" would turn them on. In Discord, slash commands are used to send special messages and execute commands, like sending a shrug emoji (¯\_(ツ)_/¯) or a table flip emoji ((╯°□°)╯︵ ┻━┻), or changing one's nickname using "/nick". Slash commands can also be used to use Discord bots.
The Gedcom standard for exchanging computerized genealogical data uses slashes to delimit surnames; an example would be Bill /Smith/ Jr. Slashes around surnames are also used in Personal Ancestral File.
Currency.
The slash (as the "shilling mark" or "solidus") was an abbreviation for the shilling, a former coin of the United Kingdom and its former colonies. Before the decimalisation of currency in Britain, its currency abbreviations (collectively £sd) represented their Latin names, derived from a medieval French modification of the late Roman libra, solidus, and denarius. Thus, one penny less than two pounds was written £1 19s. 11d. During the period when English orthography included the long s, ſ, the ſ came to be written as a single slash. The s. and the d. might therefore be omitted, and "2/6" meant "two shillings and sixpence". Amounts in full pounds, shillings and pence could be written in many different ways, for example: £1 9s 6d, £1.9.6, £1-9-6, and even £1/9/6d (with a slash used "also" to separate pounds and shillings). The same style was also used under the British Raj and early independent India for the predecimalization rupee/anna/pie system.
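The £sd arithmetic behind such notations (20 shillings to the pound, 12 pence to the shilling) can be sketched as follows; the helper name is illustrative only:

```python
# Pre-decimal British currency: 1 pound = 20 shillings = 240 pence.
def to_lsd(total_pence):
    pounds, rest = divmod(total_pence, 240)
    shillings, pence = divmod(rest, 12)
    return pounds, shillings, pence

# "One penny less than two pounds" is 479 pence, i.e. £1 19s. 11d. (£1/19/11).
assert to_lsd(2 * 240 - 1) == (1, 19, 11)
# "2/6", two shillings and sixpence, is 30 pence.
assert to_lsd(2 * 12 + 6) == (0, 2, 6)
```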
In five East African countries (Kenya, Tanzania, Uganda, Somalia, and the "de facto" country of Somaliland), where the national currencies are denominated in shillings, the decimal separator is a slash mark (e.g., 2/). Where the minor unit is zero, an equals sign is used (e.g., 5/=).
Dates.
Slashes are a common calendar date separator used across many countries and by some standards such as the Common Log Format used by web servers. Depending on context, it may be in the form Day/Month/Year, Month/Day/Year, or Year/Month/Day. If only two elements are present, they typically denote a day and month in some order. For example, 9/11 is a common American way of writing the date 11 September; Britons write this as 11/9. Owing to the ambiguity across cultures, the practice of using only two elements to denote a date is sometimes proscribed.
Because of the world's many varying conventional date and time formats, ISO 8601 advocates the use of a Year-Month-Day system separated by hyphens (e.g., Victory in Europe Day occurred on 1945-05-08). In the ISO 8601 system, slashes represent date ranges: "1939/1945" represents what is more commonly written in Anglophone countries as "1939–1945". The autumn term of a northern-hemisphere school year might be marked "2010-09-01/12-22".
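Python's standard datetime module emits the hyphenated ISO 8601 form directly (a small sketch; the slash-joined range is assembled by hand):

```python
from datetime import date

# date.isoformat() yields the hyphen-separated Year-Month-Day form.
ve_day = date(1945, 5, 8)
assert ve_day.isoformat() == "1945-05-08"

# ISO 8601 reserves the slash for ranges, e.g. 1939/1945.
war_years = "/".join(str(y) for y in (1939, 1945))
assert war_years == "1939/1945"
```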
In English, a range marked by a slash often has a separate meaning from one marked by a dash or hyphen. "24/25 December" would mark the time shared by both days (i.e., the night from Christmas Eve to Christmas morning) rather than the time made up by both days together, which would be written "24–25 December". Similarly, a historical reference to "1066/67" might imply an event occurred during the winter of late 1066 and early 1067, whereas a reference to 1066–67 would cover the entirety of both years. The usage was particularly common in British English during World War II, where such slash dates were used for night-bombing air raids. It is also used by some police forces in the United States.
Numbering.
The slash is used in numbering to note totals. For example, "page 17/35" indicates that the relevant passage is on the 17th page of a 35-page document. Similarly, the marking "#333/500" on a product indicates it is the 333rd out of 500 identical products or out of a batch of 500 such products. For scores on schoolwork, in games, and so on, "85/100" indicates 85 points were attained out of a possible 100.
Slashes are also sometimes used to mark ranges in numbers that already include hyphens or dashes. One example is the ISO treatment of dating. Another is the US Air Force's treatment of aircraft serial numbers, which are normally written to note the fiscal year and aircraft number. For example, "85-1000" notes the thousandth aircraft ordered in fiscal year 1985. To indicate the next fifty subsequent aircraft, a slash is used in place of a hyphen or dash: "85-1001/1050".
Linguistic transcription.
A pair of slashes (as "slants") are used in the transcription of speech to enclose pronunciations (i.e., phonetic transcriptions). For example, the IPA transcription of the English pronunciation of "solidus" is written . Properly, slashes mark broad or phonemic transcriptions, whereas narrow, allophonic transcriptions are enclosed by square brackets. For example, the word "little" may be broadly rendered as but a careful transcription of the velarization of the second L would be written .
In sociolinguistics, a double or triple slash may also be used in the transcription of a traditional sociolinguistic interview or in other type of linguistic elicitation to represent simultaneous speech, interruptions, and certain types of speech disfluencies.
Single and double slashes are often used as typographic substitutes for the click letters ǀ, ǁ.
A diaphonemic transcription may be marked in several ways, e.g. with a pair of slash marks ().
Poetry.
The slash is used in various scansion notations for representing the metrical pattern of a line of verse, typically to indicate a stressed syllable.
Line breaks.
The slash (as a "virgule") offset by spaces to either side is used to mark line breaks when transcribing text from a multi-line format into a single-line one. It is particularly common in quoting poetry, song lyrics, and dramatic scripts, formats where omitting the line breaks risks losing meaningful context. For example, here is a part of Hamlet's soliloquy:
<templatestyles src="Template:Blockquote/styles.css" />
If someone wanted to quote the above soliloquy in a prose paragraph, it is standard to mark the line breaks as follows: "To be, or not to be, that is the question: / Whether 'tis nobler in the mind to suffer / The slings and arrows of outrageous Fortune, / Or to take arms against a sea of troubles, / And by opposing end them..." Less often, virgules are used in marking paragraph breaks when quoting a prose passage. Some style guides, such as "New Hart's", prefer to use a pipe | in place of the slash to mark these line and paragraph breaks.
The virgule may be thinner than a standard slash when typeset. In computing contexts, it may be necessary to use a non-breaking space before the virgule to prevent it from being widowed on the next line.
Abbreviation.
The slash has become standard in several abbreviations. Generally, it is used to mark two-letter initialisms such as A/C (short for "air conditioner"), w/o ("without"), b/w ("black and white" or, less often, "between"), w/e ("whatever" or, less often, "weekend" or "week ending"), i/o ("input/output"), r/w ("read/write"), and n/a ("not applicable"). Other initialisms employing the slash include w/ ("with") and w/r/t ("with regard to"). Such slashed abbreviations are somewhat more common in British English and were more common around the Second World War (as with "S/E" to mean "single-engined"). The abbreviation 24/7 (denoting 24 hours a day, 7 days a week) describes a business that is always open or unceasing activity.
The slash in derived units such as m/s (meters per second) is not an abbreviation slash, but a straight division. It is however in that position read as 'per' rather than e.g. 'over', which can be seen as analogous to units whose symbols are pure abbreviations such as mph (miles per hour), although in abbreviations 'per' is 'p' or dropped entirely (psi, pounds per square inch) rather than a slash.
In the US government, the names of offices within various departments are abbreviated using slashes, starting with the larger office and following with its subdivisions. For example, the Federal Aviation Administration's Office of Commercial Space Transportation is formally abbreviated FAA/AST.
Proofreading.
The slash or vertical bar (as a "separatrix") is used in proofreading to mark the end of margin notes or to separate margin notes from one another. The slash is also sometimes used in various proofreading initialisms, such as l/c and u/c for changes to lower and upper case, respectively.
Fiction.
The slash is used in fan fiction to mark the romantic pairing a piece will focus upon (e.g., a "Star Trek" story denoted K/S would focus on a sexual relationship between Kirk and Spock), a usage which developed in the 1970s from the earlier friendship pairings marked by ampersands (e.g., K&S). The genre as a whole is now known as slash fiction. Because it is more generally associated with homosexual male relationships, lesbian slash fiction is sometimes distinguished as femslash. In situations where other pairings occur, the genres may be distinguished as m/m, f/f, and so on.
Libraries.
The slash is used under the Anglo-American Cataloguing Rules to separate the title of a work from its statement of responsibility (i.e., the listing of its author, director, etc.). Like a line break, this slash is surrounded by a single space on either side. For example:
The format is used in both card catalogs and online records.
Addresses.
The slash is sometimes used as an abbreviation for building numbers. For example, in some contexts, 8/A Evergreen Gardens specifies Apartment 8 in Building A of the residential complex Evergreen Gardens. In the United States, however, such an address refers to the first division of Apartment 8 and is simply a variant of Apartment 8A or 8-A. Similarly in the United Kingdom, an address such as 12/2 Anywhere Road means flat (or apartment) 2 in the building numbered 12 on Anywhere Road.
The slash is also used in the United States in the postal abbreviation for "care of." For example, Judy Smith c/o Bob Smith could be used when Bob Smith is receiving mail on Judy's behalf. Typically, this would be used in a situation where someone is either out of town, in an institution or hotel, or temporarily staying at another's address.
In Spanish address writings, "c/" is used as the abbreviation of "calle" (or "carrer" in Catalan) meaning "street".
Music.
Slashes are used in musical notation as an alternative to writing out specific notes where it is easier to read than traditional notation or where the player can improvise. They are commonly used to indicate chords either in place of or in combination with traditional notation and for drummers as an indication to continue with the previously indicated style.
Sports.
A slash is used to mark a spare (knocking down all ten pins in two throws) when scoring ten-pin and duckpin bowling.
Text messaging.
In online messaging, a slash might be used to imitate the formatting of a chat command (e.g., writing "/fliptable" as though there were such a command) or the closing tags of languages such as HTML (e.g., writing "/endrant" to end a diatribe or "/s" to mark the preceding text as sarcastic). A pair of slashes is sometimes used as a way to mark italic text, where no special formatting is available (e.g., /italics/).
Before an e-signature.
In legal writing, especially in a pleading, attorneys often sign their name with an S that is enclosed by slashes and preceding the attorney's name. An example would be the following:
<templatestyles src="Template:Blockquote/styles.css" />/s/ Bob Smith Attorney for Plaintiff
As a letter.
The Iraqw language of Tanzania uses the slash as a letter, representing the voiced pharyngeal fricative, as in /ameeni, "woman".
Spacing.
There are usually no spaces either before or after a slash. According to "", a slash is usually written without spacing on either side when it connects single words, letters or symbols. Exceptions are in representing the start of a new line when quoting verse, or a new paragraph when quoting prose. "The Chicago Manual of Style" also allows spaces when either of the separated items is a compound that itself includes a space: "Our New Zealand / Western Australia trip". (Compare use of an en dash used to separate such compounds.) "The Canadian Style: A Guide to Writing and Editing" prescribes: "No space before or after an oblique when used between individual words, letters or symbols; one space before and after the oblique when used between longer groups which contain internal spacing", giving the examples "n/a" and "Language and Society / "Langue et société"".
According to "The Chicago Manual of Style", when typesetting a URL or computer path, line breaks should occur before a slash but not in the text between two slashes.
Encoding.
As a very common character, the slash (as "slant") was originally encoded in ASCII with the decimal code 47 or 0x2F. The same value was used in Unicode, which calls it "solidus" and also adds some more characters:
In XML and HTML, the slash can also be represented with the character entity reference &sol; or the numeric character references &#47; or &#x2F;.
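These encodings can be verified with Python's standard library (an illustrative sketch):

```python
import html

# The slash is code point 47 (hex 2F) in ASCII and Unicode alike.
assert ord("/") == 47 == 0x2F

# HTML character references for the slash all decode to the same character.
assert html.unescape("&#47;") == "/"
assert html.unescape("&#x2F;") == "/"
assert html.unescape("&sol;") == "/"
```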
Alternative names.
The slash may also be read out as "and", "or", "and/or", "to", or "cum" in some compounds separated by a slash; "over" or "out of" in fractions, division, and numbering; and "per" or "a(n)" in derived units (as km/h) and prices (as $~/kg), where the division slash stands for "each".
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S / R"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "\\sim"
},
{
"math_id": 3,
"text": "S/{\\sim}"
},
{
"math_id": 4,
"text": "G/N"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "N"
},
{
"math_id": 7,
"text": "g \\sim h"
},
{
"math_id": 8,
"text": "g = hn"
},
{
"math_id": 9,
"text": "n \\in N"
},
{
"math_id": 10,
"text": "\\mathbb{Z}"
},
{
"math_id": 11,
"text": "\\mathbb{Z}/n"
},
{
"math_id": 12,
"text": "\\mathbb{Z}_n"
},
{
"math_id": 13,
"text": "\\mathbb{Z}/n\\mathbb{Z}"
},
{
"math_id": 14,
"text": "\\mathbb{Z}/(n)"
},
{
"math_id": 15,
"text": "\\neq"
},
{
"math_id": 16,
"text": "="
},
{
"math_id": 17,
"text": "\\notin"
},
{
"math_id": 18,
"text": "\\in"
},
{
"math_id": 19,
"text": "\\nmid"
},
{
"math_id": 20,
"text": "\\mid"
},
{
"math_id": 21,
"text": "\\nsim"
},
{
"math_id": 22,
"text": "A\\!\\!\\!/ = \\gamma^\\mu A_\\mu"
}
] |
https://en.wikipedia.org/wiki?curid=59352
|
59354087
|
Trigonometric Rosen–Morse potential
|
Solvable quantum mechanics potential
The trigonometric Rosen–Morse potential, named after the physicists Nathan Rosen and Philip M. Morse, is among the exactly solvable quantum mechanical potentials.
Definition.
In dimensionless units and modulo additive constants, it is defined as
where formula_0 is a relative distance, formula_1 is an angle rescaling parameter, and formula_2 is, for now, a length parameter introduced for dimensional matching. Another parametrization of the same potential is
which is the trigonometric version of a one-dimensional hyperbolic potential introduced in molecular physics by Nathan Rosen and Philip M. Morse and given by,
a parallelism that explains the potential's name. The most prominent application concerns the formula_3 parametrization, with formula_4 a non-negative integer, and is due to Schrödinger, who intended to formulate the hydrogen atom problem on Albert Einstein's closed universe, formula_5, the direct product of a time line with a three-dimensional closed space of positive constant curvature, the hypersphere formula_6. He introduced it on this geometry in his celebrated equation as the counterpart to the Coulomb potential, a mathematical problem briefly highlighted below.
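The displayed formulas are not reproduced in this text. As a hedged sketch consistent with the parameters named above (relative distance, angle rescaling, and length parameter), a commonly used dimensionless form of the trigonometric Rosen–Morse potential is:

```latex
% Dimensionless trigonometric Rosen-Morse potential (sketch):
% a centrifugal-like csc^2 term plus a cotangent ("curved Coulomb") term.
v_{\mathrm{tRM}}(\chi) = \frac{\ell(\ell+1)}{\sin^{2}\chi} - 2b\,\cot\chi,
\qquad \chi = \frac{r}{R},
```

where r is the relative distance, χ the rescaled angle, R the length parameter, and b the strength of the cotangent term; the sections below identify the first term with the centrifugal barrier on the hypersphere and the second with a dipole potential.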
"The formula_7 case: Four-dimensional rigid rotator in inertial quantum motion on the three dimensional hypersphere formula_6"
The hypersphere is a surface in a four-dimensional Euclidean space, formula_8, and is defined as,
where formula_9, formula_10, formula_11, and formula_12 are the Cartesian coordinates of a vector in formula_8, and formula_2 is termed the hyper-radius. Correspondingly, the Laplace operator in formula_8 is given by,
Switching now to polar coordinates,
one finds the Laplace operator expressed as
Here, formula_13 stands for the squared angular momentum operator in four dimensions, while formula_14 is the standard three-dimensional squared angular momentum operator. Considering now the hyper-spherical radius formula_2 as a constant, one encounters the Laplace-Beltrami operator on formula_15 as
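The displayed operator is not reproduced in this text; the standard hyperspherical decomposition assumed by the surrounding discussion reads, as a sketch:

```latex
% Laplacian in R^4 in polar coordinates (r, chi, theta, phi), with K^2 the
% squared four-dimensional angular momentum and L^2 its 3D counterpart:
\Delta_4 = \frac{1}{r^3}\frac{\partial}{\partial r}
           \Bigl(r^3\frac{\partial}{\partial r}\Bigr)
           - \frac{\mathcal{K}^2}{r^2},
\qquad
\mathcal{K}^2 = -\frac{1}{\sin^2\chi}\frac{\partial}{\partial\chi}
                \Bigl(\sin^2\chi\,\frac{\partial}{\partial\chi}\Bigr)
                + \frac{\mathbf{L}^2}{\sin^2\chi}.
```

At fixed hyper-radius r = R, this leaves the Laplace–Beltrami operator on the hypersphere as −𝒦²/R².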
With that the free wave equation on formula_15 takes the form
The solutions, formula_16, to this equation are the so-called four-dimensional hyper-spherical harmonics defined as
where formula_17 are the Gegenbauer polynomials. Changing in (10) variables as
one observes that the formula_18 function satisfies the one-dimensional Schrödinger equation with the formula_19 potential according to
The one-dimensional potential in the latter equation, which coincides with the Rosen–Morse potential in (1) for formula_20 and formula_21, clearly reveals that for integer formula_22 values, the first term of this potential takes its origin from the centrifugal barrier on formula_15. Stated differently, equation (10) and its version (14) describe the inertial (free) quantum motion of a rigid rotator in the four-dimensional Euclidean space formula_8, such as the H atom, the positronium, etc., whose "ends" trace the large "circles" (i.e., formula_23 spheres) on formula_15.
Now the question arises whether the second term in (1) could also be related in some way to the formula_15 geometry.
"The formula_3 case: Electric charge confinement on formula_15 and a dipole potential shaped after formula_24"
Insofar as the cotangent function solves the Laplace–Beltrami equation on formula_15,
it represents a fundamental solution on formula_15, which is why Schrödinger considered it the counterpart to the Coulomb potential in flat space, itself a fundamental solution to the formula_25 Laplacian. Due to this analogy, the cotangent function is frequently referred to as the "curved Coulomb" potential. Such an interpretation ascribes the cotangent potential to a single charge source, and here lies a severe problem: while open spaces such as formula_25 support single charges, in closed spaces a single charge cannot be defined in a consistent way. Closed spaces are necessarily and inevitably charge neutral, meaning that the minimal fundamental degrees of freedom allowed on them are charge dipoles (see Fig. 1).
For this reason, the wave equation
which transforms upon the variable change, formula_26, into the familiar one-dimensional Schrödinger equation with the formula_3 trigonometric Rosen–Morse potential,
in reality describes the quantum motion of a charge dipole perturbed by the field of another charge dipole, not the motion of a single charge within the field produced by another charge. Stated differently, the two equations (16) and (17) do not, strictly speaking, describe a hydrogen atom on formula_27, but rather the quantum motion on formula_15 of a light formula_28 dipole perturbed by the dipole potential of another, very heavy dipole such as the H atom, so that the reduced mass, formula_29, would be of the order of the electron mass and could be neglected in comparison with the energy.
In order to understand this decisive issue, one needs to focus attention on the necessity of ensuring the validity on formula_15 of both the Gauss law and the superposition principle, for the sake of being able to formulate electrostatics there. With the cotangent function in (15) as a single-source potential, this cannot be achieved. Rather, it is necessary to prove that the cotangent function represents a dipole potential. Such a proof has been delivered in. To understand its line of argument, it is necessary to go back to the expression for the Laplace operator in (5) and, before considering the hyper-radius as a constant, to factorize this space into a time line and formula_15. For this purpose, a "time" variable is introduced via the logarithm of the formula_15 radius. Introducing this variable change in (7) amounts to the following Laplacian,
The formula_31 parameter is known as "conformal time", and the whole procedure is referred to as "radial quantization". Charge statics is now built up by setting formula_31 = const in (19) and calculating the harmonic function of the remaining piece, the so-called conformal Laplacian, formula_32, on formula_15, which is read off from (19) as
where we have chosen formula_33, equivalently, formula_34.
Then the correct equation to be employed in the calculation of the fundamental solution is
formula_35.
This Green function to
formula_36
has been calculated for example in.
Its values at the respective South and North poles, in turn denoted by formula_37 and formula_38, are reported as
and
From them one can now construct the dipole potential for a fundamental charge formula_39 placed, say, on the North pole, and a fundamental charge of opposite sign, formula_40, placed on the antipodal South pole of formula_15. The associated potentials, formula_41 and formula_42, are then constructed through multiplication of the respective Green function values by the relevant charges as
Assuming now the validity of the superposition principle, one finds a charge-dipole (CD) potential emerging at a point formula_44 on formula_15 according to
The electric field to this dipole is obtained in the standard way through differentiation as
and coincides with the precise expression prescribed by the Gauss theorem on formula_15, as explained in. Notice that formula_39 stands for dimensionless charges. In terms of dimensional charges, formula_45, related to formula_39 via
the potential perceived by another charge formula_46, is
For example, in the case of electrostatics, the fundamental charge formula_45 is taken to be the electron charge, formula_47, in which case the special notation of
is introduced for the so-called fundamental coupling constant of electrodynamics. In effect, one finds
In Fig. 2 we display the dipole potential formula_43 in (30).
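The numerical value of the coupling constant introduced above can be checked with a short computation. The sketch below is purely illustrative, using standard CGS values for the electron charge, ħ, and c; it reproduces the familiar fine-structure constant e²/(ħc) ≈ 1/137:

```python
# Illustrative check of the fundamental coupling constant of electrodynamics,
# alpha = e^2 / (hbar * c), in Gaussian (CGS) units.
e = 4.8032e-10      # electron charge [esu]
hbar = 1.0546e-27   # reduced Planck constant [erg s]
c = 2.9979e10       # speed of light [cm/s]

alpha = e ** 2 / (hbar * c)
print(alpha)        # ~7.297e-3, i.e. approximately 1/137
```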
With that, the one-dimensional Schrödinger equation that describes on formula_15 the quantum motion of an electric charge dipole perturbed by the trigonometric Rosen–Morse potential, produced by another electric charge dipole, takes the form of
Because of the relationship formula_48, with formula_49 being the node number of the wave function, one could change the labeling of the wave functions, formula_50, to formula_51, which is more familiar in the literature.
In eqs. (31)-(32) one recognizes the one-dimensional wave equation with the trigonometric Rosen–Morse potential in (1) for formula_20 and formula_52.
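The shape of this potential is easy to inspect numerically. In the sketch below (illustrative units in which ħ²/(2μR²) = 1), the formula_19 centrifugal term walls off both ends of the interval while the cotangent term tilts the well, in line with the discussion above:

```python
import math

def v_trm(chi, ell=1, b=1.0):
    """Trigonometric Rosen-Morse potential on (0, pi): the csc^2 centrifugal
    barrier plus the cotangent (dipole) term, in units with
    hbar^2 / (2 mu R^2) = 1 (illustrative)."""
    return ell * (ell + 1) / math.sin(chi) ** 2 - 2.0 * b / math.tan(chi)

# The potential diverges toward both ends of (0, pi) and stays finite inside:
samples = [v_trm(x) for x in (0.05, math.pi / 2, math.pi - 0.05)]
print(samples)
```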
In this way, the cotangent term of the trigonometric Rosen–Morse potential could be derived from the Gauss law on formula_15 in combination with the superposition principle, and could be interpreted as a dipole potential generated by a system consisting of two opposite fundamental charges. The centrifugal formula_19 term of this potential has been generated by the kinetic energy operator on formula_15. In this manner, the complete trigonometric Rosen–Morse potential could be derived from first principles.
Back to Schrödinger's work, the formula_15 hyper-radius for the hydrogen atom has turned out to be very large indeed, of the order of formula_53. This is larger by eight orders of magnitude than the size of the hydrogen atom. The result has been obtained from fitting magnetic dipole elements to hydrogen hyperfine structure effects (see the references therein). The aforementioned radius is sufficiently large to allow approximating the hyper-sphere locally by plane space, in which case the existence of a single charge can still be justified. In cases in which the hyper-spherical radius becomes comparable to the size of the system, charge neutrality takes over. Such an example will be presented in section 6 below.
Before closing this section, it is in order to present the exact solutions to the equations (31)-(32), given by
where formula_54 stand for the Romanovski polynomials.
Application to Coulomb fluids.
Coulomb fluids consist of dipolar particles and are modelled by means of direct numerical simulations. It is common to choose cubic cells with periodic boundary conditions in conjunction with Ewald summation techniques. In a more efficient alternative method pursued in the literature, one employs as a simulation cell the hyper-spherical surface formula_15 in (4). As already mentioned above, the basic object on formula_15 is the electric charge dipole, termed a "bi-charge" in fluid dynamics, which can be visualized classically as a rigid "dumbbell" (rigid rotator) of two antipodal charges of opposite signs, formula_55 and formula_56. The potential of a bi-charge is calculated by solving on formula_27 the Poisson equation,
Here, formula_57 is the angular coordinate of a charge formula_45 placed at the angular position formula_57, measured from the North pole, while formula_58 stands for the angular coordinate, antipodal to formula_57, of the position at which the charge of opposite sign is placed in the Southern hemisphere. The solution found,
equals the potential in (30), modulo conventions regarding the charge signs and units. It provides an alternative proof, to that delivered by the equations (19)-(30), of the fact that the cotangent function on formula_15 has to be associated with the potential generated by a charge dipole. In contrast, the potentials in the above equations (23) and (24) have been interpreted in the literature as due to so-called single "pseudo-charge" sources, where a "pseudo-charge" is understood as the association of a point charge formula_45 with a uniform neutralizing background of total charge formula_56.
The pseudo-charge potential, formula_59, solves formula_60.
Therefore, the bi-charge potential is the difference between the potentials of two antipodal pseudo-charges of opposite signs.
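This difference construction can be verified numerically. The sketch below assumes the standard form of the S³ pseudo-charge Green function, G(χ) ∝ (π − χ)cot χ up to an additive constant (an assumption consistent with the discussion above); subtracting the antipodal copy indeed collapses to a pure cotangent:

```python
import math

def g_pseudo(chi):
    """Assumed pseudo-charge potential on S^3 (Green function with a uniform
    neutralizing background), up to an additive constant."""
    return (math.pi - chi) / (4 * math.pi ** 2 * math.tan(chi))

def v_bicharge(chi):
    """Bi-charge potential: difference of two antipodal pseudo-charge
    potentials of opposite signs."""
    return g_pseudo(chi) - g_pseudo(math.pi - chi)

# The difference reduces to cot(chi)/(4*pi), a pure dipole (cotangent) potential:
for chi in (0.3, 1.0, 2.0, 2.8):
    assert abs(v_bicharge(chi) - 1.0 / (4 * math.pi * math.tan(chi))) < 1e-12
print("bi-charge potential equals cot(chi) / (4*pi)")
```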
Application to color confinement and the physics of quarks.
The confining nature of the cotangent potential in (28) finds an application in a phenomenon known from the physics of the strong interaction which refers to the non-observability of free quarks, the constituents of the hadrons. Quarks are considered to possess three fundamental internal degrees of freedom, conditionally termed "colors": red formula_61, blue formula_62, and green formula_63, while anti-quarks carry the corresponding anti-colors, anti-red formula_64, anti-blue formula_65, or anti-green formula_66. The non-observability of free quarks is thus equivalent to the non-observability of free color charges, and thereby to the "color neutrality" of the hadrons. Quark "colors" are the fundamental degrees of freedom of Quantum Chromodynamics (QCD), the gauge theory of the strong interaction. In contrast to Quantum Electrodynamics, the gauge theory of the electromagnetic interactions, QCD is a non-Abelian theory, which roughly means that the "color" charges, denoted by formula_67, are not constants but depend on the values, formula_68, of the transferred momentum, giving rise to the so-called running of the strong coupling constant, formula_69, in which case the Gauss law becomes more involved. However, at low momentum transfer, near the so-called infrared regime, the momentum dependence of the color charge significantly weakens, and, in approaching a constant value,
drives the Gauss law back to the standard form known from Abelian theories. For this reason, under the condition of color-charge constancy, one can attempt to model the color neutrality of hadrons in parallel to the neutrality of Coulomb fluids, namely, by considering quantum color motions on closed surfaces. In particular, for the case of the hyper-sphere formula_15, it has been shown that a potential, there denoted by formula_70, and obtained from the one in (28) through the replacement,
i.e. the potential
where formula_71 is the number of colors, is the adequate one for the description of the spectra of the light mesons with masses up to formula_72. In particular, the hydrogen-like degeneracies have been well captured. This is because the potential, being a harmonic function of the Laplacian on formula_15, has the same symmetry as the Laplacian itself, a symmetry defined by the isometry group of formula_15, i.e. by formula_73, the maximal compact group of the conformal group formula_74. For this reason, the potential in (39), as part of formula_75, accounts not only for color confinement but also for conformal symmetry in the infrared regime of QCD. Within such a picture, a meson is constituted by a quark formula_76-anti-quark formula_77 color dipole in quantum motion on an formula_15 geometry, and gets perturbed by the dipole potential in (39), generated by another color dipole, such as a gluon formula_63-anti-gluon formula_66 pair, as visualized in Fig. 3.
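The hydrogen-like degeneracy pattern mentioned above follows from simple state counting on formula_15: for a given hyper-angular momentum formula_89, the angular momentum runs from 0 to formula_89, each value contributing (2ℓ+1) magnetic substates, for a total of formula_90 states. A quick check:

```python
# Degeneracy of an S^3 level with hyper-angular momentum K: ell runs over
# 0..K, each value contributing (2*ell + 1) magnetic substates.
def degeneracy(K):
    return sum(2 * ell + 1 for ell in range(K + 1))

print([degeneracy(K) for K in range(6)])  # [1, 4, 9, 16, 25, 36] = (K+1)^2
```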
The formula_15 geometry could be viewed as the unique closed space-like geodesic of a four-dimensional hyperboloid of one sheet, formula_78, foliating the space-like region outside the causal Minkowski light-cone, assumed to have one more spatial dimension, in accord with the so-called de Sitter Special Relativity, formula_30. Indeed, potentials, being instantaneous and not allowing for time orderings, represent virtual, i.e. acausal, processes and as such can be generated in one-dimensional wave equations upon proper transformations of virtual quantum motions on surfaces located outside the causal region marked by the light-cone. Such surfaces can be viewed as geodesics of the surfaces foliating the space-like region. Quantum motions on open formula_78 geodesics can give rise to barriers describing resonances transmitted through them. An illustrative example of the application of the color-confining dipole potential in (39) to meson spectroscopy is given in Fig. 4. It should be pointed out that the potentials in the above equations (23) and (24) have been alternatively derived from Wilson loops with cusps, predicting their magnitude as formula_79, in accord with (38).
The potential in (39) has furthermore been used in the Dirac equation on formula_15, and has been shown to predict realistic electromagnetic nucleon form-factors and related constants, such as mean-square electric-charge and magnetic-dipole radii, the proton and nucleon magnetic dipole moments and their ratio, etc.
"Applicability of formula_80 to phase transitions"
The property of the trigonometric Rosen-Morse potential, be it in the parametrization with formula_81 in eq. (32), which is of interest to electrodynamics, or in the formula_82 parametrization of interest to QCD from the previous section, qualifies it for studies of phase transitions in systems with electromagnetic or strong interactions on hyperspherical "boxes" of finite volumes. The virtue of such studies lies in the possibility to express the temperature, formula_83, as the inverse, formula_84, of the radius formula_2 of the hypersphere. For this purpose, knowledge of the partition function, here denoted by formula_85, of the potential under consideration is needed. In the following we evaluate formula_85 for the case of the Schrödinger equation on
formula_15 with linear energy (here in units of MeV),
where formula_86 is the reduced mass of the two-body system under consideration. The partition function for this energy spectrum is defined in the standard way as,
Here, the thermodynamic beta is defined as formula_87, with formula_88 standing for the Boltzmann constant. In evaluating formula_85 it is useful to recall that with the increase of formula_89 the second term on the right-hand side in (40) becomes negligible compared to the term proportional to formula_90, a behavior which becomes even more pronounced for the choices formula_91 and formula_92. In both cases formula_93 is much smaller than the corresponding dimensionless factor, formula_94, multiplying formula_95. For this reason the partition function under investigation may be well approximated by,
Along the same lines, the partition function for the formula_96 parametrization corresponding to the hydrogen atom on formula_15 has been calculated in the literature, where a more sophisticated approximation has been employed. When transcribed to the current notation and units, that partition function presents itself as,
The infinite integral has first been treated by
means of partial integration giving,
Then the argument of the exponential under the sign of the integral has been cast as,
thus reaching the following intermediate result,
As a next step the differential has been represented as
an algebraic manipulation which allows one to express the partition function in (46) in terms of the formula_97 function of complex argument, according to,
where formula_98 is an arbitrary path on the complex plane starting in zero and ending in
formula_99. For more details and physical interpretations, see the literature.
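The rapid convergence invoked in the approximation of formula_85 above can be illustrated with a truncated sum. The sketch below assumes level energies proportional to formula_90 together with the formula_90-fold degeneracy of the formula_15 levels, and keeps the overall energy scale symbolic; it illustrates the convergence behavior, not the exact normalization used above:

```python
import math

def partition_sum(beta_eps, kmax=2000):
    """Truncated partition sum for E_K proportional to (K+1)^2 with the
    (K+1)^2-fold S^3 degeneracy; beta_eps is beta times the (assumed,
    symbolic) energy scale."""
    return sum((K + 1) ** 2 * math.exp(-beta_eps * (K + 1) ** 2)
               for K in range(kmax + 1))

# At low temperature (large beta_eps) the lowest levels dominate; at high
# temperature (small beta_eps) many levels contribute and the sum grows:
print(partition_sum(1.0))    # ~0.44, dominated by the lowest levels
print(partition_sum(0.01))   # large: many levels contribute
```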
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "r"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "V_{tRM}^{(\\ell+1,b,1)}(\\chi)"
},
{
"math_id": 4,
"text": "\\ell"
},
{
"math_id": 5,
"text": "R^1\\otimes S^3"
},
{
"math_id": 6,
"text": "S^{3}"
},
{
"math_id": 7,
"text": "V_{tRM}^{(\\ell+1,0,1)}(\\chi)"
},
{
"math_id": 8,
"text": "E_4"
},
{
"math_id": 9,
"text": "x_1"
},
{
"math_id": 10,
"text": "x_2"
},
{
"math_id": 11,
"text": "x_3"
},
{
"math_id": 12,
"text": "x_4"
},
{
"math_id": 13,
"text": "{\\mathcal K}^2(\\chi,\\theta,\\varphi)"
},
{
"math_id": 14,
"text": "{\\mathbf L}^2(\\theta,\\varphi)"
},
{
"math_id": 15,
"text": "S^3"
},
{
"math_id": 16,
"text": "Y_{K\\ell m}(\\chi,\\theta,\\varphi)"
},
{
"math_id": 17,
"text": "{\\mathcal G}_n^{\\ell+1}(\\cos\\chi)"
},
{
"math_id": 18,
"text": "\\psi_{K\\ell}(\\chi)"
},
{
"math_id": 19,
"text": "\\csc^2\\chi"
},
{
"math_id": 20,
"text": "a=\\ell+1"
},
{
"math_id": 21,
"text": "b=0"
},
{
"math_id": 22,
"text": "a"
},
{
"math_id": 23,
"text": "S^2"
},
{
"math_id": 24,
"text": "\\alpha Z\\cot\\chi"
},
{
"math_id": 25,
"text": "E_3"
},
{
"math_id": 26,
"text": "\\Psi_{K\\ell m}(\\chi,\\theta,\\varphi)=\\frac{U^{(b)}_{K\\ell}(\\chi)}{\\sin\\chi}Y_{\\ell}^m(\\theta,\\varphi)"
},
{
"math_id": 27,
"text": "S_3"
},
{
"math_id": 28,
"text": "(e^+,e^-)"
},
{
"math_id": 29,
"text": "\\mu"
},
{
"math_id": 30,
"text": "dS_4"
},
{
"math_id": 31,
"text": "\\tau"
},
{
"math_id": 32,
"text": "\\Delta^1_{S^3}(\\chi,\\theta,\\varphi)"
},
{
"math_id": 33,
"text": "\\tau=0"
},
{
"math_id": 34,
"text": "R=1"
},
{
"math_id": 35,
"text": "\n\\left( \\Delta^1_{S^3}(\\tau,\\chi,\\theta,\\varphi)-1\\right)|_{\\tau=0}{\\mathcal G}=\\Delta_{S^3}(R,\\chi,\\theta,\\varphi)|_{R=1} {\\mathcal G}=\\delta -\\frac{1}{2\\pi^2}"
},
{
"math_id": 36,
"text": "\n\\Delta_{S^3}(\\tau,\\chi,\\theta,\\varphi)\n"
},
{
"math_id": 37,
"text": "{\\mathcal G}_{\\pi}(\\chi)"
},
{
"math_id": 38,
"text": "{\\mathcal G}_{0}(\\chi)"
},
{
"math_id": 39,
"text": "{\\mathcal Q}"
},
{
"math_id": 40,
"text": "-{\\mathcal Q}"
},
{
"math_id": 41,
"text": "V_{\\pi}(\\chi)"
},
{
"math_id": 42,
"text": "V_{0}(\\chi)"
},
{
"math_id": 43,
"text": "V_{CD}(\\chi)"
},
{
"math_id": 44,
"text": "\\chi"
},
{
"math_id": 45,
"text": "q"
},
{
"math_id": 46,
"text": "(Zq)/\\sqrt{\\hbar c}"
},
{
"math_id": 47,
"text": "e"
},
{
"math_id": 48,
"text": "K-\\ell=n"
},
{
"math_id": 49,
"text": "n"
},
{
"math_id": 50,
"text": "U^{(b)}_{K\\ell}(\\chi)"
},
{
"math_id": 51,
"text": "U^{(b)}_{\\ell n}(\\chi)"
},
{
"math_id": 52,
"text": "2b=\\alpha Z"
},
{
"math_id": 53,
"text": "10^{-3} cm"
},
{
"math_id": 54,
"text": "R_n^{\\alpha\\beta}(x)"
},
{
"math_id": 55,
"text": "+q"
},
{
"math_id": 56,
"text": "-q"
},
{
"math_id": 57,
"text": "\\chi_0"
},
{
"math_id": 58,
"text": "{\\bar\\chi}_0"
},
{
"math_id": 59,
"text": "V_{\\mbox{psd-ch}} "
},
{
"math_id": 60,
"text": " \\Delta_{S^3} V_{\\mbox{psd-ch}} =\\delta -\\frac{1}{2\\pi^2}"
},
{
"math_id": 61,
"text": "(r)"
},
{
"math_id": 62,
"text": "(b)"
},
{
"math_id": 63,
"text": "(g)"
},
{
"math_id": 64,
"text": "({\\bar r})"
},
{
"math_id": 65,
"text": "({\\bar b})"
},
{
"math_id": 66,
"text": "({\\bar g})"
},
{
"math_id": 67,
"text": "g_s(Q^2)"
},
{
"math_id": 68,
"text": "Q^2"
},
{
"math_id": 69,
"text": "\\alpha_s(Q^2)"
},
{
"math_id": 70,
"text": "V_{CCD}(\\chi)"
},
{
"math_id": 71,
"text": "N_c"
},
{
"math_id": 72,
"text": "\\sim 2500 MeV"
},
{
"math_id": 73,
"text": "SO(4)"
},
{
"math_id": 74,
"text": "SO(2,4)"
},
{
"math_id": 75,
"text": "V_{tRM}^{(\\ell+1,\\alpha_sN_c/2,1)}(\\chi)"
},
{
"math_id": 76,
"text": "(q)"
},
{
"math_id": 77,
"text": "({\\bar q})"
},
{
"math_id": 78,
"text": "{\\mathbf H}_1^4"
},
{
"math_id": 79,
"text": "\\alpha_s N_c/(4\\pi^2)"
},
{
"math_id": 80,
"text": "V_{tRM}^{\\left(\\ell +1,b ,1\\right)}"
},
{
"math_id": 81,
"text": "b=\\alpha Z/2"
},
{
"math_id": 82,
"text": "b=\\alpha_sN_c/2"
},
{
"math_id": 83,
"text": "T"
},
{
"math_id": 84,
"text": "T=1/R"
},
{
"math_id": 85,
"text": "{\\mathcal Z}(R,b)"
},
{
"math_id": 86,
"text": "\\mu c^2"
},
{
"math_id": 87,
"text": "\\beta =(k_BT)^{-1}"
},
{
"math_id": 88,
"text": "k_B"
},
{
"math_id": 89,
"text": "K"
},
{
"math_id": 90,
"text": "(K+1)^2"
},
{
"math_id": 91,
"text": "2b =\\alpha Z"
},
{
"math_id": 92,
"text": "2b=\\alpha_sN_c"
},
{
"math_id": 93,
"text": "b"
},
{
"math_id": 94,
"text": "(\\hbar c)/(2\\mu c^2R)"
},
{
"math_id": 95,
"text": "(\\hbar\nc/R)(K+1)^2"
},
{
"math_id": 96,
"text": "b=\\alpha Z/2 "
},
{
"math_id": 97,
"text": "\\mbox{erf}(u)"
},
{
"math_id": 98,
"text": "\\Gamma"
},
{
"math_id": 99,
"text": "u\\to \\infty"
}
] |
https://en.wikipedia.org/wiki?curid=59354087
|
59354377
|
Greedy geometric spanner
|
In computational geometry, a greedy geometric spanner is an undirected graph whose distances approximate the Euclidean distances among a finite set of points in a Euclidean space. The vertices of the graph represent these points. The edges of the spanner are selected by a greedy algorithm that includes an edge whenever its two endpoints are not connected by a short path of shorter edges. The greedy spanner was first described in the PhD thesis of Gautam Das and conference paper and subsequent journal paper by Ingo Althöfer et al. These sources also credited Marshall Bern (unpublished) with the independent discovery of the same construction.
Greedy geometric spanners have bounded degree, a linear total number of edges, and total weight close to that of the Euclidean minimum spanning tree. Although known construction methods for them are slow, fast approximation algorithms with similar properties are known.
Construction.
The greedy geometric spanner is determined from an input consisting of a set of points and a parameter formula_0. The goal is to construct a graph whose shortest path distances are at most formula_1 times the geometric distances between pairs of points. It may be constructed by a greedy algorithm that adds edges one at a time to the graph, starting from an edgeless graph with the points as its vertices. All pairs of points are considered, in sorted (ascending) order by their distances, starting with the closest pair. For each pair formula_2 of points, the algorithm tests whether the graph constructed so far already contains a path from formula_3 to formula_4 with length at most formula_5. If not,
the edge formula_6 with length formula_7 is added to the graph.
By construction, the resulting graph is a geometric spanner with stretch factor at most formula_1.
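The construction just described can be sketched directly in code. The following illustrative Python version follows the algorithm above: pairs are scanned in ascending order of distance, and Dijkstra's algorithm (pruned at the length bound formula_5) decides whether an edge is needed:

```python
import heapq
from itertools import combinations
from math import dist

def greedy_spanner(points, t):
    """Greedy geometric t-spanner: add the edge (u, v) only when the graph
    built so far has no u-v path of length at most t * d(u, v)."""
    adj = [[] for _ in points]            # adjacency lists: (neighbor, length)
    edges = []
    pairs = sorted(combinations(range(len(points)), 2),
                   key=lambda p: dist(points[p[0]], points[p[1]]))
    for u, v in pairs:
        d_uv = dist(points[u], points[v])
        if shortest_path(adj, u, v, t * d_uv) > t * d_uv:
            adj[u].append((v, d_uv))
            adj[v].append((u, d_uv))
            edges.append((u, v))
    return edges

def shortest_path(adj, src, dst, bound):
    """Dijkstra from src to dst, abandoning partial paths longer than bound."""
    best = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, x = heapq.heappop(heap)
        if x == dst:
            return d
        if d > best.get(x, float("inf")):
            continue
        for y, w in adj[x]:
            nd = d + w
            if nd <= bound and nd < best.get(y, float("inf")):
                best[y] = nd
                heapq.heappush(heap, (nd, y))
    return float("inf")
```

Because pairs are scanned in the same order as in Kruskal's algorithm, choosing formula_1 large enough (e.g. formula_14) makes the output coincide with the Euclidean minimum spanning tree.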
A naive implementation of this method would take time formula_8 on inputs with formula_9 points. This is because the considerations for each of the formula_10 pairs of points involve an instance of Dijkstra's algorithm to find a shortest path in a graph with formula_11 edges. It uses formula_10 space to store the sorted list of pairs of points. More careful algorithms can construct the same graph in time formula_12, or in space formula_11.
A construction for a variant of the greedy spanner that uses graph clustering to quickly approximate the graph distances runs in time formula_13 in Euclidean spaces of any bounded dimension, and can produce spanners with (approximately) the same properties as the greedy spanners. The same method can be extended to spaces with bounded doubling dimension.
Properties.
The same greedy construction produces spanners in arbitrary metric spaces, but in Euclidean spaces it has good properties, some of which do not hold more generally.
The greedy geometric spanner in any metric space always contains the minimum spanning tree of its input, because the greedy construction algorithm follows the same insertion order of edges as Kruskal's algorithm for minimum spanning trees. If the greedy spanner algorithm and Kruskal's algorithm are run in parallel, considering the same pairs of vertices in the same order, each edge added by Kruskal's algorithm will also be added by the greedy spanner algorithm, because the endpoints of the edge will not already be connected by a path. In the limiting case when formula_1 is large enough (e.g. formula_14, where formula_9 is the number of vertices in the graph) the two algorithms produce the same output.
In Euclidean spaces of bounded dimension, for any constant formula_1, the greedy geometric formula_1-spanners on sets of formula_9 points have bounded degree, implying that they also have formula_11 edges. This property does not extend even to well-behaved metric spaces: there exist spaces with bounded doubling dimension where the greedy spanner has unbounded vertex degree. However, in such spaces the number of edges is still formula_11.
Greedy geometric spanners in bounded-dimension Euclidean spaces also have total weight at most a constant times the total weight of the Euclidean minimum spanning tree.
This property remains true in spaces of bounded doubling dimension.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "t\\ge 1"
},
{
"math_id": 1,
"text": "t"
},
{
"math_id": 2,
"text": "(u,v)"
},
{
"math_id": 3,
"text": "u"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "t\\cdot d(u,v)"
},
{
"math_id": 6,
"text": "uv"
},
{
"math_id": 7,
"text": "d(u,v)"
},
{
"math_id": 8,
"text": "O(n^3\\log n)"
},
{
"math_id": 9,
"text": "n"
},
{
"math_id": 10,
"text": "O(n^2)"
},
{
"math_id": 11,
"text": "O(n)"
},
{
"math_id": 12,
"text": "O(n^2\\log n)"
},
{
"math_id": 13,
"text": "O(n\\log n)"
},
{
"math_id": 14,
"text": "t>n"
}
] |
https://en.wikipedia.org/wiki?curid=59354377
|
5936
|
Chemical thermodynamics
|
Study of chemical reactions within the laws of thermodynamics
Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the "spontaneity" of processes.
The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the "fundamental equations of Gibbs" can be derived. From these four, a multitude of equations, relating the thermodynamic properties of the thermodynamic system can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics.
History.
In 1865, the German physicist Rudolf Clausius, in his "Mechanical Theory of Heat", suggested that the principles of thermochemistry, e.g. the heat evolved in combustion reactions, could be applied to the principles of thermodynamics. Building on the work of Clausius, between the years 1873-76 the American mathematical physicist Willard Gibbs published a series of three papers, the most famous one being the paper "On the Equilibrium of Heterogeneous Substances". In these papers, Gibbs showed how the first two laws of thermodynamics could be measured graphically and mathematically to determine both the thermodynamic equilibrium of chemical reactions as well as their tendencies to occur or proceed. Gibbs’ collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius and Sadi Carnot.
During the early 20th century, two major publications successfully applied the principles developed by Gibbs to chemical processes and thus established the foundation of the science of chemical thermodynamics. The first was the 1923 textbook "Thermodynamics and the Free Energy of Chemical Substances" by Gilbert N. Lewis and Merle Randall. This book was responsible for supplanting the chemical affinity with the term free energy in the English-speaking world. The second was the 1933 book "Modern Thermodynamics by the methods of Willard Gibbs" written by E. A. Guggenheim. In this manner, Lewis, Randall, and Guggenheim are considered as the founders of modern chemical thermodynamics because of the major contribution of these two books in unifying the application of thermodynamics to chemistry.
Overview.
The primary objective of chemical thermodynamics is the establishment of a criterion for determination of the feasibility or spontaneity of a given transformation. In this manner, chemical thermodynamics is typically used to predict the energy exchanges that occur in the following processes:
The following state functions are of primary concern in chemical thermodynamics:
Most identities in chemical thermodynamics arise from application of the first and second laws of thermodynamics, particularly the law of conservation of energy, to these state functions.
The three laws of thermodynamics (global, unspecific forms):
1. The energy of the universe is constant.
2. In any spontaneous process, there is always an increase in entropy of the universe.
3. The entropy of a perfect crystal (well ordered) at 0 Kelvin is zero.
Chemical energy.
Chemical energy is the energy that can be released when chemical substances undergo a transformation through a chemical reaction. Breaking and making chemical bonds involves energy release or uptake, often as heat that may be either absorbed by or evolved from the chemical system.
Energy released (or absorbed) because of a reaction between chemical substances ("reactants") is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical system. It can be calculated from formula_0, the internal energy of formation of the reactant molecules related to the bond energies of the molecules under consideration, and formula_1, the internal energy of formation of the product molecules. The change in internal energy is equal to the heat change if it is measured under conditions of constant volume (at STP condition), as in a closed rigid container such as a bomb calorimeter. However, at constant pressure, as in reactions in vessels open to the atmosphere, the measured heat is usually not equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case the widely tabulated enthalpies of formation are used.)
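The difference between the constant-volume and constant-pressure heats can be made concrete with a small ideal-gas estimate. In the sketch below, the reaction and its enthalpy change are hypothetical numbers chosen purely for illustration; only the ideal-gas relation ΔU = ΔH − Δn(gas)·RT is assumed:

```python
R = 8.314       # gas constant, J/(mol K)
T = 298.15      # temperature, K

def delta_u(delta_h, delta_n_gas, temperature=T):
    """Internal-energy change from the enthalpy change, using the ideal-gas
    relation Delta(PV) = delta_n_gas * R * T."""
    return delta_h - delta_n_gas * R * temperature

# Hypothetical reaction consuming one net mole of gas (delta_n_gas = -1)
# with an assumed Delta H = -100 kJ/mol:
du = delta_u(-100e3, -1)
print(du / 1e3)   # about -97.5: the two heats differ by Delta_n_gas * R * T
```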
A related term is the heat of combustion, which is the chemical energy released due to a combustion reaction and of interest in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized, its energy release is similar (though assessed differently than for a hydrocarbon fuel — see food energy).
In chemical thermodynamics, the term used for the chemical potential energy is chemical potential, and sometimes the Gibbs-Duhem equation is used.
Chemical reactions.
In most cases of interest in chemical thermodynamics there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy in the universe unless they are at equilibrium or are maintained at a "running equilibrium" through "quasi-static" changes by being coupled to constraining devices, such as pistons or electrodes, to deliver and receive external work. Even for homogeneous "bulk" systems, the free-energy functions depend on the composition, as do all the extensive thermodynamic potentials, including the internal energy. If the quantities { "N""i" }, the number of chemical species, are omitted from the formulae, it is impossible to describe compositional changes.
Gibbs function or Gibbs Energy.
For an unstructured, homogeneous "bulk" system, there are still various "extensive" compositional variables { "N""i" } that "G" depends on, which specify the composition (the amounts of each chemical substance, expressed as the numbers of molecules present or the numbers of moles). Explicitly,
formula_2
For the case where only "PV" work is possible,
formula_3
a restatement of the fundamental thermodynamic relation, in which "μi" is the chemical potential for the "i"-th component in the system
formula_4
The expression for d"G" is especially useful at constant "T" and "P", conditions, which are easy to achieve experimentally and which approximate the conditions in living creatures
formula_5
Chemical affinity.
While this formulation is mathematically defensible, it is not particularly transparent since one does not simply add or remove molecules from a system. There is always a "process" involved in changing the composition; e.g., a chemical reaction (or many), or movement of molecules from one phase (liquid) to another (gas or solid). We should find a notation which does not seem to imply that the amounts of the components ( "N""i" ) can be changed independently. All real processes obey conservation of mass, and in addition, conservation of the numbers of atoms of each kind.
Consequently, we introduce an explicit variable to represent the degree of advancement of a process, a progress variable "ξ" for the "extent of reaction" (Prigogine & Defay, p. 18; Prigogine, pp. 4–7; Guggenheim, pp. 37, 62), and use the partial derivative ∂"G"/∂"ξ" (in place of the widely used "Δ"G"", since the quantity at issue is not a finite change). The result is an understandable expression for the dependence of d"G" on chemical reactions (or other processes). If there is just one reaction
formula_6
If we introduce the "stoichiometric coefficient" for the "i-th" component in the reaction
formula_7
(negative for reactants), which tells how many molecules of "i" are produced or consumed, we obtain an algebraic expression for the partial derivative
formula_8
where we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923. (De Donder; Prigogine & Defay, p. 69; Guggenheim, pp. 37, 240) The minus sign ensures that in a spontaneous change, when the change in the Gibbs free energy of the process is negative, the chemical species have a positive affinity for each other. The differential of "G" takes on a simple form that displays its dependence on composition change
formula_9
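The sign convention can be illustrated with a short computation. The stoichiometric coefficients and chemical potentials below are hypothetical numbers, chosen only to show how the affinity decides the direction of a reaction:

```python
def affinity(nu, mu):
    """De Donder affinity A = -sum_i nu_i * mu_i for one reaction; the
    reaction runs forward spontaneously (d xi > 0) exactly when A > 0,
    since then dG = -A d xi < 0."""
    return -sum(n * m for n, m in zip(nu, mu))

# Hypothetical reaction X + 2 Y -> Z (nu negative for reactants):
nu = (-1, -2, 1)
mu = (-50e3, -30e3, -150e3)   # assumed chemical potentials, J/mol
A = affinity(nu, mu)
print(A)   # 40000.0 > 0: the forward reaction is spontaneous here
```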
If there are a number of chemical reactions going on simultaneously, as is usually the case,
formula_10
with a set of reaction coordinates { ξ"j" }, avoiding the notion that the amounts of the components ( "N""i" ) can be changed independently. The expressions above are equal to zero at thermodynamic equilibrium, while they are negative when chemical reactions proceed at a finite rate, producing entropy. This can be made even more explicit by introducing the reaction "rates" d"ξ""j"/d"t". For every "physically independent" "process" (Prigogine & Defay, p. 38; Prigogine, p. 24)
formula_11
This is a remarkable result since the chemical potentials are intensive system variables, depending only on the local molecular milieu. They cannot "know" whether temperature and pressure (or any other system variables) are going to be held constant over time. It is a purely local criterion and must hold regardless of any such constraints. Of course, it could have been obtained by taking partial derivatives of any of the other fundamental state functions, but nonetheless is a general criterion for (−"T" times) the entropy production from that spontaneous process; or at least any part of it that is not captured as external work. (See "Constraints" below.)
We now relax the requirement of a homogeneous "bulk" system by letting the chemical potentials and the affinity apply to any locality in which a chemical reaction (or any other process) is occurring. By accounting for the entropy production due to irreversible processes, the equality for d"G" is now replaced by
formula_12
or
formula_13
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as "T" times a corresponding increase in the entropy of the system and its surrounding. Or it may go partly toward doing external work and partly toward creating entropy. The important point is that the "extent of reaction" for a chemical reaction may be coupled to the displacement of some external mechanical or electrical quantity in such a way that one can advance only if the other also does. The coupling may occasionally be "rigid", but it is often flexible and variable.
Solutions.
In solution chemistry and biochemistry, the Gibbs free energy decrease (∂"G"/∂"ξ", in molar units, denoted cryptically by Δ"G") is commonly used as a surrogate for (−"T" times) the global entropy produced by spontaneous chemical reactions in situations where no work is being done; or at least no "useful" work; i.e., other than perhaps ± "P" d"V". The assertion that all "spontaneous reactions have a negative ΔG" is merely a restatement of the second law of thermodynamics, giving it the physical dimensions of energy and somewhat obscuring its significance in terms of entropy. When no useful work is being done, it would be less misleading to use the Legendre transforms of the entropy appropriate for constant "T", or for constant "T" and "P", the Massieu functions −"F/T" and −"G/T", respectively.
Non-equilibrium.
Generally, the systems treated by conventional chemical thermodynamics are either at equilibrium or near equilibrium. Ilya Prigogine developed the thermodynamic treatment of open systems that are far from equilibrium. In doing so he discovered phenomena and structures of completely new and unexpected types. His generalized, nonlinear and irreversible thermodynamics has found surprising applications in a wide variety of fields.
Non-equilibrium thermodynamics has been applied to explain how ordered structures, e.g. biological systems, can develop from disorder. Even when Onsager's relations are utilized, the classical principles of equilibrium thermodynamics still show that linear systems close to equilibrium always evolve into states of disorder that are stable to perturbations; these principles cannot explain the occurrence of ordered structures.
Prigogine called these systems dissipative systems: they are formed and maintained by dissipative processes that exchange energy between the system and its environment, and they disappear if that exchange ceases. They may be said to live in symbiosis with their environment.
The method which Prigogine used to study the stability of the dissipative structures to perturbations is of very great general interest. It makes it possible to study the most varied problems, such as city traffic problems, the stability of insect communities, the development of ordered biological structures and the growth of cancer cells to mention but a few examples.
System constraints.
In this regard, it is crucial to understand the role of walls and other "constraints", and the distinction between "independent" processes and "coupling". Contrary to the clear implications of many reference sources, the previous analysis is not restricted to homogeneous, isotropic bulk systems which can deliver only "P"d"V" work to the outside world, but applies even to the most structured systems. There are complex systems with many chemical "reactions" going on at the same time, some of which are really only parts of the same, overall process. An "independent" process is one that "could" proceed even if all others were unaccountably stopped in their tracks. Understanding this is perhaps a "thought experiment" in chemical kinetics, but actual examples exist.
A gas-phase reaction at constant temperature and pressure which results in an increase in the number of molecules will lead to an increase in volume. Inside a cylinder closed with a piston, it can proceed only by doing work on the piston. The extent variable for the reaction can increase only if the piston moves out, and conversely if the piston is pushed inward, the reaction is driven backwards.
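Under an ideal-gas assumption, the pressure-volume work delivered to the piston per mole of such a reaction is "P"Δ"V" = Δ"n"<sub>gas</sub>"RT", independent of the pressure itself. A short sketch (Δ"n"<sub>gas</sub> chosen for a hypothetical A(g) → 2 B(g)):

```python
# Sketch: PV work done on the piston by a gas-phase reaction that increases
# the number of gas molecules at constant T and P (ideal-gas assumption).

R = 8.314      # J/(mol K), gas constant
T = 298.15     # K
dn_gas = 1.0   # net moles of gas created per mole of reaction, e.g. A(g) -> 2 B(g)

w_on_piston = dn_gas * R * T   # P * dV = dn_gas * R * T, roughly 2.5 kJ/mol here
# Pushing the piston inward forces dV < 0, and with it dxi < 0:
# the reaction is driven backwards, as described above.
```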
Similarly, a redox reaction might occur in an electrochemical cell with the passage of current through a wire connecting the electrodes. The half-cell reactions at the electrodes are constrained if no current is allowed to flow. The current might be dissipated as Joule heating, or it might in turn run an electrical device like a motor doing mechanical work. An automobile lead-acid battery can be recharged, driving the chemical reaction backwards. In this case as well, the reaction is not an independent process. Some, perhaps most, of the Gibbs free energy of reaction may be delivered as external work.
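For a reversible cell, the maximum electrical work per mole of reaction follows from Δ"G" = −"nFE". A sketch using approximate textbook values for a lead-acid cell (the voltage and electron count below are ballpark figures, not precise data):

```python
# Sketch: maximum electrical work from a reversible cell, Delta G = -n F E.
# E and n are approximate textbook values for a lead-acid cell.

F = 96_485.0   # C/mol, Faraday constant
n = 2          # electrons transferred per mole of cell reaction
E = 2.05       # V, approximate open-circuit voltage of one lead-acid cell

dG = -n * F * E   # roughly -4e5 J/mol available as electrical work
# Recharging reverses the sign: external electrical work drives the
# cell reaction backwards, raising G by the same amount.
```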
The hydrolysis of ATP to ADP and phosphate can drive the force-times-distance work delivered by living muscles, and synthesis of ATP is in turn driven by a redox chain in mitochondria and chloroplasts, which involves the transport of ions across the membranes of these cellular organelles. The coupling of processes here, and in the previous examples, is often not complete. Gas can leak slowly past a piston, just as it can slowly leak out of a rubber balloon. Some reaction may occur in a battery even if no external current is flowing. There is usually a coupling coefficient, which may depend on relative rates, which determines what percentage of the driving free energy is turned into external work, or captured as "chemical work", a misnomer for the free energy of another chemical process.
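The incomplete coupling described above can be sketched with a simple bookkeeping example. The ATP free-energy value is an approximate cellular figure and the coupling coefficient is a placeholder, not a measured efficiency:

```python
# Sketch: incomplete coupling between ATP hydrolysis and mechanical work.
# dG_ATP is approximate; the coupling coefficient is a placeholder.

dG_ATP = -50_000.0   # J/mol, ATP hydrolysis under typical cellular conditions (approx.)
coupling = 0.6       # fraction of the driving free energy captured as work (placeholder)

work_out = -coupling * dG_ATP            # external work per mole of ATP hydrolysed
dissipated = -(1 - coupling) * dG_ATP    # the rest is dissipated...
entropy_produced = dissipated / 310.0    # ...appearing as entropy at ~310 K (body temperature)
# work_out + dissipated accounts for the full -dG_ATP; only the dissipated
# part produces entropy.
```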
[
{
"math_id": 0,
"text": "\\Delta_{\\rm f}U^{\\rm o}_{\\mathrm {reactants}}"
},
{
"math_id": 1,
"text": "\\Delta_{\\rm f}U^{\\rm o}_{\\mathrm {products}}"
},
{
"math_id": 2,
"text": " G = G(T,P,\\{N_i\\})\\,."
},
{
"math_id": 3,
"text": " \\mathrm{d}G = -S\\, \\mathrm{d}T + V \\, \\mathrm{d}P + \\sum_i \\mu_i \\, \\mathrm{d}N_i \\,"
},
{
"math_id": 4,
"text": " \\mu_i = \\left( \\frac{\\partial G}{\\partial N_i}\\right)_{T,P,N_{j\\ne i},etc. } \\,."
},
{
"math_id": 5,
"text": " (\\mathrm{d}G)_{T,P} = \\sum_i \\mu_i \\, \\mathrm{d}N_i\\,."
},
{
"math_id": 6,
"text": "(\\mathrm{d}G)_{T,P} = \\left( \\frac{\\partial G}{\\partial \\xi}\\right)_{T,P} \\, \\mathrm{d}\\xi.\\,"
},
{
"math_id": 7,
"text": "\\nu_i = \\partial N_i / \\partial \\xi \\,"
},
{
"math_id": 8,
"text": " \\left( \\frac{\\partial G}{\\partial \\xi} \\right)_{T,P} = \\sum_i \\mu_i \\nu_i = -\\mathbb{A}\\,"
},
{
"math_id": 9,
"text": "(\\mathrm{d}G)_{T,P} = -\\mathbb{A}\\, d\\xi \\,."
},
{
"math_id": 10,
"text": "(\\mathrm{d}G)_{T,P} = -\\sum_k\\mathbb{A}_k\\, d\\xi_k \\,."
},
{
"math_id": 11,
"text": " \\mathbb{A}\\ \\dot{\\xi} \\le 0 \\,."
},
{
"math_id": 12,
"text": " \\mathrm{d}G = - S \\, \\mathrm{d}T + V \\, \\mathrm{d}P -\\sum_k\\mathbb{A}_k\\, \\mathrm{d}\\xi_k + \\mathrm{\\delta} W'\\,"
},
{
"math_id": 13,
"text": " \\mathrm{d}G_{T,P} = -\\sum_k\\mathbb{A}_k\\, \\mathrm{d}\\xi_k + \\mathrm{\\delta} W'.\\,"
}
] |
https://en.wikipedia.org/wiki?curid=5936