8177544
Multi-party fair exchange protocol
In cryptography, a multi-party fair exchange protocol is a protocol in which parties agree to deliver an item if and only if they receive an item in return. Definition. Matthew K. Franklin and Gene Tsudik suggested in 1998 the following classification: See also. Secure multi-party computation References. <templatestyles src="Reflist/styles.css" />
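For illustration only: the formula list for this article refers to a permutation σ of {1...n} under which party P_i delivers its item K_i to P_σ(i) and, in return, is to receive K_{σ^{-1}(i)} from P_{σ^{-1}(i)}; B_ij denotes what party i offers to party j in the more general many-to-many case. The minimal Python sketch below models only that exchange topology (who owes what to whom), not any cryptographic machinery; the function name and data layout are illustrative assumptions and do not come from the Franklin–Tsudik paper.

```python
# Minimal sketch (not from the Franklin-Tsudik paper): models only the exchange
# topology implied by the formulas, where a permutation sigma over {1..n} says
# that party P_i delivers its item K_i to P_sigma(i) and, for fairness, must in
# turn receive K_{sigma^-1(i)} from P_{sigma^-1(i)}. All names are illustrative.

def exchange_plan(sigma):
    """sigma: dict mapping party i -> recipient sigma(i), a permutation of 1..n."""
    n = len(sigma)
    assert sorted(sigma) == sorted(sigma.values()) == list(range(1, n + 1))
    inverse = {recipient: giver for giver, recipient in sigma.items()}
    plan = []
    for i in range(1, n + 1):
        plan.append({
            "party": i,
            "delivers_item_to": sigma[i],       # P_i sends K_i to P_sigma(i)
            "expects_item_from": inverse[i],    # P_i receives K_{sigma^-1(i)}
        })
    return plan

# Example: a 3-party ring exchange, sigma = (1 -> 2 -> 3 -> 1).
if __name__ == "__main__":
    for step in exchange_plan({1: 2, 2: 3, 3: 1}):
        print(step)
```

The sketch covers only the permutation (one item given, one item received per party) special case; the B_ij notation in the formula list points to the broader setting in which each party may offer different items to different counterparties.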
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\sigma" }, { "math_id": 2, "text": "\\{1...n\\}" }, { "math_id": 3, "text": "P_i" }, { "math_id": 4, "text": "K_i" }, { "math_id": 5, "text": "P_{\\sigma(i)}" }, { "math_id": 6, "text": "K_{\\sigma^{-1}(i)}" }, { "math_id": 7, "text": "P_{\\sigma^{-1}(i)}" }, { "math_id": 8, "text": "B_{ij}" }, { "math_id": 9, "text": "i" }, { "math_id": 10, "text": "j" }, { "math_id": 11, "text": "P_j" } ]
https://en.wikipedia.org/wiki?curid=8177544
817771
Dilatant
Material in which viscosity increases with the rate of shear strain A dilatant (, ) (also termed shear thickening) material is one in which viscosity increases with the rate of shear strain. Such a "shear thickening fluid", also known by the initialism "STF", is an example of a non-Newtonian fluid. This behaviour is usually not observed in pure materials, but can occur in suspensions. A dilatant is a non-Newtonian fluid where the shear viscosity increases with applied shear stress. This behavior is only one type of deviation from Newton's law of viscosity, and it is controlled by such factors as particle size, shape, and distribution. The properties of these suspensions depend on Hamaker theory and Van der Waals forces and can be stabilized electrostatically or sterically. Shear thickening behavior occurs when a colloidal suspension transitions from a stable state to a state of flocculation. A large portion of the properties of these systems are due to the surface chemistry of particles in dispersion, known as colloids. This can readily be seen with a mixture of cornstarch and water (sometimes called oobleck), which acts in counterintuitive ways when struck or thrown against a surface. Sand that is completely soaked with water also behaves as a dilatant material — this is the reason why when walking on wet sand, a dry area appears directly underfoot. Rheopecty is a similar property in which viscosity increases with cumulative stress or agitation over time. The opposite of a dilatant material is a pseudoplastic. Definitions. There are two types of deviation from Newton's law that are observed in real systems. The most common deviation is shear thinning behavior, where the viscosity of the system decreases as the shear rate is increased. The second deviation is shear thickening behavior where, as the shear rate is increased, the viscosity of the system also increases. This behavior is observed because the system crystallizes under stress and behaves more like a solid than a solution. Thus, the viscosity of a shear-thickening fluid is dependent on the shear rate. The presence of suspended particles often affects the viscosity of a solution. In fact, with the right particles, even a Newtonian fluid can exhibit non-Newtonian behavior. An example of this is cornstarch in water and is included in below. The parameters that control shear thickening behavior are: particle size and particle size distribution, particle volume fraction, particle shape, particle-particle interaction, continuous phase viscosity, and the type, rate, and time of deformation. In addition to these parameters, all shear thickening fluids are stabilized suspensions and have a volume fraction of solid that is relatively high. Viscosity of a solution as a function of shear rate is given by the power-law equation, formula_0 where η is the viscosity, "K" is a material-based constant, and "γ̇" is the applied shear rate. Dilatant behavior occurs when n is greater than 1. Below is a table of viscosity values for some common materials. Stabilized suspensions. A suspension is composed of a fine, particulate phase dispersed throughout a differing, heterogeneous phase. Shear-thickening behavior is observed in systems with a solid, particulate phase dispersed within a liquid phase. These solutions are different from a Colloid in that they are unstable; the solid particles in dispersion are sufficiently large for sedimentation, causing them to eventually settle. 
Whereas the solids dispersed within a colloid are smaller and will not settle. There are multiple methods for stabilizing suspensions, including electrostatics and sterics. In an unstable suspension, the dispersed, particulate phase will come out of solution in response to forces acting upon the particles, such as gravity or Hamaker attraction. The magnitude of the effect these forces have on pulling the particulate phase out of solution is proportional to the size of the particulates; for a large particulate, the gravitational forces are greater than the particle-particle interactions, whereas the opposite is true for small particulates. Shear thickening behavior is typically observed in suspensions of small, solid particulates, indicating that the particle-particle Hamaker attraction is the dominant force. Therefore, stabilizing a suspension is dependent upon introducing a counteractive repulsive force. Hamaker theory describes the attraction between bodies, such as particulates. It was realized that the explanation of Van der Waals forces could be upscaled from explaining the interaction between two molecules with induced dipoles to macro-scale bodies by summing all the intermolecular forces between the bodies. Similar to Van der Waals forces, Hamaker theory describes the magnitude of the particle-particle interaction as inversely proportional to the square of the distance. Therefore, many stabilized suspensions incorporate a long-range repulsive force that is dominant over Hamaker attraction when the interacting bodies are at a sufficient distance, effectively preventing the bodies from approaching one another. However, at short distances, the Hamaker attraction dominates, causing the particulates to coagulate and fall out of solution. Two common long-range forces used in stabilizing suspensions are electrostatics and sterics. Electrostatically stabilized suspensions. Suspensions of similarly charged particles dispersed in a liquid electrolyte are stabilized through an effect described by the Helmholtz double layer model. The model has two layers. The first layer is the charged surface of the particle, which creates an electrostatic field that affects the ions in the electrolyte. In response, the ions create a diffuse layer of equal and opposite charge, effectively rendering the surface charge neutral. However, the diffuse layer creates a potential surrounding the particle that differs from the bulk electrolyte. The diffuse layer serves as the long-range force for stabilization of the particles. When particles near one another, the diffuse layer of one particle overlaps with that of the other particle, generating a repulsive force. The following equation provides the energy between two colloids as a result of the Hamaker interactions and electrostatic repulsion. formula_1 where: Sterically stabilized suspensions. Different from electrostatics, sterically stabilized suspensions rely on the physical interaction of polymer chains attached to the surface of the particles to keep the suspension stabilized; the adsorbed polymer chains act as a spacer to keep the suspended particles separated at a sufficient distance to prevent the Hamaker attraction from dominating and pulling the particles out of suspension. The polymers are typically either grafted or adsorbed onto the surface of the particle. With grafted polymers, the backbone of the polymer chain is covalently bonded to the particle surface. 
Whereas an adsorbed polymer is a copolymer composed of lyophobic and lyophilic region, where the lyophobic region non-covalently adheres to the particle surface and the lyophilic region forms the steric boundary or spacer. Theories behind shear thickening behavior. Dilatancy in a colloid, or its ability to order in the presence of shear forces, is dependent on the ratio of interparticle forces. As long as interparticle forces such as Van der Waals forces dominate, the suspended particles remain in ordered layers. However, once shear forces dominate, particles enter a state of flocculation and are no longer held in suspension; they begin to behave like a solid. When the shear forces are removed, the particles spread apart and once again form a stable suspension. Shear thickening behavior is highly dependent upon the volume fraction of solid particulate suspended within the liquid. The higher the volume fraction, the less shear required to initiate the shear thickening behavior. The shear rate at which the fluid transitions from a Newtonian flow to a shear thickening behavior is known as the critical shear rate. Order to disorder transition. When shearing a concentrated stabilized solution at a relatively low shear rate, the repulsive particle-particle interactions keep the particles in an ordered, layered, equilibrium structure. However, at shear rates elevated above the critical shear rate, the shear forces pushing the particles together overcome the repulsive particle-particle interactions, forcing the particles out of their equilibrium positions. This leads to a disordered structure, causing an increase in viscosity. The critical shear rate here is defined as the shear rate at which the shear forces pushing the particles together are equivalent to the repulsive particle interactions. Hydroclustering. When the particles of a stabilized suspension transition from an immobile state to mobile state, small groupings of particles form hydroclusters, increasing the viscosity. These hydroclusters are composed of particles momentarily compressed together, forming an irregular, rod-like chain of particles akin to a logjam or traffic jam. In theory the particles have extremely small interparticle gaps, rendering this momentary, transient hydrocluster as incompressible. It is possible that additional hydroclusters will form through aggregation. Examples. Corn starch and water (oobleck). Cornstarch is a common thickening agent used in cooking. It is also a very good example of a shear-thickening system. When a force is applied to a 1:1.25 mixture of water and cornstarch, the mixture acts as a solid and resists the force. Silica and polyethylene glycol. Silica nano-particles are dispersed in a solution of polyethylene glycol. The silica particles provide a high-strength material when flocculation occurs. This allows it to be used in applications such as liquid body armor and brake pads. Applications. Traction control. Dilatant materials have certain industrial uses due to their shear-thickening behavior. For example, some all-wheel drive systems use a viscous coupling unit full of dilatant fluid to provide power transfer between front and rear wheels. On high-traction road surfacing, the relative motion between primary and secondary drive wheels is the same, so the shear is low and little power is transferred. When the primary drive wheels start to slip, the shear increases, causing the fluid to thicken. 
As the fluid thickens, the torque transferred to the secondary drive wheels increases proportionally, until the maximum amount of power possible in the fully thickened state is transferred. (See also limited-slip differential, some types of which operate on the same principle.) To the operator, this system is entirely passive, engaging all four wheels to drive when needed and dropping back to two wheel drive once the need has passed. This system is generally used for on-road vehicles rather than off-road vehicles, since the maximum viscosity of the dilatant fluid limits the amount of torque that can be passed across the coupling. Body armor. Various corporate and government entities are researching the application of shear-thickening fluids for use as body armor. Such a system could allow the wearer flexibility for a normal range of movement, yet provide rigidity to resist piercing by bullets, stabbing knife blows, and similar attacks. The principle is similar to that of mail armor, though body armor using a dilatant would be much lighter. The dilatant fluid would disperse the force of a sudden blow over a wider area of the user's body, reducing the blunt force trauma. However, the dilatant would not provide any additional protection against slow attacks, such as a slow but forceful stab, which would allow flow to occur. In one study, standard Kevlar fabric was compared to a composite armor of Kevlar and a proprietary shear-thickening fluid. The results showed that the Kevlar/fluid combination performed better than the pure-Kevlar material, despite having less than one-third the Kevlar thickness. Four examples of dilatant materials being used in personal protective equipment are Armourgel, D3O, ArtiLage (Artificial Cartilage foam) and "Active Protection System" manufactured by Dow Corning. In 2002, researchers at the U.S. Army Research Laboratory and University of Delaware began researching the use of liquid armor, or a shear-thickening fluid in body armor. Researchers demonstrated that high-strength fabrics such as Kevlar can be made more bulletproof and stab-resistant when impregnated with the fluid. The goal of the “liquid armor” technology is to create a new material that is low-cost and lightweight while still offering equivalent or superior ballistic properties compared to current Kevlar fabric. For their work on liquid armor, Dr. Eric Wetzel, an ARL mechanical engineer, and his team were awarded the 2002 Paul A. Siple Award, the Army’s highest award for scientific achievement, at the Army Science Conference. The company D3O invented a non-Newtonian–based material that has seen wide adaptation across a broad range of standard and custom applications, including motorcycle and extreme-sports protective gear, industrial work wear, military applications, and impact protection for electronics. The materials allow flexibility during normal wear but become stiff and protective when strongly impacted. While some products are marketed directly, much of their manufacturing capability goes to selling and license the material to other companies for use in their own lines of protective products. References. <templatestyles src="Reflist/styles.css" />
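For illustration, the power-law relation given in the Definitions section above, η = K·γ̇^(n−1), can be evaluated directly: shear thickening corresponds to n > 1 and shear thinning to n < 1. The Python sketch below uses made-up values of K and n purely to show the qualitative trend; it is not fitted to any of the materials mentioned in the article.

```python
# Illustrative sketch of the power-law (Ostwald-de Waele) model quoted in the
# Definitions section: eta = K * gamma_dot**(n - 1). n > 1 gives dilatant
# (shear-thickening) behaviour, n < 1 pseudoplastic (shear-thinning), n = 1
# Newtonian. K and n below are made-up illustrative values, not measured data.

def apparent_viscosity(gamma_dot, K, n):
    """Apparent viscosity (Pa*s) at shear rate gamma_dot (1/s)."""
    return K * gamma_dot ** (n - 1)

if __name__ == "__main__":
    shear_rates = [0.1, 1.0, 10.0, 100.0]
    cases = {"Newtonian (n=1)": 1.0,
             "shear thinning (n=0.6)": 0.6,
             "shear thickening (n=1.4)": 1.4}
    K = 1.0  # consistency index, Pa*s^n (illustrative)
    for label, n in cases.items():
        row = ", ".join(f"{g:g}/s: {apparent_viscosity(g, K, n):.3f} Pa*s"
                        for g in shear_rates)
        print(f"{label}: {row}")
```

Running the sketch shows the viscosity of the n = 1.4 case rising with shear rate while the n = 0.6 case falls, which is exactly the distinction between dilatant and pseudoplastic behaviour drawn in the Definitions section.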
[ { "math_id": 0, "text": "\\eta = K \\dot{\\gamma}^{n - 1}," }, { "math_id": 1, "text": " V = \\pi R\\left(\\frac{-H}{12\\pi h^2} + \\frac{64 C k_\\text{B} T \\Gamma^2 e^\\kappa h}{\\kappa^2}\\right)," }, { "math_id": 2, "text": " \\Gamma, " }, { "math_id": 3, "text": " \\kappa, " } ]
https://en.wikipedia.org/wiki?curid=817771
8178038
Schizophyllum commune
Species of edible fungus <templatestyles src="Template:Taxobox/core/styles.css" /> Schizophyllum commune is a species of fungus in the genus "Schizophyllum". The mushroom resembles undulating waves of tightly packed corals or a loose Chinese fan. The gills, or split gills, vary from creamy yellow to pale white in colour. The cap is small, wide with a dense yet spongey body texture. It is known as the split-gill mushroom because of the unique, longitudinally divided nature of the namesake gills on the underside of the cap. This mushroom is found throughout the world. It is found in the wild on decaying trees after rainy seasons followed by dry spells, where the mushrooms are naturally collected. Description. "Schizophyllum commune" is usually described as a morphological species of global distribution, but some research has suggested that it may be a species complex encompassing several cryptic species of more narrow distribution, as is typical of many mushroom-forming Basidiomycota. The caps are wide with white or grayish hairs. They grow in shelf-like arrangements, without stalks. The gills, which produce basidiospores on their surface, split when the mushroom dries out, earning this mushroom the common name split gill. It is common in rotting wood. The mushrooms can remain dry for decades and then be revived with moisture. It has a tetrapolar mating system, with each cell containing two mating-type loci (called A and B) that govern different aspects of the mating process, leading to 4 possible phenotypes after cell fusion. Each locus codes for a mating type ("a" or "b") and each type is multi-allelic: the A locus has 9 alleles for the "a" type and an estimated 32 for its "b" type, and the B locus has 9 alleles each for both its "a" and "b" types. When combined this gives an estimated formula_0 potential mating type specificities, each of which can mate with formula_1 other mating types. While all mating types can initially fuse with any other mating type, a fertile fruitbody and subsequent spores will result only if both the A and B loci of the merging cells are compatible. If neither the A nor the B locus is compatible, the result is normal monokaryotic mycelium, and if only one of A or B is compatible, the result is either two mycelia growing in opposite directions (only A compatible) or a "flat" phenotype with no mycelia (only B compatible). Hydrophobin was first isolated from "Schizophyllum commune". Genetics. The genome of "Schizophyllum commune" was sequenced in 2010. Edibility. The species was regarded as nonpoisonous by Orson K. Miller Jr. and Hope H. Miller, who considered it to be inedible due to its smallness and toughness. Because the mushrooms absorb moisture, they can expand during digestion. However, some sources indicate that it contains antitumor and antiviral components. As of 2006, it was widely consumed in Mexico and elsewhere in the tropics. The preference for tough, rubbery mushrooms in the tropics was explained as a consequence of the fact that tender, fleshy mushrooms quickly rot in the hot humid conditions there, making their marketing problematic. In Northeast India, in the state of Manipur, it is known as "kanglayen" and is one of the favourite ingredients for Manipuri-style pancakes called "paaknam". In Mizoram, the local name is "pasi" ("pa" means "mushroom", "si" means "tiny") and it is one of the highest rated edible mushrooms among the Mizo community. As a pathogen. It may be a common cause of fungal infections and related diseases, most commonly of the lungs. 
They have also been reported to cause sinusitis and allergic reactions. Etymology. "Schizophyllum" is derived from the Greek "Schíza", meaning split, because of the appearance of radial, centrally split, gill-like folds; "commune" means common or shared ownership, or ubiquitous. References. <templatestyles src="Reflist/styles.css" />
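For illustration, the mating-type counts quoted in the Description section (formula_0 and formula_1) follow directly from the allele numbers given there; the short Python sketch below simply reproduces that arithmetic.

```python
# Reproduces the mating-type arithmetic quoted in the Description section:
# the A locus has 9 x 32 allele combinations and the B locus 9 x 9, giving
# 9*32*9*9 = 23328 mating-type specificities, each compatible with
# (9*32 - 1) * (9*9 - 1) = 22960 others (both loci must differ for fertility).
a_alpha, a_beta = 9, 32   # allele counts at the A locus
b_alpha, b_beta = 9, 9    # allele counts at the B locus

total_specificities = a_alpha * a_beta * b_alpha * b_beta
compatible_partners = (a_alpha * a_beta - 1) * (b_alpha * b_beta - 1)

print(total_specificities)   # 23328
print(compatible_partners)   # 22960
```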
[ { "math_id": 0, "text": "9 \\times 32 \\times 9 \\times 9 = 23328" }, { "math_id": 1, "text": "(9 \\times 32-1) \\times (9 \\times 9-1) = 22960" } ]
https://en.wikipedia.org/wiki?curid=8178038
8178295
Dispersive prism
Device used to disperse light In optics, a dispersive prism is an optical prism that is used to disperse light, that is, to separate light into its spectral components (the colors of the rainbow). Different wavelengths (colors) of light will be deflected by the prism at different angles. This is a result of the prism material's index of refraction varying with wavelength (dispersion). Generally, longer wavelengths (red) undergo a smaller deviation than shorter wavelengths (blue). The dispersion of white light into colors by a prism led Sir Isaac Newton to conclude that white light consisted of a mixture of different colors. Triangular prisms are the most common type of dispersive prism. Other types of dispersive prism exist that have more than two optical interfaces; some of them combine refraction with total internal reflection. Principle. Light changes speed as it moves from one medium to another (for example, from air into the glass of the prism). This speed change causes the light to be refracted and to enter the new medium at a different angle (Huygens principle). The degree of bending of the light's path depends on the angle that the incident beam of light makes with the surface, and on the ratio between the refractive indices of the two media (Snell's law). The refractive index of many materials (such as glass) varies with the wavelength or color of the light used, a phenomenon known as "dispersion". This causes light of different colors to be refracted differently and to leave the prism at different angles, creating an effect similar to a rainbow. This can be used to separate a beam of white light into its constituent spectrum of colors. Prisms will generally disperse light over a much larger frequency bandwidth than diffraction gratings, making them useful for broad-spectrum spectroscopy. Furthermore, prisms do not suffer from complications arising from overlapping spectral orders, which all gratings have. A usual disadvantage of prisms is lower dispersion than a well-chosen grating can achieve. Prisms are sometimes used for the internal reflection at the surfaces rather than for dispersion. If light inside the prism hits one of the surfaces at a sufficiently steep angle, total internal reflection occurs and "all" of the light is reflected. This makes a prism a useful substitute for a mirror in some situations. Deviation angle and dispersion. Thick prism. Ray angle deviation and dispersion through a prism can be determined by tracing a sample ray through the element and using Snell's law at each interface. For the prism shown at right, the indicated angles are given by formula_0. All angles are positive in the direction shown in the image. For a prism in air formula_1. Defining formula_2, the deviation angle formula_3 is given by formula_4 Thin prism approximation. If the angle of incidence formula_5 and prism apex angle formula_6 are both small, formula_7 and formula_8 if the angles are expressed in radians. This allows the nonlinear equation in the deviation angle formula_3 to be approximated by formula_9 The deviation angle depends on wavelength through "n", so for a thin prism the deviation angle varies with wavelength according to formula_10. Multiple prisms. Aligning multiple prisms in series can enhance the dispersion greatly, or vice versa, allow beam manipulation with suppressed dispersion. As shown above, the dispersive behaviour of each prism depends strongly on the angle of incidence, which is determined by the presence of surrounding prisms. 
Therefore, the resulting dispersion is not a simple sum of individual contributions (unless all prisms can be approximated as thin ones). Choice of optical material for optimum dispersion. Although the refractive index is dependent on the wavelength in every material, some materials have a much more powerful wavelength dependence (are much more dispersive) than others. Unfortunately, high-dispersion regions tend to be spectrally close to regions where the material becomes opaque. Crown glasses such as BK7 have a relatively small dispersion (and can be used roughly between 330 and 2500 nm), while flint glasses have a much stronger dispersion for visible light and hence are more suitable for use as dispersive prisms, but their absorption sets on already around 390 nm. Fused quartz, sodium chloride and other optical materials are used at ultraviolet and infrared wavelengths where normal glasses become opaque. The top angle of the prism (the angle of the edge between the input and output faces) can be widened to increase the spectral dispersion. However it is often chosen so that both the incoming and outgoing light rays hit the surface at around the Brewster angle; beyond the Brewster angle reflection losses increase greatly and angle of view is reduced. Most frequently, dispersive prisms are equilateral (apex angle of 60 degrees). History. Like many basic geometric terms, the word "prism" () was first used in Euclid's "Elements". Euclid defined the term in Book XI as "a solid figure contained by two opposite, equal and parallel planes, while the rest are parallelograms", however the nine subsequent propositions that used the term included examples of triangular-based prisms (i.e. with sides which were not parallelograms). This inconsistency caused confusion amongst later geometricians. René Descartes had seen light separated into the colors of the rainbow by glass or water, though the source of the color was unknown. Isaac Newton's 1666 experiment of bending white light through a prism demonstrated that all the colors already existed in the light, with different color "corpuscles" fanning out and traveling with different speeds through the prism. It was only later that Young and Fresnel combined Newton's particle theory with Huygens' wave theory to explain how color arises from the spectrum of light. Newton arrived at his conclusion by passing the red color from one prism through a second prism and found the color unchanged. From this, he concluded that the colors must already be present in the incoming light – thus, the prism did not create colors, but merely separated colors that are already there. He also used a lens and a second prism to recompose the spectrum back into white light. This experiment has become a classic example of the methodology introduced during the scientific revolution. The results of the experiment dramatically transformed the field of metaphysics, leading to John Locke's primary vs secondary quality distinction. Newton discussed prism dispersion in great detail in his book "Opticks". He also introduced the use of more than one prism to control dispersion. Newton's description of his experiments on prism dispersion was qualitative. A quantitative description of multiple-prism dispersion was not needed until multiple prism laser beam expanders were introduced in the 1980s. Grisms (grating prisms). A diffraction grating may be ruled onto one face of a prism to form an element called a "grism". 
Spectrographs are extensively used in astronomy to observe the spectra of stars and other astronomical objects. Insertion of a grism in the collimated beam of an astronomical imager transforms that camera into a spectrometer, since the beam still continues in approximately the same direction when passing through it. The deflection of the prism is constrained to exactly cancel the deflection due to the diffraction grating at the spectrometer's central wavelength. A different sort of spectrometer component called an immersed grating also consists of a prism with a diffraction grating ruled on one surface. However, in this case the grating is used in reflection, with light hitting the grating from "inside" the prism before being totally internally reflected back into the prism (and leaving from a different face). The reduction of the light's wavelength inside the prism results in an increase of the resulting spectral resolution by the ratio of the prism's refractive index to that of air. With either a grism or immersed grating, the primary source of spectral dispersion is the grating. Any effect due to chromatic dispersion from the prism itself is incidental, as opposed to actual prism-based spectrometers. In popular culture. An artist's rendition of a dispersive prism is seen on the cover of Pink Floyd's "The Dark Side of the Moon", one of the best-selling albums of all time. Somewhat unrealistically, the iconic graphic shows a divergent ray of white light passing the prism, separating into its spectrum only after leaving the prism's rear facet. References. <templatestyles src="Reflist/styles.css" />
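For illustration, the exact deviation-angle formula and its thin-prism approximation from the "Deviation angle and dispersion" section can be compared numerically. In the Python sketch below the apex angle, the angle of incidence and the two refractive indices are illustrative values only (roughly crown-glass-like); they are not taken from the article.

```python
# Sketch of the formulas in the "Deviation angle and dispersion" section
# (prism in air, n0 = n2 = 1):
#   exact:       delta = theta0 + arcsin(n*sin(alpha - arcsin(sin(theta0)/n))) - alpha
#   thin prism:  delta ~ (n - 1) * alpha
# The refractive indices below are rough, crown-glass-like illustrative values.
import math

def deviation_exact(theta0_deg, alpha_deg, n):
    theta0 = math.radians(theta0_deg)
    alpha = math.radians(alpha_deg)
    theta1 = alpha - math.asin(math.sin(theta0) / n)   # refraction at the first face
    delta = theta0 + math.asin(n * math.sin(theta1)) - alpha
    return math.degrees(delta)

def deviation_thin(alpha_deg, n):
    return (n - 1) * alpha_deg

if __name__ == "__main__":
    alpha = 10.0    # apex angle in degrees (small, so the thin-prism formula applies)
    theta0 = 5.0    # angle of incidence in degrees
    for colour, n in [("shorter wavelength (n=1.53)", 1.53),
                      ("longer wavelength (n=1.51)", 1.51)]:
        print(colour,
              f"exact {deviation_exact(theta0, alpha, n):.3f} deg,",
              f"thin-prism {deviation_thin(alpha, n):.3f} deg")
```

The output shows the shorter-wavelength (higher-index) ray deviated more than the longer-wavelength ray, and the thin-prism value (n − 1)α agreeing with the exact formula to within a few hundredths of a degree for these small angles.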
[ { "math_id": 0, "text": "\\begin{align}\n \\theta'_0 &= \\, \\text{arcsin} \\Big( \\frac{n_0}{n_1} \\, \\sin \\theta_0 \\Big) \\\\\n \\theta_1 &= \\alpha - \\theta'_0 \\\\\n \\theta'_1 &= \\, \\text{arcsin} \\Big( \\frac{n_1}{n_2} \\, \\sin \\theta_1 \\Big) \\\\\n \\theta_2 &= \\theta'_1 - \\alpha\n\\end{align}" }, { "math_id": 1, "text": "n_0=n_2 \\simeq 1" }, { "math_id": 2, "text": "n=n_1" }, { "math_id": 3, "text": "\\delta" }, { "math_id": 4, "text": "\\delta = \\theta_0 + \\theta_2 = \\theta_0 + \\text{arcsin} \\Big( n \\, \\sin \\Big[\\alpha - \\text{arcsin} \\Big( \\frac{1}{n} \\, \\sin \\theta_0 \\Big) \\Big] \\Big) - \\alpha" }, { "math_id": 5, "text": "\\theta_0" }, { "math_id": 6, "text": "\\alpha" }, { "math_id": 7, "text": "\\sin \\theta \\approx \\theta" }, { "math_id": 8, "text": "\\text{arcsin} x \\approx x" }, { "math_id": 9, "text": "\\delta \\approx \\theta_0 - \\alpha + \\Big( n \\, \\Big[ \\Big(\\alpha - \\frac{1}{n} \\, \\theta_0 \\Big) \\Big] \\Big) = \\theta_0 - \\alpha + n \\alpha - \\theta_0 = (n - 1) \\alpha \\ ." }, { "math_id": 10, "text": "\\delta (\\lambda) \\approx [ n (\\lambda) - 1 ] \\alpha " } ]
https://en.wikipedia.org/wiki?curid=8178295
8179925
Beam propagation method
The beam propagation method (BPM) is an approximation technique for simulating the propagation of light in slowly varying optical waveguides. It is essentially the same as the so-called parabolic equation (PE) method in underwater acoustics. Both BPM and the PE were first introduced in the 1970s. When a wave propagates along a waveguide for a large distance (larger compared with the wavelength), rigorous numerical simulation is difficult. The BPM relies on approximate differential equations which are also called the one-way models. These one-way models involve only a first order derivative in the variable z (for the waveguide axis) and they can be solved as "initial" value problem. The "initial" value problem does not involve time, rather it is for the spatial variable z. The original BPM and PE were derived from the slowly varying envelope approximation and they are the so-called paraxial one-way models. Since then, a number of improved one-way models are introduced. They come from a one-way model involving a square root operator. They are obtained by applying rational approximations to the square root operator. After a one-way model is obtained, one still has to solve it by discretizing the variable z. However, it is possible to merge the two steps (rational approximation to the square root operator and discretization of z) into one step. Namely, one can find rational approximations to the so-called one-way propagator (the exponential of the square root operator) directly. The rational approximations are not trivial. Standard diagonal Padé approximants have trouble with the so-called evanescent modes. These evanescent modes should decay rapidly in z, but the diagonal Padé approximants will incorrectly propagate them as propagating modes along the waveguide. Modified rational approximants that can suppress the evanescent modes are now available. The accuracy of the BPM can be further improved, if you use the energy-conserving one-way model or the single-scatter one-way model. Principles. BPM is generally formulated as a solution to Helmholtz equation in a time-harmonic case, formula_0 with the field written as, formula_1. Now the spatial dependence of this field is written according to any one TE or TM polarizations formula_2, with the envelope formula_3 following a slowly varying approximation, formula_4 Now the solution when replaced into the Helmholtz equation follows, formula_5 With the aim to calculate the field at all points of space for all times, we only need to compute the function formula_6 for all space, and then we are able to reconstruct formula_7. Since the solution is for the time-harmonic Helmholtz equation, we only need to calculate it over one time period. We can visualize the fields along the propagation direction, or the cross section waveguide modes. Numerical methods. Both "spatial domain" methods, and "frequency (spectral) domain" methods are available for the numerical solution of the discretized master equation. Upon discretization into a grid, (using various centralized difference, Crank–Nicolson method, FFT-BPM etc.) and field values rearranged in a causal fashion, the field evolution is computed through iteration, along the propagation direction. The spatial domain method computes the field at the next step (in the propagation direction) by solving a linear equation, whereas the spectral domain methods use the powerful forward/inverse DFT algorithms. 
Spectral domain methods have the advantage of stability even in the presence of nonlinearity (from refractive index or medium properties), while spatial domain methods can possibly become numerically unstable. Applications. BPM is a quick and easy method of solving for fields in integrated optical devices. It is typically used only in solving for intensity and modes within shaped (bent, tapered, terminated) waveguide structures, as opposed to scattering problems. These structures typically consist of isotropic optical materials, but the BPM has also been extended to simulate the propagation of light in general anisotropic materials such as liquid crystals. This allows one to analyze e.g. the polarization rotation of light in anisotropic materials, the tunability of a directional coupler based on liquid crystals, or the light diffraction in LCD pixels. Limitations of BPM. The beam propagation method relies on the slowly varying envelope approximation, and is inaccurate for the modelling of discretely or rapidly varying structures. Basic implementations are also inaccurate for the modelling of structures in which light propagates over a large range of angles and for devices with high refractive-index contrast, commonly found for instance in silicon photonics. Advanced implementations, however, mitigate some of these limitations, allowing BPM to be used to accurately model many of these cases, including many silicon photonics structures. The BPM can be used to model bi-directional propagation, but the reflections need to be implemented iteratively, which can lead to convergence issues.
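For illustration, a minimal spectral-domain (FFT) implementation of the paraxial one-way model described above is sketched below for the simplest possible case: free diffraction of a Gaussian envelope in a homogeneous medium, advanced step by step in z and checked against the analytic Gaussian-beam width. Grid sizes, wavelength and beam radius are arbitrary illustrative choices, and the sketch omits everything a real BPM code needs (index structure, boundary absorbers, wide-angle corrections).

```python
# Toy 1D spectral-domain BPM sketch: propagates a Gaussian envelope A(x) through
# a homogeneous medium under the paraxial one-way model
#   dA/dz = (i / (2 k)) d^2 A / dx^2,
# whose spectral-domain step is A_hat *= exp(-i kx^2 dz / (2 k)).
# All parameters below are arbitrary illustrative choices.
import numpy as np

wavelength = 1.0e-6                      # vacuum wavelength, 1 um (illustrative)
n0 = 1.5                                 # homogeneous background index (illustrative)
k = 2 * np.pi * n0 / wavelength          # propagation constant in the medium

N, width = 4096, 2.0e-3                  # transverse grid: 4096 points over 2 mm
x = np.linspace(-width / 2, width / 2, N, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

w0 = 20e-6                               # initial beam radius (E ~ exp(-x^2 / w0^2))
A = np.exp(-(x / w0) ** 2)               # Gaussian envelope at z = 0

dz, steps = 50e-6, 200                   # march 200 steps of 50 um -> z = 10 mm
propagator = np.exp(-1j * kx ** 2 * dz / (2 * k))   # paraxial free-space step
for _ in range(steps):
    A = np.fft.ifft(np.fft.fft(A) * propagator)

z = dz * steps
zR = k * w0 ** 2 / 2                     # Rayleigh range of this beam
w_analytic = w0 * np.sqrt(1 + (z / zR) ** 2)
intensity = np.abs(A) ** 2
rms = np.sqrt(np.sum(x ** 2 * intensity) / np.sum(intensity))   # RMS width = w/2
print(f"numerical RMS width {rms * 1e6:.1f} um, "
      f"analytic w(z)/2 = {w_analytic / 2 * 1e6:.1f} um")
```

Because the spectral propagator is exact for a homogeneous medium under the paraxial model, the numerically marched width agrees with the analytic Gaussian-beam expansion; adding a transversely varying refractive index would turn this into the usual split-step scheme, with an extra phase factor applied in the spatial domain each step.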
[ { "math_id": 0, "text": "\n(\\nabla^2 + k_0^2n^2)\\psi = 0\n" }, { "math_id": 1, "text": "E(x,y,z,t)=\\psi(x,y)\\exp(-j\\omega t)" }, { "math_id": 2, "text": "\\psi(x,y) = A(x,y)\\exp(+jk_o\\nu y)\n" }, { "math_id": 3, "text": "A(x,y)\n" }, { "math_id": 4, "text": "\n\\frac{\\partial^2( A(x,y) )}{\\partial y^2} = 0\n" }, { "math_id": 5, "text": "\n\\left[\\frac{\\partial^2 }{\\partial x^2} + k_0^2(n^2 - \\nu^2) \\right]A(x,y) = \\pm 2 jk_0 \\nu \\frac{\\partial A_k(x,y)}{\\partial y}\n" }, { "math_id": 6, "text": "A(x,y)" }, { "math_id": 7, "text": "\\psi(x,y)" } ]
https://en.wikipedia.org/wiki?curid=8179925
81854
Central venous catheter
A tubular device placed in a large vein used to administer medicines A central venous catheter (CVC), also known as a central line (c-line), central venous line, or central venous access catheter, is a catheter placed into a large vein. It is a form of venous access. Placement of larger catheters in more centrally located veins is often needed in critically ill patients, or in those requiring prolonged intravenous therapies, for more reliable vascular access. These catheters are commonly placed in veins in the neck (internal jugular vein), chest (subclavian vein or axillary vein), groin (femoral vein), or through veins in the arms (also known as a PICC line, or peripherally inserted central catheters). Central lines are used to administer medication or fluids that are unable to be taken by mouth or would harm a smaller peripheral vein, obtain blood tests (specifically the "central venous oxygen saturation"), administer fluid or blood products for large volume resuscitation, and measure central venous pressure. The catheters used are commonly 15–30 cm in length, made of silicone or polyurethane, and have single or multiple lumens for infusion. Medical uses. The following are the major indications for the use of central venous catheters: There are no absolute contraindications to the use of central venous catheters. Relative contraindications include: coagulopathy, trauma or local infection at the placement site, or suspected proximal vascular injury. However, there are risks and complications associated with the placement of central lines, which are addressed below. Complications. Central line insertion may cause several complications. The benefit expected from their use should outweigh the risk of those complications. Pneumothorax. The incidence of pneumothorax is highest with subclavian vein catheterization due to its anatomic proximity to the apex of the lung. In the case of catheterization of the internal jugular vein, the risk of pneumothorax is minimized by the use of ultrasound guidance. For experienced clinicians, the incidence of pneumothorax is about 1.5–3.1%. The National Institute for Health and Care Excellence (UK) and other medical organizations recommend the routine use of ultrasonography to minimize complications. If a pneumothorax is suspected, an upright chest x-ray should be obtained. An upright chest x-ray is preferred because free air will migrate to the apex of the lung, where it is easily visualized. Of course, this is not always possible, particularly in critically ill patients in the intensive care unit. Radiographs obtained in the supine position fail to detect 25–50% of pneumothoraces. Instead, bedside ultrasound is a superior method of detection in those too ill to obtain upright imaging. Vascular perforation. Perforation of vasculature by a catheter is a feared and potentially life-threatening complication of central lines. Fortunately, the incidence of these events is exceedingly rare, especially when lines are placed with ultrasound guidance. Accidental cannulation of the carotid artery is a potential complication of placing a central line in the internal jugular vein. This occurs at a rate of approximately 1% when ultrasound guidance is used. However, it has a reported incidence of 0.5–11% when an anatomical approach is used. If the carotid is accidentally cannulated and a catheter is inserted into the artery, the catheter should be left in place and a vascular surgeon should be notified because removing it can be fatal. 
Catheter-related bloodstream infections. All catheters can introduce bacteria into the bloodstream. This can result in serious infections that can be fatal in up to 25% of cases. The problem of central line-associated bloodstream infections (CLABSI) has gained increasing attention in recent years. They cause a great deal of morbidity (harm) and deaths, and increase health care costs. Those who have a CLABSI have a 2.75 times increased risk of dying compared to those who do not. CLABSI is also associated with longer intensive care unit and hospital stays, at 2.5 and 7.5 days respectively, when other illness-related factors are adjusted for. Microbes can gain access to the bloodstream via a central catheter in a number of ways. Rarely, they are introduced by contaminated infusions. They might also gain access to the lumen of the catheter through break points such as hubs. However, the method by which most organisms gain access is by migrating from the skin along the portion of the catheter tracking through subcutaneous tissue until they reach the portion of the catheter in the vein. Additionally, bacteria present in the blood may attach to the surface of the catheter, transforming it into a focus of infection. If a central line infection is suspected in a person, blood cultures are taken from both the catheter and a vein elsewhere in the body. If the culture from the central line grows bacteria much earlier (>2 hours) than the other vein site, the line is likely infected. Quantitative blood culture is even more accurate, but this method is not widely available. Antibiotics are nearly always given as soon as a patient is suspected to have a catheter-related bloodstream infection. However, this must occur after blood cultures are drawn, otherwise the culprit organism may not be identified. The most common organisms causing these infections are coagulase-negative staphylococci such as "Staphylococcus epidermidis". Infections resulting in bacteremia from "Staphylococcus aureus" require removal of the catheter and antibiotics. If the catheter is removed without giving antibiotics, 38% of people may still develop endocarditis. Evidence suggests that there may not be any benefit associated with giving antibiotics before a long-term central venous catheter is inserted in cancer patients, and this practice may not prevent gram-positive catheter-related infections. However, for people who require long-term central venous catheters and who are at a higher risk of infection, for example, people with cancer who are at risk of neutropenia due to their chemotherapy treatment or due to the disease, flushing the catheter with a solution containing an antibiotic and heparin may reduce catheter-related infections. In a clinical practice guideline, the American Centers for Disease Control and Prevention recommends against routine culturing of central venous lines upon their removal. The guideline makes several other recommendations to prevent line infections. To prevent infection, stringent cleaning of the catheter insertion site is advised. Povidone-iodine solution is often used for such cleaning, but chlorhexidine appears to be twice as effective as iodine. Routine replacement of lines makes no difference in preventing infection. The CDC makes many recommendations regarding risk reduction for infection of CVCs, including: Using checklists, which detail the step-by-step process (including sterile techniques) of catheter placement, has been shown to reduce catheter-related bloodstream infections. 
This is often done with an observer reviewing the checklist as the operator places the central catheter. Having central line catheter kits (or a central line cart), which carry all of the necessary equipment needed for placing the central venous catheter, has also been shown to reduce central line related bloodstream infections. Patient specific risk factors for the development of catheter-related bloodstream infections include placing or maintaining a central catheter in those who are immunocompromised, neutropenic, malnourished, have severe burns, have a body mass index greater than 40 (obesity) or if a person has a prolonged hospital stay before catheter insertion. Premature infants also have a greater risk of catheter-related bloodstream infections as compared to those born at term. Provider factors that increase the risk of catheter-related bloodstream infections include inserting the catheter under emergency conditions, not adhering to sterile technique, multiple manipulations of the catheter or hub, and maintaining the catheter for longer than is indicated. Occlusion. Venous catheters may occasionally become occluded by kinks in the catheter, backwash of blood into the catheter leading to thrombosis, or infusion of insoluble materials that form precipitates. However, thrombosis is the most common cause of central line occlusion, occurring in up to 25% of catheters. CVCs are a risk factor for forming blood clots (venous thrombosis) including upper extremity deep vein thrombosis. It is thought this risk stems from activation of clotting substances in the blood by trauma to the vein during placement. The risk of blood clots is higher in a person with cancer, as cancer is also a risk factor for blood clots. As many as two thirds of cancer patients with central lines show evidence of catheter-associated thrombosis. However, most cases (more than 95%) of catheter-associated thrombosis go undetected. Most symptomatic cases are seen with placement of femoral vein catheters (3.4%) or peripherally inserted central catheters (3%). Anti-clotting drugs such as heparin and fondaparinux have been shown to decrease the incidence of blood clots, specifically deep vein thrombosis, in a person with cancer with central lines. Additionally, studies suggest that short term use of CVCs in the subclavian vein is less likely to be associated with blood clots than CVCs placed in the femoral vein in non-cancer patients. In the case of non-thrombotic occlusion (e.g. formation of precipitates), dilute acid can be used to restore patency to the catheter. A solution of 0.1N hydrochloric acid is commonly used. Infusates that contain a significant amount of lipids such as total parenteral nutrition (TPN) or propofol are also prone to occlusion over time. In this setting, patency can often be restored by infusing a small amount of 70% ethanol. Misplacement. CVC misplacement is more common when the anatomy of the person is different or difficult due to injury or past surgery. CVCs can be mistakenly placed in an artery during insertion (for example, the carotid artery or vertebral artery when placed in the neck or common femoral artery when placed in the groin). This error can be quickly identified by special tubing that can show the pressure of the catheter (arteries have a higher pressure than veins). 
In addition, sending blood samples for acidity, oxygen, and carbon dioxide content (pH, pO2, pCO2 respectively) "(l.e.: blood-gas analysis)" can show the characteristics of an artery (higher pH/pO2, lower pCO2) or vein (lower pH/pO2, higher pCO2). During subclavian vein central line placement, the catheter can be accidentally pushed into the internal jugular vein on the same side instead of the superior vena cava. A chest x-ray is performed after insertion to rule out this possibility. The tip of the catheter can also be misdirected into the contralateral (opposite side) subclavian vein in the neck, rather than into the superior vena cava. Venous air embolism. Entry of air into venous circulation has the potential to cause a venous air embolism. This is a rare complication of CVC placement – however, it can be lethal. The volume and the rate of air entry determine the effect an air embolus will have on a patient. This process can become fatal when at least 200–300 milliliters of air is introduced within a few seconds. The consequences of this include: acute embolic stroke (from air that passes through a patent foramen ovale), pulmonary edema, and acute right heart failure (from trapped air in the right ventricle) which can lead to cardiogenic shock. The clinical presentation of a venous air embolism may be silent. In those who are symptomatic, the most common symptoms are sudden-onset shortness of breath and cough. If the presentation is severe, the patient may become rapidly hypotensive and have an altered level of consciousness due to cardiogenic shock. Symptoms of an acute stroke may also be seen. Echocardiography can be used to visualize air that has become trapped in the chambers of the heart. If a large air embolism is suspected, a syringe can be attached to the catheter cap and pulled pack in an attempt to remove the air from circulation. The patient can also be placed in the left lateral decubitus position. It is thought that this position helps relieve air that has become trapped in the right ventricle. Catheter-related thrombosis. Catheter-related thrombosis (CRT) is the development of a blood clot related to long-term use of CVCs. It mostly occurs in the upper extremities and can lead to further complications, such as pulmonary embolism, post-thrombotic syndrome, and vascular compromise. Symptoms include pain, tenderness to palpation, swelling, edema, warmth, erythema, and development of collateral vessels in the surrounding area. However, most CRTs are asymptomatic, and prior catheter infections increase the risk for developing a CRT. Routine flushings may help to prevent catheter thrombosis. If there is catheter obstruction, thrombolytic drugs can be used if the obstruction is caused by clots or fibrin deposition. Anticoagulant treatment is indicated if the obstruction is caused by thrombus formation. There is inadequate evidence whether heparin saline flush is better than normal saline flush to maintain central venous catheter patency and prevent occlusion. Insertion. Before insertion, the patient is first assessed by reviewing relevant labs and indication for CVC placement, in order to minimize risks and complications of the procedure. Next, the area of skin over the planned insertion site is cleaned. A local anesthetic is applied if necessary. The location of the vein is identified by landmarks or with the use of a small ultrasound device. A hollow needle is advanced through the skin until blood is aspirated. 
The color of the blood and the rate of its flow help distinguish it from arterial blood (suggesting that an artery has been accidentally punctured). Within North America and Europe, ultrasound use now represents the gold standard for central venous access and skills, with diminishing use of landmark techniques. Recent evidence shows that ultrasound-guidance for subclavian vein catheterization leads to a reduction in adverse events. The line is then inserted using the Seldinger technique: a blunt guidewire is passed through the needle, then the needle is removed. A dilating device may be passed over the guidewire to expand the tract. Finally, the central line itself is then passed over the guidewire, which is then removed. All the lumens of the line are aspirated (to ensure that they are all positioned inside the vein) and flushed with either saline or heparin. A chest X-ray may be performed afterwards to confirm that the line is positioned inside the superior vena cava and no pneumothorax was caused inadvertently. On anteroposterior X-rays, a catheter tip between 55 and 29 mm below the level of the carina is regarded as acceptable placement. Electromagnetic tracking can be used to verify tip placement and provide guidance during insertion, obviating the need for the X-ray afterwards. Catheter flow. Hagen–Poiseuille equation. The Hagen–Poiseuille equation describes the properties of flow through a rigid tube. The equation is shown below: formula_0 The equation shows that flow rate (Q) through a rigid tube is a function of the inner radius (r), the length of the tube (L), and the viscosity of the fluid (μ). The flow is directly related the fourth power of the inner radius of the tube, and inversely related to the length of the tube and viscosity of the fluid. This equation can be used to understand the following vital observations regarding venous catheters: that the inner radius of a catheter has a much greater impact on flow rate than catheter length or fluid viscosity, and that for rapid infusion, a shorter, large bore catheter is optimal because it will provide the greatest flow rate. Types. There are several types of central venous catheters; these can be further subdivided by site (where the catheter is inserted into the body) as well as the specific type of catheter used. By site. Percutaneous central venous catheter (CVC). A percutaneous central venous catheter, or CVC, is inserted directly through the skin. The internal or external jugular, subclavian, or femoral vein is used. It is most commonly used in critically ill patients. The CVC can be used for days to weeks, and the patient must remain in the hospital. It is usually held in place with sutures or a manufactured securement device. Commonly used catheters include Quinton catheters. Peripherally inserted central catheters (PICC). A peripherally inserted central catheter, or PICC line (pronounced "pick"), is a central venous catheter inserted into a vein in the arm (via the basilic or cephalic veins) rather than a vein in the neck or chest. The basilic vein is usually a better target for cannulation than the cephalic vein because it is larger and runs a straighter course through the arm. The tip of the catheter is positioned in the superior vena cava. PICC lines are smaller in diameter than central lines since they are inserted in smaller peripheral veins, and they are much longer than central venous catheters (50–70 cm vs. 15–30 cm). 
Therefore, the rate of fluid flow through PICC lines is considerably slower than other central lines, rendering them unsuitable for rapid, large volume fluid resuscitation. PICCs can easily occlude and may not be used with phenytoin. PICC lines may also result in venous thrombosis and stenosis, and should therefore be used cautiously in patients with chronic kidney disease in case an arteriovenous fistula might one day need to be created for hemodialysis. However, PICC lines are desirable for several reasons. They can provide venous access for up to one year. The patient may go home with a PICC. They avoid the complications of central line placement (e.g. pneumothorax, accidental arterial cannulation), and they are relatively easy to place under ultrasound guidance and cause less discomfort than central lines. PICC lines may be inserted at the bedside, in a home or radiology setting. It is held in place with sutures or a manufactured securement device. Subcutaneous or tunneled central venous catheter. Tunneled catheters are passed under the skin from the insertion site to a separate exit site. The catheter and its attachments emerge from underneath the skin. The exit site is typically located in the chest, making the access ports less visible than catheters that protrude directly from the neck. Passing the catheter under the skin helps to prevent infection and provides stability. Insertion is a surgical procedure, in which the catheter is tunneled subcutaneously under the skin in the chest area before it enters the SVC. Commonly used tunneled catheters include Hickman, and Groshong, or Broviac catheters and may be referred to by these names as well. A tunneled catheter may remain inserted for months to years. These CVCs have a low infection rate due to a Dacron cuff, an antimicrobial cuff surrounding the catheter near the entry site, which is coated in antimicrobial solution and holds the catheter in place after two to three weeks of insertion. Implanted central venous catheter (ICVC, port a cath). An implanted central venous catheter, also called a port a "cath" or "port-a-cath", is similar to a tunneled catheter, but is left entirely under the skin and is accessible via a port . Medicines are injected through the skin into the catheter. Some implanted ports contain a small reservoir that can be refilled in the same way. After being filled, the reservoir slowly releases the medicine into the bloodstream. Surgically implanted infusion ports are placed below the clavicle (infraclavicular fossa), with the catheter threaded into the heart (right atrium) through a large vein. Once implanted, the port is accessed via a "gripper" non-coring Huber-tipped needle (PowerLoc is one brand, common sizes are length; 19 and 20 gauge. The needle assembly includes a short length of tubing and cannula) inserted directly through the skin. The clinician and patient may elect to apply a topical anesthetic before accessing the port. Ports can be used for medications, chemotherapy, and blood sampling. As ports are located completely under the skin, they are easier to maintain and have a lower risk of infection than CVC or PICC catheters. An implanted port is less obtrusive than a tunneled catheter or PICC line, requires little daily care, and has less impact on the patient's day-to-day activities. Port access requires specialized equipment and training. Ports are typically used on patients requiring periodic venous access over an extended course of therapy, then flushed regularly until surgically removed. 
If venous access is required on a frequent basis over a short period, a catheter having external access is more commonly used. Catheter types. Triple-lumen catheter. The most commonly used catheter for central venous access is the triple lumen catheter. They are preferred (particularly in the ICU) for their three infusion channels that allow for multiple therapies to be administered simultaneously. They are sized using the French scale, with the 7 French size commonly used in adults. These catheters typically have one 16 gauge channel and two 18 gauge channels. Contrary to the French scale, the larger the gauge number, the smaller the catheter diameter. Although these catheters possess one 16 gauge port, the flow is considerably slower than one would expect through a 16 gauge peripheral IV due to the longer length of the central venous catheter (see section on "catheter flow" above). It is important to note that the use of multiple infusion channels does not increase the risk of catheter-related blood stream infections. Hemodialysis catheter. Hemodialysis catheters are large diameter catheters (up to 16 French or 5.3mm) capable of flow rates of 200–300 ml/min, which is necessary to maintain the high flow rates of hemodialysis. There are two channels: one is used to carry the patient's blood to the dialysis machine, while the other is used to return blood back to the patient. These catheters are typically placed in the internal jugular vein. Introducer sheaths. Introducer sheaths are large catheters (8–9 French) that are typically placed to facilitate the passage of temporary vascular devices such as a pulmonary artery catheter or transvenous pacemaker. The introducer sheath is placed first, and the device is then threaded through the sheath and into the vessel. These catheters can also serve as stand-alone devices for rapid infusion given their large diameter and short length. When paired with a pressurized infusion system, flow rates of 850 ml/min have been achieved. Routine catheter care. The catheter is held in place by an adhesive dressing, suture, or staple which is covered by an occlusive dressing. Regular flushing with saline or a heparin-containing solution keeps the line open and prevents blood clots. There is no evidence that heparin is better than saline at preventing blood clots. Certain lines are impregnated with antibiotics, silver-containing substances (specifically silver sulfadiazine) and/or chlorhexidine to reduce infection risk. Specific types of long-term central lines are the Hickman catheters, which require clamps to make sure that the valve is closed, and Groshong catheters, which have a valve that opens as fluid is withdrawn or infused and remains closed when not in use. Hickman lines also have a "cuff" under the skin, to prevent bacterial migration. The cuff also causes tissue ingrowth into the device for long term securement. References. <templatestyles src="Reflist/styles.css" />
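For illustration, the Hagen–Poiseuille relation from the "Catheter flow" section can be used to compare catheters of different bore and length. In the Python sketch below the driving pressure, viscosity and catheter dimensions are rough illustrative values, not manufacturer specifications or clinical data; the point is only the strong dependence of flow on radius and length noted in that section.

```python
# Illustrative sketch of the Hagen-Poiseuille relation quoted in the
# "Catheter flow" section: Q = dP * pi * r**4 / (8 * mu * L).
# Catheter dimensions, driving pressure and viscosity are rough illustrative
# values, not manufacturer specifications or clinical data.
import math

def flow_rate_ml_per_min(delta_p_pa, radius_m, length_m, viscosity_pa_s):
    q_m3_per_s = delta_p_pa * math.pi * radius_m ** 4 / (8 * viscosity_pa_s * length_m)
    return q_m3_per_s * 1e6 * 60          # convert m^3/s to mL/min

if __name__ == "__main__":
    mu = 3.5e-3    # blood-like viscosity, Pa*s (illustrative)
    dp = 10_000    # roughly 75 mmHg driving pressure, Pa (illustrative)
    catheters = {
        "short large-bore peripheral IV (r=0.8 mm, L=45 mm)": (0.8e-3, 0.045),
        "triple-lumen CVC channel (r=0.5 mm, L=200 mm)": (0.5e-3, 0.200),
        "PICC lumen (r=0.4 mm, L=550 mm)": (0.4e-3, 0.550),
    }
    for label, (r, L) in catheters.items():
        print(f"{label}: {flow_rate_ml_per_min(dp, r, L, mu):.0f} mL/min")
```

Because flow scales with the fourth power of the radius and only the first power of the length, the short, wide peripheral cannula delivers vastly more flow than the long, narrow PICC lumen at the same driving pressure, which is why short large-bore catheters are preferred for rapid volume resuscitation.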
[ { "math_id": 0, "text": "Q = \\Delta P * (\\pi r^4/8\\mu L)" } ]
https://en.wikipedia.org/wiki?curid=81854
8185549
History of logarithms
Development of the mathematical function The history of logarithms is the story of a correspondence (in modern terms, a group isomorphism) between multiplication on the positive real numbers and addition on the real number line that was formalized in seventeenth century Europe and was widely used to simplify calculation until the advent of the digital computer. The Napierian logarithms were published first in 1614. E. W. Hobson called it "one of the very greatest scientific discoveries that the world has seen." Henry Briggs introduced common (base 10) logarithms, which were easier to use. Tables of logarithms were published in many forms over four centuries. The idea of logarithms was also used to construct the slide rule, which became ubiquitous in science and engineering until the 1970s. A breakthrough generating the natural logarithm was the result of a search for an expression of area against a rectangular hyperbola, and required the assimilation of a new function into standard mathematics. Napier's wonderful invention. The method of logarithms was publicly propounded for the first time by John Napier in 1614, in his book entitled "Mirifici Logarithmorum Canonis Descriptio" ("Description of the Wonderful Canon of Logarithms"). The book contains fifty-seven pages of explanatory matter and ninety pages of tables of trigonometric functions and their natural logarithms. These tables greatly simplified calculations in spherical trigonometry, which are central to astronomy and celestial navigation and which typically include products of sines, cosines and other functions. Napier described other uses, such as solving ratio problems, as well. John Napier wrote a separate volume describing how he constructed his tables, but held off publication to see how his first book would be received. John died in 1617. His son, Robert, published his father's book, "Mirifici Logarithmorum Canonis Constructio" ("Construction of the Wonderful Canon of Logarithms"), with additions by Henry Briggs, in 1619 in Latin and then in 1620 in English. Napier conceived the logarithm as the relationship between two particles moving along a line, one at constant speed and the other at a speed proportional to its distance from a fixed endpoint. While in modern terms, the logarithm function can be explained simply as the inverse of the exponential function or as the integral of 1/"x", Napier worked decades before calculus was invented, the exponential function was understood, or coordinate geometry was developed by Descartes. Napier pioneered the use of a decimal point in numerical calculation, something that did not become commonplace until the next century. Napier's new method for computation gained rapid acceptance. Johannes Kepler praised it; Edward Wright, an authority on navigation, translated Napier's "Descriptio" into English the next year. Briggs extended the concept to the more convenient base 10. Common logarithm. As the common log of ten is one, of a hundred is two, and a thousand is three, the concept of common logarithms is very close to the decimal-positional number system. The common log is said to have base 10, but base 10,000 is ancient and still common in East Asia. In his book "The Sand Reckoner", Archimedes used the myriad as the base of a number system designed to count the grains of sand in the universe. As was noted in 2000: In antiquity Archimedes gave a recipe for reducing multiplication to addition by making use of geometric progression of numbers and relating them to an arithmetic progression. 
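Archimedes' recipe can be sketched in a few lines. The table below is illustrative only (it uses powers of 2 rather than myriads), but it shows how pairing a geometric progression with an arithmetic progression reduces multiplication to addition and table lookups:

```python
# Relating a geometric progression (powers of a base) to an arithmetic progression
# (their indices) turns multiplication into addition: to multiply two table values,
# add their indices and read the product back out of the table.
BASE = 2
table = {n: BASE**n for n in range(0, 21)}            # arithmetic index -> geometric value
index_of = {value: n for n, value in table.items()}   # geometric value -> arithmetic index

def multiply_via_table(a, b):
    """Multiply two table values using only addition and table lookups."""
    return table[index_of[a] + index_of[b]]

print(multiply_via_table(32, 256))   # 8192, since 2**5 * 2**8 = 2**13
print(32 * 256)                      # direct check
```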
In 1616 Henry Briggs visited John Napier at Edinburgh in order to discuss the suggested change to Napier's logarithms. The following year he again visited for a similar purpose. During these conferences the alteration proposed by Briggs was agreed upon, and on his return from his second visit to Edinburgh, in 1617, he published the first chiliad of his logarithms. In 1624, Briggs published his "Arithmetica Logarithmica", in folio, a work containing the logarithms of thirty thousand natural numbers to fourteen decimal places (1-20,000 and 90,001 to 100,000). This table was later extended by Adriaan Vlacq, but to 10 places, and by Alexander John Thompson to 20 places in 1952. Briggs was one of the first to use finite-difference methods to compute tables of functions. He also completed a table of logarithmic sines and tangents for the hundredth part of every degree to fourteen decimal places, with a table of natural sines to fifteen places and the tangents and secants for the same to ten places, all of which were printed at Gouda in 1631 and published in 1633 under the title of "Trigonometria Britannica"; this work was probably a successor to his 1617 "Logarithmorum Chilias Prima" ("The First Thousand Logarithms"), which gave a brief account of logarithms and a long table of the first 1000 integers calculated to the 14th decimal place. Natural logarithm. In 1649, Alphonse Antonio de Sarasa, a former student of Grégoire de Saint-Vincent, related logarithms to the quadrature of the hyperbola, by pointing out that the area "A"("t") under the hyperbola from "x" 1 to "x" "t" satisfies formula_0 At first the reaction to Saint-Vincent's hyperbolic logarithm was a continuation of studies of quadrature as in Christiaan Huygens (1651) and James Gregory (1667). Subsequently, an industry of making logarithms arose as "logaritmotechnia", the title of works by Nicholas Mercator (1668), Euclid Speidell (1688), and John Craig (1710). By use of the geometric series with its conditional radius of convergence, an alternating series called the Mercator series expresses the logarithm function over the interval (0,2). Since the series is negative in (0,1), the "area under the hyperbola" must be considered negative there, so a signed measure, instead of purely positive area, determines the hyperbolic logarithm. Historian Tom Whiteside described the transition to the analytic function as follows: By the end of the 17th century we can say that much more than being a calculating device suitably well-tabulated, the logarithm function, very much on the model of the hyperbola-area, had been accepted into mathematics. When, in the 18th century, this geometric basis was discarded in favour of a fully analytical one, no extension or reformulation was necessary – the concept of "hyperbola-area" was transformed painlessly into "natural logarithm". Leonhard Euler treated a logarithm as an exponent of a certain number called the base of the logarithm. He noted that the number 2.71828, and its reciprocal, provided a point on the hyperbola "xy" = 1 such that an area of one square unit lies beneath the hyperbola, right of (1,1) and above the asymptote of the hyperbola. He then called the logarithm, with this number as base, the "natural logarithm". As noted by Howard Eves, "One of the anomalies in the history of mathematics is the fact that logarithms were discovered before exponents were in use." Carl B. Boyer wrote, "Euler was among the first to treat logarithms as exponents, in the manner now so familiar." 
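Two facts at the heart of this development, the additive behaviour of the hyperbola-area, formula_0, and the Mercator series representation of the logarithm, can be checked with a minimal numerical sketch (an illustration, not drawn from the historical sources):

```python
# The area under the hyperbola y = 1/x from 1 to t behaves like a logarithm,
# A(t*u) = A(t) + A(u), and the Mercator series gives log(1 + x) for -1 < x <= 1.
import math

def hyperbola_area(t, steps=100_000):
    """Numerically integrate 1/x from 1 to t with the midpoint rule."""
    h = (t - 1.0) / steps
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

t, u = 2.0, 3.0
print(round(hyperbola_area(t * u), 6))                    # 1.791759
print(round(hyperbola_area(t) + hyperbola_area(u), 6))    # 1.791759, so A(tu) = A(t) + A(u)

def mercator_log1p(x, terms=200):
    """log(1 + x) = x - x**2/2 + x**3/3 - ... (alternating Mercator series)."""
    return sum((-1) ** (k + 1) * x**k / k for k in range(1, terms + 1))

print(round(mercator_log1p(0.5), 6), round(math.log(1.5), 6))   # 0.405465 0.405465
```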
Pioneers of logarithms. Predecessors. The Babylonians sometime in 2000–1600 BC may have invented the quarter square multiplication algorithm to multiply two numbers using only addition, subtraction and a table of quarter squares. Thus, such a table served a similar purpose to tables of logarithms, which also allow multiplication to be calculated using addition and table lookups. However, the quarter-square method could not be used for division without an additional table of reciprocals (or the knowledge of a sufficiently simple algorithm to generate reciprocals). Large tables of quarter squares were used to simplify the accurate multiplication of large numbers from 1817 onwards until this was superseded by the use of computers. The Indian mathematician Virasena worked with the concept of ardhaccheda: the number of times a number of the form 2^"n" could be halved. For exact powers of 2, this equals the binary logarithm, but it differs from the logarithm for other numbers and it gives the 2-adic order rather than the logarithm. Michael Stifel published "Arithmetica integra" in Nuremberg in 1544, which contains a table of integers and powers of 2 that has been considered an early version of a table of binary logarithms. In the 16th and early 17th centuries an algorithm called prosthaphaeresis was used to approximate multiplication and division. This used the trigonometric identity formula_1 or similar to convert the multiplications to additions and table lookups. However, logarithms are more straightforward and require less work. It can be shown using Euler's formula that the two techniques are related. Bürgi. The Swiss mathematician Jost Bürgi constructed a table of progressions which can be considered a table of antilogarithms independently of John Napier, whose publication (1614) was known by the time Bürgi published at the behest of Johannes Kepler. We know that Bürgi had some way of simplifying calculations around 1588, but most likely this way was the use of prosthaphaeresis, and not the use of his table of progressions, which probably goes back to about 1600. Indeed, Wittich, who was in Kassel from 1584 to 1586, brought with him knowledge of prosthaphaeresis, a method by which multiplications and divisions can be replaced by additions and subtractions of trigonometrical values. This procedure achieved the same result as logarithms would a few years later. Napier. The method of logarithms was first publicly propounded by John Napier in 1614, in a book titled "Mirifici Logarithmorum Canonis Descriptio". Johannes Kepler, who used logarithm tables extensively to compile his "Ephemeris" and therefore dedicated it to Napier, remarked: <templatestyles src="Template:Blockquote/styles.css" />... the accent in calculation led Justus Byrgius [Joost Bürgi] on the way to these very logarithms many years before Napier's system appeared; but ... instead of rearing up his child for the public benefit he deserted it in the birth. Napier imagined a point P travelling along a line segment from P0 to Q. Starting at P0, with a certain initial speed, P travels at a speed proportional to its distance to Q, causing P to never reach Q. Napier juxtaposed this figure with that of a point L travelling along an unbounded line segment, starting at L0, and with a constant speed equal to that of the initial speed of point P. Napier defined the distance from L0 to L as the logarithm of the distance from P to Q. By repeated subtractions Napier calculated (1 − 10^−7)^"L" for "L" ranging from 1 to 100. 
The result for "L"=100 is approximately 0.99999 = 1 − 10^−5. Napier then calculated the products of these numbers with 10^7(1 − 10^−5)^"L" for "L" from 1 to 50, and did similarly with 0.9998 ≈ (1 − 10^−5)^20 and 0.9 ≈ 0.995^20. These computations, which occupied 20 years, allowed him to give, for any number "N" from 5 to 10 million, the number "L" that solves the equation formula_2 Napier first called "L" an "artificial number", but later introduced the word "logarithm" to mean a number that indicates a ratio: "logos", meaning proportion, and "arithmos", meaning number. In modern notation, the relation to natural logarithms is: formula_3 where the very close approximation corresponds to the observation that formula_4 The invention was quickly and widely met with acclaim. The works of Bonaventura Cavalieri (Italy), Edmund Wingate (France), Xue Fengzuo (China), and Johannes Kepler's "Chilias logarithmorum" (Germany) helped spread the concept further. Euler. Around 1730, Leonhard Euler defined the exponential function and the natural logarithm by formula_5 In his 1748 textbook "Introduction to the Analysis of the Infinite", Euler published the now-standard approach to logarithms via an inverse function: In chapter 6, "On exponentials and logarithms", he begins with a constant base "a" and discusses the transcendental function formula_6 Then its inverse is the logarithm: "z" = log_"a" "y". Tables of logarithms. Mathematical tables containing common logarithms (base-10) were extensively used in computations prior to the advent of computers and calculators, not only because logarithms convert problems of multiplication and division into much easier addition and subtraction problems, but for an additional property that is unique to base-10 and proves useful: Any positive number can be expressed as the product of a number from the interval [1,10) and an integer power of 10. This can be envisioned as shifting the decimal separator of the given number to the left yielding a positive, and to the right yielding a negative exponent of 10. Only the logarithms of these "normalized" numbers (approximated by a certain number of digits), which are called mantissas, need to be tabulated in lists to a similar precision (a similar number of digits). These mantissas are all positive and enclosed in the interval [0,1). The common logarithm of any given positive number is then obtained by adding its mantissa to the common logarithm of the second factor. This logarithm is called the "characteristic" of the given number. Since the common logarithm of a power of 10 is exactly the exponent, the characteristic is an integer number, which makes the common logarithm exceptionally useful in dealing with decimal numbers. For numbers less than 1, the characteristic makes the resulting logarithm negative, as required. See common logarithm for details on the use of characteristics and mantissas. Early tables. Michael Stifel published "Arithmetica integra" in Nuremberg in 1544 which contains a table of integers and powers of 2 that has been considered an early version of a logarithmic table. The first published table of logarithms was in John Napier's 1614 "Mirifici Logarithmorum Canonis Descriptio". The book contained fifty-seven pages of explanatory matter and ninety pages of tables of trigonometric functions and their natural logarithms. The English mathematician Henry Briggs visited Napier in 1615, and proposed a re-scaling of Napier's logarithms to form what is now known as the common or base-10 logarithms. 
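Before following Briggs' re-scaling further (continued below), Napier's defining relation formula_2, its closeness to the natural logarithm expressed by formula_3, and the characteristic–mantissa decomposition just described can be checked with a minimal sketch (illustrative only, not Napier's or Briggs' own procedure):

```python
# Napier's logarithm: N = 10**7 * (1 - 10**-7)**L.  Solve for L with modern logarithms
# and compare with the approximation L ~ -10**7 * ln(N / 10**7).
import math

SCALE = 10**7
RATIO = 1 - 10**-7      # Napier's common ratio

def napier_log(N):
    """The L satisfying N = SCALE * RATIO**L."""
    return math.log(N / SCALE) / math.log(RATIO)

for N in (5_000_000, 7_500_000, 9_999_999):
    print(f"N = {N}: NapLog = {napier_log(N):.1f}, "
          f"-10^7 ln(N/10^7) = {-SCALE * math.log(N / SCALE):.1f}")

# Characteristic and mantissa of a common (base-10) logarithm, as used in printed tables.
x = 3456.0
log10_x = math.log10(x)
characteristic = math.floor(log10_x)
mantissa = log10_x - characteristic
print(characteristic, round(mantissa, 6))   # 3 and log10(3.456)
```

Briggs' re-scaling, described next, moved to base 10, which is what makes the characteristic readable by inspection.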
Napier delegated to Briggs the computation of a revised table, and they later published, in 1617, "Logarithmorum Chilias Prima" ("The First Thousand Logarithms"), which gave a brief account of logarithms and a table for the first 1000 integers calculated to the 14th decimal place. In 1624, Briggs' "Arithmetica Logarithmica" appeared in folio as a work containing the logarithms of 30,000 natural numbers to fourteen decimal places (1-20,000 and 90,001 to 100,000). This table was later extended by Adriaan Vlacq, but to 10 places, and by Alexander John Thompson to 20 places in 1952. Briggs was one of the first to use finite-difference methods to compute tables of functions. Vlacq's table was later found to contain 603 errors, but "this cannot be regarded as a great number, when it is considered that the table was the result of an original calculation, and that more than 2,100,000 printed figures are liable to error." An edition of Vlacq's work, containing many corrections, was issued at Leipzig in 1794 under the title "Thesaurus Logarithmorum Completus" by Jurij Vega. François Callet's seven-place table (Paris, 1795), instead of stopping at 100,000, gave the eight-place logarithms of the numbers between 100,000 and 108,000, in order to diminish the errors of interpolation, which were greatest in the early part of the table, and this addition was generally included in seven-place tables. The only important published extension of Vlacq's table was made by Edward Sang in 1871, whose table contained the seven-place logarithms of all numbers below 200,000. Briggs and Vlacq also published original tables of the logarithms of the trigonometric functions. Briggs completed a table of logarithmic sines and logarithmic tangents for the hundredth part of every degree to fourteen decimal places, with a table of natural sines to fifteen places and the tangents and secants for the same to ten places, all of which were printed at Gouda in 1631 and published in 1633 under the title of "Trigonometria Britannica". Tables of logarithms of trigonometric functions simplify hand calculations where a function of an angle must be multiplied by another number, as is often the case. Besides the tables mentioned above, a great collection, called "Tables du Cadastre," was constructed under the direction of Gaspard de Prony, by an original computation, under the auspices of the French republican government of the 1790s. This work, which contained the logarithms of all numbers up to 100,000 to nineteen places, and of the numbers between 100,000 and 200,000 to twenty-four places, exists only in manuscript, "in seventeen enormous folios," at the Observatory of Paris. It was begun in 1792, and "the whole of the calculations, which to secure greater accuracy were performed in duplicate, and the two manuscripts subsequently collated with care, were completed in the short space of two years." Cubic interpolation could be used to find the logarithm of any number to a similar accuracy. For different needs, logarithm tables ranging from small handbooks to multi-volume editions have been compiled. Slide rule. The slide rule was invented around 1620–1630, shortly after John Napier's publication of the concept of the logarithm. Edmund Gunter of Oxford developed a calculating device with a single logarithmic scale; with additional measuring tools it could be used to multiply and divide. 
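The principle behind Gunter's scale, and behind every later slide rule, is that distances measured along a logarithmic scale add when the corresponding numbers are multiplied. A minimal sketch (the scale length is an assumed value; this is not a description of any particular historical instrument):

```python
# On a logarithmic scale the distance from the mark 1 to the mark x is proportional
# to log10(x), so laying two distances end to end (as a slide rule does) multiplies
# the corresponding numbers.
import math

SCALE_LENGTH = 250.0   # mm per decade, an assumed scale length

def distance(x):
    """Distance of the mark x from the mark 1."""
    return SCALE_LENGTH * math.log10(x)

def value_at(d):
    """Number whose mark sits at distance d from the mark 1."""
    return 10 ** (d / SCALE_LENGTH)

a, b = 2.0, 3.5
combined = distance(a) + distance(b)   # slide the second scale along by distance(a)
print(value_at(combined))              # 7.0 = 2.0 * 3.5
```

The instruments described next mechanized exactly this addition of lengths.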
The first description of this scale was published in Paris in 1624 by Edmund Wingate (c.1593–1656), an English mathematician, in a book entitled "L'usage de la reigle de proportion en l'arithmetique & geometrie". The book contains a double scale, logarithmic on one side, tabular on the other. In 1630, William Oughtred of Cambridge invented a circular slide rule, and in 1632 combined two handheld Gunter rules to make a device that is recognizably the modern slide rule. Like his contemporary at Cambridge, Isaac Newton, Oughtred taught his ideas privately to his students. Also like Newton, he became involved in a vitriolic controversy over priority, with his one-time student Richard Delamain and the prior claims of Wingate. Oughtred's ideas were only made public in publications of his student William Forster in 1632 and 1653. In 1677, Henry Coggeshall created a two-foot folding rule for timber measure, called the Coggeshall slide rule, expanding the slide rule's use beyond mathematical inquiry. In 1722, Warner introduced the two- and three-decade scales, and in 1755 Everard included an inverted scale; a slide rule containing all of these scales is usually known as a "polyphase" rule. In 1815, Peter Mark Roget invented the log log slide rule, which included a scale displaying the logarithm of the logarithm. This allowed the user to directly perform calculations involving roots and exponents. This was especially useful for fractional powers. In 1821, Nathaniel Bowditch described in the "American Practical Navigator" a "sliding rule" that contained scales of trigonometric functions on the fixed part and a line of log-sines and log-tans on the slider, used to solve navigation problems. In 1845, Paul Cameron of Glasgow introduced a Nautical Slide-Rule capable of answering navigation questions, including right ascension and declination of the sun and principal stars. Modern form. A more modern form of slide rule was created in 1859 by French artillery lieutenant Amédée Mannheim, "who was fortunate in having his rule made by a firm of national reputation and in having it adopted by the French Artillery." It was around this time that engineering became a recognized profession, resulting in widespread slide rule use in Europe, but not in the United States. There Edwin Thacher's cylindrical rule took hold after 1881. The duplex rule was invented by William Cox in 1891, and was produced by Keuffel and Esser Co. of New York. Impact. Writing in 1914 on the 300th anniversary of Napier's tables, E. W. Hobson described logarithms as "providing a great labour-saving instrument for the use of all those who have occasion to carry out extensive numerical calculations" and compared it in importance to the "Indian invention" of our decimal number system. Napier's improved method of calculation was soon adopted in Britain and Europe. Kepler dedicated his 1620 "Ephemeris" to Napier, congratulating him on his invention and its benefits to astronomy. Edward Wright, an authority on celestial navigation, translated Napier's Latin "Descriptio" into English in 1615, shortly after its publication. Briggs extended the concept to the more convenient base 10, or common logarithm. “Probably no work has ever influenced science as a whole, and mathematics in particular, so profoundly as this modest little book [the Descriptio]. 
It opened the way for the abolition, once and for all, of the infinitely laborious, nay, nightmarish, processes of long division and multiplication, of finding the power and the root of numbers.” The logarithm function remains a staple of mathematical analysis, but printed tables of logarithms gradually diminished in importance in the twentieth century as mechanical calculators and, later, electronic pocket calculators and computers took over computations that required high accuracy. The introduction of hand-held scientific calculators in the 1970s ended the era of slide rules. Logarithmic scale graphs are widely used to display data with a wide range. The decibel, a logarithmic unit, is also widely used. The current, 2002, edition of "The American Practical Navigator" (Bowditch) still contains tables of logarithms and logarithms of trigonometric functions. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "A(tu) = A(t) + A(u)." }, { "math_id": 1, "text": "\\cos\\alpha\\cos\\beta = \\frac12[\\cos(\\alpha+\\beta) + \\cos(\\alpha-\\beta)]" }, { "math_id": 2, "text": "N=10^7 (1-10^{-7})^L. " }, { "math_id": 3, "text": "L = \\log_{(1-10^{-7})} \\left( \\frac{N}{10^7} \\right) \\approx 10^7 \\log_{1/e} \\left( \\frac{N}{10^7} \\right) = -10^7 \\log_e \\left( \\frac{N}{10^7} \\right)," }, { "math_id": 4, "text": "(1-10^{-7})^{10^7} \\approx \\frac{1}{e}. " }, { "math_id": 5, "text": "\n\\begin{align}\ne^x & = \\lim_{n \\rightarrow \\infty} \\left( 1 + \\frac x n \\right)^n, \\\\[6pt]\n\\ln(x) & = \\lim_{n \\rightarrow \\infty} n(x^{1/n} - 1).\n\\end{align}\n" }, { "math_id": 6, "text": "y = a^z ." } ]
https://en.wikipedia.org/wiki?curid=8185549
81863
Proportionality (mathematics)
Property of two varying quantities with a constant ratio In mathematics, two sequences of numbers, often experimental data, are proportional or directly proportional if their corresponding elements have a constant ratio. The ratio is called coefficient of proportionality (or proportionality constant) and its reciprocal is known as constant of normalization (or normalizing constant). Two sequences are inversely proportional if corresponding elements have a constant product, also called the coefficient of proportionality. This definition is commonly extended to related varying quantities, which are often called "variables". This meaning of "variable" is not the common meaning of the term in mathematics (see variable (mathematics)); these two different concepts share the same name for historical reasons. Two functions formula_0 and formula_1 are "proportional" if their ratio formula_2 is a constant function. If several pairs of variables share the same direct proportionality constant, the equation expressing the equality of these ratios is called a proportion, e.g., "y"1/"x"1 = "y"2/"x"2 = ⋯ = "k" (for details see Ratio). Proportionality is closely related to "linearity". Direct proportionality. Given an independent variable "x" and a dependent variable "y", "y" is directly proportional to "x" if there is a positive constant "k" such that: formula_3 The relation is often denoted using the symbols "∝" (not to be confused with the Greek letter alpha) or "~", with exception of Japanese texts, where "~" is reserved for intervals: formula_4 (or formula_5) For formula_6 the proportionality constant can be expressed as the ratio: formula_7 It is also called the constant of variation or constant of proportionality. Given such a constant "k", the proportionality relation ∝ with proportionality constant "k" between two sets "A" and "B" is the equivalence relation defined by formula_8 A direct proportionality can also be viewed as a linear equation in two variables with a "y"-intercept of 0 and a slope of "k" > 0, which corresponds to linear growth. Inverse proportionality. Two variables are inversely proportional (also called varying inversely, in inverse variation, in inverse proportion) if each of the variables is directly proportional to the multiplicative inverse (reciprocal) of the other, or equivalently if their product is a constant. It follows that the variable "y" is inversely proportional to the variable "x" if there exists a non-zero constant "k" such that formula_9 or equivalently, formula_10. Hence the constant "k" is the product of "x" and "y". The graph of two variables varying inversely on the Cartesian coordinate plane is a rectangular hyperbola. The product of the "x" and "y" values of each point on the curve equals the constant of proportionality ("k"). Since neither "x" nor "y" can equal zero (because "k" is non-zero), the graph never crosses either axis. Direct and inverse proportion contrast as follows: in direct proportion the variables increase or decrease together. With inverse proportion, an increase in one variable is associated with a decrease in the other. For instance, in travel, a constant speed dictates a direct proportion between distance and time travelled; in contrast, for a given distance (the constant), the time of travel is inversely proportional to speed: "s" × "t" = "d". Hyperbolic coordinates. 
The concepts of "direct" and "inverse" proportion lead to the location of points in the Cartesian plane by hyperbolic coordinates; the two coordinates correspond to the constant of direct proportionality that specifies a point as being on a particular ray and the constant of inverse proportionality that specifies a point as being on a particular hyperbola. Computer encoding. The Unicode characters for proportionality are the following: Notes. <templatestyles src="Reflist/styles.css" />
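A minimal sketch with made-up data illustrates both notions: in a direct proportion every ratio "y"/"x" returns the same constant "k", while in an inverse proportion every product "x"·"y" does:

```python
# Direct proportion: all ratios y/x agree on a single constant k (here k = 2.5).
xs = [1.0, 2.0, 4.0, 10.0]
ys = [2.5, 5.0, 10.0, 25.0]
print([y / x for x, y in zip(xs, ys)])          # [2.5, 2.5, 2.5, 2.5], i.e. y = 2.5 * x

# Inverse proportion: all products x*y agree on a single constant k (here k = 12).
speeds = [2.0, 3.0, 4.0, 6.0]                   # speed s for a fixed distance d = 12
times = [6.0, 4.0, 3.0, 2.0]                    # travel time t, so s * t = d
print([s * t for s, t in zip(speeds, times)])   # [12.0, 12.0, 12.0, 12.0]
```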
[ { "math_id": 0, "text": "f(x)" }, { "math_id": 1, "text": "g(x)" }, { "math_id": 2, "text": "\\frac{f(x)}{g(x)}" }, { "math_id": 3, "text": "y = kx" }, { "math_id": 4, "text": "y \\propto x" }, { "math_id": 5, "text": "y \\sim x" }, { "math_id": 6, "text": "x \\ne 0" }, { "math_id": 7, "text": " k = \\frac{y}{x}" }, { "math_id": 8, "text": "\\{(a, b) \\in A \\times B : a = k b\\}." }, { "math_id": 9, "text": "y = \\frac{k}{x}" }, { "math_id": 10, "text": "xy = k" } ]
https://en.wikipedia.org/wiki?curid=81863
8186879
Ganea conjecture
Ganea's conjecture is a now disproved claim in algebraic topology. It states that formula_0 for all formula_1, where formula_2 is the Lusternik–Schnirelmann category of a topological space "X", and "S""n" is the "n"-dimensional sphere. The inequality formula_3 holds for any pair of spaces, formula_4 and formula_5. Furthermore, formula_6, for any sphere formula_7, formula_1. Thus, the conjecture amounts to formula_8. The conjecture was formulated by Tudor Ganea in 1971. Many particular cases of this conjecture were proved, and Norio Iwase gave a counterexample to the general case in 1998. In a follow-up paper from 2002, Iwase gave an even stronger counterexample, with "X" a closed smooth manifold. This counterexample also disproved a related conjecture, which stated that formula_9 for a closed manifold formula_10 and formula_11 a point in formula_10. A minimum dimensional counterexample to the conjecture was constructed by Don Stanley and Hugo Rodríguez Ordóñez in 2010. This work raises the question: For which spaces "X" is the Ganea condition, formula_12, satisfied? It has been conjectured that these are precisely the spaces "X" for which formula_2 equals a related invariant, formula_13
[ { "math_id": 0, "text": " \\operatorname{cat}(X \\times S^n)=\\operatorname{cat}(X) +1" }, { "math_id": 1, "text": "n>0" }, { "math_id": 2, "text": "\\operatorname{cat}(X)" }, { "math_id": 3, "text": " \\operatorname{cat}(X \\times Y) \\le \\operatorname{cat}(X) +\\operatorname{cat}(Y) " }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "Y" }, { "math_id": 6, "text": "\\operatorname{cat}(S^n)=1" }, { "math_id": 7, "text": "S^n" }, { "math_id": 8, "text": " \\operatorname{cat}(X \\times S^n)\\ge\\operatorname{cat}(X) +1" }, { "math_id": 9, "text": " \\operatorname{cat}(M \\setminus \\{p\\})=\\operatorname{cat}(M) -1 , " }, { "math_id": 10, "text": "M" }, { "math_id": 11, "text": "p" }, { "math_id": 12, "text": "\\operatorname{cat}(X\\times S^n) = \\operatorname{cat}(X) + 1" }, { "math_id": 13, "text": "\\operatorname{Qcat}(X)." } ]
https://en.wikipedia.org/wiki?curid=8186879
8187273
Schur test
In mathematical analysis, the Schur test, named after German mathematician Issai Schur, is a bound on the formula_0 operator norm of an integral operator in terms of its Schwartz kernel (see Schwartz kernel theorem). Here is one version. Let formula_1 be two measurable spaces (such as formula_2). Let formula_3 be an integral operator with the non-negative Schwartz kernel formula_4, formula_5, formula_6: formula_7 If there exist real functions formula_8 and formula_9 and numbers formula_10 such that formula_11 for almost all formula_12 and formula_13 for almost all formula_14, then formula_3 extends to a continuous operator formula_15 with the operator norm formula_16 Such functions formula_17, formula_18 are called the Schur test functions. In the original version, formula_3 is a matrix and formula_19. Common usage and Young's inequality. A common usage of the Schur test is to take formula_20 Then we get: formula_21 This inequality is valid no matter whether the Schwartz kernel formula_4 is non-negative or not. A similar statement about formula_22 operator norms is known as Young's inequality for integral operators: if formula_23 where formula_24 satisfies formula_25, for some formula_26, then the operator formula_27 extends to a continuous operator formula_28, with formula_29 Proof. Using the Cauchy–Schwarz inequality and inequality (1), we get: formula_30 Integrating the above relation in formula_31, using Fubini's Theorem, and applying inequality (2), we get: formula_32 It follows that formula_33 for any formula_34.
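In the finite-dimensional case (a matrix with non-negative entries and counting measure), choosing the test functions as in formula_20 reduces the theorem to an elementary bound: the spectral norm is at most the square root of the product of the largest row sum and the largest column sum. A minimal numerical sketch (not taken from the references) checks this:

```python
# Schur test for a non-negative matrix K with test functions p = q = 1:
# alpha = max row sum, beta = max column sum, and ||K||_2 <= sqrt(alpha * beta).
import numpy as np

rng = np.random.default_rng(0)
K = rng.random((40, 60))            # non-negative kernel on a discrete space

alpha = K.sum(axis=1).max()         # sup_x of the integral of K(x, y) q(y) dy, with q = 1
beta = K.sum(axis=0).max()          # sup_y of the integral of p(x) K(x, y) dx, with p = 1
operator_norm = np.linalg.norm(K, 2)

print(f"||K||_2     = {operator_norm:.3f}")
print(f"Schur bound = {np.sqrt(alpha * beta):.3f}")
assert operator_norm <= np.sqrt(alpha * beta) + 1e-9
```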
[ { "math_id": 0, "text": "L^2\\to L^2" }, { "math_id": 1, "text": "X,\\,Y" }, { "math_id": 2, "text": "\\mathbb{R}^n" }, { "math_id": 3, "text": "\\,T" }, { "math_id": 4, "text": "\\,K(x,y)" }, { "math_id": 5, "text": "x\\in X" }, { "math_id": 6, "text": "y\\in Y" }, { "math_id": 7, "text": "T f(x)=\\int_Y K(x,y)f(y)\\,dy." }, { "math_id": 8, "text": "\\,p(x)>0" }, { "math_id": 9, "text": "\\,q(y)>0" }, { "math_id": 10, "text": "\\,\\alpha,\\beta>0" }, { "math_id": 11, "text": " (1)\\qquad \\int_Y K(x,y)q(y)\\,dy\\le\\alpha p(x) " }, { "math_id": 12, "text": "\\,x" }, { "math_id": 13, "text": " (2)\\qquad \\int_X p(x)K(x,y)\\,dx\\le\\beta q(y)" }, { "math_id": 14, "text": "\\,y" }, { "math_id": 15, "text": "T:L^2\\to L^2" }, { "math_id": 16, "text": " \\Vert T\\Vert_{L^2\\to L^2} \\le\\sqrt{\\alpha\\beta}." }, { "math_id": 17, "text": "\\,p(x)" }, { "math_id": 18, "text": "\\,q(y)" }, { "math_id": 19, "text": "\\,\\alpha=\\beta=1" }, { "math_id": 20, "text": "\\,p(x)=q(y)=1." }, { "math_id": 21, "text": "\n\\Vert T\\Vert^2_{L^2\\to L^2}\\le\n\\sup_{x\\in X}\\int_Y|K(x,y)| \\, dy\n\\cdot\n\\sup_{y\\in Y}\\int_X|K(x,y)| \\, dx.\n" }, { "math_id": 22, "text": "L^p\\to L^q" }, { "math_id": 23, "text": "\\sup_x\\Big(\\int_Y|K(x,y)|^r\\,dy\\Big)^{1/r} + \\sup_y\\Big(\\int_X|K(x,y)|^r\\,dx\\Big)^{1/r}\\le C," }, { "math_id": 24, "text": "r" }, { "math_id": 25, "text": "\\frac 1 r=1-\\Big(\\frac 1 p-\\frac 1 q\\Big)" }, { "math_id": 26, "text": "1\\le p\\le q\\le\\infty" }, { "math_id": 27, "text": "Tf(x)=\\int_Y K(x,y)f(y)\\,dy" }, { "math_id": 28, "text": "T:L^p(Y)\\to L^q(X)" }, { "math_id": 29, "text": "\\Vert T\\Vert_{L^p\\to L^q}\\le C." }, { "math_id": 30, "text": "\n\\begin{align} |Tf(x)|^2=\\left|\\int_Y K(x,y)f(y)\\,dy\\right|^2\n&\\le \\left(\\int_Y K(x,y)q(y)\\,dy\\right) \n\\left(\\int_Y \\frac{K(x,y)f(y)^2}{q(y)} dy\\right)\\\\\n&\\le\\alpha p(x)\\int_Y \\frac{K(x,y)f(y)^2}{q(y)} \\, dy.\n\\end{align}\n" }, { "math_id": 31, "text": "x" }, { "math_id": 32, "text": " \\Vert T f\\Vert_{L^2}^2 \n\\le \\alpha \\int_Y \\left(\\int_X p(x)K(x,y)\\,dx\\right) \\frac{f(y)^2}{q(y)} \\, dy\n\\le\\alpha\\beta \\int_Y f(y)^2 dy =\\alpha\\beta\\Vert f\\Vert_{L^2}^2. " }, { "math_id": 33, "text": "\\Vert T f\\Vert_{L^2}\\le\\sqrt{\\alpha\\beta}\\Vert f\\Vert_{L^2}" }, { "math_id": 34, "text": "f\\in L^2(Y)" } ]
https://en.wikipedia.org/wiki?curid=8187273
8187487
Wick's theorem
Theorem for reducing high-order derivatives Wick's theorem is a method of reducing high-order derivatives to a combinatorics problem. It is named after Italian physicist Gian-Carlo Wick. It is used extensively in quantum field theory to reduce arbitrary products of creation and annihilation operators to sums of products of pairs of these operators. This allows for the use of Green's function methods, and consequently the use of Feynman diagrams in the field under study. A more general idea in probability theory is Isserlis' theorem. In perturbative quantum field theory, Wick's theorem is used to quickly rewrite each time ordered summand in the Dyson series as a sum of normal ordered terms. In the limit of asymptotically free ingoing and outgoing states, these terms correspond to Feynman diagrams. Definition of contraction. For two operators formula_0 and formula_1 we define their contraction to be formula_2 where formula_3 denotes the normal order of an operator formula_4. Alternatively, contractions can be denoted by a line joining formula_0 and formula_1, like formula_5. We shall look in detail at four special cases where formula_0 and formula_1 are equal to creation and annihilation operators. For formula_6 particles we'll denote the creation operators by formula_7 and the annihilation operators by formula_8 formula_9. They satisfy the commutation relations for bosonic operators formula_10, or the anti-commutation relations for fermionic operators formula_11 where formula_12 denotes the Kronecker delta and formula_13 denotes the identity operator. We then have formula_14 formula_15 formula_16 formula_17 where formula_18. These relationships hold true for bosonic operators or fermionic operators because of the way normal ordering is defined. Examples. We can use contractions and normal ordering to express any product of creation and annihilation operators as a sum of normal ordered terms. This is the basis of Wick's theorem. Before stating the theorem fully we shall look at some examples. Suppose formula_8 and formula_7 are bosonic operators satisfying the commutation relations: formula_19 formula_20 formula_21 where formula_18, formula_22 denotes the commutator, and formula_12 is the Kronecker delta. We can use these relations, and the above definition of contraction, to express products of formula_8 and formula_7 in other ways. formula_23 Example 1. Note that we have not changed formula_24 but merely re-expressed it in another form as formula_25 formula_26 formula_27 formula_28 formula_29 formula_30 formula_31 Example 3. In the last line we have used different numbers of formula_32 symbols to denote different contractions. By repeatedly applying the commutation relations it takes a lot of work to express formula_33 in the form of a sum of normally ordered products. It is an even lengthier calculation for more complicated products. Luckily Wick's theorem provides a shortcut. Statement of the theorem. A product of creation and annihilation operators formula_34 can be expressed as formula_35 In other words, a string of creation and annihilation operators can be rewritten as the normal-ordered product of the string, plus the normal-ordered product after all single contractions among operator pairs, plus all double contractions, etc., plus all full contractions. Applying the theorem to the above examples provides a much quicker method to arrive at the final expressions. 
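The pairing structure can also be checked directly in a small example: on a truncated Fock space, the vacuum expectation value of a product of two annihilation and two creation operators must equal the sum over its full contractions, a sum of products of Kronecker deltas, because every normal ordered term annihilates the vacuum. A minimal numerical sketch (an illustration, not part of the standard treatment):

```python
# Check of the Wick pairing formula for a vacuum expectation value:
# <0| a_i a_j a_k^dag a_l^dag |0> = delta_{ik} delta_{jl} + delta_{il} delta_{jk},
# since only <a a^dag> contractions are non-zero for bosons.
import numpy as np

N_MODES, N_MAX = 2, 3   # two bosonic modes, Fock space truncated at 3 quanta per mode

a_single = np.diag(np.sqrt(np.arange(1, N_MAX + 1)), k=1)   # one-mode annihilation operator
identity = np.eye(N_MAX + 1)

def mode_operator(op, mode):
    """Embed a single-mode operator into the two-mode tensor-product space."""
    out = np.array([[1.0]])
    for m in range(N_MODES):
        out = np.kron(out, op if m == mode else identity)
    return out

a = [mode_operator(a_single, m) for m in range(N_MODES)]
vacuum = np.zeros((N_MAX + 1) ** N_MODES)
vacuum[0] = 1.0                                              # the state |0, 0>

for i in range(N_MODES):
    for j in range(N_MODES):
        for k in range(N_MODES):
            for l in range(N_MODES):
                op = a[i] @ a[j] @ a[k].T @ a[l].T           # a_i a_j a_k^dag a_l^dag
                lhs = vacuum @ op @ vacuum
                rhs = (i == k) * (j == l) + (i == l) * (j == k)
                assert abs(lhs - rhs) < 1e-12, (i, j, k, l)
print("Wick pairing formula verified for all index choices.")
```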
A warning: In terms on the right hand side containing multiple contractions care must be taken when the operators are fermionic. In this case an appropriate minus sign must be introduced according to the following rule: rearrange the operators (introducing minus signs whenever the order of two fermionic operators is swapped) to ensure the contracted terms are adjacent in the string. The contraction can then be applied (See "Rule C" in Wick's paper). Example: If we have two fermions (formula_36) with creation and annihilation operators formula_37 and formula_38 (formula_39) then formula_40 Note that the term with contractions of the two creation operators and of the two annihilation operators is not included because their contractions vanish. Proof. We use induction to prove the theorem for bosonic creation and annihilation operators. The formula_36 base case is trivial, because there is only one possible contraction: formula_41 In general, the only non-zero contractions are between an annihilation operator on the left and a creation operator on the right. Suppose that Wick's theorem is true for formula_42 operators formula_43, and consider the effect of adding an "N"th operator formula_0 to the left of formula_43 to form formula_44. By Wick's theorem applied to formula_42 operators, we have: formula_45 formula_0 is either a creation operator or an annihilation operator. If formula_0 is a creation operator, all above products, such as formula_46, are already normal ordered and require no further manipulation. Because formula_0 is to the left of all annihilation operators in formula_47, any contraction involving it will be zero. Thus, we can add all contractions involving formula_0 to the sums without changing their value. Therefore, if formula_0 is a creation operator, Wick's theorem holds for formula_47. Now, suppose that formula_48 is an annihilation operator. To move formula_48 from the left-hand side to the right-hand side of all the products, we repeatedly swap formula_0 with the operator immediately right of it (call it formula_49), each time applying formula_50 to account for noncommutativity. Once we do this, all terms will be normal ordered. All terms added to the sums by pushing formula_0 through the products correspond to additional contractions involving formula_0. Therefore, if formula_0 is an annihilation operator, Wick's theorem holds for formula_47. We have proved the base case and the induction step, so the theorem is true. By introducing the appropriate minus signs, the proof can be extended to fermionic creation and annihilation operators. The theorem applied to fields is proved in essentially the same way. Wick's theorem applied to fields. The correlation function that appears in quantum field theory can be expressed by a contraction on the field operators: formula_51 where the operator formula_52 are the amount that do not annihilate the vacuum state formula_53. Which means that formula_54. This means that formula_55 is a contraction over formula_56. Note that the contraction of a time-ordered string of two field operators is a C-number. In the end, we arrive at Wick's theorem: The T-product of a time-ordered free fields string can be expressed in the following manner: formula_57 formula_58 Applying this theorem to S-matrix elements, we discover that normal-ordered terms acting on vacuum state give a null contribution to the sum. We conclude that "m" is even and only completely contracted terms remain. 
formula_59 formula_60 where "p" is the number of interaction fields (or, equivalently, the number of interacting particles) and "n" is the development order (or the number of vertices of interaction). For example, if formula_61 This is analogous to the corresponding Isserlis' theorem in statistics for the moments of a Gaussian distribution. Note that this discussion is in terms of the usual definition of normal ordering which is appropriate for the vacuum expectation values (VEV's) of fields. (Wick's theorem provides a way of expressing VEV's of "n" fields in terms of VEV's of two fields.) There are other possible definitions of normal ordering, and Wick's theorem is valid irrespective of which is used. However, Wick's theorem only simplifies computations if the definition of normal ordering used is changed to match the type of expectation value wanted. That is, we always want the expectation value of the normal ordered product to be zero. For instance, in thermal field theory a different type of expectation value, a thermal trace over the density matrix, requires a different definition of normal ordering. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\hat{A}" }, { "math_id": 1, "text": "\\hat{B}" }, { "math_id": 2, "text": "\\hat{A}^\\bullet\\, \\hat{B}^\\bullet \\equiv \\hat{A}\\,\\hat{B}\\, - \\mathopen{:} \\hat{A}\\,\\hat{B} \\mathclose{:}" }, { "math_id": 3, "text": "\\mathopen{:} \\hat{O} \\mathclose{:}" }, { "math_id": 4, "text": "\\hat{O}" }, { "math_id": 5, "text": "\\overset{\\sqcap}{\\hat{A}\\hat{B}}" }, { "math_id": 6, "text": "N" }, { "math_id": 7, "text": "\\hat{a}_i^\\dagger" }, { "math_id": 8, "text": "\\hat{a}_i" }, { "math_id": 9, "text": "(i=1,2,3,\\ldots,N)" }, { "math_id": 10, "text": "[\\hat{a}_i,\\hat{a}_j^\\dagger]=\\delta_{ij} \\hat{\\mathbf 1}" }, { "math_id": 11, "text": "\\{\\hat{a}_i,\\hat{a}_j^\\dagger\\}=\\delta_{ij} \\hat{\\mathbf 1}" }, { "math_id": 12, "text": "\\delta_{ij}" }, { "math_id": 13, "text": "\\hat{ \\mathbf{1}}" }, { "math_id": 14, "text": "\\hat{a}_i^\\bullet \\,\\hat{a}_j^\\bullet = \\hat{a}_i \\,\\hat{a}_j \\,- \\mathopen{:}\\,\\hat{a}_i\\, \\hat{a}_j\\,\\mathclose{:}\\, = 0" }, { "math_id": 15, "text": "\\hat{a}_i^{\\dagger\\bullet}\\, \\hat{a}_j^{\\dagger\\bullet} = \\hat{a}_i^\\dagger\\, \\hat{a}_j^\\dagger \\,-\\,\\mathopen{:}\\hat{a}_i^\\dagger\\,\\hat{a}_j^\\dagger\\,\\mathclose{:}\\, = 0" }, { "math_id": 16, "text": "\\hat{a}_i^{\\dagger\\bullet}\\, \\hat{a}_j^\\bullet = \\hat{a}_i^\\dagger\\, \\hat{a}_j \\,- \\mathopen{:}\\,\\hat{a}_i^\\dagger \\,\\hat{a}_j\\, \\mathclose{:}\\,= 0" }, { "math_id": 17, "text": "\\hat{a}_i^\\bullet \\,\\hat{a}_j^{\\dagger\\bullet}= \\hat{a}_i\\, \\hat{a}_j^\\dagger \\,- \\mathopen{:}\\,\\hat{a}_i\\,\\hat{a}_j^\\dagger \\,\\mathclose{:}\\, = \\delta_{ij} \\hat{\\mathbf 1}" }, { "math_id": 18, "text": "i,j = 1,\\ldots,N" }, { "math_id": 19, "text": "\\left [\\hat{a}_i^\\dagger, \\hat{a}_j^\\dagger \\right] = 0 " }, { "math_id": 20, "text": "\\left [\\hat{a}_i, \\hat{a}_j \\right] = 0 " }, { "math_id": 21, "text": "\\left [\\hat{a}_i, \\hat{a}_j^\\dagger \\right ] = \\delta_{ij} \\hat{\\mathbf 1} " }, { "math_id": 22, "text": "\\left[ \\hat{A}, \\hat{B} \\right] \\equiv \\hat{A} \\hat{B} - \\hat{B} \\hat{A}" }, { "math_id": 23, "text": "\\hat{a}_i \\,\\hat{a}_j^\\dagger = \\hat{a}_j^\\dagger \\,\\hat{a}_i + \\delta_{ij} = \\hat{a}_j^\\dagger \\,\\hat{a}_i + \\hat{a}_i^\\bullet \\,\\hat{a}_j^{\\dagger\\bullet} =\\,\\mathopen{:}\\,\\hat{a}_i\\, \\hat{a}_j^\\dagger \\,\\mathclose{:} + \\hat{a}_i^\\bullet \\,\\hat{a}_j^{\\dagger\\bullet} " }, { "math_id": 24, "text": "\\hat{a}_i \\,\\hat{a}_j^\\dagger" }, { "math_id": 25, "text": "\\,\\mathopen{:}\\,\\hat{a}_i\\, \\hat{a}_j^\\dagger \\,\\mathclose{:} + \\hat{a}_i^\\bullet \\,\\hat{a}_j^{\\dagger\\bullet} " }, { "math_id": 26, "text": "\\hat{a}_i \\,\\hat{a}_j^\\dagger \\, \\hat{a}_k= (\\hat{a}_j^\\dagger \\,\\hat{a}_i + \\delta_{ij})\\hat{a}_k = \\hat{a}_j^\\dagger \\,\\hat{a}_i\\, \\hat{a}_k + \\delta_{ij}\\hat{a}_k = \\hat{a}_j^\\dagger \\,\\hat{a}_i\\,\\hat{a}_k + \\hat{a}_i^\\bullet \\,\\hat{a}_j^{\\dagger\\bullet} \\hat{a}_k =\\,\\mathopen{:}\\,\\hat{a}_i\\, \\hat{a}_j^\\dagger \\hat{a}_k \\,\\mathclose{:} + \\mathclose{:} \\,\\hat{a}_i^\\bullet \\,\\hat{a}_j^{\\dagger\\bullet} \\,\\hat{a}_k \\mathclose{:} " }, { "math_id": 27, "text": "\\hat{a}_i \\,\\hat{a}_j^\\dagger \\, \\hat{a}_k \\,\\hat{a}_l^\\dagger= (\\hat{a}_j^\\dagger \\,\\hat{a}_i + \\delta_{ij})(\\hat{a}_l^\\dagger\\,\\hat{a}_k + \\delta_{kl})" }, { "math_id": 28, "text": " = \\hat{a}_j^\\dagger \\,\\hat{a}_i\\, \\hat{a}_l^\\dagger\\, \\hat{a}_k + \\delta_{kl}\\hat{a}_j^\\dagger \\,\\hat{a}_i + 
\\delta_{ij}\\hat{a}_l^\\dagger\\hat{a}_k + \\delta_{ij} \\delta_{kl} " }, { "math_id": 29, "text": " = \\hat{a}_j^\\dagger (\\hat{a}_l^\\dagger\\, \\hat{a}_i + \\delta_{il}) \\hat{a}_k + \\delta_{kl}\\hat{a}_j^\\dagger \\,\\hat{a}_i + \\delta_{ij}\\hat{a}_l^\\dagger\\hat{a}_k + \\delta_{ij} \\delta_{kl} " }, { "math_id": 30, "text": "= \\hat{a}_j^\\dagger \\hat{a}_l^\\dagger\\, \\hat{a}_i \\hat{a}_k + \\delta_{il} \\hat{a}_j^\\dagger \\, \\hat{a}_k + \\delta_{kl}\\hat{a}_j^\\dagger \\,\\hat{a}_i + \\delta_{ij}\\hat{a}_l^\\dagger\\hat{a}_k + \\delta_{ij} \\delta_{kl} " }, { "math_id": 31, "text": "= \\,\\mathopen{:}\\hat{a}_i \\,\\hat{a}_j^\\dagger \\, \\hat{a}_k \\,\\hat{a}_l^\\dagger\\,\\mathclose{:} + \\mathopen{:}\\,\\hat{a}_i^\\bullet \\,\\hat{a}_j^\\dagger \\, \\hat{a}_k \\,\\hat{a}_l^{\\dagger\\bullet}\\,\\mathclose{:}+\\mathopen{:}\\,\\hat{a}_i \\,\\hat{a}_j^\\dagger \\, \\hat{a}_k^\\bullet \\,\\hat{a}_l^{\\dagger\\bullet}\\,\\mathclose{:}+\\mathopen{:}\\,\\hat{a}_i^\\bullet \\,\\hat{a}_j^{\\dagger\\bullet} \\, \\hat{a}_k \\,\\hat{a}_l^\\dagger\\,\\mathclose{:}+ \\,\\mathopen{:}\\hat{a}_i^\\bullet \\,\\hat{a}_j^{\\dagger\\bullet} \\, \\hat{a}_k^{\\bullet\\bullet}\\,\\hat{a}_l^{\\dagger\\bullet\\bullet} \\mathclose{:} " }, { "math_id": 32, "text": "^\\bullet" }, { "math_id": 33, "text": "\\hat{a}_i \\,\\hat{a}_j^\\dagger \\, \\hat{a}_k \\,\\hat{a}_l^\\dagger" }, { "math_id": 34, "text": "\\hat{A} \\hat{B} \\hat{C} \\hat{D} \\hat{E} \\hat{F}\\ldots " }, { "math_id": 35, "text": "\n\\begin{align}\n\\hat{A} \\hat{B} \\hat{C} \\hat{D} \\hat{E} \\hat{F}\\ldots &= \\mathopen{:} \\hat{A} \\hat{B} \\hat{C} \\hat{D} \\hat{E} \\hat{F}\\ldots \\mathclose{:} \\\\\n&\\quad + \\sum_\\text{singles} \\mathopen{:} \\hat{A}^\\bullet \\hat{B}^\\bullet \\hat{C} \\hat{D} \\hat{E} \\hat{F} \\ldots \\mathclose{:} \\\\\n&\\quad + \\sum_\\text{doubles} \\mathopen{:} \\hat{A}^\\bullet \\hat{B}^{\\bullet\\bullet} \\hat{C}^{\\bullet\\bullet} \\hat{D}^\\bullet \\hat{E} \\hat{F} \\ldots \\mathclose{:} \\\\\n&\\quad + \\ldots\n\\end{align}\n" }, { "math_id": 36, "text": "N=2" }, { "math_id": 37, "text": "\\hat{f}_i^\\dagger" }, { "math_id": 38, "text": "\\hat{f}_i" }, { "math_id": 39, "text": "i=1,2" }, { "math_id": 40, "text": " \\begin{align}\n\\hat{f}_1 \\,\\hat{f}_2 \\, \\hat{f}_1^\\dagger \\,\\hat{f}_2^\\dagger \\,= {} & \\,\\mathopen{:} \\hat{f}_1 \\,\\hat{f}_2 \\, \\hat{f}_1^\\dagger \\,\\hat{f}_2^\\dagger \\, \\mathclose{:} \\\\[5pt]\n& - \\,\\hat{f}_1^\\bullet \\, \\hat{f}_1^{\\dagger\\bullet} \\, \\,\\mathopen{:} \\hat{f}_2 \\, \\hat{f}_2^\\dagger \\, \\mathclose{:} + \\,\\hat{f}_1^\\bullet \\, \\hat{f}_2^{\\dagger\\bullet} \\, \\,\\mathopen{:} \\hat{f}_2 \\, \\hat{f}_1^\\dagger \\, \\mathclose{:} +\\, \\hat{f}_2^\\bullet \\, \\hat{f}_1^{\\dagger\\bullet} \\, \\,\\mathopen{:} \\hat{f}_1 \\,\\hat{f}_2^\\dagger \\, \\mathclose{:} - \\hat{f}_2^\\bullet \\,\\hat{f}_2^{\\dagger\\bullet} \\, \\,\\mathopen{:} \\hat{f}_1 \\, \\hat{f}_1^\\dagger \\, \\mathclose{:} \\\\[5pt]\n& -\\hat{f}_1^{\\bullet\\bullet} \\, \\hat{f}_1^{\\dagger\\bullet\\bullet} \\, \\hat{f}_2^\\bullet \\, \\hat{f}_2^{\\dagger\\bullet} \\, + \\hat{f}_1^{\\bullet\\bullet} \\,\\hat{f}_2^{\\dagger\\bullet\\bullet}\\, \\hat{f}_2^\\bullet \\, \\hat{f}_1^{\\dagger\\bullet} \\, \\end{align} " }, { "math_id": 41, "text": "\\hat{A}\\hat{B} = \\mathopen{:}\\hat{A}\\hat{B}\\mathclose{:} + (\\hat{A}\\,\\hat{B}\\, - \\mathopen{:} \\hat{A}\\,\\hat{B} \\mathclose{:}) = \\mathopen{:}\\hat{A}\\hat{B}\\mathclose{:} + \\hat{A}^\\bullet\\hat{B}^\\bullet" }, { 
"math_id": 42, "text": "N-1" }, { "math_id": 43, "text": "\\hat{B} \\hat{C} \\hat{D} \\hat{E} \\hat{F}\\ldots" }, { "math_id": 44, "text": "\\hat{A}\\hat{B}\\hat{C}\\hat{D}\\hat{E} \\hat{F}\\ldots" }, { "math_id": 45, "text": "\n\\begin{align}\n\\hat{A} \\hat{B} \\hat{C} \\hat{D} \\hat{E} \\hat{F}\\ldots &= \\hat{A} \\mathopen{:}\\hat{B} \\hat{C} \\hat{D} \\hat{E} \\hat{F}\\ldots \\mathclose{:} \\\\\n&\\quad + \\hat{A} \\sum_\\text{singles} \\mathopen{:} \\hat{B}^\\bullet \\hat{C}^\\bullet \\hat{D} \\hat{E} \\hat{F} \\ldots \\mathclose{:} \\\\\n&\\quad + \\hat{A} \\sum_\\text{doubles} \\mathopen{:} \\hat{B}^\\bullet \\hat{C}^{\\bullet\\bullet} \\hat{D}^{\\bullet\\bullet} \\hat{E}^\\bullet \\hat{F} \\ldots \\mathclose{:} \\\\\n&\\quad + \\hat{A} \\ldots\n\\end{align}\n" }, { "math_id": 46, "text": "\\hat{A}\\mathopen{:}\\hat{B} \\hat{C} \\hat{D} \\hat{E} \\hat{F}\\ldots \\mathclose{:}" }, { "math_id": 47, "text": "\\hat{A}\\hat{B}\\hat{C}\\hat{D}\\hat{E}\\hat{F}\\ldots" }, { "math_id": 48, "text": "\\hat {A}" }, { "math_id": 49, "text": "\\hat{X}" }, { "math_id": 50, "text": "\\hat{A}\\hat{X} = \\mathopen{:}\\hat{A}\\hat{X}\\mathclose{:} + \\hat{A}^\\bullet\\hat{X}^\\bullet" }, { "math_id": 51, "text": "\\mathcal C(x_1, x_2)=\\left \\langle 0 \\mid \\mathcal T\\phi_i(x_1)\\phi_i(x_2)\\mid 0\\right \\rangle= \\langle 0 \\mid \\overline{\\phi_i(x_1)\\phi_i(x_2)}\\mid 0 \\rangle=i\\Delta_F(x_1-x_2)\n=i\\int{\\frac{d^4k}{(2\\pi)^4}\\frac{e^{-ik(x_1-x_2)}}{(k^2-m^2)+i\\epsilon}}," }, { "math_id": 52, "text": "\\overline{\\phi_i(x_1)\\phi_i(x_2)}" }, { "math_id": 53, "text": "|0\\rangle" }, { "math_id": 54, "text": "\\overline{AB}=\\mathcal TAB-\\mathopen{:}\\mathcal TAB\\mathclose{:} " }, { "math_id": 55, "text": "\\overline{AB}" }, { "math_id": 56, "text": "\\mathcal TAB " }, { "math_id": 57, "text": "\\mathcal T\\prod_{k=1}^m\\phi(x_k)=\\mathopen{:}\\mathcal T\\prod\\phi_i(x_k)\\mathclose{:}+\\sum_{\\alpha,\\beta}\\overline{\\phi(x_\\alpha)\\phi(x_\\beta)}\\mathopen{:}\\mathcal T\\prod_{k\\not=\\alpha,\\beta}\\phi_i(x_k)\\mathclose{:}+{}\n" }, { "math_id": 58, "text": "\\mathcal\n{}+\\sum_{(\\alpha,\\beta),(\\gamma,\\delta)}\\overline{\\phi(x_\\alpha)\\phi(x_\\beta)}\\;\\overline{\\phi(x_\\gamma)\\phi(x_\\delta)}\\mathopen{:}\\mathcal T\\prod_{k\\not=\\alpha,\\beta,\\gamma,\\delta}\\phi_i(x_k)\\mathclose{:}+\\cdots.\n" }, { "math_id": 59, "text": "F_m^i(x)=\\left \\langle 0 \\mid \\mathcal T\\phi_i(x_1)\\phi_i(x_2)\\mid 0\\right \\rangle=\\sum_\\mathrm{pairs}\\overline{\\phi(x_1)\\phi(x_2)}\\cdots\n\\overline{\\phi(x_{m-1})\\phi(x_m})" }, { "math_id": 60, "text": "G_p^{(n)}=\\left \\langle 0 \\mid \\mathcal T\\mathopen{:}v_i(y_1)\\mathclose{:}\\dots\\mathopen{:}v_i(y_n)\\mathclose{:}\\phi_i(x_1)\\cdots \\phi_i(x_p)\\mid0\\right \\rangle" }, { "math_id": 61, "text": "v=gy^4 \\Rightarrow \\mathopen{:}v_i(y_1)\\mathclose{:}=\\mathopen{:}\\phi_i(y_1)\\phi_i(y_1)\\phi_i(y_1)\\phi_i(y_1)\\mathclose{:}" } ]
https://en.wikipedia.org/wiki?curid=8187487
81884
Newton's law of cooling
Physical law relating heat loss to temperature difference In the study of heat transfer, Newton's law of cooling is a physical law which states that the rate of heat loss of a body is directly proportional to the difference in the temperatures between the body and its environment. The law is frequently qualified to include the condition that the temperature difference is small and the nature of heat transfer mechanism remains the same. As such, it is equivalent to a statement that the heat transfer coefficient, which mediates between heat losses and temperature differences, is a constant. In heat conduction, Newton's Law is generally followed as a consequence of Fourier's law. The thermal conductivity of most materials is only weakly dependent on temperature, so the constant heat transfer coefficient condition is generally met. In convective heat transfer, Newton's Law is followed for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference. In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences. When stated in terms of temperature differences, Newton's law (with several further simplifying assumptions, such as a low Biot number and a temperature-independent heat capacity) results in a simple differential equation expressing temperature-difference as a function of time. The solution to that equation describes an exponential decrease of temperature-difference over time. This characteristic decay of the temperature-difference is also associated with Newton's law of cooling. Historical background. Isaac Newton published his work on cooling anonymously in 1701 as "Scala graduum Caloris" in "Philosophical Transactions". Newton did not originally state his law in the above form in 1701. Rather, using today's terms, Newton noted after some mathematical manipulation that "the rate of temperature change" of a body is proportional to the difference in temperatures between the body and its surroundings. This final simplest version of the law, given by Newton himself, was partly due to confusion in Newton's time between the concepts of heat and temperature, which would not be fully disentangled until much later. In 2020, Maruyama and Moriya repeated Newton's experiments with modern apparatus, and they applied modern data reduction techniques. In particular, these investigators took account of thermal radiation at high temperatures (as for the molten metals Newton used), and they accounted for buoyancy effects on the air flow. By comparison to Newton's original data, they concluded that his measurements (from 1692 to 1693) had been "quite accurate". Relationship to mechanism of cooling. Convection cooling is sometimes said to be governed by "Newton's law of cooling." When the heat transfer coefficient is independent, or relatively independent, of the temperature difference between object and environment, Newton's law is followed. The law holds well for forced air and pumped liquid cooling, where the fluid velocity does not rise with increasing temperature difference. Newton's law is most closely obeyed in purely conduction-type cooling. However, the heat transfer coefficient is a function of the temperature difference in natural convective (buoyancy driven) heat transfer. 
In that case, Newton's law only approximates the result when the temperature difference is relatively small. Newton himself realized this limitation. A correction to Newton's law concerning convection for larger temperature differentials, made by including an exponent, was published in 1817 by Dulong and Petit. (These men are better-known for their formulation of the Dulong–Petit law concerning the molar specific heat capacity of a crystal.) Another situation that does not obey Newton's law is radiative heat transfer. Radiative cooling is better described by the Stefan–Boltzmann law in which the heat transfer rate varies as the difference in the 4th powers of the absolute temperatures of the object and of its environment. Mathematical formulation of Newton's law. The statement of Newton's law used in the heat transfer literature puts into mathematics the idea that "the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings". For a temperature-independent heat transfer coefficient, the statement is: formula_0 In global parameters, obtained by integrating the heat flux over the heat transfer surface area, the law can also be stated as: formula_6 If the heat transfer coefficient and the temperature difference are uniform along the heat transfer surface, the above formula simplifies to: formula_10. The heat transfer coefficient "h" depends upon physical properties of the fluid and the physical situation in which convection occurs. Therefore, a single usable heat transfer coefficient (one that does not vary significantly across the temperature-difference ranges covered during cooling and heating) must be derived or found experimentally for every system that is to be analyzed. Formulas and correlations are available in many references to calculate heat transfer coefficients for typical configurations and fluids. For laminar flows, the heat transfer coefficient is usually smaller than in turbulent flows because turbulent flows have strong mixing within the boundary layer on the heat transfer surface. Note that the heat transfer coefficient changes in a system when a transition from laminar to turbulent flow occurs. The Biot number. The Biot number, a dimensionless quantity, is defined for a body as formula_11 The physical significance of the Biot number can be understood by imagining the heat flow from a hot metal sphere suddenly immersed in a pool to the surrounding fluid. The heat flow experiences two resistances: the first outside the surface of the sphere, and the second within the solid metal (which is influenced by both the size and composition of the sphere). The ratio of these resistances is the dimensionless Biot number. If the thermal resistance at the fluid/sphere interface exceeds the thermal resistance offered by the interior of the metal sphere, the Biot number will be less than one. For systems where it is much less than one, the interior of the sphere may be presumed always to have the same temperature, although this temperature may be changing, as heat passes into the sphere from the surface. The equation describing this change in (relatively uniform) temperature inside the object is the simple exponential one described in Newton's law of cooling expressed in terms of temperature difference (see below). In contrast, the metal sphere may be large, causing the characteristic length to increase to the point that the Biot number is larger than one. 
In this case, temperature gradients within the sphere become important, even though the sphere material is a good conductor. Equivalently, if the sphere is made of a thermally insulating (poorly conductive) material, such as wood or styrofoam, the interior resistance to heat flow will exceed that at the fluid/sphere boundary, even with a much smaller sphere. In this case, again, the Biot number will be greater than one. Values of the Biot number smaller than 0.1 imply that the heat conduction inside the body is much faster than the heat convection away from its surface, and temperature gradients are negligible inside of it. This can indicate the applicability (or inapplicability) of certain methods of solving transient heat transfer problems. For example, a Biot number less than 0.1 typically indicates less than 5% error will be present when assuming a lumped-capacitance model of transient heat transfer (also called lumped system analysis). Typically, this type of analysis leads to simple exponential heating or cooling behavior ("Newtonian" cooling or heating) since the internal energy of the body is directly proportional to its temperature, which in turn determines the rate of heat transfer into or out of it. This leads to a simple first-order differential equation which describes heat transfer in these systems. Having a Biot number smaller than 0.1 labels a substance as "thermally thin," and temperature can be assumed to be constant throughout the material's volume. The opposite is also true: A Biot number greater than 0.1 (a "thermally thick" substance) indicates that one cannot make this assumption, and more complicated heat transfer equations for "transient heat conduction" will be required to describe the time-varying and non-spatially-uniform temperature field within the material body. Analytic methods for handling these problems, which may exist for simple geometric shapes and uniform material thermal conductivity, are described in the article on the heat equation. Application of Newton's law of transient cooling. Simple solutions for transient cooling of an object may be obtained when the internal thermal resistance within the object is small in comparison to the resistance to heat transfer away from the object's surface (by external conduction or convection), which is the condition for which the Biot number is less than about 0.1. This condition allows the presumption of a single, approximately uniform temperature inside the body, which varies in time but not with position. (Otherwise the body would have many different temperatures inside it at any one time.) This single temperature will generally change exponentially as time progresses (see below). The condition of low Biot number leads to the so-called lumped capacitance model. In this model, the internal energy (the amount of thermal energy in the body) is calculated by assuming a constant heat capacity. In that case, the internal energy of the body is a linear function of the body's single internal temperature. The lumped capacitance solution that follows assumes a constant heat transfer coefficient, as would be the case in forced convection. For free convection, the lumped capacitance model can be solved with a heat transfer coefficient that varies with temperature difference. First-order transient response of lumped-capacitance objects. A body treated as a lumped capacitance object, with a total internal energy of formula_13 (in joules), is characterized by a single uniform internal temperature, formula_14. 
The heat capacitance, formula_15, of the body is formula_16 (in J/K), for the case of an incompressible material. The internal energy may be written in terms of the temperature of the body, the heat capacitance (taken to be independent of temperature), and a reference temperature at which the internal energy is zero: formula_17. Differentiating formula_13 with respect to time gives: formula_18 Applying the first law of thermodynamics to the lumped object gives formula_19, where the rate of heat transfer out of the body, formula_7, may be expressed by Newton's law of cooling, and where no work transfer occurs for an incompressible material. Thus, formula_20 where the time constant of the system is formula_21. The heat capacitance formula_15 may be written in terms of the object's specific heat capacity, formula_22 (J/kg-K), and mass, formula_23 (kg). The time constant is then formula_24. When the environmental temperature is constant in time, we may define formula_5. The equation becomes formula_25 The solution of this differential equation, by integration from the initial condition, is formula_26 where formula_27 is the temperature difference at time 0. Reverting to temperature, the solution is formula_28 The temperature difference between the body and the environment decays exponentially as a function of time. Nondimensionalisation. By non-dimensionalizing, the differential equation becomes formula_29 where formula_30 is the rate of change of the temperature with time and formula_31 is a positive constant characteristic of the system, with units of inverse time (s formula_32). Solving the initial-value problem using separation of variables gives formula_33 References. <templatestyles src="Reflist/styles.css" /> See also:
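As an illustration of the lumped-capacitance result above, the following Python sketch evaluates the Biot-number criterion and the exponential temperature decay for a small metal sphere cooling in moving air. All numerical values (sphere size, material properties, heat transfer coefficient, temperatures) are illustrative assumptions rather than data from the article.

import math

# Illustrative (assumed) values: a small aluminium sphere cooling in forced air.
h = 100.0          # heat transfer coefficient, W/(m^2 K)  (assumed)
k_b = 205.0        # thermal conductivity of aluminium, W/(m K)
rho = 2700.0       # density, kg/m^3
c_p = 900.0        # specific heat capacity, J/(kg K)
r = 0.01           # sphere radius, m

V = 4.0 / 3.0 * math.pi * r**3        # volume
A = 4.0 * math.pi * r**2              # surface area
L_c = V / A                           # characteristic length = V/A = r/3 for a sphere

Bi = h * L_c / k_b                    # Biot number
print(f"Biot number = {Bi:.4f}")      # far below 0.1, so the lumped model is reasonable

# Lumped-capacitance time constant tau = m c / (h A)
m = rho * V
tau = m * c_p / (h * A)
print(f"time constant tau = {tau:.1f} s")

# Exponential decay of the temperature difference
T_env, T0 = 25.0, 100.0               # degrees C (assumed)
for t in (0, 30, 60, 120, 300):
    T = T_env + (T0 - T_env) * math.exp(-t / tau)
    print(f"t = {t:4d} s   T = {T:6.2f} C")

With these assumed values the Biot number is well below 0.1, so treating the sphere as having a single uniform temperature is justified, and the temperature difference to the surroundings falls by a factor of e roughly every 81 seconds.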
[ { "math_id": 0, "text": " q= h \\left(T(t) - T_\\text{env}\\right) = h \\, \\Delta T(t)," }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "h" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "T_\\text{env}" }, { "math_id": 5, "text": "\\Delta T(t) = T(t) - T_\\text{env}" }, { "math_id": 6, "text": "\\dot{Q} =\\oint_A h \\left(T(t) - T_\\text{env}\\right) dA = \\oint_A h \\, \\Delta T(t) dA," }, { "math_id": 7, "text": "\\dot{Q}" }, { "math_id": 8, "text": "\\dot{Q} = \\oint_A q d A" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "\\dot{Q} = h A \\left(T(t) - T_\\text{env}\\right) = h A \\, \\Delta T(t) " }, { "math_id": 11, "text": "\\text{Bi} = \\frac{h L_{\\rm C}}{k_{\\rm b}}," }, { "math_id": 12, "text": "L_{\\rm C} = V_\\text{body} / A_\\text{surface}" }, { "math_id": 13, "text": "U" }, { "math_id": 14, "text": "T(t)" }, { "math_id": 15, "text": "C" }, { "math_id": 16, "text": "C = dU/dT" }, { "math_id": 17, "text": "U = C (T - T_\\text{ref})" }, { "math_id": 18, "text": "\\frac{dU}{dt} = C \\, \\frac{dT}{dt}." }, { "math_id": 19, "text": "\\frac{dU}{dt} = -\\dot{Q}" }, { "math_id": 20, "text": "\\frac{dT(t)}{dt} = -\\frac{hA}{C} (T(t) - T_\\text{env}) = -\\frac{1}{\\tau}\\ \\Delta T(t)," }, { "math_id": 21, "text": "\\tau = C / (hA)" }, { "math_id": 22, "text": "c" }, { "math_id": 23, "text": "m" }, { "math_id": 24, "text": "\\tau = mc / (hA)" }, { "math_id": 25, "text": "\\frac{dT(t)}{dt} = \\frac{d\\Delta T(t)}{dt} = -\\frac{1}{\\tau} \\Delta T(t)." }, { "math_id": 26, "text": "\\Delta T(t) = \\Delta T(0) \\, e^{-t / \\tau}." }, { "math_id": 27, "text": "\\Delta T(0)" }, { "math_id": 28, "text": "T(t) = T_\\text{env} + (T(0) - T_\\text{env}) \\, e^{-t/\\tau}." }, { "math_id": 29, "text": "\\dot{T} = r \\left(T_\\text{env} - T(t)\\right)," }, { "math_id": 30, "text": "\\dot{T}" }, { "math_id": 31, "text": "r" }, { "math_id": 32, "text": "^{-1}" }, { "math_id": 33, "text": " T(t) = T_\\text{env} + (T(0)-T_\\text{env})e^{-rt}. " } ]
https://en.wikipedia.org/wiki?curid=81884
81887
Proxima Centauri
Nearest star to the Solar System where formula_0 is the average solar density. See: A 1998 study of photometric variations indicates that Proxima Centauri completes a full rotation once every 83.5 days. A subsequent time series analysis of chromospheric indicators in 2002 suggests a longer rotation period of  days. Later observations of the star's magnetic field subsequently revealed that the star rotates with a period of  days, consistent with a measurement of  days from radial velocity observations. Structure and fusion. Because of its low mass, the interior of the star is completely convective, causing energy to be transferred to the exterior by the physical movement of plasma rather than through radiative processes. This convection means that the helium ash left over from the thermonuclear fusion of hydrogen does not accumulate at the core but is instead circulated throughout the star. Unlike the Sun, which will only burn through about 10% of its total hydrogen supply before leaving the main sequence, Proxima Centauri will consume nearly all of its fuel before the fusion of hydrogen comes to an end. Convection is associated with the generation and persistence of a magnetic field. The magnetic energy from this field is released at the surface through stellar flares that briefly (as short as per ten seconds) increase the overall luminosity of the star. On May 6, 2019, a flare event bordering Solar M and X flare class, briefly became the brightest ever detected, with a far ultraviolet emission of . These flares can grow as large as the star and reach temperatures measured as high as 27 million K—hot enough to radiate X-rays. Proxima Centauri's quiescent X-ray luminosity, approximately (4–16) × 1026 erg/s ((4–16) × 1019 W), is roughly equal to that of the much larger Sun. The peak X-ray luminosity of the largest flares can reach 1028 erg/s (1021 W). Proxima Centauri's chromosphere is active, and its spectrum displays a strong emission line of singly ionized magnesium at a wavelength of 280 nm. About 88% of the surface of Proxima Centauri may be active, a percentage that is much higher than that of the Sun even at the peak of the solar cycle. Even during quiescent periods with few or no flares, this activity increases the corona temperature of Proxima Centauri to 3.5 million K, compared to the 2 million K of the Sun's corona, and its total X-ray emission is comparable to the sun's. Proxima Centauri's overall activity level is considered low compared to other red dwarfs, which is consistent with the star's estimated age of 4.85 × 109 years, since the activity level of a red dwarf is expected to steadily wane over billions of years as its stellar rotation rate decreases. The activity level appears to vary with a period of roughly 442 days, which is shorter than the solar cycle of 11 years. Proxima Centauri has a relatively weak stellar wind, no more than 20% of the mass loss rate of the solar wind. Because the star is much smaller than the Sun, the mass loss per unit surface area from Proxima Centauri may be eight times that from the solar surface. Life phases. A red dwarf with the mass of Proxima Centauri will remain on the main sequence for about four trillion years. As the proportion of helium increases because of hydrogen fusion, the star will become smaller and hotter, gradually transforming into a so-called "blue dwarf". 
Near the end of this period it will become significantly more luminous, reaching 2.5% of the Sun's luminosity (L☉) and warming up any orbiting bodies for a period of several billion years. When the hydrogen fuel is exhausted, Proxima Centauri will then evolve into a helium white dwarf (without passing through the red giant phase) and steadily lose any remaining heat energy. The Alpha Centauri system may have formed through a low-mass star being dynamically captured by a more massive binary of 1.5–2 M☉ within their embedded star cluster before the cluster disperses. However, more accurate measurements of the radial velocity are needed to confirm this hypothesis. If Proxima Centauri was bound to the Alpha Centauri system during its formation, the stars are likely to share the same elemental composition. The gravitational influence of Proxima might have stirred up the Alpha Centauri protoplanetary disks. This would have increased the delivery of volatiles such as water to the dry inner regions, so possibly enriching any terrestrial planets in the system with this material. Alternatively, Proxima Centauri may have been captured at a later date during an encounter, resulting in a highly eccentric orbit that was then stabilized by the galactic tide and additional stellar encounters. Such a scenario may mean that Proxima Centauri's planetary companions have had a much lower chance for orbital disruption by Alpha Centauri. As the members of the Alpha Centauri pair continue to evolve and lose mass, Proxima Centauri is predicted to become unbound from the system in around 3.5 billion years from the present. Thereafter, the star will steadily diverge from the pair. Motion and location. Based on a parallax of , published in 2020 in Gaia Data Release 3, Proxima Centauri is from the Sun. Previously published parallaxes include: in 2018 by Gaia DR2, , in 2014 by the Research Consortium On Nearby Stars; , in the original Hipparcos Catalogue, in 1997; in the Hipparcos New Reduction, in 2007; and using the Hubble Space Telescope's fine guidance sensors, in 1999. From Earth's vantage point, Proxima Centauri is separated from Alpha Centauri by 2.18 degrees, or four times the angular diameter of the full Moon. Proxima Centauri has a relatively large proper motion—moving 3.85 arcseconds per year across the sky. It has a radial velocity toward the Sun of 22.2 km/s. From Proxima Centauri, the Sun would appear as a bright 0.4-magnitude star in the constellation Cassiopeia, similar to that of Achernar or Procyon from Earth. Among the known stars, Proxima Centauri has been the closest star to the Sun for about 32,000 years and will be so for about another 25,000 years, after which Alpha Centauri A and Alpha Centauri B will alternate approximately every 79.91 years as the closest star to the Sun. In 2001, J. García-Sánchez "et al." predicted that Proxima Centauri will make its closest approach to the Sun in approximately 26,700 years, coming within . A 2010 study by V. V. Bobylev predicted a closest approach distance of in about 27,400 years, followed by a 2014 study by C. A. L. Bailer-Jones predicting a perihelion approach of in roughly 26,710 years. Proxima Centauri is orbiting through the Milky Way at a distance from the Galactic Centre that varies from , with an orbital eccentricity of 0.07. Alpha Centauri. Proxima Centauri has been suspected to be a companion of the Alpha Centauri binary star system since its discovery in 1915. For this reason, it is sometimes referred to as Alpha Centauri C. 
Data from the Hipparcos satellite, combined with ground-based observations, were consistent with the hypothesis that the three stars are a gravitationally bound system. Kervella et al. (2017) used high-precision radial velocity measurements to determine with a high degree of confidence that Proxima and Alpha Centauri are gravitationally bound. Proxima Centauri's orbital period around the Alpha Centauri AB barycenter is years with an eccentricity of ; it approaches Alpha Centauri to at periastron and retreats to at apastron. At present, Proxima Centauri is from the Alpha Centauri AB barycenter, nearly to the farthest point in its orbit. Six single stars, two binary star systems, and a triple star share a common motion through space with Proxima Centauri and the Alpha Centauri system. (The co-moving stars include HD 4391, γ2 Normae, and Gliese 676.) The space velocities of these stars are all within 10 km/s of Alpha Centauri's peculiar motion. Thus, they may form a moving group of stars, which would indicate a common point of origin, such as in a star cluster. Planetary system. The system's known and candidate companions are summarized in a table whose columns give each body's mass, semimajor axis, orbital period, eccentricity, inclination, and radius. As of 2022, three planets (two confirmed and one candidate) have been detected in orbit around Proxima Centauri, with one being among the lightest ever detected by radial velocity ("d"), one close to Earth's size within the habitable zone ("b"), and a possible gas dwarf that orbits much farther out than the inner two ("c"), although its status remains disputed. Searches for exoplanets around Proxima Centauri date back to the late 1970s. In the 1990s, multiple measurements of Proxima Centauri's radial velocity constrained the maximum mass that a detectable companion could possess. The activity level of the star adds noise to the radial velocity measurements, complicating detection of a companion using this method. In 1998, an examination of Proxima Centauri using the Faint Object Spectrograph on board the Hubble Space Telescope appeared to show evidence of a companion orbiting at a distance of about 0.5 AU. A subsequent search using the Wide Field and Planetary Camera 2 failed to locate any companions. Astrometric measurements at the Cerro Tololo Inter-American Observatory appear to rule out a Jupiter-sized planet with an orbital period of 2−12 years. In 2017, a team of astronomers using the Atacama Large Millimeter Array reported detecting a belt of cold dust orbiting Proxima Centauri at a range of 1−4 AU from the star. This dust has a temperature of around 40 K and has a total estimated mass of 1% of the planet Earth. They tentatively detected two additional features: a cold belt with a temperature of 10 K orbiting around 30 AU and a compact emission source about 1.2 arcseconds from the star. There was a hint at an additional warm dust belt at a distance of 0.4 AU from the star. However, upon further analysis, these emissions were determined to be most likely the result of a large flare emitted by the star in March 2017. The presence of dust within 4 AU radius from the star is not needed to model the observations. Planet b. Proxima Centauri b, or Alpha Centauri Cb, orbits the star at a distance of roughly with an orbital period of approximately 11.2 Earth days. Its estimated mass is at least 1.17 times that of the Earth.
Moreover, the equilibrium temperature of Proxima Centauri b is estimated to be within the range where water could exist as liquid on its surface; thus, placing it within the habitable zone of Proxima Centauri. The first indications of the exoplanet Proxima Centauri b were found in 2013 by Mikko Tuomi of the University of Hertfordshire from archival observation data. To confirm the possible discovery, a team of astronomers launched the Pale Red Dot project in January 2016. On August 24, 2016, the team of 31 scientists from all around the world, led by Guillem Anglada-Escudé of Queen Mary University of London, confirmed the existence of Proxima Centauri b through a peer-reviewed article published in "Nature". The measurements were performed using two spectrographs: HARPS on the ESO 3.6 m Telescope at La Silla Observatory and UVES on the 8 m Very Large Telescope at Paranal Observatory. Several attempts to detect a transit of this planet across the face of Proxima Centauri have been made. A transit-like signal appearing on September 8, 2016, was tentatively identified, using the Bright Star Survey Telescope at the Zhongshan Station in Antarctica. In 2016, in a paper that helped to confirm Proxima Centauri b's existence, a second signal in the range of 60 to 500 days was detected. However, stellar activity and inadequate sampling causes its nature to remain unclear. Planet c. Proxima Centauri c is a candidate super-Earth or gas dwarf about 7 Earth masses orbiting at roughly every . If Proxima Centauri b were the star's Earth, Proxima Centauri c would be equivalent to Neptune. Due to its large distance from Proxima Centauri, it is unlikely to be habitable, with a low equilibrium temperature of around 39 K. The planet was first reported by Italian astrophysicist Mario Damasso and his colleagues in April 2019. Damasso's team had noticed minor movements of Proxima Centauri in the radial velocity data from the ESO's HARPS instrument, indicating a possible additional planet orbiting Proxima Centauri. In 2020, the planet's existence was confirmed by Hubble astrometry data from c. 1995. A possible direct imaging counterpart was detected in the infrared with the SPHERE, but the authors admit that they "did not obtain a clear detection." If their candidate source is in fact Proxima Centauri c, it is too bright for a planet of its mass and age, implying that the planet may have a ring system with a radius of around 5 RJ. A 2022 study disputed the radial velocity confirmation of the planet. Planet d. In 2019, a team of astronomers revisited the data from ESPRESSO about Proxima Centauri b to refine its mass. While doing so, the team found another radial velocity spike with a periodicity of 5.15 days. They estimated that if it were a planetary companion, it would be no less than 0.29 Earth masses. Further analysis confirmed the signal's existence leading up the discovery's announcement in February 2022. Habitability. <templatestyles src="Stack/styles.css"/> Prior to the discovery of Proxima Centauri b, the TV documentary "" hypothesized that a life-sustaining planet could exist in orbit around Proxima Centauri or other red dwarfs. Such a planet would lie within the habitable zone of Proxima Centauri, about from the star, and would have an orbital period of 3.6–14 days. A planet orbiting within this zone may experience tidal locking to the star. 
If the orbital eccentricity of this hypothetical planet were low, Proxima Centauri would move little in the planet's sky, and most of the surface would experience either day or night perpetually. The presence of an atmosphere could serve to redistribute heat from the star-lit side to the far side of the planet. Proxima Centauri's flare outbursts could erode the atmosphere of any planet in its habitable zone, but the documentary's scientists thought that this obstacle could be overcome. Gibor Basri of the University of California, Berkeley argued: "No one [has] found any showstoppers to habitability." For example, one concern was that the torrents of charged particles from the star's flares could strip the atmosphere off any nearby planet. If the planet had a strong magnetic field, the field would deflect the particles from the atmosphere; even the slow rotation of a tidally locked planet that spins once for every time it orbits its star would be enough to generate a magnetic field, as long as part of the planet's interior remained molten. Other scientists, especially proponents of the rare-Earth hypothesis, disagree that red dwarfs can sustain life. Any exoplanet in this star's habitable zone would likely be tidally locked, resulting in a relatively weak planetary magnetic moment, leading to strong atmospheric erosion by coronal mass ejections from Proxima Centauri. In December 2020, a candidate SETI radio signal BLC-1 was announced as potentially coming from the star. The signal was later determined to be human-made radio interference. Observational history. In 1915, the Scottish astronomer Robert Innes, director of the Union Observatory in Johannesburg, South Africa, discovered a star that had the same proper motion as Alpha Centauri. He suggested that it be named "Proxima Centauri" (actually "Proxima Centaurus"). In 1917, at the Royal Observatory at the Cape of Good Hope, the Dutch astronomer Joan Voûte measured the star's trigonometric parallax at and determined that Proxima Centauri was approximately the same distance from the Sun as Alpha Centauri. It was the lowest-luminosity star known at the time. An equally accurate parallax determination of Proxima Centauri was made by American astronomer Harold L. Alden in 1928, who confirmed Innes's view that it is closer, with a parallax of . A size estimate for Proxima Centauri was obtained by the Canadian astronomer John Stanley Plaskett in 1925 using interferometry. The result was 207,000 miles (333,000 km), or approximately 0.24 R☉. In 1951, American astronomer Harlow Shapley announced that Proxima Centauri is a flare star. Examination of past photographic records showed that the star displayed a measurable increase in magnitude on about 8% of the images, making it the most active flare star then known. The proximity of the star allows for detailed observation of its flare activity. In 1980, the Einstein Observatory produced a detailed X-ray energy curve of a stellar flare on Proxima Centauri. Further observations of flare activity were made with the EXOSAT and ROSAT satellites, and the X-ray emissions of smaller, solar-like flares were observed by the Japanese ASCA satellite in 1995. Proxima Centauri has since been the subject of study by most X-ray observatories, including XMM-Newton and Chandra. Because of Proxima Centauri's southern declination, it can only be viewed south of latitude 27° N. Red dwarfs such as Proxima Centauri are too faint to be seen with the naked eye. 
Even from Alpha Centauri A or B, Proxima would only be seen as a fifth magnitude star. It has apparent visual magnitude 11, so a telescope with an aperture of at least is needed to observe it, even under ideal viewing conditions—under clear, dark skies with Proxima Centauri well above the horizon. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name "Proxima Centauri" for this star on August 21, 2016, and it is now so included in the List of IAU approved Star Names. In 2016, a superflare was observed from Proxima Centauri, the strongest flare ever seen. The optical brightness increased by a factor of 68× to approximately magnitude 6.8. It is estimated that similar flares occur around five times every year but are of such short duration, just a few minutes, that they have never been observed before. On 2020 April 22 and 23, the "New Horizons" spacecraft took images of two of the nearest stars, Proxima Centauri and Wolf 359. When compared with Earth-based images, a very large parallax effect was easily visible. However, this was only used for illustrative purposes and did not improve on previous distance measurements. Future exploration. Because of the star's proximity to Earth, Proxima Centauri has been proposed as a flyby destination for interstellar travel. If non-nuclear, conventional propulsion technologies are used, the flight of a spacecraft to Proxima Centauri and its planets would probably require thousands of years. For example, "Voyager 1", which is now travelling relative to the Sun, would reach Proxima Centauri in 73,775 years, were the spacecraft travelling in the direction of that star and Proxima was standing still. Proxima's actual galactic orbit means a slow-moving probe would have only several tens of thousands of years to catch the star at its closest approach, before it recedes out of reach. Nuclear pulse propulsion might enable such interstellar travel with a trip timescale of a century, inspiring several studies such as Project Orion, Project Daedalus, and Project Longshot. Project Breakthrough Starshot aims to reach the Alpha Centauri system within the first half of the 21st century, with microprobes travelling at 20% of the speed of light propelled by around 100 gigawatts of Earth-based lasers. The probes would perform a fly-by of Proxima Centauri about 20 years after its launch, or possibly go into orbit after about 140 years if swing-by's around Proxima Centauri or Alpha Centauri are to be employed. Then the probes would take photos and collect data of the planets of the stars, and their atmospheric compositions. It would take 4.25 years for the information collected to be sent back to Earth. Explanatory notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Div col/styles.css"/> External links. <indicator name="01-sky-coordinates"><templatestyles src="Template:Sky/styles.css" />Coordinates: &show_grid=1&show_constellation_lines=1&show_constellation_boundaries=1&show_const_names=1&show_galaxies=1&img_source=IMG_all 14h 29m 42.9487s, −62° 40′ 46.141″</indicator>
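As a rough arithmetic check on the travel times quoted in the Future exploration section above, the short Python sketch below converts the star's distance of about 4.25 light-years into trip durations at two speeds. The Voyager 1 speed of roughly 17 km/s is an assumed approximation, since the exact figure is not given in the text above.

LY_KM = 9.4607e12            # kilometres in one light-year
SECONDS_PER_YEAR = 3.156e7

d_ly = 4.25                  # distance to Proxima Centauri in light-years (from the article)
d_km = d_ly * LY_KM

v_voyager = 17.0             # km/s; approximate heliocentric speed of Voyager 1 (assumed)
t_voyager = d_km / v_voyager / SECONDS_PER_YEAR
print(f"probe at Voyager 1 speed  : ~{t_voyager:,.0f} years")   # close to the 73,775-year figure

t_starshot = d_ly / 0.2      # years of travel at 20% of light speed, ignoring acceleration
print(f"probe at 0.2 c            : ~{t_starshot:.1f} years")
print(f"radio signal back to Earth: ~{d_ly:.2f} years")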
[ { "math_id": 0, "text": "\\begin{smallmatrix}\\rho_{\\odot}\\end{smallmatrix}" } ]
https://en.wikipedia.org/wiki?curid=81887
8190304
Spalart–Allmaras turbulence model
In physics, the Spalart–Allmaras model is a one-equation model that solves a modelled transport equation for the kinematic eddy turbulent viscosity. The Spalart–Allmaras model was designed specifically for aerospace applications involving wall-bounded flows and has been shown to give good results for boundary layers subjected to adverse pressure gradients. It is also gaining popularity in turbomachinery applications. In its original form, the model is effectively a low-Reynolds number model, requiring the viscosity-affected region of the boundary layer to be properly resolved ( y+ ~1 meshes). The Spalart–Allmaras model was developed for aerodynamic flows. It is not calibrated for general industrial flows, and does produce relatively larger errors for some free shear flows, especially plane and round jet flows. In addition, it cannot be relied on to predict the decay of homogeneous, isotropic turbulence. It solves a transport equation for a viscosity-like variable formula_0. This may be referred to as the "Spalart–Allmaras variable". Original model. The turbulent eddy viscosity is given by formula_1 formula_2 formula_3 formula_4 formula_5 formula_6 formula_7 The rotation tensor is given by formula_8 where d is the distance from the closest surface and formula_9 is the norm of the difference between the velocity at the trip (usually zero) and that at the field point we are considering. The constants are formula_10 Modifications to original model. According to Spalart it is safer to use the following values for the last two constants: formula_11 Other models related to the S-A model: DES (1999) DDES (2006) Model for compressible flows. There are several approaches to adapting the model for compressible flows. In all cases, the turbulent dynamic viscosity is computed from formula_12 where formula_13 is the local density. The first approach applies the original equation for formula_0. In the second approach, the convective terms in the equation for formula_0 are modified to formula_14 where the right hand side (RHS) is the same as in the original model. The third approach involves inserting the density inside some of the derivatives on the LHS and RHS. The second and third approaches are not recommended by the original authors, but they are found in several solvers. Boundary conditions. Walls: formula_15 Freestream: Ideally formula_15, but some solvers can have problems with a zero value, in which case formula_16 can be used. This is if the trip term is used to "start up" the model. A convenient option is to set formula_17 in the freestream. The model then provides "Fully Turbulent" behavior, i.e., it becomes turbulent in any region that contains shear. Outlet: convective outlet.
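The algebraic closure that converts the transported variable formula_0 into the eddy viscosity can be evaluated directly from the relations above. The Python sketch below uses the model constant listed in the article; the sample viscosity values are illustrative assumptions only.

# Model constant of the original formulation (listed above).
C_V1 = 7.1

def eddy_viscosity(nu_tilde: float, nu: float) -> float:
    # nu_t = nu_tilde * f_v1, with f_v1 = chi^3 / (chi^3 + C_v1^3) and chi = nu_tilde / nu.
    chi = nu_tilde / nu
    f_v1 = chi ** 3 / (chi ** 3 + C_V1 ** 3)
    return nu_tilde * f_v1

# Illustrative values only: an air-like molecular viscosity and two levels of nu_tilde.
nu = 1.5e-5                                   # m^2/s
print(f"{eddy_viscosity(5 * nu, nu):.3e}")    # freestream setting nu_tilde = 5*nu  -> ~1.9e-05
print(f"{eddy_viscosity(200 * nu, nu):.3e}")  # well inside a turbulent layer       -> ~3.0e-03

With these assumed values, the damping function f_v1 is only about 0.26 at the recommended freestream setting of five times the molecular viscosity, while deep inside a turbulent layer, where the ratio is large, f_v1 approaches one and the eddy viscosity approaches the transported variable itself.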
[ { "math_id": 0, "text": "\\tilde{\\nu}" }, { "math_id": 1, "text": "\n\\nu_t = \\tilde{\\nu} f_{v1}, \\quad f_{v1} = \\frac{\\chi^3}{\\chi^3 + C^3_{v1}}, \\quad \\chi := \\frac{\\tilde{\\nu}}{\\nu}\n" }, { "math_id": 2, "text": "\n\\frac{\\partial \\tilde{\\nu}}{\\partial t} + u_j \\frac{\\partial \\tilde{\\nu}}{\\partial x_j} = C_{b1} [1 - f_{t2}] \\tilde{S} \\tilde{\\nu} + \\frac{1}{\\sigma} \\{ \\nabla \\cdot [(\\nu + \\tilde{\\nu}) \\nabla \\tilde{\\nu}] + C_{b2} | \\nabla \\tilde{\\nu} |^2 \\} - \\left[C_{w1} f_w - \\frac{C_{b1}}{\\kappa^2} f_{t2}\\right] \\left( \\frac{\\tilde{\\nu}}{d} \\right)^2 + f_{t1} \\Delta U^2\n" }, { "math_id": 3, "text": "\n\\tilde{S} \\equiv S + \\frac{ \\tilde{\\nu} }{ \\kappa^2 d^2 } f_{v2}, \\quad f_{v2} = 1 - \\frac{\\chi}{1 + \\chi f_{v1}}\n" }, { "math_id": 4, "text": "\nf_w = g \\left[ \\frac{ 1 + C_{w3}^6 }{ g^6 + C_{w3}^6 } \\right]^{1/6}, \\quad g = r + C_{w2}(r^6 - r), \\quad r \\equiv \\frac{\\tilde{\\nu} }{ \\tilde{S} \\kappa^2 d^2 }\n" }, { "math_id": 5, "text": "\nf_{t1} = C_{t1} g_t \\exp\\left( -C_{t2} \\frac{\\omega_t^2}{\\Delta U^2} [ d^2 + g^2_t d^2_t] \\right)\n" }, { "math_id": 6, "text": "\nf_{t2} = C_{t3} \\exp\\left(-C_{t4} \\chi^2 \\right)\n" }, { "math_id": 7, "text": "\nS = \\sqrt{2 \\Omega_{ij} \\Omega_{ij}}\n" }, { "math_id": 8, "text": "\n\\Omega_{ij} = \\frac{1}{2} ( \\partial u_i / \\partial x_j - \\partial u_j / \\partial x_i )\n" }, { "math_id": 9, "text": "\\Delta U^2" }, { "math_id": 10, "text": "\n\\begin{matrix}\n\\sigma &=& 2/3\\\\\nC_{b1} &=& 0.1355\\\\\nC_{b2} &=& 0.622\\\\\n\\kappa &=& 0.41\\\\\nC_{w1} &=& C_{b1}/\\kappa^2 + (1 + C_{b2})/\\sigma \\\\\nC_{w2} &=& 0.3 \\\\\nC_{w3} &=& 2 \\\\\nC_{v1} &=& 7.1 \\\\\nC_{t1} &=& 1 \\\\\nC_{t2} &=& 2 \\\\\nC_{t3} &=& 1.1 \\\\\nC_{t4} &=& 2\n\\end{matrix}\n" }, { "math_id": 11, "text": "\n\\begin{matrix}\nC_{t3} &=& 1.2 \\\\\nC_{t4} &=& 0.5\n\\end{matrix}\n" }, { "math_id": 12, "text": "\n\\mu_t = \\rho \\tilde{\\nu} f_{v1}\n" }, { "math_id": 13, "text": "\\rho" }, { "math_id": 14, "text": "\n\\frac{\\partial \\tilde{\\nu}}{\\partial t} + \\frac{\\partial}{\\partial x_j} (\\tilde{\\nu} u_j)= \\mbox{RHS}\n" }, { "math_id": 15, "text": "\\tilde{\\nu}=0" }, { "math_id": 16, "text": "\\tilde{\\nu} \\leq \\frac{\\nu}{2}" }, { "math_id": 17, "text": "\\tilde{\\nu}=5{\\nu}" } ]
https://en.wikipedia.org/wiki?curid=8190304
8194180
B (disambiguation)
B is the second letter of the Latin alphabet. B may also refer to: <templatestyles src="Template:TOC_right/styles.css" /> See also. Topics referred to by the same term <templatestyles src="Dmbox/styles.css" /> This page lists articles associated with the title B.
[ { "math_id": 0, "text": "\\mathbb{B}" } ]
https://en.wikipedia.org/wiki?curid=8194180
819467
Fisher's exact test
Statistical significance test Fisher's exact test is a statistical significance test used in the analysis of contingency tables. Although in practice it is employed when sample sizes are small, it is valid for all sample sizes. It is named after its inventor, Ronald Fisher, and is one of a class of exact tests, so called because the significance of the deviation from a null hypothesis (e.g., "p"-value) can be calculated exactly, rather than relying on an approximation that becomes exact in the limit as the sample size grows to infinity, as with many statistical tests. Fisher is said to have devised the test following a comment from Muriel Bristol, who claimed to be able to detect whether the tea or the milk was added first to her cup. He tested her claim in the "lady tasting tea" experiment. Purpose and scope. The test is useful for categorical data that result from classifying objects in two different ways; it is used to examine the significance of the association (contingency) between the two kinds of classification. So in Fisher's original example, one criterion of classification could be whether milk or tea was put in the cup first; the other could be whether Bristol thinks that the milk or tea was put in first. We want to know whether these two classifications are associated—that is, whether Bristol really can tell whether milk or tea was poured in first. Most uses of the Fisher test involve, like this example, a 2 × 2 contingency table (discussed below). The "p"-value from the test is computed as if the margins of the table are fixed, i.e. as if, in the tea-tasting example, Bristol knows the number of cups with each treatment (milk or tea first) and will therefore provide guesses with the correct number in each category. As pointed out by Fisher, this leads under a null hypothesis of independence to a hypergeometric distribution of the numbers in the cells of the table. With large samples, a chi-squared test (or better yet, a G-test) can be used in this situation. However, the significance value it provides is only an approximation, because the sampling distribution of the test statistic that is calculated is only approximately equal to the theoretical chi-squared distribution. The approximation is poor when sample sizes are small, or the data are very unequally distributed among the cells of the table, resulting in the cell counts predicted on the null hypothesis (the "expected values") being low. The usual rule for deciding whether the chi-squared approximation is good enough is that the chi-squared test is not suitable when the expected values in any of the cells of a contingency table are below 5, or below 10 when there is only one degree of freedom (this rule is now known to be overly conservative). In fact, for small, sparse, or unbalanced data, the exact and asymptotic "p"-values can be quite different and may lead to opposite conclusions concerning the hypothesis of interest. In contrast the Fisher exact test is, as its name states, exact as long as the experimental procedure keeps the row and column totals fixed, and it can therefore be used regardless of the sample characteristics. It becomes difficult to calculate with large samples or well-balanced tables, but fortunately these are exactly the conditions where the chi-squared test is appropriate. For hand calculations, the test is feasible only in the case of a 2 × 2 contingency table. 
However the principle of the test can be extended to the general case of an "m" × "n" table, and some statistical packages provide a calculation (sometimes using a Monte Carlo method to obtain an approximation) for the more general case. The test can also be used to quantify the "overlap" between two sets. For example, in enrichment analyses in statistical genetics one set of genes may be annotated for a given phenotype and the user may be interested in testing the overlap of their own set with those. In this case a 2 × 2 contingency table may be generated and Fisher's exact test applied through identifying The test assumes genes in either list are taken from a broader set of genes (e.g. all remaining genes). A "p"-value may then be calculated, summarizing the significance of the overlap between the two lists. Derivation. <templatestyles src="Math_proof/styles.css" />Derivation We set up the following probability model underlying Fisher’s exact test. Suppose we have formula_0 blue balls, and formula_1 red balls. We throw them together into a black box, shake well, then remove them one by one until we have pulled out exactly formula_2 balls. We call these balls “class I” and the formula_3 remaining balls “class II”. The question is to calculate the probability that exactly formula_4 blue balls are in class I. Every other entry in the table is fixed once we fill in one entry of the table. Suppose we pretend that every ball is labelled, and before we start pulling out the balls, we permutate them uniformly randomly, then pull out the first formula_2 balls. This gives us formula_5 possibilities. Of these possibilities, we condition on the case where the first formula_2 balls contain exactly formula_4 blue balls. To count these possibilities, we do the following: first select uniformly at random a subset of size formula_4 among the formula_2 class-I balls with formula_6 possibilities, then select uniformly at random a subset of size formula_7 among the formula_3 class-II balls with formula_8 possibilities. The two selected sets would be filled with blue balls. The rest would be filled with red balls. Once we have selected the sets, we can populate them with an arbitrary ordering of the formula_0 blue balls. This gives us formula_9 possibilities. Same for the red balls, with formula_10 possibilities. In full, we have formula_11 possibilities. Thus the probability of this event is formula_12 Another derivation: <templatestyles src="Math_proof/styles.css" />Derivation Suppose each blue ball and red ball has an equal and independent probability formula_13 of being in class I, and formula_14 of being in class II. Then the number of class-I blue balls is binomially distributed. The probability there are exactly formula_4 of them is formula_15, and the probability there are exactly formula_16 of red class I balls is formula_17. The probability that there are precisely formula_2 of class I balls, regardless of number of red or blue balls in it, is formula_18. Thus, conditional on having formula_2 class I balls, the conditional probability of having a table as shown is formula_19 Example. For example, a sample of teenagers might be divided into male and female on one hand and those who are and are not currently studying for a statistics exam on the other. For example, we hypothesize that the proportion of studying students is higher among the women than among the men, and we want to test whether any difference in proportions that we observe is significant. 
The data might look like this: The question we ask about these data is: Knowing that 10 of these 24 teenagers are studying and that 12 of the 24 are female, and assuming the null hypothesis that men and women are equally likely to study, what is the probability that these 10 teenagers who are studying would be so unevenly distributed between the women and the men? If we were to choose 10 of the teenagers at random, what is the probability that 9 or more of them would be among the 12 women and only 1 or fewer from among the 12 men? First example. Before we proceed with the Fisher test, we first introduce some notations. We represent the cells by the letters "a, b, c" and "d", call the totals across rows and columns "marginal totals", and represent the grand total by "n". So the table now looks like this: Fisher showed that conditional on the margins of the table, "a" is distributed as a hypergeometric distribution with "a+c" draws from a population with "a+b" successes and "c+d" failures. The probability of obtaining such set of values is given by: formula_20 where formula_21 is the binomial coefficient and the symbol ! indicates the factorial operator. This can be seen as follows. If the marginal totals (i.e. formula_22, formula_23, formula_24, and formula_25) are known, only a single degree of freedom is left: the value e.g. of formula_26 suffices to deduce the other values. Now, formula_27 is the probability that formula_26 elements are positive in a random selection (without replacement) of formula_24 elements from a larger set containing formula_28 elements in total out of which formula_22 are positive, which is precisely the definition of the hypergeometric distribution. With the data above (using the first of the equivalent forms), this gives: formula_29 Second example. The formula above gives the exact hypergeometric probability of observing this particular arrangement of the data, assuming the given marginal totals, on the null hypothesis that men and women are equally likely to be studiers. To put it another way, if we assume that the probability that a man is a studier is formula_30, the probability that a woman is a studier is also formula_30, and we assume that both men and women enter our sample independently of whether or not they are studiers, then this hypergeometric formula gives the conditional probability of observing the values "a, b, c, d" in the four cells, conditionally on the observed marginals (i.e., assuming the row and column totals shown in the margins of the table are given). This remains true even if men enter our sample with different probabilities than women. The requirement is merely that the two classification characteristics—gender, and studier (or not)—are not associated. For example, suppose we knew probabilities formula_31 with formula_32 such that (male studier, male non-studier, female studier, female non-studier) had respective probabilities formula_33 for each individual encountered under our sampling procedure. Then still, were we to calculate the distribution of cell entries conditional given marginals, we would obtain the above formula in which neither formula_30 nor formula_34 occurs. 
Thus, we can calculate the exact probability of any arrangement of the 24 teenagers into the four cells of the table, but Fisher showed that to generate a significance level, we need consider only the cases where the marginal totals are the same as in the observed table, and among those, only the cases where the arrangement is as extreme as the observed arrangement, or more so. (Barnard's test relaxes this constraint on one set of the marginal totals.) In the example, there are 11 such cases. Of these only one is more extreme in the same direction as our data; it looks like this: For this table (with extremely unequal studying proportions) the probability is formula_35. p-value tests. In order to calculate the significance of the observed data, i.e. the total probability of observing data as extreme or more extreme if the null hypothesis is true, we have to calculate the values of "p" for both these tables, and add them together. This gives a one-tailed test, with "p" approximately 0.001346076 + 0.000033652 = 0.001379728. For example, in the R statistical computing environment, this value can be obtained as codice_0, or in Python, using codice_1 (where one receives both the prior odds ratio and the "p"-value). This value can be interpreted as the sum of evidence provided by the observed data—or any more extreme table—for the null hypothesis (that there is no difference in the proportions of studiers between men and women). The smaller the value of "p", the greater the evidence for rejecting the null hypothesis; so here the evidence is strong that men and women are not equally likely to be studiers. For a two-tailed test we must also consider tables that are equally extreme, but in the opposite direction. Unfortunately, classification of the tables according to whether or not they are 'as extreme' is problematic. An approach used by the codice_2 function in R is to compute the "p"-value by summing the probabilities for all tables with probabilities less than or equal to that of the observed table. In the example here, the 2-sided "p"-value is twice the 1-sided value—but in general these can differ substantially for tables with small counts, unlike the case with test statistics that have a symmetric sampling distribution. Controversies. Fisher's test gives exact "p"-values, but some authors have argued that it is conservative, i.e. that its actual rejection rate is below the nominal significance level. The apparent contradiction stems from the combination of a discrete statistic with fixed significance levels. Consider the following proposal for a significance test at the 5%-level: reject the null hypothesis for each table to which Fisher's test assigns a "p"-value equal to or smaller than 5%. Because the set of all tables is discrete, there may not be a table for which equality is achieved. If formula_36 is the largest "p"-value smaller than 5% which can actually occur for some table, then the proposed test effectively tests at the formula_36-level. For small sample sizes, formula_36 might be significantly lower than 5%. While this effect occurs for any discrete statistic (not just in contingency tables, or for Fisher's test), it has been argued that the problem is compounded by the fact that Fisher's test conditions on the marginals. To avoid the problem, many authors discourage the use of fixed significance levels when dealing with discrete problems. The decision to condition on the margins of the table is also controversial. 
The "p"-values derived from Fisher's test come from the distribution that conditions on the margin totals. In this sense, the test is exact only for the conditional distribution and not the original table where the margin totals may change from experiment to experiment. It is possible to obtain an exact "p"-value for the 2×2 table when the margins are not held fixed. Barnard's test, for example, allows for random margins. However, some authors (including, later, Barnard himself) have criticized Barnard's test based on this property. They argue that the marginal success total is an (almost) ancillary statistic, containing (almost) no information about the tested property. The act of conditioning on the marginal success rate from a 2×2 table can be shown to ignore some information in the data about the unknown odds ratio. The argument that the marginal totals are (almost) ancillary implies that the appropriate likelihood function for making inferences about this odds ratio should be conditioned on the marginal success rate. Whether this lost information is important for inferential purposes is the essence of the controversy. Alternatives. An alternative exact test, Barnard's exact test, has been developed and proponents of it suggest that this method is more powerful, particularly in 2×2 tables. Furthermore, Boschloo's test is an exact test that is uniformly more powerful than Fisher's exact test by construction. Most modern statistical packages will calculate the significance of Fisher tests, in some cases even where the chi-squared approximation would also be acceptable. The actual computations as performed by statistical software packages will as a rule differ from those described above, because numerical difficulties may result from the large values taken by the factorials. A simple, somewhat better computational approach relies on a gamma function or log-gamma function, but methods for accurate computation of hypergeometric and binomial probabilities remains an active research area. For stratified categorical data the Cochran–Mantel–Haenszel test must be used instead of Fisher's test. Choi et al. propose a "p"-value derived from the likelihood ratio test based on the conditional distribution of the odds ratio given the marginal success rate. This "p"-value is inferentially consistent with classical tests of normally distributed data as well as with likelihood ratios and support intervals based on this conditional likelihood function. It is also readily computable. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "a+b" }, { "math_id": 1, "text": "c+d" }, { "math_id": 2, "text": "a+c" }, { "math_id": 3, "text": "b+d" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "n!" }, { "math_id": 6, "text": "\\binom{a+c}{a}" }, { "math_id": 7, "text": "b" }, { "math_id": 8, "text": "\\binom{b+d}{b}" }, { "math_id": 9, "text": "(a+b)!" }, { "math_id": 10, "text": "(c+d)!" }, { "math_id": 11, "text": "\\binom{a+c}{a}\\binom{b+d}{b}(a+b)!(c+d)!" }, { "math_id": 12, "text": "\\frac{\\binom{a+c}{a}\\binom{b+d}{b}(a+b)!(c+d)!}{n!}=\\frac{\\binom{a+c}{a}\\binom{b+d}{b}}{\\binom{n}{a+b}}" }, { "math_id": 13, "text": "p" }, { "math_id": 14, "text": "1-p" }, { "math_id": 15, "text": "\\binom{a+b}{a}p^a(1-p)^b" }, { "math_id": 16, "text": "c" }, { "math_id": 17, "text": "\\binom{c+d}{c}p^c(1-p)^d" }, { "math_id": 18, "text": "\\binom{n}{a+c}p^{a+c}(1-p)^{b+d}" }, { "math_id": 19, "text": "\\frac{\\binom{a+c}{a}\\binom{b+d}{b}}{\\binom{n}{a+b}}" }, { "math_id": 20, "text": "p = \\frac{ \\displaystyle{{a+b}\\choose{a}} \\displaystyle{{c+d}\\choose{c}} }{ \\displaystyle{{n}\\choose{a+c}} } = \\frac{ \\displaystyle{{a+b}\\choose{b}} \\displaystyle{{c+d}\\choose{d}} }{ \\displaystyle{{n}\\choose{b+d}} } = \\frac{(a+b)!~(c+d)!~(a+c)!~(b+d)!}{a!~~b!~~c!~~d!~~n!}" }, { "math_id": 21, "text": " \\tbinom nk " }, { "math_id": 22, "text": "a+b" }, { "math_id": 23, "text": "c+d" }, { "math_id": 24, "text": "a+c" }, { "math_id": 25, "text": "b+d" }, { "math_id": 26, "text": "a" }, { "math_id": 27, "text": "p=p(a)" }, { "math_id": 28, "text": "n" }, { "math_id": 29, "text": "p = { {\\tbinom{10}{1}} {\\tbinom{14}{11}} }/{ {\\tbinom{24}{12}} } = \\tfrac{10!~14!~12!~12!}{1!~9!~11!~3!~24!} \\approx 0.001346076" }, { "math_id": 30, "text": "\\mathfrak{p}" }, { "math_id": 31, "text": "P, Q, \\mathfrak{p,q}" }, { "math_id": 32, "text": "P + Q = \\mathfrak{p} + \\mathfrak{q} = 1" }, { "math_id": 33, "text": "(P\\mathfrak{p}, P\\mathfrak{q}, Q\\mathfrak{p}, Q\\mathfrak{q})" }, { "math_id": 34, "text": "P" }, { "math_id": 35, "text": "{p = {\\tbinom{10}{0}} {\\tbinom{14}{12}} }/{ {\\tbinom{24}{12}} } \\approx 0.000033652" }, { "math_id": 36, "text": "\\alpha_e" } ]
https://en.wikipedia.org/wiki?curid=819467
8195500
Titchmarsh convolution theorem
The Titchmarsh convolution theorem describes the properties of the support of the convolution of two functions. It was proven by Edward Charles Titchmarsh in 1926. Titchmarsh convolution theorem. If formula_0 and formula_1 are integrable functions, such that formula_2 almost everywhere in the interval formula_3, then there exist formula_4 and formula_5 satisfying formula_6 such that formula_7 almost everywhere in formula_8 and formula_9 almost everywhere in formula_10 As a corollary, if the integral above is 0 for all formula_11 then either formula_12 or formula_13 is almost everywhere 0 in the interval formula_14 Thus the convolution of two functions on formula_15 cannot be identically zero unless at least one of the two functions is identically zero. As another corollary, if formula_16 for all formula_17 and one of the function formula_18 or formula_19 is almost everywhere not null in this interval, then the other function must be null almost everywhere in formula_20. The theorem can be restated in the following form: Let formula_21. Then formula_22 if the left-hand side is finite. Similarly, formula_23 if the right-hand side is finite. Above, formula_24 denotes the support of a function f (i.e., the closure of the complement of f-1(0)) and formula_25 and formula_26 denote the infimum and supremum. This theorem essentially states that the well-known inclusion formula_27 is sharp at the boundary. The higher-dimensional generalization in terms of the convex hull of the supports was proven by Jacques-Louis Lions in 1951: If formula_28, then formula_29 Above, formula_30 denotes the convex hull of the set and formula_31 denotes the space of distributions with compact support. The original proof by Titchmarsh uses complex-variable techniques, and is based on the Phragmén–Lindelöf principle, Jensen's inequality, Carleman's theorem, and Valiron's theorem. The theorem has since been proven several more times, typically using either real-variable or complex-variable methods. Gian-Carlo Rota has stated that no proof yet addresses the theorem's underlying combinatorial structure, which he believes is necessary for complete understanding. References. <templatestyles src="Reflist/styles.css" />
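The boundary statement formula_22 can be checked numerically on a discretized example. In the Python sketch below, the two functions, the grid spacing, and the support onsets at 0.7 and 1.2 are arbitrary illustrative choices; the discrete convolution first becomes nonzero at their sum, 1.9, as the theorem asserts.

import numpy as np

# Discretize [0, 4) and build two integrable functions whose supports
# begin at t = 0.7 and t = 1.2 (arbitrary illustrative choices).
dt = 0.01
t = np.arange(0.0, 4.0, dt)
phi = np.where(t >= 0.7, np.exp(-t) * np.sin(3 * t) ** 2, 0.0)
psi = np.where(t >= 1.2, np.exp(-2 * t), 0.0)

conv = np.convolve(phi, psi) * dt      # discrete approximation of the convolution
x = np.arange(len(conv)) * dt

def inf_support(values, grid, tol=1e-12):
    # Smallest grid point at which the function is (numerically) nonzero.
    return grid[np.argmax(np.abs(values) > tol)]

print(f"inf supp phi     = {inf_support(phi, t):.2f}")    # 0.70
print(f"inf supp psi     = {inf_support(psi, t):.2f}")    # 1.20
print(f"inf supp phi*psi = {inf_support(conv, x):.2f}")   # 1.90 = 0.70 + 1.20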
[ { "math_id": 0, "text": "\\varphi(t)\\," }, { "math_id": 1, "text": "\\psi(t)" }, { "math_id": 2, "text": "\\varphi * \\psi = \\int_0^x \\varphi(t)\\psi(x-t)\\,dt=0" }, { "math_id": 3, "text": "0<x<\\kappa\\," }, { "math_id": 4, "text": "\\lambda\\geq0" }, { "math_id": 5, "text": "\\mu\\geq0" }, { "math_id": 6, "text": "\\lambda+\\mu\\ge\\kappa" }, { "math_id": 7, "text": "\\varphi(t)=0\\," }, { "math_id": 8, "text": "0<t<\\lambda" }, { "math_id": 9, "text": "\\psi(t)=0\\," }, { "math_id": 10, "text": "0<t<\\mu." }, { "math_id": 11, "text": "x>0," }, { "math_id": 12, "text": "\\varphi\\," }, { "math_id": 13, "text": "\\psi" }, { "math_id": 14, "text": " [0,+\\infty)." }, { "math_id": 15, "text": " [0,+\\infty)" }, { "math_id": 16, "text": "\\varphi * \\psi (x) = 0" }, { "math_id": 17, "text": "x\\in [0, \\kappa]" }, { "math_id": 18, "text": "\\varphi" }, { "math_id": 19, "text": "\\psi" }, { "math_id": 20, "text": "[0,\\kappa]" }, { "math_id": 21, "text": "\\varphi, \\psi\\in L^1(\\mathbb{R})" }, { "math_id": 22, "text": "\\inf\\operatorname{supp} \\varphi\\ast \\psi=\\inf\\operatorname{supp} \\varphi+\\inf\\operatorname{supp} \\psi" }, { "math_id": 23, "text": "\\sup\\operatorname{supp} \\varphi\\ast\\psi = \\sup\\operatorname{supp}\\varphi + \\sup\\operatorname{supp} \\psi" }, { "math_id": 24, "text": "\\operatorname{supp}" }, { "math_id": 25, "text": "\\inf" }, { "math_id": 26, "text": "\\sup" }, { "math_id": 27, "text": " \\operatorname{supp}\\varphi\\ast \\psi \\subset \\operatorname{supp}\\varphi+\\operatorname{supp}\\psi" }, { "math_id": 28, "text": "\\varphi, \\psi\\in\\mathcal{E}'(\\mathbb{R}^n)" }, { "math_id": 29, "text": "\\operatorname{c.h.} \\operatorname{supp} \\varphi\\ast \\psi=\\operatorname{c.h.} \\operatorname{supp} \\varphi+\\operatorname{c.h.}\\operatorname{supp} \\psi" }, { "math_id": 30, "text": "\\operatorname{c.h.}" }, { "math_id": 31, "text": "\\mathcal{E}' (\\mathbb{R}^n)" } ]
https://en.wikipedia.org/wiki?curid=8195500
8198194
Cornering brake control
Cornering Brake Control (CBC) is an automotive safety measure that improves handling performance by distributing the force applied on the wheels of a vehicle while turning corners. Introduced by BMW in 1992, the technology is now featured in modern electric and gasoline vehicles such as cars, motorcycles, and trucks. CBC is often included under the Electronic Stability Control (ESC) safety feature provided by vehicle manufacturers. CBC uses the vehicle's electronic control unit to receive data from multiple sensors. CBC then adjusts brake steer torque, brake pressure, yaw rate, and stopping distance, helping the driver keep control of the vehicle while turning both inwards and outwards. Experimentation done with CBC technology has shown that it is an advancement on the traditional Anti Lock Braking System (ABS) featured in modern vehicles. CBC is also likely to be incorporated with future autonomous vehicles for its precision and real-time response. History. Early Usage. CBC was first introduced by the German automobile manufacturer BMW in 1992 under their new Dynamic Stability Control feature. It was included in the 1992 750i model (their 7-series sedan), and it added a further safety measure to their pre-existing ABS and Automatic Stability Control (ASC) features. When describing the feature, BMW stated, "When braking during curves or when braking during a lane change, driving stability and steering response are improved further." While BMW was the first automobile manufacturer to create this technology, federal mandates from the EU in 2009 and the US in 2011 required the inclusion of this brake safety technology into future vehicles within these regions. Current Usage. Federal mandates made ESC safety features required in automobile production, which included both CBC technology and functions. This has led to other manufacturers incorporating this technology under different names. German automobile manufacturer Mercedes-Benz introduced the technology under their ESP Dynamic Cornering Assist and Curve Dynamic Assist systems. BMW-owned manufacturer Mini and British manufacturer Land Rover incorporated it under the Cornering Brake Control name. Other companies have used CBC technology as a part of their ESC feature, making CBC technology a more universal safety measure. Mechanical Operation. CBC uses the vehicle's electronic control unit and ESC to receive data from multiple sensors. These sensors calculate variables such as speed, acceleration, yaw rate, and steering angle. CBC then uses these variables to adjust brake pressure, desired yaw rate, brake steer torque, and stopping distance. Experimentation with CBC technology has used Hardware-in-the-Loop (HiL) testing to prove its real-time response to these factors. Brake Pressure. Wheel locking presents a severe danger to the driver while turning. Wheel locking limits the functionality of the steering function due to the centrifugal force (a force on the vehicle that shifts its balance while turning), which causes imbalances in brake pressure that CBC technology can regulate. CBC resolves this by using an adaptive brake force system to distribute pressure amongst the brakes of a vehicle while turning. CBC then adjusts the pressure based on the speed of the vehicle and where its position is relative to its curve, optimizing its stability and traction on the road. This makes both steering and braking smoother for the driver, limiting the possibility of the vehicle's wheels locking up. Yaw Rate. 
CBC technology works to stabilize the vehicle to a desired yaw rate (twisting motion), which is experienced by a vehicle while taking turns. When suddenly braking, stabilizing the yaw rate allows brake pressure to be lowered smoothly. It also reduces the slip ratio, which relates the vehicle's actual speed to its wheel speed after losses to friction (a force that resists motion). This helps the technology respond accurately to road conditions, since the vehicle's actual speed will closely match the calculated forward and angular speed. CBC logic smoothly reaches the desired yaw rate and lateral acceleration, maximizing comfort and driving performance. The formula to calculate the actual yaw rate is: formula_0 where formula_1 is the yaw rate, formula_2 is the vehicle's forward speed, and formula_3 is the radius of the turn. Depending on conditions such as vehicle model and road layout, more calculations are taken to ensure that CBC technology can effectively stabilize the vehicle. CBC can calculate a desired yaw rate that accounts for both the actual yaw rate and the required human input (measured by the vehicle's steering angle during a turn). The formula to calculate the desired yaw rate is: formula_4 where formula_5 is the desired yaw rate, formula_6 is a gain constant, and formula_7/formula_9 is the rate of change of the steering angle formula_8 with respect to time formula_10. CBC is then able to partially apply the brakes to ease the vehicle into its desired yaw rate while turning. Torque Adjustment. CBC reduces unwanted brake steer torque when braking while turning corners. This limits the radius (formula_11) found in the general formula for torque, which determines how far the vehicle is from the inside of the curve. The formula to calculate torque is: formula_12 where formula_13 is the torque, formula_11 is the radius (lever arm), formula_14 is the applied force, and formula_15 is the angle between the force and the lever arm. The change in radius keeps the vehicle from veering outward and potentially leaving the lane, compensating for the driver's error. Modern vehicles with CBC may have their steering axis shifted sideways (towards the surface of the road) in the same direction as the tire contact point (the point where the tire meets the road). The adaptive brake force distribution is then able to distribute the pressure on the brakes by directly accounting for the tire contact force (the force that is applied back on the tires), which decreases brake steer torque. As described in the general formula for torque, lowering brake steer torque will decrease the radius of the turn as the force (formula_14) remains constant, safely keeping the vehicle from veering outward. Stopping Distance. CBC shortens the braking distance needed to stop the vehicle while turning. CBC can lower brake pressure, yaw rate, and torque at once to limit lateral movement (movement from side to side). Limiting lateral movement helps improve vehicle stability while turning, allowing CBC to brake smoothly. This helps the driver stop the vehicle immediately when faced with an emergency ahead. Software. CBC has a software component that may be paired with modern ABS systems to include CBC logic. CBC software evaluates the different speeds of the vehicle's wheels and then adjusts variables such as brake steer torque to ensure the vehicle does not turn too far inward or outward, improving safety from the software side. Software-in-the-Loop Testing (SiL). Experimentation regarding CBC logic used Software-in-the-Loop (SiL) testing to prove its validity. This uses a simulated environment to test the software's code in a virtual space. The algorithm used to test CBC logic incorporated many components of the vehicle, such as tires, suspension, and mass. The algorithm also modeled the driver's expected behavior and used both the predicted behavior and the vehicle components to determine the validity of CBC logic. 
Results from SiL testing have shown that CBC logic helps keep vehicles within their intended trajectory, enhancing the traditional ABS safety measure. Future Applications. CBC is expected to be included in autonomous vehicles, as the technology can work with future vehicle control systems to ensure brake safety while turning. CBC can already autonomously engage the vehicle brakes in case of an emergency but lacks the necessary signals needed to control the vehicle without any human input. Controller Area Network (CAN) signals (messages sent over the vehicle's internal communication network) can send the necessary data to CBC so that the vehicle may rely on its logic and real-time response. These vehicle systems can work in concert to increase the stability of autonomous vehicles while turning, ensuring a safe and comfortable experience for the passengers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
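The yaw-rate relations above can be illustrated with a small numerical sketch. The following Python snippet only evaluates the formulas formula_0 and formula_4 from this article; the function names, gain value and driving scenario (speed, turn radius, steering rate) are made-up example values, not parameters from any manufacturer's CBC implementation.

```python
def actual_yaw_rate(speed_mps: float, turn_radius_m: float) -> float:
    """Yaw rate psi = V / R for a vehicle moving at speed V around a turn of radius R."""
    return speed_mps / turn_radius_m

def desired_yaw_rate(psi: float, steering_rate_rad_s: float, gain: float) -> float:
    """Desired yaw rate psi* = psi + k * d(delta)/dt, blending the measured yaw rate
    with the rate of the driver's steering input."""
    return psi + gain * steering_rate_rad_s

if __name__ == "__main__":
    # Hypothetical numbers: 20 m/s (72 km/h) through a 50 m radius curve,
    # driver turning the wheel at 0.1 rad/s, gain k = 0.5 (illustrative only).
    psi = actual_yaw_rate(20.0, 50.0)            # 0.40 rad/s
    psi_star = desired_yaw_rate(psi, 0.1, 0.5)   # 0.45 rad/s
    print(f"actual yaw rate  = {psi:.2f} rad/s")
    print(f"desired yaw rate = {psi_star:.2f} rad/s")
```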
[ { "math_id": 0, "text": "{\\displaystyle \\psi =V/R } \n" }, { "math_id": 1, "text": "\\psi" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "R" }, { "math_id": 4, "text": "\\psi^* = \\psi + k*d\\delta/dt" }, { "math_id": 5, "text": "\\psi^*" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "d\\delta" }, { "math_id": 8, "text": "\\delta" }, { "math_id": 9, "text": "dt" }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "r" }, { "math_id": 12, "text": "\\tau = rF\\sin\\theta" }, { "math_id": 13, "text": "\\tau" }, { "math_id": 14, "text": "F" }, { "math_id": 15, "text": "\\theta" } ]
https://en.wikipedia.org/wiki?curid=8198194
8198761
Adleman–Pomerance–Rumely primality test
In computational number theory, the Adleman–Pomerance–Rumely primality test is an algorithm for determining whether a number is prime. Unlike other, more efficient algorithms for this purpose, it avoids the use of random numbers, so it is a deterministic primality test. It is named after its discoverers, Leonard Adleman, Carl Pomerance, and Robert Rumely. The test involves arithmetic in cyclotomic fields. It was later improved by Henri Cohen and Hendrik Willem Lenstra, commonly referred to as APR-CL. It can test primality of an integer "n" in time: formula_0
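To get a feel for the running-time bound quoted above, the short Python sketch below evaluates (log n)^(log log log n) for inputs of various bit lengths; the constants hidden in the O(·) notation are ignored, so the numbers are purely illustrative of how slowly the exponent log log log n grows.

```python
import math

def apr_exponent(n: int) -> float:
    """The exponent log log log n appearing in the APR running-time bound."""
    return math.log(math.log(math.log(n)))

def apr_bound(n: int) -> float:
    """Illustrative value of (log n)^(log log log n), with natural logs and no constants."""
    return math.log(n) ** apr_exponent(n)

if __name__ == "__main__":
    for bits in (64, 256, 1024, 4096):
        n = 1 << bits  # an integer of roughly `bits` bits
        print(f"{bits:5d}-bit n: (log n)^(log log log n) ~ {apr_bound(n):.3e}")
```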
[ { "math_id": 0, "text": "(\\log n)^{O(\\log\\,\\log\\,\\log n)}." } ]
https://en.wikipedia.org/wiki?curid=8198761
8199698
Stationary sequence
Random sequence whose joint probability distribution is invariant over time In probability theory – specifically in the theory of stochastic processes, a stationary sequence is a random sequence whose joint probability distribution is invariant over time. If a random sequence "X" "j" is stationary then the following holds: formula_0 where "F" is the joint cumulative distribution function of the random variables in the subscript. If a sequence is stationary then it is wide-sense stationary. If a sequence is stationary then it has a constant mean (which may not be finite): formula_1
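As a concrete illustration of the definition, the following Python sketch simulates a stationary Gaussian AR(1) sequence (the model choice and parameter values are arbitrary examples, not part of the article) and checks that the sample mean is essentially the same over different time windows, as expected from the constant-mean property formula_1.

```python
import numpy as np

def stationary_ar1(n: int, phi: float = 0.6, sigma: float = 1.0, seed: int = 0) -> np.ndarray:
    """Simulate a stationary Gaussian AR(1) sequence X[j] = phi*X[j-1] + e[j].
    Drawing X[0] from the stationary distribution N(0, sigma^2/(1 - phi^2)) makes the
    joint distribution invariant under time shifts."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - phi**2))
    for j in range(1, n):
        x[j] = phi * x[j - 1] + rng.normal(0.0, sigma)
    return x

if __name__ == "__main__":
    x = stationary_ar1(100_000)
    # The constant mean E(X[n]) = mu is estimated over two different time windows.
    print("mean over first half :", x[:50_000].mean())
    print("mean over second half:", x[50_000:].mean())
```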
[ { "math_id": 0, "text": "\n\\begin{align}\n& {} \\quad F_{X_n,X_{n+1},\\dots,X_{n+N-1}}(x_n, x_{n+1},\\dots,x_{n+N-1}) \\\\\n& = F_{X_{n+k},X_{n+k+1},\\dots,X_{n+k+N-1}}(x_n, x_{n+1},\\dots,x_{n+N-1}),\n\\end{align}\n" }, { "math_id": 1, "text": "E(X[n]) = \\mu \\quad \\text{for all } n ." } ]
https://en.wikipedia.org/wiki?curid=8199698
8199713
Function representation
Function Representation (FRep or F-Rep) is used in solid modeling, volume modeling and computer graphics. FRep was introduced in "Function representation in geometric modeling: concepts, implementation and applications" as a uniform representation of multidimensional geometric objects (shapes). An object as a point set in multidimensional space is defined by a single continuous real-valued function formula_0 of point coordinates formula_1 which is evaluated at the given point by a procedure traversing a tree structure with primitives in the leaves and operations in the nodes of the tree. The points with formula_2 belong to the object, and the points with formula_3 are outside of the object. The point set with formula_4 is called an isosurface. Geometric domain. The geometric domain of FRep in 3D space includes solids with non-manifold models and lower-dimensional entities (surfaces, curves, points) defined by zero value of the function. A primitive can be defined by an equation or by a "black box" procedure converting point coordinates into the function value. Solids bounded by algebraic surfaces, skeleton-based implicit surfaces, and convolution surfaces, as well as procedural objects (such as solid noise), and voxel objects can be used as primitives (leaves of the construction tree). In the case of a voxel object (discrete field), it should be converted to a continuous real function, for example, by applying the trilinear or higher-order interpolation. Many operations such as set-theoretic, blending, offsetting, projection, non-linear deformations, metamorphosis, sweeping, hypertexturing, and others, have been formulated for this representation in such a manner that they yield continuous real-valued functions as output, thus guaranteeing the closure property of the representation. R-functions originally introduced in V.L. Rvachev's "On the analytical description of some geometric objects", provide formula_5 continuity for the functions exactly defining the set-theoretic operations (min/max functions are a particular case). Because of this property, the result of any supported operation can be treated as the input for a subsequent operation; thus very complex models can be created in this way from a single functional expression. FRep modeling is supported by the special-purpose language HyperFun. Shape Models. FRep combines and generalizes different shape models like A more general "constructive hypervolume" allows for modeling multidimensional point sets with attributes (volume models in 3D case). Point set geometry and attributes have independent representations but are treated uniformly. A point set in a geometric space of an arbitrary dimension is an FRep based geometric model of a real object. An attribute that is also represented by a real-valued function (not necessarily continuous) is a mathematical model of an object property of an arbitrary nature (material, photometric, physical, medicine, etc.). The concept of "implicit complex" proposed in "Cellular-functional modeling of heterogeneous objects" provides a framework for including geometric elements of different dimensionality by combining polygonal, parametric, and FRep components into a single cellular-functional model of a heterogeneous object. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
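A minimal sketch of the FRep idea in Python is shown below: primitives are real-valued functions with f ≥ 0 inside the object, and min/max are used as the particular case of R-functions for set-theoretic operations mentioned above. The primitive and operation names are illustrative only and do not reproduce the HyperFun language.

```python
import numpy as np

# Primitives: f(p) >= 0 inside the object, f(p) < 0 outside, f(p) = 0 on the surface.
def sphere(center, radius):
    c = np.asarray(center, dtype=float)
    return lambda p: radius**2 - np.sum((np.asarray(p, dtype=float) - c) ** 2)

def halfspace(normal, offset):
    n = np.asarray(normal, dtype=float)
    return lambda p: offset - np.dot(n, np.asarray(p, dtype=float))

# Set-theoretic operations via min/max (a particular case of R-functions).
def union(f, g):        return lambda p: max(f(p), g(p))
def intersection(f, g): return lambda p: min(f(p), g(p))
def subtraction(f, g):  return lambda p: min(f(p), -g(p))

if __name__ == "__main__":
    ball  = sphere((0.0, 0.0, 0.0), 1.0)
    slab  = halfspace((0.0, 0.0, 1.0), 0.5)   # the region z <= 0.5
    shape = intersection(ball, slab)          # a unit ball with its top cut off
    for point in [(0, 0, 0), (0, 0, 0.75), (2, 0, 0)]:
        side = "inside/boundary" if shape(point) >= 0 else "outside"
        print(point, "->", round(float(shape(point)), 3), side)
```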
[ { "math_id": 0, "text": "f(X)" }, { "math_id": 1, "text": "X[x_1,x_2, ..., x_n]" }, { "math_id": 2, "text": "f(x_1,x_2, ..., x_n) \\ge 0" }, { "math_id": 3, "text": "f(x_1,x_2, ..., x_n) < 0" }, { "math_id": 4, "text": "f(x_1,x_2, ..., x_n)=0" }, { "math_id": 5, "text": "C^k" } ]
https://en.wikipedia.org/wiki?curid=8199713
8200081
HisB
The hisB gene, found in the enterobacteria (such as "E. coli"), in "Campylobacter jejuni" and in "Xylella"/"Xanthomonas", encodes a protein that catalyses two steps in histidine biosynthesis (the sixth and eighth steps), namely the bifunctional imidazoleglycerol-phosphate dehydratase/histidinol-phosphatase. The former activity (EC 4.2.1.19), found at the N-terminus, dehydrates D-erythro-imidazoleglycerol phosphate to imidazoleacetol phosphate; the latter activity (EC 3.1.3.15), found at the C-terminus, dephosphorylates L-histidinol phosphate, producing histidinol. The intervening seventh step is catalysed instead by histidinol-phosphate aminotransferase (encoded by "hisC"). The peptide is 40.5 kDa and associates to form a hexamer (unless truncated). In "E. coli", hisB is found on the hisGDCBHAFI operon. The phosphatase activity possesses substrate ambiguity, and overexpression of hisB can rescue phosphoserine phosphatase (serB) knockouts. Reactions. hisB-N: D-erythro-1-(imidazol-4-yl)glycerol 3-phosphate formula_0 3-(imidazol-4-yl)-2-oxopropyl phosphate + H2O hisB-C: L-histidinol phosphate + H2O formula_0 L-histidinol + phosphate Non-fusion protein in other species. HIS3 from "Saccharomyces cerevisiae" is not a fused IGP dehydratase and histidinol phosphatase, but an IGPD only (homologous to hisB-N), whereas HIS2 is the HP (analogous to hisB-C, called hisJ in some prokaryotes). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=8200081
820041
Cissoid of Diocles
Cubic plane curve In geometry, the cissoid of Diocles (from Ancient Greek "κισσοειδής" ("kissoeidēs") 'ivy-shaped'; named for Diocles) is a cubic plane curve notable for the property that it can be used to construct two mean proportionals to a given ratio. In particular, it can be used to double a cube. It can be defined as the cissoid of a circle and a line tangent to it with respect to the point on the circle opposite to the point of tangency. In fact, the curve family of cissoids is named for this example and some authors refer to it simply as "the" cissoid. It has a single cusp at the pole, and is symmetric about the diameter of the circle, which is the tangent line to the curve at the cusp. The line tangent to the circle at the point opposite the cusp is an asymptote. It is a member of the conchoid of de Sluze family of curves and in form it resembles a tractrix. Construction and equations. Let C be the circle, L the tangent line, A the point of tangency, and O the point on C opposite A, and let the radius of C be a. By translation and rotation, we may take O to be the origin and the center of the circle to be ("a", 0), so A is (2"a", 0). Then the polar equations of L and C are: formula_0 By construction, the distance from the origin to a point on the cissoid is equal to the difference between the distances between the origin and the corresponding points on L and C. In other words, the polar equation of the cissoid is formula_1 Applying some trigonometric identities, this is equivalent to formula_2 Let "t" = tan "θ" in the above equation. Then formula_3 are parametric equations for the cissoid. Converting the polar form to Cartesian coordinates produces formula_4 Construction by double projection. A compass-and-straightedge construction of various points on the cissoid proceeds as follows. Given a line L and a point O not on L, construct the line L' through O parallel to L. Choose a variable point P on L, and construct Q, the orthogonal projection of P on L', then R, the orthogonal projection of Q on OP. Then the cissoid is the locus of points R. To see this, let O be the origin and L the line "x" = 2"a" as above. Let P be the point (2"a", 2"at"); then Q is (0, 2"at") and the equation of the line OP is "y" = "tx". The line through Q perpendicular to OP is formula_5 To find the point of intersection R, set "y" = "tx" in this equation to get formula_6 which are the parametric equations given above. While this construction produces arbitrarily many points on the cissoid, it cannot trace any continuous segment of the curve. Newton's construction. The following construction was given by Isaac Newton. Let J be a line and B a point not on J. Let ∠"BST" be a right angle which moves so that ST equals the distance from B to J and T remains on J, while the other leg SB slides along B. Then the midpoint P of ST describes the curve. To see this, let the distance between B and J be 2"a". By translation and rotation, take "B" = (–a, 0) and J the line "x" = "a". Let "P" = ("x", "y") and let ψ be the angle between SB and the x-axis; this is equal to the angle between ST and J. By construction, "PT" = "a", so the distance from P to J is "a" sin "ψ". In other words "a" – "x" = "a" sin "ψ". Also, "SP" = "a" is the y-coordinate of ("x", "y") if it is rotated by angle ψ, so "a" = ("x" + "a") sin "ψ" + "y" cos "ψ". After simplification, this produces parametric equations formula_7 Change parameters by replacing ψ with its complement to get formula_8 or, applying double angle formulas, formula_9 But this is polar equation formula_10 given above with "θ" = "ψ"/2. Note that, as with the double projection construction, this can be adapted to produce a mechanical device that generates the curve. 
Delian problem. The Greek geometer Diocles used the cissoid to obtain two mean proportionals to a given ratio. This means that given lengths a and b, the curve can be used to find u and v so that a is to u as u is to v as v is to b, i.e. "a"/"u" = "u"/"v" = "v"/"b", as discovered by Hippocrates of Chios. As a special case, this can be used to solve the Delian problem: how much must the length of a cube be increased in order to double its volume? Specifically, if a is the side of a cube, and "b" = 2"a", then the volume of a cube of side u is formula_11 so u is the side of a cube with double the volume of the original cube. Note however that this solution does not fall within the rules of compass and straightedge construction since it relies on the existence of the cissoid. Let a and b be given. It is required to find u so that "u"3 = "a"2"b", giving u and "v" = "u"2/"a" as the mean proportionals. Let the cissoid formula_4 be constructed as above, with O the origin, A the point (2"a", 0), and J the line "x" = "a", also as given above. Let C be the point of intersection of J with OA. From the given length b, mark B on J so that "CB" = "b". Draw BA and let "P" = ("x", "y") be the point where it intersects the cissoid. Draw OP and let it intersect J at U. Then "u" = "CU" is the required length. To see this, rewrite the equation of the curve as formula_12 and let "N" = ("x", 0), so that PN is the perpendicular to OA through P. From the equation of the curve, formula_13 From this, formula_14 By similar triangles "PN"/"ON" = "UC"/"OC" and "PN"/"NA" = "BC"/"CA". So the equation becomes formula_15 so formula_16 as required. Diocles did not really solve the Delian problem. The reason is that the cissoid of Diocles cannot be constructed perfectly, at least not with compass and straightedge. To construct the cissoid of Diocles, one would construct a finite number of its individual points, then connect all these points to form a curve. The problem is that there is no well-defined way to connect the points. If they are connected by line segments, then the construction will be well-defined, but it will not be an exact cissoid of Diocles, only an approximation. Likewise, if the dots are connected with circular arcs, the construction will be well-defined, but incorrect. Or one could simply draw a curve directly, trying to eyeball the shape of the curve, but the result would only be imprecise guesswork. Once the finite set of points on the cissoid has been drawn, the line PC will probably not pass through one of these points exactly, but will pass between them, intersecting the cissoid of Diocles at some point whose exact location has not been constructed, but has only been approximated. An alternative is to keep adding constructed points to the cissoid which get closer and closer to the intersection with the line PC, but the number of steps may very well be infinite, and the Greeks did not recognize approximations as limits of infinite steps (so they were very puzzled by Zeno's paradoxes). One could also construct a cissoid of Diocles by means of a mechanical tool specially designed for that purpose, but this violates the rule of only using compass and straightedge. This rule was established for reasons of logical, axiomatic consistency: allowing construction by new tools would be like adding new axioms, but axioms are supposed to be simple and self-evident, and such tools are not. 
So by the rules of classical, synthetic geometry, Diocles did not solve the Delian problem, which in fact cannot be solved by such means. As a pedal curve. The pedal curve of a parabola with respect to its vertex is a cissoid of Diocles. The geometrical properties of pedal curves in general produce several alternate methods of constructing the cissoid. It is the envelope of circles whose centers lie on a parabola and which pass through the vertex of the parabola. Also, if two congruent parabolas are set vertex-to-vertex and one is rolled along the other, the vertex of the rolling parabola will trace the cissoid. Inversion. The cissoid of Diocles can also be defined as the inverse curve of a parabola with the center of inversion at the vertex. To see this, take the parabola to be "x" = "y"2, which in polar coordinates is formula_17 or: formula_18 The inverse curve is thus: formula_19 which agrees with the polar equation of the cissoid above. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
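The parametric and Cartesian equations above can be checked numerically. The Python sketch below (an illustration only; the sampling range of t is arbitrary) generates points from the parametric form, verifies that they satisfy (x² + y²)x = 2ay², and evaluates the cube-doubling length u with u³ = a²b for b = 2a.

```python
import numpy as np

def cissoid_points(a: float, t: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Parametric form x = 2a t^2/(1+t^2), y = 2a t^3/(1+t^2), with t = tan(theta)."""
    x = 2 * a * t**2 / (1 + t**2)
    y = 2 * a * t**3 / (1 + t**2)
    return x, y

if __name__ == "__main__":
    a = 1.0
    t = np.linspace(-5, 5, 1001)
    x, y = cissoid_points(a, t)

    # The points satisfy the Cartesian equation (x^2 + y^2) x = 2 a y^2.
    residual = (x**2 + y**2) * x - 2 * a * y**2
    print("max |(x^2 + y^2) x - 2 a y^2| =", np.max(np.abs(residual)))

    # Delian problem, numerically: with b = 2a the construction yields u with u^3 = a^2 b,
    # i.e. u = 2^(1/3) * a, the side of a cube of twice the volume.
    u = (a**2 * (2 * a)) ** (1 / 3)
    print("u =", u, "  u^3 / a^3 =", u**3 / a**3)
```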
[ { "math_id": 0, "text": "\\begin{align}\n& r=2a\\sec\\theta \\\\\n& r=2a\\cos\\theta .\n\\end{align}" }, { "math_id": 1, "text": "r=2a\\sec\\theta-2a\\cos\\theta=2a(\\sec\\theta-\\cos\\theta)." }, { "math_id": 2, "text": "r=2a\\sin^2\\!\\theta\\mathbin/\\cos\\theta=2a\\sin\\theta\\tan\\theta ." }, { "math_id": 3, "text": "\\begin{align}\n& x = r\\cos\\theta = 2a\\sin^2\\!\\theta = \\frac{2a\\tan^2\\!\\theta}{\\sec^2\\!\\theta} = \\frac{2at^2}{1+t^2} \\\\\n& y = tx = \\frac{2at^3}{1+t^2}\n\\end{align}" }, { "math_id": 4, "text": "(x^2+y^2)x=2ay^2" }, { "math_id": 5, "text": "t(y-2at)+x=0." }, { "math_id": 6, "text": "\\begin{align}\n& t(tx-2at)+x=0,\\ x(t^2+1)=2at^2,\\ x=\\frac{2at^2}{t^2+1} \\\\\n& y=tx=\\frac{2at^3}{t^2+1}\n\\end{align}" }, { "math_id": 7, "text": "x=a(1-\\sin\\psi),\\,y=a\\frac{(1-\\sin\\psi)^2}{\\cos\\psi}." }, { "math_id": 8, "text": "x=a(1-\\cos\\psi),\\,y=a\\frac{(1-\\cos\\psi)^2}{\\sin\\psi}" }, { "math_id": 9, "text": "x=2a\\sin^2{\\psi \\over 2},\\,y=a\\frac{4\\sin^4{\\psi \\over 2}}{2\\sin{\\psi \\over 2}\\cos{\\psi \\over 2}} = 2a\\frac{\\sin^3{\\psi \\over 2}}{\\cos{\\psi \\over 2}}." }, { "math_id": 10, "text": "r=2a\\frac{\\sin^2\\theta}{\\cos\\theta}" }, { "math_id": 11, "text": "u^3=a^3\\left(\\frac{u}{a}\\right)^3=a^3\\left(\\frac{u}{a}\\right)\\left(\\frac{v}{u}\\right)\\left(\\frac{b}{v}\\right)=a^3\\left(\\frac{b}{a}\\right)=2a^3" }, { "math_id": 12, "text": "y^2=\\frac{x^3}{2a-x}" }, { "math_id": 13, "text": "\\overline{PN}^2=\\frac{\\overline{ON}^3}{\\overline{NA}}." }, { "math_id": 14, "text": "\\frac{\\overline{PN}^3}{\\overline{ON}^3}=\\frac{\\overline{PN}}{\\overline{NA}}." }, { "math_id": 15, "text": "\\frac{\\overline{UC}^3}{\\overline{OC}^3}=\\frac{\\overline{BC}}{\\overline{CA}}," }, { "math_id": 16, "text": "\\frac{u^3}{a^3}=\\frac{b}{a},\\, u^3=a^2b" }, { "math_id": 17, "text": "r\\cos\\theta = (r\\sin \\theta)^2" }, { "math_id": 18, "text": "r=\\frac{\\cos\\theta}{\\sin^2\\!\\theta}\\,." }, { "math_id": 19, "text": "r=\\frac{\\sin^2\\!\\theta}{\\cos\\theta} = \\sin\\theta \\tan\\theta," } ]
https://en.wikipedia.org/wiki?curid=820041
8200947
Alternating multilinear map
Multilinear map that is 0 whenever arguments are linearly dependent In mathematics, more specifically in multilinear algebra, an alternating multilinear map is a multilinear map with all arguments belonging to the same vector space (for example, a bilinear form or a multilinear form) that is zero whenever any pair of its arguments is equal. This generalizes directly to a module over a commutative ring. The notion of alternatization (or alternatisation) is used to derive an alternating multilinear map from any multilinear map of which all arguments belong to the same space. Definition. Let formula_0 be a commutative ring and formula_1, formula_2 be modules over formula_0. A multilinear map of the form formula_3 is said to be alternating if it satisfies the following equivalent conditions: whenever there exists formula_4 such that formula_5 then formula_6; whenever there exist formula_7 such that formula_8 then formula_6. Vector spaces. Let formula_9 be vector spaces over the same field. Then a multilinear map of the form formula_3 is alternating if it satisfies the following condition: whenever formula_10 are linearly dependent then formula_6. Example. In a Lie algebra, the Lie bracket is an alternating bilinear map. The determinant of a matrix is a multilinear alternating map of the rows or columns of the matrix. Properties. If any component formula_11 of an alternating multilinear map is replaced by formula_12 for any formula_13 and formula_14 in the base ring formula_0, then the value of that map is not changed. Every alternating multilinear map is antisymmetric, meaning that formula_15 or equivalently, formula_16 where formula_17 denotes the permutation group of degree formula_18 and formula_19 is the sign of formula_20. If formula_21 is a unit in the base ring formula_0, then every antisymmetric formula_18-multilinear form is alternating. Alternatization. Given a multilinear map of the form formula_22 the alternating multilinear map formula_23 defined by formula_24 is said to be the alternatization of formula_25. Properties. The alternatization of an alternating multilinear map is formula_21 times itself, and the alternatization of a symmetric multilinear map is zero. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
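The alternatization formula can be written out directly in code. The following Python sketch (helper names are illustrative) sums sgn(σ)·f over all permutations of the arguments; applied to the non-alternating bilinear map f(x, y) = x₀y₁ on R², it produces the 2×2 determinant, which is alternating and antisymmetric as described above.

```python
from itertools import permutations

def parity(perm: tuple[int, ...]) -> int:
    """Sign of a permutation given as a tuple of 0-based indices (counting inversions)."""
    sign = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def alternatization(f, n: int):
    """g(x_1,...,x_n) = sum over sigma in S_n of sgn(sigma) * f(x_sigma(1),...,x_sigma(n))."""
    def g(*xs):
        assert len(xs) == n
        return sum(parity(p) * f(*(xs[i] for i in p)) for p in permutations(range(n)))
    return g

if __name__ == "__main__":
    # A bilinear map on R^2: f(x, y) = x[0]*y[1]  (not alternating by itself).
    f = lambda x, y: x[0] * y[1]
    g = alternatization(f, 2)   # g(x, y) = x[0]*y[1] - y[0]*x[1], the 2x2 determinant
    x, y = (1.0, 2.0), (3.0, 4.0)
    print("g(x, y) =", g(x, y))   # -2.0: determinant of the matrix with rows x, y
    print("g(x, x) =", g(x, x))   #  0.0: alternating
    print("g(y, x) =", g(y, x))   # +2.0: antisymmetric
```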
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "V" }, { "math_id": 2, "text": "W" }, { "math_id": 3, "text": "f: V^n \\to W" }, { "math_id": 4, "text": "1 \\leq i \\leq n-1" }, { "math_id": 5, "text": "x_i = x_{i+1}" }, { "math_id": 6, "text": "f(x_1,\\ldots,x_n) = 0" }, { "math_id": 7, "text": "1 \\leq i \\neq j \\leq n" }, { "math_id": 8, "text": "x_i = x_j" }, { "math_id": 9, "text": "V, W" }, { "math_id": 10, "text": "x_1,\\ldots,x_n" }, { "math_id": 11, "text": "x_i" }, { "math_id": 12, "text": "x_i + c x_j" }, { "math_id": 13, "text": "j \\neq i" }, { "math_id": 14, "text": "c" }, { "math_id": 15, "text": "f(\\dots,x_i,x_{i+1},\\dots)=-f(\\dots,x_{i+1},x_i,\\dots) \\quad \\text{ for any } 1 \\leq i \\leq n-1," }, { "math_id": 16, "text": "f(x_{\\sigma(1)},\\dots,x_{\\sigma(n)}) = (\\sgn\\sigma)f(x_1,\\dots,x_n) \\quad \\text{ for any } \\sigma\\in \\mathrm{S}_n," }, { "math_id": 17, "text": "\\mathrm{S}_n" }, { "math_id": 18, "text": "n" }, { "math_id": 19, "text": "\\sgn\\sigma" }, { "math_id": 20, "text": "\\sigma" }, { "math_id": 21, "text": "n!" }, { "math_id": 22, "text": "f : V^n \\to W," }, { "math_id": 23, "text": "g : V^n \\to W" }, { "math_id": 24, "text": "g(x_1, \\ldots, x_n) \\mathrel{:=} \\sum_{\\sigma \\in S_n} \\sgn(\\sigma)f(x_{\\sigma(1)}, \\ldots, x_{\\sigma(n)})" }, { "math_id": 25, "text": "f" } ]
https://en.wikipedia.org/wiki?curid=8200947
8202081
Cellular Potts model
Computational model of cells and tissues In computational biology, a Cellular Potts model (CPM, also known as the Glazier-Graner-Hogeweg model) is a computational model of cells and tissues. It is used to simulate individual and collective cell behavior, tissue morphogenesis and cancer development. CPM describes cells as deformable objects with a certain volume that can adhere to each other and to the medium in which they live. The formalism can be extended to include cell behaviours such as cell migration, growth and division, and cell signalling. The first CPM was proposed for the simulation of cell sorting by François Graner and James A. Glazier as a modification of a large-Q Potts model. CPM was then popularized by Paulien Hogeweg for studying morphogenesis. Although the model was developed to describe biological cells, it can also be used to model individual parts of a biological cell, or even regions of fluid. Model description. The CPM consists of a rectangular Euclidean lattice, where each cell is a subset of lattice sites sharing the same "cell ID" (analogous to spin in Potts models in physics). Lattice sites that are not occupied by cells are the medium. The dynamics of the model are governed by an energy function: the Hamiltonian which describes the energy of a particular configuration of cells in the lattice. In a basic CPM, this energy results from adhesion between cells and resistance of cells to volume changes. The algorithm for updating CPM minimizes this energy. In order to evolve the model, Metropolis-style updates are performed, that is: a lattice site is chosen at random; the cell ID of a randomly chosen neighbouring site is proposed as its new value; the change in energy formula_0 that this copy would cause is computed; and the copy is accepted if it lowers the energy, or otherwise accepted with probability formula_1, where T is a parameter playing the role of a temperature. The Hamiltonian. The original model proposed by Graner and Glazier contains cells of two types, with different adhesion energies for cells of the same type and cells of a different type. Each cell type also has a different contact energy with the medium, and the cell volume is assumed to remain close to a target value. The Hamiltonian is formulated as: formula_2 where "i", "j" are lattice sites, σi is the cell at site i, τ(σ) is the cell type of cell σ, J is the coefficient determining the adhesion between two cells of types τ(σ),τ(σ'), δ is the Kronecker delta, v(σ) is the volume of cell σ, V(σ) is the target volume, and λ is a Lagrange multiplier determining the strength of the volume constraint. Cells with a lower J value for their membrane contact will stick together more strongly. Therefore, different patterns of cell sorting can be simulated by varying the J values. Extensions. Over time, the CPM has evolved from a specific model of cell sorting to a general framework with many extensions, some of which are partially or entirely off-lattice. Various cell behaviours, such as chemotaxis, elongation and haptotaxis can be incorporated by extending either the Hamiltonian, H, or the change in energy formula_0. Auxiliary sub-lattices may be used to include additional spatial information, such as the concentrations of chemicals. Chemotaxis. In CPM, cells can be made to move in the direction of higher chemokine concentration, by increasing the probability of copying the ID of site "j" into site "i" when the chemokine concentration is higher at "j". This is done by modifying the change in energy formula_0 with a term that is proportional to the difference in concentration at "i" and "j": formula_3 where formula_4 is the strength of chemotactic movement, and formula_5 and formula_6 are the concentrations of the chemokine at site i and j, respectively. 
The chemokine gradient is typically implemented on a separate lattice of the same dimensions as the cell lattice. Multiscale and hybrid modeling using CPM. Core GGH (or CPM) algorithm which defines the evolution of the cellular level structures can easily be integrated with intracellular signaling dynamics, reaction diffusion dynamics and rule based model to account for the processes which happen at lower (or higher) time scale. Open source software Bionetsolver can be used to integrate intracellular dynamics with CPM algorithm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
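A minimal, illustrative Python sketch of the basic model is given below: it evaluates the adhesion-plus-volume Hamiltonian formula_2 on a small periodic grid and performs Metropolis-style copy attempts that are accepted with probability formula_1 when the energy increases. The parameter values (J, λ, target volume, T) and the choice to recompute the full Hamiltonian at every step are simplifications for readability, not values or optimizations from the original model description.

```python
import numpy as np

# Illustrative parameters only: contact energies by cell type, volume constraint, temperature.
J = {(0, 0): 0.0, (0, 1): 16.0, (1, 0): 16.0, (1, 1): 8.0}
LAMBDA, TARGET_VOLUME, T = 1.0, 25, 10.0
rng = np.random.default_rng(0)

def hamiltonian(ids, types):
    """H = sum over neighbouring site pairs of J(tau, tau') (1 - delta(sigma, sigma'))
         + lambda * sum over cells of (v - V)^2, on a periodic lattice."""
    h = 0.0
    for axis in (0, 1):                                  # right and down neighbours cover each pair once
        a, b = ids, np.roll(ids, -1, axis=axis)
        ta, tb = types[a], types[b]
        mismatch = a != b
        h += sum(J[(int(ta[m]), int(tb[m]))] for m in zip(*np.nonzero(mismatch)))
    for cell in np.unique(ids):
        if cell != 0:                                    # cell ID 0 is the medium
            h += LAMBDA * (np.sum(ids == cell) - TARGET_VOLUME) ** 2
    return h

def metropolis_step(ids, types):
    """Propose copying a random neighbour's cell ID into a randomly chosen site."""
    x, y = rng.integers(ids.shape[0]), rng.integers(ids.shape[1])
    dx, dy = ((-1, 0), (1, 0), (0, -1), (0, 1))[rng.integers(4)]
    nx, ny = (x + dx) % ids.shape[0], (y + dy) % ids.shape[1]
    if ids[x, y] == ids[nx, ny]:
        return
    old = ids[x, y]
    h_before = hamiltonian(ids, types)                   # full recomputation, simple but slow
    ids[x, y] = ids[nx, ny]
    d_h = hamiltonian(ids, types) - h_before
    if d_h > 0 and rng.random() >= np.exp(-d_h / T):
        ids[x, y] = old                                  # reject the copy

if __name__ == "__main__":
    ids = np.zeros((20, 20), dtype=int)
    ids[8:13, 8:13] = 1                                  # one 5x5 cell (ID 1) in the medium (ID 0)
    types = np.array([0, 1])                             # type of cell ID 0 (medium) and cell ID 1
    for _ in range(2000):
        metropolis_step(ids, types)
    print("cell volume after 2000 steps:", int(np.sum(ids == 1)))
```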
[ { "math_id": 0, "text": "\\Delta H " }, { "math_id": 1, "text": " e^{-\\Delta H / T}" }, { "math_id": 2, "text": "\n\\begin{align}\nH = \\sum_{i,j\\text{ neighbors}}J\\left(\\tau(\\sigma_i),\\tau(\\sigma_j)\\right)\\left(1-\\delta(\\sigma_i,\\sigma_j)\\right)\n + \\lambda\\sum_{\\sigma_i}\\left(v(\\sigma_i)- V(\\sigma_i)\\right)^2,\\\\\n\\end{align}\n" }, { "math_id": 3, "text": "\n\\begin{align}\n\\Delta H'=\\Delta H-\\mu(C_i-C_j)\\\\\n\\end{align}\n" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "C_i" }, { "math_id": 6, "text": "C_j" } ]
https://en.wikipedia.org/wiki?curid=8202081
8202435
Mason equation
The Mason equation is an approximate analytical expression for the growth (due to condensation) or evaporation of a water droplet; it is due to the meteorologist B. J. Mason. The expression is found by recognising that mass diffusion towards the water drop in a supersaturated environment transports energy as latent heat, and this has to be balanced by the diffusion of sensible heat back across the boundary layer (and by the energy needed to heat the drop, but for a cloud-sized drop this last term is usually small). Equation. In Mason's formulation the changes in temperature across the boundary layer can be related to the changes in saturated vapour pressure by the Clausius–Clapeyron relation; the two energy transport terms must be nearly equal but opposite in sign and so this sets the interface temperature of the drop. The resulting expression for the growth rate is significantly lower than that expected if the drop were not warmed by the latent heat. Thus if the drop has a size "r", the inward mass flow rate is given by formula_0 and the sensible heat flux by formula_1 and the final expression for the growth rate is formula_2 where rp is the drop radius, Dv (or D) is the diffusion coefficient of water vapour in air, ρ0 and ρw are the water vapour densities in the ambient air and at the drop surface, K is the thermal conductivity of the air, T0 and Tw are the temperatures in the ambient air and at the drop surface, S is the supersaturation (saturation ratio) far from the drop, L is the latent heat of condensation, R is the gas constant of water vapour, ρl is the density of liquid water, and ρv is the saturated water vapour density.
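The balance between latent-heat release and sensible-heat conduction that fixes the drop's interface temperature can be illustrated with a short Python sketch. The property values used (L, Dv, K, the vapour-density excess and the drop radius) are rough, assumed numbers for a cloud droplet near 10 °C, chosen only to show the orders of magnitude; they are not taken from the article.

```python
import math

def mass_flow_rate(r_p, D_v, rho_0, rho_w):
    """Inward mass flow rate dM/dt = 4*pi*r_p*D_v*(rho_0 - rho_w)."""
    return 4.0 * math.pi * r_p * D_v * (rho_0 - rho_w)

def sensible_heat_flux(r_p, K, T_0, T_w):
    """Sensible heat flux dQ/dt = 4*pi*r_p*K*(T_0 - T_w)."""
    return 4.0 * math.pi * r_p * K * (T_0 - T_w)

if __name__ == "__main__":
    # Assumed illustrative values (SI units): a 10 micrometre drop, a vapour-density
    # excess of 2e-4 kg/m^3, and typical L, D_v and K for moist air near 10 degC.
    r_p, D_v, K, L = 10e-6, 2.2e-5, 2.4e-2, 2.5e6
    rho_0, rho_w = 9.6e-3, 9.4e-3
    T_0 = 283.0

    # Balancing latent heat release against conduction, L*dM/dt = -dQ/dt,
    # fixes the drop surface temperature: T_w - T_0 = L*D_v*(rho_0 - rho_w)/K.
    dT = L * D_v * (rho_0 - rho_w) / K
    T_w = T_0 + dT
    print(f"drop warms by about {dT:.2f} K (surface temperature ~ {T_w:.2f} K)")
    print("latent heat released :", L * mass_flow_rate(r_p, D_v, rho_0, rho_w), "W")
    print("sensible heat removed:", -sensible_heat_flux(r_p, K, T_0, T_w), "W")
```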
[ { "math_id": 0, "text": " \\frac{dM}{dt} = 4 \\pi r_{p} D_{v} (\\rho_{0} - \\rho_{w} ) \\," }, { "math_id": 1, "text": " \\frac{dQ}{dt} = 4 \\pi r_{p} K (T_{0} - T_{w}) \\," }, { "math_id": 2, "text": "r \\frac{dr}{dt} = \\frac {(S-1)} { [(L/RT-1) \\cdot L \\rho_l /K T_0 + (\\rho_l R T_0)/ (D \\rho_v) ]}" } ]
https://en.wikipedia.org/wiki?curid=8202435
820253
Helmholtz decomposition
Certain vector fields are the sum of an irrotational and a solenoidal vector field In physics and mathematics, the Helmholtz decomposition theorem or the fundamental theorem of vector calculus states that certain differentiable vector fields can be resolved into the sum of an irrotational (curl-free) vector field and a solenoidal (divergence-free) vector field. In physics, often only the decomposition of sufficiently smooth, rapidly decaying vector fields in three dimensions is discussed. It is named after Hermann von Helmholtz. Definition. For a vector field formula_0 defined on a domain formula_1, a Helmholtz decomposition is a pair of vector fields formula_2 and formula_3 such that: formula_4 Here, formula_5 is a scalar potential, formula_6 is its gradient, and formula_7 is the divergence of the vector field formula_8. The irrotational vector field formula_9 is called a "gradient field" and formula_8 is called a "solenoidal field" or "rotation field". This decomposition does not exist for all vector fields and is not unique. History. The Helmholtz decomposition in three dimensions was first described in 1849 by George Gabriel Stokes for a theory of diffraction. Hermann von Helmholtz published his paper on some hydrodynamic basic equations in 1858, which was part of his research on the Helmholtz's theorems describing the motion of fluid in the vicinity of vortex lines. Their derivation required the vector fields to decay sufficiently fast at infinity. Later, this condition could be relaxed, and the Helmholtz decomposition could be extended to higher dimensions. For Riemannian manifolds, the Helmholtz-Hodge decomposition using differential geometry and tensor calculus was derived. The decomposition has become an important tool for many problems in theoretical physics, but has also found applications in animation, computer vision as well as robotics. Three-dimensional space. Many physics textbooks restrict the Helmholtz decomposition to the three-dimensional space and limit its application to vector fields that decay sufficiently fast at infinity or to bump function that are defined on a bounded domain. Then, a vector potential formula_10 can be defined, such that the rotation field is given by formula_11, using the curl of a vector field. Let formula_12 be a vector field on a bounded domain formula_13, which is twice continuously differentiable inside formula_14, and let formula_15 be the surface that encloses the domain formula_14. Then formula_12 can be decomposed into a curl-free component and a divergence-free component as follows: formula_16 where formula_17 and formula_18 is the nabla operator with respect to formula_19, not formula_20. If formula_21 and is therefore unbounded, and formula_12 vanishes faster than formula_22 as formula_23, then one has formula_24 This holds in particular if formula_25 is twice continuously differentiable in formula_26 and of bounded support. Derivation. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Suppose we have a vector function formula_27 of which we know the curl, formula_28, and the divergence, formula_29, in the domain and the fields on the boundary. 
Writing the function using delta function in the form formula_30 where formula_31 is the Laplace operator, we have formula_32 where we have used the definition of the vector Laplacian: formula_33 differentiation/integration with respect to formula_34by formula_35 and in the last line, linearity of function arguments: formula_36 Then using the vectorial identities formula_37 we get formula_38 Thanks to the divergence theorem the equation can be rewritten as formula_39 with outward surface normal formula_40. Defining formula_41 formula_42 we finally obtain formula_43 Solution space. If formula_44 is a Helmholtz decomposition of formula_25, then formula_45 is another decomposition if, and only if, formula_46 and formula_47 where * formula_48 is a harmonic scalar field, * formula_49 is a vector field which fulfills formula_50 * formula_51 is a scalar field. Proof: Set formula_52 and formula_53. According to the definition of the Helmholtz decomposition, the condition is equivalent to formula_54. Taking the divergence of each member of this equation yields formula_55, hence formula_56 is harmonic. Conversely, given any harmonic function formula_56, formula_57 is solenoidal since formula_58 Thus, according to the above section, there exists a vector field formula_59 such that formula_60. If formula_61 is another such vector field, then formula_62 fulfills formula_63, hence formula_64 for some scalar field formula_65. Fields with prescribed divergence and curl. The term "Helmholtz theorem" can also refer to the following. Let C be a solenoidal vector field and "d" a scalar field on R3 which are sufficiently smooth and which vanish faster than 1/"r"2 at infinity. Then there exists a vector field F such that formula_66 if additionally the vector field F vanishes as "r" → ∞, then F is unique. In other words, a vector field can be constructed with both a specified divergence and a specified curl, and if it also vanishes at infinity, it is uniquely specified by its divergence and curl. This theorem is of great importance in electrostatics, since Maxwell's equations for the electric and magnetic fields in the static case are of exactly this type. The proof is by a construction generalizing the one given above: we set formula_67 where formula_68 represents the Newtonian potential operator. (When acting on a vector field, such as ∇ × F, it is defined to act on each component.) Weak formulation. The Helmholtz decomposition can be generalized by reducing the regularity assumptions (the need for the existence of strong derivatives). Suppose Ω is a bounded, simply-connected, Lipschitz domain. Every square-integrable vector field u ∈ ("L"2(Ω))3 has an orthogonal decomposition: formula_69 where φ is in the Sobolev space "H"1(Ω) of square-integrable functions on Ω whose partial derivatives defined in the distribution sense are square integrable, and A ∈ "H"(curl, Ω), the Sobolev space of vector fields consisting of square integrable vector fields with square integrable curl. For a slightly smoother vector field u ∈ "H"(curl, Ω), a similar decomposition holds: formula_70 where "φ" ∈ "H"1(Ω), v ∈ ("H"1(Ω))"d". Derivation from the Fourier transform. Note that in the theorem stated here, we have imposed the condition that if formula_12 is not defined on a bounded domain, then formula_12 shall decay faster than formula_22. Thus, the Fourier transform of formula_12, denoted as formula_9, is guaranteed to exist. 
We apply the convention formula_71 The Fourier transform of a scalar field is a scalar field, and the Fourier transform of a vector field is a vector field of same dimension. Now consider the following scalar and vector fields: formula_72 Hence formula_73 Longitudinal and transverse fields. A terminology often used in physics refers to the curl-free component of a vector field as the longitudinal component and the divergence-free component as the transverse component. This terminology comes from the following construction: Compute the three-dimensional Fourier transform formula_74 of the vector field formula_12. Then decompose this field, at each point k, into two components, one of which points longitudinally, i.e. parallel to k, the other of which points in the transverse direction, i.e. perpendicular to k. So far, we have formula_75 formula_76 formula_77 Now we apply an inverse Fourier transform to each of these components. Using properties of Fourier transforms, we derive: formula_78 formula_79 formula_80 Since formula_81 and formula_82, we can get formula_83 formula_84 so this is indeed the Helmholtz decomposition. Generalization to higher dimensions. Matrix approach. The generalization to formula_85 dimensions cannot be done with a vector potential, since the rotation operator and the cross product are defined (as vectors) only in three dimensions. Let formula_12 be a vector field on a bounded domain formula_86 which decays faster than formula_87 for formula_88 and formula_89. The scalar potential is defined similar to the three dimensional case as: formula_90 where as the integration kernel formula_91 is again the fundamental solution of Laplace's equation, but in d-dimensional space: formula_92 with formula_93 the volume of the d-dimensional unit balls and formula_94 the gamma function. For formula_95, formula_96 is just equal to formula_97, yielding the same prefactor as above. The rotational potential is an antisymmetric matrix with the elements: formula_98 Above the diagonal are formula_99 entries which occur again mirrored at the diagonal, but with a negative sign. In the three-dimensional case, the matrix elements just correspond to the components of the vector potential formula_100. However, such a matrix potential can be written as a vector only in the three-dimensional case, because formula_101 is valid only for formula_95. As in the three-dimensional case, the gradient field is defined as formula_102 The rotational field, on the other hand, is defined in the general case as the row divergence of the matrix: formula_103 In three-dimensional space, this is equivalent to the rotation of the vector potential. Tensor approach. In a formula_85-dimensional vector space with formula_104, formula_105 can be replaced by the appropriate Green's function for the Laplacian, defined by formula_106 where Einstein summation convention is used for the index formula_107. For example, formula_108 in 2D. Following the same steps as above, we can write formula_109 where formula_110 is the Kronecker delta (and the summation convention is again used). In place of the definition of the vector Laplacian used above, we now make use of an identity for the Levi-Civita symbol formula_111, formula_112 which is valid in formula_113 dimensions, where formula_114 is a formula_115-component multi-index. This gives formula_116 We can therefore write formula_117 where formula_118 Note that the vector potential is replaced by a rank-formula_115 tensor in formula_85 dimensions. 
Because formula_119 is a function of only formula_120, one can replace formula_121, giving formula_122 Integration by parts can then be used to give formula_123 where formula_124 is the boundary of formula_14. These expressions are analogous to those given above for three-dimensional space. For a further generalization to manifolds, see the discussion of Hodge decomposition below. Differential forms. The Hodge decomposition is closely related to the Helmholtz decomposition, generalizing from vector fields on R3 to differential forms on a Riemannian manifold "M". Most formulations of the Hodge decomposition require "M" to be compact. Since this is not true of R3, the Hodge decomposition theorem is not strictly a generalization of the Helmholtz theorem. However, the compactness restriction in the usual formulation of the Hodge decomposition can be replaced by suitable decay assumptions at infinity on the differential forms involved, giving a proper generalization of the Helmholtz theorem. Extensions to fields not decaying at infinity. Most textbooks only deal with vector fields decaying faster than formula_87 with formula_125 at infinity. However, Otto Blumenthal showed in 1905 that an adapted integration kernel can be used to integrate fields decaying faster than formula_87 with formula_126, which is substantially less strict. To achieve this, the kernel formula_91 in the convolution integrals has to be replaced by formula_127. With even more complex integration kernels, solutions can be found even for divergent functions that need not grow faster than polynomial. For all analytic vector fields that need not go to zero even at infinity, methods based on partial integration and the Cauchy formula for repeated integration can be used to compute closed-form solutions of the rotation and scalar potentials, as in the case of multivariate polynomial, sine, cosine, and exponential functions. Uniqueness of the solution. In general, the Helmholtz decomposition is not uniquely defined. A harmonic function formula_128 is a function that satisfies formula_129. By adding formula_128 to the scalar potential formula_130, a different Helmholtz decomposition can be obtained: formula_131 For vector fields formula_12, decaying at infinity, it is a plausible choice that scalar and rotation potentials also decay at infinity. Because formula_132 is the only harmonic function with this property, which follows from Liouville's theorem, this guarantees the uniqueness of the gradient and rotation fields. This uniqueness does not apply to the potentials: In the three-dimensional case, the scalar and vector potential jointly have four components, whereas the vector field has only three. The vector field is invariant to gauge transformations and the choice of appropriate potentials known as gauge fixing is the subject of gauge theory. Important examples from physics are the Lorenz gauge condition and the Coulomb gauge. An alternative is to use the poloidal–toroidal decomposition. Applications. Electrodynamics. The Helmholtz theorem is of particular interest in electrodynamics, since it can be used to write Maxwell's equations in the potential image and solve them more easily. The Helmholtz decomposition can be used to prove that, given electric current density and charge density, the electric field and the magnetic flux density can be determined. They are unique if the densities vanish at infinity and one assumes the same for the potentials. Fluid dynamics. 
In fluid dynamics, the Helmholtz projection plays an important role, especially for the solvability theory of the Navier-Stokes equations. If the Helmholtz projection is applied to the linearized incompressible Navier-Stokes equations, the Stokes equation is obtained. This depends only on the velocity of the particles in the flow, but no longer on the static pressure, allowing the equation to be reduced to one unknown. However, both equations, the Stokes and linearized equations, are equivalent. The operator formula_133 is called the Stokes operator. Dynamical systems theory. In the theory of dynamical systems, Helmholtz decomposition can be used to determine "quasipotentials" as well as to compute Lyapunov functions in some cases. For some dynamical systems such as the Lorenz system (Edward N. Lorenz, 1963), a simplified model for atmospheric convection, a closed-form expression of the Helmholtz decomposition can be obtained: formula_134 The Helmholtz decomposition of formula_27, with the scalar potential formula_135 is given as: formula_136 formula_137 The quadratic scalar potential provides motion in the direction of the coordinate origin, which is responsible for the stable fix point for some parameter range. For other parameters, the rotation field ensures that a strange attractor is created, causing the model to exhibit a butterfly effect. Medical Imaging. In magnetic resonance elastography, a variant of MR imaging where mechanical waves are used to probe the viscoelasticity of organs, the Helmholtz decomposition is sometimes used to separate the measured displacement fields into its shear component (divergence-free) and its compression component (curl-free). In this way, the complex shear modulus can be calculated without contributions from compression waves. Computer animation and robotics. The Helmholtz decomposition is also used in the field of computer engineering. This includes robotics, image reconstruction but also computer animation, where the decomposition is used for realistic visualization of fluids or vector fields. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
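The Fourier-space construction of the longitudinal and transverse fields described above translates directly into a numerical recipe for periodic fields sampled on a grid. The Python sketch below (the function name and the test field are illustrative, and a periodic domain is assumed so that the FFT applies) projects the transformed field onto and orthogonally to the wave vector k, then checks the split on a field assembled from a known gradient part and a known divergence-free part.

```python
import numpy as np

def helmholtz_decomposition_fft(F):
    """Split a periodic vector field F of shape (3, N, N, N) into a curl-free
    (longitudinal) part and a divergence-free (transverse) part by projecting
    its Fourier transform onto and orthogonally to the wave vector k."""
    n = F.shape[1]
    F_hat = np.fft.fftn(F, axes=(1, 2, 3))
    k = np.stack(np.meshgrid(*[np.fft.fftfreq(n)] * 3, indexing="ij"))  # shape (3, N, N, N)
    k2 = np.sum(k**2, axis=0)
    k2[0, 0, 0] = 1.0                            # avoid dividing by zero at k = 0
    k_dot_F = np.sum(k * F_hat, axis=0)
    F_long_hat = k * (k_dot_F / k2)              # longitudinal projection (k . F_hat) k / |k|^2
    F_long_hat[:, 0, 0, 0] = 0.0                 # leave the constant k = 0 mode out of this part
    F_long = np.real(np.fft.ifftn(F_long_hat, axes=(1, 2, 3)))
    F_trans = F - F_long                         # remainder: transverse part (plus any constant mode)
    return F_long, F_trans

if __name__ == "__main__":
    # Test field on a periodic grid: a gradient part plus a divergence-free part.
    n = 32
    x, y, z = np.meshgrid(*[np.linspace(0, 2 * np.pi, n, endpoint=False)] * 3, indexing="ij")
    grad_part = np.stack([np.cos(x), np.zeros_like(x), np.zeros_like(x)])  # = grad(sin x), curl-free
    curl_part = np.stack([np.zeros_like(x), np.sin(x), np.zeros_like(x)])  # divergence-free
    F = grad_part + curl_part
    F_long, F_trans = helmholtz_decomposition_fft(F)
    print("max error, longitudinal part:", np.max(np.abs(F_long - grad_part)))
    print("max error, transverse part  :", np.max(np.abs(F_trans - curl_part)))
```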
[ { "math_id": 0, "text": "\\mathbf{F} \\in C^1(V, \\mathbb{R}^n)" }, { "math_id": 1, "text": "V \\subseteq \\mathbb{R}^n" }, { "math_id": 2, "text": "\\mathbf{G} \\in C^1(V, \\mathbb{R}^n)" }, { "math_id": 3, "text": "\\mathbf{R} \\in C^1(V, \\mathbb{R}^n)" }, { "math_id": 4, "text": "\n\\begin{align}\n\\mathbf{F}(\\mathbf{r}) &= \\mathbf{G}(\\mathbf{r}) + \\mathbf{R}(\\mathbf{r}), \\\\\n\\mathbf{G}(\\mathbf{r}) &= - \\nabla \\Phi(\\mathbf{r}), \\\\\n\\nabla \\cdot \\mathbf{R}(\\mathbf{r}) &= 0.\n\\end{align}\n" }, { "math_id": 5, "text": "\\Phi \\in C^2(V, \\mathbb{R})" }, { "math_id": 6, "text": "\\nabla \\Phi" }, { "math_id": 7, "text": "\\nabla \\cdot \\mathbf{R}" }, { "math_id": 8, "text": "\\mathbf{R}" }, { "math_id": 9, "text": "\\mathbf{G}" }, { "math_id": 10, "text": "A" }, { "math_id": 11, "text": "\\mathbf{R} = \\nabla \\times \\mathbf{A}" }, { "math_id": 12, "text": "\\mathbf{F}" }, { "math_id": 13, "text": "V\\subseteq\\mathbb{R}^3" }, { "math_id": 14, "text": "V" }, { "math_id": 15, "text": "S" }, { "math_id": 16, "text": "\\mathbf{F}=-\\nabla \\Phi+\\nabla\\times\\mathbf{A}," }, { "math_id": 17, "text": "\n\\begin{align}\n\\Phi(\\mathbf{r}) & =\\frac 1 {4\\pi} \\int_V \\frac{\\nabla'\\cdot\\mathbf{F} (\\mathbf{r}')}{|\\mathbf{r} -\\mathbf{r}'|} \\, \\mathrm{d}V' -\\frac 1 {4\\pi} \\oint_S \\mathbf{\\hat{n}}' \\cdot \\frac{\\mathbf{F} (\\mathbf{r}')}{|\\mathbf{r}-\\mathbf{r}'|} \\, \\mathrm{d}S' \\\\[8pt]\n\\mathbf{A}(\\mathbf{r}) & =\\frac 1 {4\\pi} \\int_V \\frac{\\nabla' \\times \\mathbf{F}(\\mathbf{r}')}{|\\mathbf{r}-\\mathbf{r}'|} \\, \\mathrm{d}V' -\\frac 1 {4\\pi} \\oint_S \\mathbf{\\hat{n}}'\\times\\frac{\\mathbf{F} (\\mathbf{r}')}{|\\mathbf{r}-\\mathbf{r}'|} \\, \\mathrm{d}S'\n\\end{align}\n" }, { "math_id": 18, "text": "\\nabla'" }, { "math_id": 19, "text": "\\mathbf{r'}" }, { "math_id": 20, "text": " \\mathbf{r} " }, { "math_id": 21, "text": "V = \\R^3" }, { "math_id": 22, "text": "1/r" }, { "math_id": 23, "text": "r \\to \\infty" }, { "math_id": 24, "text": "\\begin{align}\n\\Phi(\\mathbf{r}) & =\\frac{1}{4\\pi}\\int_{\\R^3} \\frac{\\nabla' \\cdot \\mathbf{F} (\\mathbf{r}')}{|\\mathbf{r}-\\mathbf{r}'|} \\, \\mathrm{d}V' \\\\[8pt]\n\\mathbf{A} (\\mathbf{r}) & =\\frac{1}{4\\pi}\\int_{\\R^3} \\frac{\\nabla'\\times\\mathbf{F} (\\mathbf{r}')}{|\\mathbf{r}-\\mathbf{r}'|} \\, \\mathrm{d}V'\n\\end{align}" }, { "math_id": 25, "text": "\\mathbf F" }, { "math_id": 26, "text": "\\mathbb R^3" }, { "math_id": 27, "text": "\\mathbf{F}(\\mathbf{r})" }, { "math_id": 28, "text": "\\nabla\\times\\mathbf{F}" }, { "math_id": 29, "text": "\\nabla\\cdot\\mathbf{F}" }, { "math_id": 30, "text": "\\delta^3(\\mathbf{r}-\\mathbf{r}')=-\\frac 1 {4\\pi} \\nabla^2 \\frac{1}{|\\mathbf{r}-\\mathbf{r}'|}\\, ," }, { "math_id": 31, "text": "\\nabla^2:=\\nabla\\cdot\\nabla" }, { "math_id": 32, "text": "\\begin{align}\n\\mathbf{F}(\\mathbf{r}) &= \\int_V \\mathbf{F}\\left(\\mathbf{r}'\\right)\\delta^3 (\\mathbf{r}-\\mathbf{r}') \\mathrm{d}V' \\\\\n&=\\int_V\\mathbf{F}(\\mathbf{r}')\\left(-\\frac{1}{4\\pi}\\nabla^2\\frac{1}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\right)\\mathrm{d}V' \\\\\n&=-\\frac{1}{4\\pi}\\nabla^2 \\int_V \\frac{\\mathbf{F}(\\mathbf{r}')}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V' 
\\\\\n&=-\\frac{1}{4\\pi}\\left[\\nabla\\left(\\nabla\\cdot\\int_V\\frac{\\mathbf{F}(\\mathbf{r}')}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\\right)-\\nabla\\times\\left(\\nabla\\times\\int_V\\frac{\\mathbf{F}(\\mathbf{r}')}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\\right)\\right] \\\\\n&= -\\frac{1}{4\\pi} \\left[\\nabla\\left(\\int_V\\mathbf{F}(\\mathbf{r}')\\cdot\\nabla\\frac{1}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\\right)+\\nabla\\times\\left(\\int_V\\mathbf{F}(\\mathbf{r}')\\times\\nabla\\frac{1}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\\right)\\right] \\\\\n&=-\\frac{1}{4\\pi}\\left[-\\nabla\\left(\\int_V\\mathbf{F}(\\mathbf{r}')\\cdot\\nabla'\\frac{1}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\\right)-\\nabla\\times\\left(\\int_V\\mathbf{F} (\\mathbf{r}')\\times\\nabla'\\frac{1}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\\right)\\right]\n\\end{align}" }, { "math_id": 33, "text": "\\nabla^{2}\\mathbf{a}=\\nabla (\\nabla\\cdot\\mathbf{a})-\\nabla\\times (\\nabla\\times\\mathbf{a}) \\ ," }, { "math_id": 34, "text": "\\mathbf r'" }, { "math_id": 35, "text": "\\nabla'/\\mathrm dV'," }, { "math_id": 36, "text": " \\nabla\\frac{1}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}=-\\nabla'\\frac{1}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\ ." }, { "math_id": 37, "text": "\\begin{align}\n\\mathbf{a}\\cdot\\nabla\\psi &=-\\psi(\\nabla\\cdot\\mathbf{a})+\\nabla\\cdot (\\psi\\mathbf{a}) \\\\\n\\mathbf{a}\\times\\nabla\\psi &=\\psi(\\nabla\\times\\mathbf{a})-\\nabla \\times (\\psi\\mathbf{a})\n\\end{align}" }, { "math_id": 38, "text": "\\begin{align}\n\\mathbf{F}(\\mathbf{r})=-\\frac{1}{4\\pi}\\bigg[\n&-\\nabla\\left(-\\int_{V}\\frac{\\nabla'\\cdot\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'+\\int_{V}\\nabla'\\cdot\\frac{\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\\right)\n\\\\&\n-\\nabla\\times\\left(\\int_{V}\\frac{\\nabla'\\times\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\n- \\int_{V}\\nabla'\\times\\frac{\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\\right)\\bigg].\n\\end{align}" }, { "math_id": 39, "text": "\\begin{align}\n\\mathbf{F} (\\mathbf{r}) \n&=\n-\\frac{1}{4\\pi}\n\\bigg[\n -\\nabla\\left(\n -\\int_{V}\n \\frac{\n \\nabla'\\cdot\\mathbf{F}\\left(\\mathbf{r}'\\right)\n }{\n \\left|\\mathbf{r}-\\mathbf{r}'\\right|\n } \\mathrm{d}V'\n +\n \\oint_{S}\\mathbf{\\hat{n}}'\\cdot\n \\frac{\n \\mathbf{F}\\left(\\mathbf{r}'\\right)\n }{\n \\left|\\mathbf{r}-\\mathbf{r}'\\right|\n }\\mathrm{d}S'\n \\right)\n\\\\\n&\\qquad\\qquad\n-\\nabla\\times\\left(\\int_{V}\\frac{\\nabla'\\times\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'\n-\\oint_{S}\\mathbf{\\hat{n}}'\\times\\frac{\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}S'\\right)\n\\bigg]\n\\\\\n&= \n-\\nabla\\left[\n \\frac{1}{4\\pi}\\int_{V}\n \\frac{\n \\nabla'\\cdot\\mathbf{F}\\left(\\mathbf{r}'\\right)\n }{\\left|\n \\mathbf{r}-\\mathbf{r}'\n \\right|}\n \\mathrm{d}V' \n - \n \\frac{1}{4\\pi} \n \\oint_{S}\\mathbf{\\hat{n}}' \\cdot\n \\frac{\n \\mathbf{F}\\left(\\mathbf{r}'\\right)\n }{\n \\left|\n \\mathbf{r}-\\mathbf{r}'\n \\right|\n }\n \\mathrm{d}S'\n\\right]\n\\\\\n&\\quad\n+\n\\nabla\\times\n\\left[\n \\frac{1}{4\\pi}\\int_{V}\n \\frac{\n \\nabla 
'\\times\\mathbf{F}\\left(\\mathbf{r}'\\right)\n }{\n \\left|\n \\mathbf{r}-\\mathbf{r}'\n \\right|\n }\n \\mathrm{d}V' \n -\n \\frac{1}{4\\pi}\\oint_{S}\n \\mathbf{\\hat{n}}'\n \\times\n \\frac{\n \\mathbf{F}\\left(\\mathbf{r}'\\right)\n }{\n \\left|\n \\mathbf{r}-\\mathbf{r}'\n \\right|\n }\n \\mathrm{d}S'\n\\right]\n\\end{align}" }, { "math_id": 40, "text": " \\mathbf{\\hat{n}}' " }, { "math_id": 41, "text": "\\Phi(\\mathbf{r})\\equiv\\frac{1}{4\\pi}\\int_{V}\\frac{\\nabla'\\cdot\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'-\\frac{1}{4\\pi}\\oint_{S}\\mathbf{\\hat{n}}'\\cdot\\frac{\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}S'" }, { "math_id": 42, "text": "\\mathbf{A}(\\mathbf{r})\\equiv\\frac{1}{4\\pi}\\int_{V}\\frac{\\nabla'\\times\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'-\\frac{1}{4\\pi}\\oint_{S}\\mathbf{\\hat{n}}'\\times\\frac{\\mathbf{F}\\left(\\mathbf{r}'\\right)}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}S'" }, { "math_id": 43, "text": "\\mathbf{F}=-\\nabla\\Phi+\\nabla\\times\\mathbf{A}." }, { "math_id": 44, "text": "(\\Phi_1, {\\mathbf A_1})" }, { "math_id": 45, "text": "(\\Phi_2, {\\mathbf A_2})" }, { "math_id": 46, "text": "\\Phi_1-\\Phi_2 = \\lambda \\quad " }, { "math_id": 47, "text": "\\quad \\mathbf{A}_1 - \\mathbf{A}_2 = {\\mathbf A}_\\lambda + \\nabla \\varphi," }, { "math_id": 48, "text": " \\lambda" }, { "math_id": 49, "text": " {\\mathbf A}_\\lambda " }, { "math_id": 50, "text": "\\nabla\\times {\\mathbf A}_\\lambda = \\nabla \\lambda," }, { "math_id": 51, "text": " \\varphi " }, { "math_id": 52, "text": "\\lambda = \\Phi_2 - \\Phi_1" }, { "math_id": 53, "text": "{\\mathbf B = A_2 - A_1}" }, { "math_id": 54, "text": " -\\nabla \\lambda + \\nabla \\times \\mathbf B = 0 " }, { "math_id": 55, "text": "\\nabla^2 \\lambda = 0" }, { "math_id": 56, "text": "\\lambda" }, { "math_id": 57, "text": "\\nabla \\lambda " }, { "math_id": 58, "text": "\\nabla\\cdot (\\nabla \\lambda) = \\nabla^2 \\lambda = 0." 
}, { "math_id": 59, "text": "{\\mathbf A}_\\lambda" }, { "math_id": 60, "text": "\\nabla \\lambda = \\nabla\\times {\\mathbf A}_\\lambda" }, { "math_id": 61, "text": "{\\mathbf A'}_\\lambda" }, { "math_id": 62, "text": "\\mathbf C = {\\mathbf A}_\\lambda - {\\mathbf A'}_\\lambda" }, { "math_id": 63, "text": "\\nabla \\times {\\mathbf C} = 0" }, { "math_id": 64, "text": "C = \\nabla \\varphi" }, { "math_id": 65, "text": "\\varphi" }, { "math_id": 66, "text": "\\nabla \\cdot \\mathbf{F} = d \\quad \\text{ and } \\quad \\nabla \\times \\mathbf{F} = \\mathbf{C};" }, { "math_id": 67, "text": "\\mathbf{F} = \\nabla(\\mathcal{G} (d)) - \\nabla \\times (\\mathcal{G}(\\mathbf{C}))," }, { "math_id": 68, "text": "\\mathcal{G}" }, { "math_id": 69, "text": "\\mathbf{u}=\\nabla\\varphi+\\nabla \\times \\mathbf{A}" }, { "math_id": 70, "text": "\\mathbf{u}=\\nabla\\varphi+\\mathbf{v}" }, { "math_id": 71, "text": "\\mathbf{F}(\\mathbf{r}) = \\iiint \\mathbf{G}(\\mathbf{k}) e^{i\\mathbf{k} \\cdot \\mathbf{r}} dV_k " }, { "math_id": 72, "text": "\\begin{align}\nG_\\Phi(\\mathbf{k}) &= i \\frac{\\mathbf{k} \\cdot \\mathbf{G}(\\mathbf{k})}{\\|\\mathbf{k}\\|^2} \\\\\n\\mathbf{G}_\\mathbf{A}(\\mathbf{k}) &= i \\frac{\\mathbf{k} \\times \\mathbf{G}(\\mathbf{k})}{\\|\\mathbf{k}\\|^2} \\\\ [8pt]\n\\Phi(\\mathbf{r}) &= \\iiint G_\\Phi(\\mathbf{k}) e^{i \\mathbf{k} \\cdot \\mathbf{r}} dV_k \\\\\n\\mathbf{A}(\\mathbf{r}) &= \\iiint \\mathbf{G}_\\mathbf{A}(\\mathbf{k}) e^{i \\mathbf{k} \\cdot \\mathbf{r}} dV_k\n\\end{align} " }, { "math_id": 73, "text": "\\begin{align}\n\\mathbf{G}(\\mathbf{k}) &= - i \\mathbf{k} G_\\Phi(\\mathbf{k}) + i \\mathbf{k} \\times \\mathbf{G}_\\mathbf{A}(\\mathbf{k}) \\\\ [6pt]\n\\mathbf{F}(\\mathbf{r}) &= -\\iiint i \\mathbf{k} G_\\Phi(\\mathbf{k}) e^{i \\mathbf{k} \\cdot \\mathbf{r}} dV_k + \\iiint i \\mathbf{k} \\times \\mathbf{G}_\\mathbf{A}(\\mathbf{k}) e^{i \\mathbf{k} \\cdot \\mathbf{r}} dV_k \\\\\n&= - \\nabla \\Phi(\\mathbf{r}) + \\nabla \\times \\mathbf{A}(\\mathbf{r})\n\\end{align}" }, { "math_id": 74, "text": "\\hat\\mathbf{F}" }, { "math_id": 75, "text": "\\hat\\mathbf{F} (\\mathbf{k}) = \\hat\\mathbf{F}_t (\\mathbf{k}) + \\hat\\mathbf{F}_l (\\mathbf{k})" }, { "math_id": 76, "text": "\\mathbf{k} \\cdot \\hat\\mathbf{F}_t(\\mathbf{k}) = 0." }, { "math_id": 77, "text": "\\mathbf{k} \\times \\hat\\mathbf{F}_l(\\mathbf{k}) = \\mathbf{0}." 
}, { "math_id": 78, "text": "\\mathbf{F}(\\mathbf{r}) = \\mathbf{F}_t(\\mathbf{r})+\\mathbf{F}_l(\\mathbf{r})" }, { "math_id": 79, "text": "\\nabla \\cdot \\mathbf{F}_t (\\mathbf{r}) = 0" }, { "math_id": 80, "text": "\\nabla \\times \\mathbf{F}_l (\\mathbf{r}) = \\mathbf{0}" }, { "math_id": 81, "text": "\\nabla\\times(\\nabla\\Phi)=0" }, { "math_id": 82, "text": "\\nabla\\cdot(\\nabla\\times\\mathbf{A})=0" }, { "math_id": 83, "text": "\\mathbf{F}_t=\\nabla\\times\\mathbf{A}=\\frac{1}{4\\pi}\\nabla\\times\\int_V\\frac{\\nabla'\\times\\mathbf{F}}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'" }, { "math_id": 84, "text": "\\mathbf{F}_l=-\\nabla\\Phi=-\\frac{1}{4\\pi}\\nabla\\int_V\\frac{\\nabla'\\cdot\\mathbf{F}}{\\left|\\mathbf{r}-\\mathbf{r}'\\right|}\\mathrm{d}V'" }, { "math_id": 85, "text": "d" }, { "math_id": 86, "text": "V\\subseteq\\mathbb{R}^d" }, { "math_id": 87, "text": "|\\mathbf{r}|^{-\\delta}" }, { "math_id": 88, "text": "|\\mathbf{r}| \\to \\infty" }, { "math_id": 89, "text": "\\delta > 2" }, { "math_id": 90, "text": "\\Phi(\\mathbf{r}) = - \\int_{\\mathbb{R}^d} \\operatorname{div}(\\mathbf{F}(\\mathbf{r}')) K(\\mathbf{r}, \\mathbf{r}') \\mathrm{d}V' = - \\int_{\\mathbb{R}^d} \\sum_i \\frac{\\partial F_i}{\\partial r_i}(\\mathbf{r}') K(\\mathbf{r}, \\mathbf{r}') \\mathrm{d}V'," }, { "math_id": 91, "text": "K(\\mathbf{r}, \\mathbf{r}')" }, { "math_id": 92, "text": "K(\\mathbf{r}, \\mathbf{r}') = \\begin{cases} \\frac{1}{2\\pi} \\log{ | \\mathbf{r}-\\mathbf{r}' | } & d=2, \\\\ \\frac{1}{d(2-d)V_d} | \\mathbf{r}-\\mathbf{r}' | ^{2-d} & \\text{otherwise}, \\end{cases}" }, { "math_id": 93, "text": "V_d = \\pi^\\frac{d}{2} / \\Gamma\\big(\\tfrac{d}{2}+1\\big)" }, { "math_id": 94, "text": "\\Gamma(\\mathbf{r})" }, { "math_id": 95, "text": "d = 3" }, { "math_id": 96, "text": "V_d" }, { "math_id": 97, "text": "\\frac{4 \\pi}{3}" }, { "math_id": 98, "text": "A_{ij}(\\mathbf{r}) = \\int_{\\mathbb{R}^d} \\left( \\frac{\\partial F_i}{\\partial x_j}(\\mathbf{r}') - \\frac{\\partial F_j}{\\partial x_i}(\\mathbf{r}') \\right) K(\\mathbf{r}, \\mathbf{r}') \\mathrm{d}V'. " }, { "math_id": 99, "text": "\\textstyle\\binom{d}{2}" }, { "math_id": 100, "text": "\\mathbf{A} = [A_1, A_2, A_3] = [A_{23}, A_{31}, A_{12}]" }, { "math_id": 101, "text": "\\textstyle\\binom{d}{2} = d" }, { "math_id": 102, "text": "\n\\mathbf{G}(\\mathbf{r}) = - \\nabla \\Phi(\\mathbf{r}).\n" }, { "math_id": 103, "text": "\\mathbf{R}(\\mathbf{r}) = \\left[ \\sum\\nolimits_k \\partial_{r_k} A_{ik}(\\mathbf{r}); {1 \\leq i \\leq d} \\right]." 
}, { "math_id": 104, "text": "d\\neq 3" }, { "math_id": 105, "text": "-\\frac{1}{4\\pi\\left|\\mathbf{r}-\\mathbf{r}'\\right|}" }, { "math_id": 106, "text": "\n\\nabla^2 G(\\mathbf{r},\\mathbf{r}') = \\frac{\\partial}{\\partial r_\\mu}\\frac{\\partial}{\\partial r_\\mu}G(\\mathbf{r},\\mathbf{r}') = \\delta^d(\\mathbf{r}-\\mathbf{r}')\n" }, { "math_id": 107, "text": "\\mu" }, { "math_id": 108, "text": "G(\\mathbf{r},\\mathbf{r}')=\\frac{1}{2\\pi}\\ln\\left|\\mathbf{r}-\\mathbf{r}'\\right|" }, { "math_id": 109, "text": "\nF_\\mu(\\mathbf{r}) = \\int_V F_\\mu(\\mathbf{r}') \\frac{\\partial}{\\partial r_\\mu}\\frac{\\partial}{\\partial r_\\mu}G(\\mathbf{r},\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}'\n = \\delta_{\\mu\\nu}\\delta_{\\rho\\sigma}\\int_V F_\\nu(\\mathbf{r}') \\frac{\\partial}{\\partial r_\\rho}\\frac{\\partial}{\\partial r_\\sigma}G(\\mathbf{r},\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}'\n" }, { "math_id": 110, "text": "\\delta_{\\mu\\nu}" }, { "math_id": 111, "text": "\\varepsilon" }, { "math_id": 112, "text": "\n\\varepsilon_{\\alpha\\mu\\rho}\\varepsilon_{\\alpha\\nu\\sigma} = (d-2)!(\\delta_{\\mu\\nu}\\delta_{\\rho\\sigma} - \\delta_{\\mu\\sigma}\\delta_{\\nu\\rho})\n" }, { "math_id": 113, "text": "d\\ge 2" }, { "math_id": 114, "text": "\\alpha" }, { "math_id": 115, "text": "(d-2)" }, { "math_id": 116, "text": "\nF_\\mu(\\mathbf{r}) = \\delta_{\\mu\\sigma}\\delta_{\\nu\\rho}\\int_V F_\\nu(\\mathbf{r}') \\frac{\\partial}{\\partial r_\\rho}\\frac{\\partial}{\\partial r_\\sigma}G(\\mathbf{r},\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}'\n+ \\frac{1}{(d-2)!}\\varepsilon_{\\alpha\\mu\\rho}\\varepsilon_{\\alpha\\nu\\sigma} \\int_V F_\\nu(\\mathbf{r}') \\frac{\\partial}{\\partial r_\\rho}\\frac{\\partial}{\\partial r_\\sigma}G(\\mathbf{r},\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}'\n" }, { "math_id": 117, "text": "\nF_\\mu(\\mathbf{r}) = -\\frac{\\partial}{\\partial r_\\mu} \\Phi(\\mathbf{r}) + \\varepsilon_{\\mu\\rho\\alpha}\\frac{\\partial}{\\partial r_\\rho} A_{\\alpha}(\\mathbf{r})\n" }, { "math_id": 118, "text": "\n\\begin{aligned}\n\\Phi(\\mathbf{r}) &= -\\int_V F_\\nu(\\mathbf{r}') \\frac{\\partial}{\\partial r_\\nu}G(\\mathbf{r},\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}'\\\\\nA_{\\alpha} &= \\frac{1}{(d-2)!}\\varepsilon_{\\alpha\\nu\\sigma} \\int_V F_\\nu(\\mathbf{r}') \\frac{\\partial}{\\partial r_\\sigma}G(\\mathbf{r},\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}'\n\\end{aligned}\n" }, { "math_id": 119, "text": "G(\\mathbf{r},\\mathbf{r}')" }, { "math_id": 120, "text": "\\mathbf{r}-\\mathbf{r}'" }, { "math_id": 121, "text": "\\frac{\\partial}{\\partial r_\\mu}\\rightarrow - \\frac{\\partial}{\\partial r'_\\mu}" }, { "math_id": 122, "text": "\n\\begin{aligned}\n\\Phi(\\mathbf{r}) &= \\int_V F_\\nu(\\mathbf{r}') \\frac{\\partial}{\\partial r'_\\nu}G(\\mathbf{r},\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}'\\\\\nA_{\\alpha} &= -\\frac{1}{(d-2)!}\\varepsilon_{\\alpha\\nu\\sigma} \\int_V F_\\nu(\\mathbf{r}') \\frac{\\partial}{\\partial r_\\sigma'}G(\\mathbf{r},\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}'\n\\end{aligned}\n" }, { "math_id": 123, "text": "\n\\begin{aligned}\n\\Phi(\\mathbf{r}) &= -\\int_V G(\\mathbf{r},\\mathbf{r}')\\frac{\\partial}{\\partial r'_\\nu}F_\\nu(\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}' + \\oint_{S} G(\\mathbf{r},\\mathbf{r}') F_\\nu(\\mathbf{r}') \\hat{n}'_\\nu \\,\\mathrm{d}^{d-1} \\mathbf{r}'\\\\\nA_{\\alpha} &= \\frac{1}{(d-2)!}\\varepsilon_{\\alpha\\nu\\sigma} \\int_V G(\\mathbf{r},\\mathbf{r}') \\frac{\\partial}{\\partial 
r_\\sigma'}F_\\nu(\\mathbf{r}') \\,\\mathrm{d}^d \\mathbf{r}'- \\frac{1}{(d-2)!}\\varepsilon_{\\alpha\\nu\\sigma} \\oint_{S} G(\\mathbf{r},\\mathbf{r}') F_\\nu(\\mathbf{r}') \\hat{n}'_\\sigma \\,\\mathrm{d}^{d-1} \\mathbf{r}'\n\\end{aligned}\n" }, { "math_id": 124, "text": "S=\\partial V" }, { "math_id": 125, "text": "\\delta > 1" }, { "math_id": 126, "text": "\\delta > 0" }, { "math_id": 127, "text": "K'(\\mathbf{r}, \\mathbf{r}') = K(\\mathbf{r}, \\mathbf{r}') - K(0, \\mathbf{r}')" }, { "math_id": 128, "text": "H(\\mathbf{r})" }, { "math_id": 129, "text": "\\Delta H(\\mathbf{r}) = 0" }, { "math_id": 130, "text": "\\Phi(\\mathbf{r})" }, { "math_id": 131, "text": "\\begin{align}\n\\mathbf{G}'(\\mathbf{r}) &= \\nabla (\\Phi(\\mathbf{r}) + H(\\mathbf{r})) = \\mathbf{G}(\\mathbf{r}) + \\nabla H(\\mathbf{r}),\\\\\n\\mathbf{R}'(\\mathbf{r}) &= \\mathbf{R}(\\mathbf{r}) - \\nabla H(\\mathbf{r}).\n\\end{align}" }, { "math_id": 132, "text": "H(\\mathbf{r}) = 0" }, { "math_id": 133, "text": "P\\Delta" }, { "math_id": 134, "text": "\\dot \\mathbf{r} = \\mathbf{F}(\\mathbf{r}) = \\big[a (r_2-r_1), r_1 (b-r_3)-r_2, r_1 r_2-c r_3 \\big]." }, { "math_id": 135, "text": "\\Phi(\\mathbf{r}) = \\tfrac{a}{2} r_1^2 + \\tfrac{1}{2} r_2^2 + \\tfrac{c}{2} r_3^2" }, { "math_id": 136, "text": "\\mathbf{G}(\\mathbf{r}) = \\big[-a r_1, -r_2, -c r_3 \\big]," }, { "math_id": 137, "text": "\\mathbf{R}(\\mathbf{r}) = \\big[+ a r_2, b r_1 - r_1 r_3, r_1 r_2 \\big]." }, { "math_id": 138, "text": " \\nabla \\times \\mathbf{A} " } ]
https://en.wikipedia.org/wiki?curid=820253
8203600
Cotlar–Stein lemma
The Cotlar–Stein almost orthogonality lemma is a mathematical lemma in the field of functional analysis. It may be used to obtain information on the operator norm of an operator, acting from one Hilbert space into another, when the operator can be decomposed into "almost orthogonal" pieces. The original version of this lemma (for self-adjoint and mutually commuting operators) was proved by Mischa Cotlar in 1955 and allowed him to conclude that the Hilbert transform is a continuous linear operator in formula_0 without using the Fourier transform. A more general version was proved by Elias Stein. Statement of the lemma. Let formula_1 be two Hilbert spaces. Consider a family of operators formula_2, formula_3, with each formula_2 a bounded linear operator from formula_4 to formula_5. Denote formula_6 The family of operators formula_7, formula_8 is "almost orthogonal" if formula_9 The Cotlar–Stein lemma states that if formula_2 are almost orthogonal, then the series formula_10 converges in the strong operator topology, and formula_11 Proof. If formula_12 is a finite collection of bounded operators, then formula_13 So under the hypotheses of the lemma, formula_14 It follows that formula_15 and that formula_16 Hence, the partial sums formula_17 form a Cauchy sequence. The sum is therefore unconditionally convergent, with the limit satisfying the stated inequality. To prove the inequality above set formula_18 with |"a""ij"| ≤ 1 chosen so that formula_19 Then formula_20 Hence formula_21 Taking 2"m"th roots and letting "m" tend to ∞, formula_22 which immediately implies the inequality. Generalization. The Cotlar–Stein lemma has been generalized, with sums being replaced by integrals. Let "X" be a locally compact space and μ a Borel measure on "X". Let "T"("x") be a map from "X" into bounded operators from "E" to "F" which is uniformly bounded and continuous in the strong operator topology. If formula_23 are finite, then the function "T"("x")"v" is integrable for each "v" in "E" with formula_24 The result can be proven by replacing sums with integrals in the previous proof, or by utilizing Riemann sums to approximate the integrals. Example. Here is an example of an orthogonal family of operators. Consider the infinite-dimensional matrices formula_25 and also formula_26 Then formula_27 for each formula_28, hence the series formula_29 does not converge in the uniform operator topology. Yet, since formula_30 and formula_31 for formula_32, the Cotlar–Stein almost orthogonality lemma tells us that formula_33 converges in the strong operator topology and is bounded by 1. Notes. <templatestyles src="Reflist/styles.css" />
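The bound in this example can be checked numerically on a finite truncation. The following Python/NumPy sketch (an added illustration; the truncation size n = 6 is an arbitrary choice) computes the constants A and B from the statement of the lemma for these diagonal projections and verifies that the norm of their sum does not exceed √(AB).

# Numerical check (an added illustration) of the Cotlar-Stein bound on a finite
# truncation of the example above: T_j is the orthogonal projection onto the
# j-th coordinate, so T_j T_k* = T_j* T_k = 0 for j != k and ||T_j|| = 1.
# The truncation size n is an arbitrary choice.
import numpy as np

n = 6
T = []
for j in range(n):
    P = np.zeros((n, n))
    P[j, j] = 1.0                      # projection onto the j-th basis vector
    T.append(P)

def op_norm(M):
    return np.linalg.norm(M, 2)        # operator (spectral) norm

# A = sup_j sum_k sqrt(||T_j T_k*||),  B = sup_j sum_k sqrt(||T_j* T_k||)
A = max(sum(np.sqrt(op_norm(T[j] @ T[k].T)) for k in range(n)) for j in range(n))
B = max(sum(np.sqrt(op_norm(T[j].T @ T[k])) for k in range(n)) for j in range(n))

S = sum(T)                             # the partial sum, here the identity matrix
print(A, B, op_norm(S))                # 1.0 1.0 1.0
assert op_norm(S) <= np.sqrt(A * B) + 1e-12   # the Cotlar-Stein estimate holds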
[ { "math_id": 0, "text": "L^2" }, { "math_id": 1, "text": "E,\\,F" }, { "math_id": 2, "text": "T_j" }, { "math_id": 3, "text": " j\\geq 1" }, { "math_id": 4, "text": "E" }, { "math_id": 5, "text": "F" }, { "math_id": 6, "text": "a_{jk}=\\Vert T_j T_k^\\ast\\Vert,\n\\qquad b_{jk}=\\Vert T_j^\\ast T_k\\Vert." }, { "math_id": 7, "text": "T_j:\\;E\\to F" }, { "math_id": 8, "text": "j\\ge 1," }, { "math_id": 9, "text": "A=\\sup_{j}\\sum_{k}\\sqrt{a_{jk}}<\\infty,\n\\qquad B=\\sup_{j}\\sum_{k}\\sqrt{b_{jk}}<\\infty." }, { "math_id": 10, "text": "\\sum_{j}T_j" }, { "math_id": 11, "text": "\\Vert \\sum_{j}T_j\\Vert \\le\\sqrt{AB}." }, { "math_id": 12, "text": "T_1,\\ldots,T_n" }, { "math_id": 13, "text": "\\displaystyle{\\sum_{i,j} |(T_i v,T_jv)| \\le \\left(\\max_i \\sum_j \\|T_i^*T_j\\|^{1\\over 2}\\right)\\left(\\max_i \\sum_j \\|T_iT_j^*\\|^{1\\over 2}\\right)\\|v\\|^2.}" }, { "math_id": 14, "text": "\\displaystyle{\\sum_{i,j} |(T_i v,T_jv)| \\le AB\\|v\\|^2.}" }, { "math_id": 15, "text": "\\displaystyle{\\|\\sum_{i=1}^n T_iv\\|^2 \\le AB \\|v\\|^2,}" }, { "math_id": 16, "text": "\\displaystyle{\\|\\sum_{i=m}^n T_iv\\|^2 \\le \\sum_{i,j\\ge m} |(T_iv,T_jv)|.}" }, { "math_id": 17, "text": "\\displaystyle{s_n=\\sum_{i=1}^n T_iv}" }, { "math_id": 18, "text": "\\displaystyle{R=\\sum a_{ij}T_i^*T_j}" }, { "math_id": 19, "text": "\\displaystyle{(Rv,v)=|(Rv,v)|=\\sum |(T_iv,T_jv)|.}" }, { "math_id": 20, "text": "\\displaystyle{\\|R\\|^{2m} =\\|(R^*R)^m\\|\\le \\sum \\|T_{i_1}^* T_{i_2} T_{i_3}^* T_{i_4} \\cdots T_{i_{2m}}\\| \\le \\sum \\left(\\|T_{i_1}^*\\|\\|T_{i_1}^*T_{i_2}\\|\\|T_{i_2}T_{i_3}^*\\|\\cdots \\|T_{i_{2m-1}}^* T_{i_{2m}}\\|\\|T_{i_{2m}}\\|\\right)^{1\\over 2}.}" }, { "math_id": 21, "text": "\\displaystyle{\\|R\\|^{2m} \\le n \\cdot \\max \\|T_i\\| \\left(\\max_i \\sum_j \\|T_i^*T_j\\|^{1\\over 2}\\right)^{2m}\\left(\\max_i \\sum_j \\|T_iT_j^*\\|^{1\\over 2}\\right)^{2m-1}.}" }, { "math_id": 22, "text": "\\displaystyle{\\|R\\|\\le \\left(\\max_i \\sum_j \\|T_i^*T_j\\|^{1\\over 2}\\right)\\left(\\max_i \\sum_j \\|T_iT_j^*\\|^{1\\over 2}\\right),}" }, { "math_id": 23, "text": "\\displaystyle{A= \\sup_x \\int_X \\|T(x)^*T(y)\\|^{1\\over 2} \\, d\\mu(y),\\,\\,\\, B= \\sup_x \\int_X \\|T(y)T(x)^*\\|^{1\\over 2}\\, d\\mu(y),}" }, { "math_id": 24, "text": "\\displaystyle{\\|\\int_X T(x)v\\, d\\mu(x)\\| \\le \\sqrt{AB} \\cdot \\|v\\|.}" }, { "math_id": 25, "text": "\nT=\\left[\n\\begin{array}{cccc}\n1&0&0&\\vdots\\\\0&1&0&\\vdots\\\\0&0&1&\\vdots\\\\\\cdots&\\cdots&\\cdots&\\ddots\\end{array}\n\\right]\n" }, { "math_id": 26, "text": "\n\\qquad\nT_1=\\left[\n\\begin{array}{cccc}\n1&0&0&\\vdots\\\\0&0&0&\\vdots\\\\0&0&0&\\vdots\\\\\\cdots&\\cdots&\\cdots&\\ddots\\end{array}\n\\right],\n\\qquad\nT_2=\\left[\n\\begin{array}{cccc}\n0&0&0&\\vdots\\\\0&1&0&\\vdots\\\\0&0&0&\\vdots\\\\\\cdots&\\cdots&\\cdots&\\ddots\\end{array}\n\\right],\n\\qquad\nT_3=\\left[\n\\begin{array}{cccc}\n0&0&0&\\vdots\\\\0&0&0&\\vdots\\\\0&0&1&\\vdots\\\\\\cdots&\\cdots&\\cdots&\\ddots\\end{array}\n\\right],\n\\qquad\n\\dots.\n" }, { "math_id": 27, "text": "\\Vert T_j\\Vert=1" }, { "math_id": 28, "text": "j" }, { "math_id": 29, "text": "\\sum_{j\\in\\mathbb{N}}T_j" }, { "math_id": 30, "text": "\\Vert T_j T_k^\\ast\\Vert=0" }, { "math_id": 31, "text": "\\Vert T_j^\\ast T_k\\Vert=0" }, { "math_id": 32, "text": "j\\ne k" }, { "math_id": 33, "text": "T=\\sum_{j\\in\\mathbb{N}}T_j" } ]
https://en.wikipedia.org/wiki?curid=8203600
8203933
Equivalent spherical diameter
The diameter of a sphere of the same volume as an irregularly-shaped subject The equivalent spherical diameter of an irregularly shaped object is the diameter of a sphere of equivalent geometric, optical, electrical, aerodynamic or hydrodynamic behavior to that of the particle under investigation. The particle size of a perfectly smooth, spherical object can be accurately defined by a single parameter, the particle diameter. However, real-life particles are likely to have irregular shapes and surface irregularities, and their size cannot be fully characterized by a single parameter. The concept of equivalent spherical diameter has been introduced in the field of particle size analysis to enable the representation of the particle size distribution in a simplified, homogenized way. Here, the real-life particle is matched with an imaginary sphere which has the same properties according to a defined principle, enabling the real-life particle to be defined by the diameter of the imaginary sphere. The principle used to match the real-life particle and the imaginary sphere varies as a function of the measurement technique used to measure the particle. Optical methods. For optical-based particle sizing methods such as microscopy or dynamic image analysis, the analysis is made on the projection of the three-dimensional object on a two-dimensional plane. The most commonly used methods for determining the equivalent spherical diameter from the particle’s projected outline are: Since the particle’s orientation at the time of image capture has a large influence on all these parameters, the equivalent spherical diameter is obtained by averaging a large number of measurements, corresponding to the different particle orientations. Of note, the ISO standards providing guidance for performing particle size determination by static and dynamic image analysis (respectively ISO 13322-1 and 13322-2) recommend defining particle size by a combination of 3 primary measurements, namely the area-equivalent diameter, the maximum Feret diameter, and the minimum Feret diameter. The combination of these parameters is then used to define the shape factor. Sieving. In sieve analysis, the particle size distribution of a granular material is assessed by letting the material pass through a series of sieves of progressively smaller mesh size. In that case the equivalent spherical diameter corresponds to the equivalent sieve diameter, or the diameter of a sphere that just passes through a defined sieve pore. Of note, the equivalent sieve diameter can be significantly smaller than the area-equivalent diameter obtained by optical methods, as particles can pass the sieve apertures in an orientation corresponding to their smallest projection surface. Laser diffraction. Laser diffraction analysis is based on the observation that the angle of the light diffracted by a particle is inversely proportional to its size. Strictly speaking, the laser diffraction equivalent diameter is the diameter of a sphere yielding, on the same detector geometry, the same diffraction pattern as the particle. In the size regime where the Fraunhofer approximation is valid, this diameter corresponds to the projected area diameter of the particle in random orientation. For particles ≤ 0.1 μm, the definition can be extended into volume-equivalent diameter. In this case, the cross-sectional area becomes nearly the same as that of a sphere with equal volume.
In addition, the favored mean particle size for laser diffraction results is the D[4,3] or De Brouckere mean diameter, which is typically applied to measurement techniques where the measured signal is proportional to the volume of the particles. Hence, in a simplified way, the laser diffraction equivalent diameter is considered as a volume-equivalent spherical diameter, i.e., the diameter of a sphere of the same volume as that of the particle under investigation. Dynamic light scattering. Dynamic light scattering is based on the principle that light scattered by small particles (Rayleigh scattering) fluctuates as the particles undergo Brownian motion. The equivalent spherical diameter for the technique is termed hydrodynamic diameter (HDD). This corresponds to the diameter of a sphere with the same translational diffusion coefficient "D" as the particle, in the same fluid and under the same conditions. The relationship between the diffusion coefficient "D" and the HDD is  defined by the Stokes–Einstein equation: formula_0 where Sedimentation. Particle size analysis techniques based on gravitational or centrifugal sedimentation (e.g., hydrometer technique used for soil texture) are based on Stokes’ law, and consist in calculating the size of particles from the speed at which they settle in a liquid. In that case the equivalent spherical diameter is appropriately termed Stokes diameter, and corresponds to the diameter of a sphere having the same settling rate as the particle under conditions of Stokes’ law.   References. &lt;templatestyles src="Reflist/styles.css" /&gt;
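As a rough numerical illustration of the definitions above, the following Python sketch computes three equivalent spherical diameters: the area-equivalent diameter from a projected area, the volume-equivalent diameter from a particle volume, and the hydrodynamic diameter from a measured diffusion coefficient via the Stokes–Einstein relation. All input values are arbitrary example numbers rather than data from this article.

# Illustrative sketch (arbitrary example inputs) of three equivalent spherical
# diameters: area-equivalent, volume-equivalent, and hydrodynamic (Stokes-Einstein).
import math

# Optical: sphere with the same projected (cross-sectional) area, A = pi*d^2/4
projected_area_um2 = 180.0                       # measured projected area, um^2
d_area = 2.0 * math.sqrt(projected_area_um2 / math.pi)

# Volumetric: sphere with the same volume, V = pi*d^3/6
particle_volume_um3 = 950.0                      # measured particle volume, um^3
d_volume = (6.0 * particle_volume_um3 / math.pi) ** (1.0 / 3.0)

# Dynamic light scattering: Stokes-Einstein, HDD = k_B*T / (3*pi*eta*D)
k_B = 1.380649e-23                               # Boltzmann constant, J/K
T = 298.15                                       # temperature, K (25 degrees C)
eta = 0.89e-3                                    # viscosity of water at 25 C, Pa*s
D = 4.0e-12                                      # diffusion coefficient, m^2/s
d_hydro = k_B * T / (3.0 * math.pi * eta * D)

print(f"area-equivalent diameter   ~ {d_area:.1f} um")         # ~ 15.1 um
print(f"volume-equivalent diameter ~ {d_volume:.1f} um")       # ~ 12.2 um
print(f"hydrodynamic diameter      ~ {d_hydro * 1e9:.0f} nm")  # ~ 123 nm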
[ { "math_id": 0, "text": "\\mathrm{HDD} = \\frac{k_\\text{B}T}{3\\pi\\eta D}" } ]
https://en.wikipedia.org/wiki?curid=8203933
8205082
High-leg delta
Type of electrical connection High-leg delta (also known as wild-leg, stinger leg, bastard leg, high-leg, orange-leg, red-leg, dog-leg delta) is a type of electrical service connection for three-phase electric power installations. It is used when both single and three-phase power is desired to be supplied from a three phase transformer (or transformer bank). The three-phase power is connected in the delta configuration, and the center point of one phase is grounded. This creates both a split-phase single-phase supply (L1 or L2 to neutral on diagram at right) and three-phase (L1–L2–L3 at right). It is sometimes called "orange leg" because the L3 wire is required to be color-coded orange in the United States. By convention, the high leg is usually set in the center (B phase) lug in the involved panel, regardless of the L1–L2–L3 designation at the transformer. Supply. High-leg delta service is supplied in one of two ways. One is by a three-phase transformer (or three single-phase transformers), having four wires coming out of the secondary, the three phases, plus a neutral connected as a center-tap on one of the windings. Another method (the open delta configuration) requires two transformers. One transformer is connected to one phase of the overhead primary distribution circuit to provide the "lighting" side of the circuit (this will be the larger of the two transformers), and a second transformer is connected to another phase on the circuit and its secondary is connected to one side of the "lighting" transformer secondary, and the other side of this transformer is brought out as the "high leg". The voltages between the three phases are the same in magnitude, however the voltage magnitudes between a particular phase and the neutral vary. The phase-to-neutral voltage of two of the phases will be half of the phase-to-phase voltage. The remaining phase-to-neutral voltage will be √3/2 the phase-to-phase voltage. So if A–B, B–C and C–A are all 240 volts, then A–N and C–N will both be 120 volts, but B–N will be 208 volts. Other types of three-phase supplies are wye connections, ungrounded delta connections, or corner-grounded delta ("ghost" leg configuration) connections. These connections do not supply split single-phase power, and do not have a high leg. Explanation. Consider the low-voltage side of a 120/240 V high leg delta connected transformer, where the b phase is the "high" leg. The line-to-line voltage magnitudes are all the same: formula_0 Because the winding between the a and c phases is center-tapped, the line-to-neutral voltages for these phases are as follows: formula_1 But the phase-neutral voltage for the b phase is different: formula_2 This can be proven by writing a KVL equation, using angle notation, starting from the grounded neutral: formula_3 or: formula_4 Advantages. If the high leg is not used, the system acts like a split single-phase system, which is a common supply configuration in the United States. Both three-phase and single split-phase power can be supplied from a single transformer bank. Where the three-phase load is small relative to the total load, two individual transformers may be used instead of the three for a "full delta" or a three-phase transformer, thus providing a variety of voltages at a reduced cost. This is called "open-delta high-leg", and has a reduced capacity relative to a full delta. Disadvantages. In cases where the single-phase load is much greater than the three-phase load, load balancing will be poor. 
Generally, these cases are identified by three transformers supplying the service, two of which are sized significantly smaller than the third, and the third larger transformer will be center tap grounded. One of the phase-to-neutral voltages (usually phase "B") is higher than the other two. The hazard of this is that if single-phase loads are connected to the high leg (with the connecting person unaware that the leg is at a higher voltage), excess voltage is supplied to that load. This can easily cause failure of the load. Commonly there is a high-leg to neutral load limit when only two transformers are used. One transformer manufacturer suggests that high-leg-to-neutral loading not exceed 5% of transformer capacity. Applications. It is often found in older and rural installations. This type of service is usually supplied using 240 V line-to-line and 120 V line-to-neutral. In some ways, the high leg delta service provides the best of both worlds: a line-to-line voltage that is higher than the usual 208 V that most three-phase services have, and a line-to-neutral voltage (on two of the phases) sufficient for connecting appliances and lighting. Thus, large pieces of equipment will draw less current than with 208 V, requiring smaller wire and breaker sizes. Lights and appliances requiring 120 V can be connected to phases "A" and "C" without requiring an additional step-down transformer. It is also commonly used in installations in Japan. The distribution transformer output is 200 V line-to-line and 100 V line-to-neutral, while the high-leg to neutral voltage is 173 V. This provides 200 V for both three-phase and split-phase appliances. Even when unmarked, it is generally easy to identify this type of system, because the "B" phase (circuits #3 and #4) and every third circuit afterwards will be either a three-pole breaker or a blank. Current practice is to give separate services for single-phase and three-phase loads, e.g., 120 V split-phase (lighting etc.) and 240 V to 600 V three-phase (for large motors). However, many jurisdictions forbid more than one class for a premises' service, and the choice may come down to 120/240 V split-phase, 208 V single-phase or three-phase (delta), 120/208 V three-phase (wye), or 277/480 V three-phase (wye) (or 347/600 V three-phase (wye) in Canada). References. Footnotes. <templatestyles src="Reflist/styles.css" /> Works cited. <templatestyles src="Refbegin/styles.css" />
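The 120 V, 208 V and 240 V relationships derived in the Explanation section can be checked with a few lines of phasor arithmetic. The following Python sketch is an added illustration using one conventional phasor placement, with the grounded center tap of the A–C winding at the origin; the 10 kW load in the last lines is an arbitrary example used to show why 240 V equipment draws less line current than 208 V equipment.

# Phasor sketch of a 120/240 V high-leg delta (an added illustration, using one
# conventional placement: the grounded center tap N of the A-C winding at the
# origin). It reproduces the 120 V, 208 V and 240 V magnitudes quoted above.
import math

N = 0 + 0j                                # grounded neutral (center tap)
A = 120 + 0j                              # one end of the center-tapped winding
C = -120 + 0j                             # other end of the same winding
B = (240 * math.sqrt(3) / 2) * 1j         # the high ("wild") leg

print(f"A-N = {abs(A - N):.1f} V, C-N = {abs(C - N):.1f} V")   # 120.0 V each
print(f"B-N = {abs(B - N):.1f} V")                             # 207.8 V, the high leg
print(f"A-B = {abs(A - B):.1f} V, B-C = {abs(B - C):.1f} V, "
      f"C-A = {abs(C - A):.1f} V")                             # 240.0 V each

# Why 240 V delta equipment draws less line current than 208 V equipment
# (arbitrary 10 kW three-phase load, unity power factor assumed):
P = 10_000.0
for v_ll in (240.0, 208.0):
    print(f"{v_ll:.0f} V line-to-line: {P / (math.sqrt(3) * v_ll):.1f} A per line")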
[ { "math_id": 0, "text": "\n V_{ab} = V_{bc} = V_{ca} =\n 240\\, \\text {V}.\n" }, { "math_id": 1, "text": "\n V_{an} = V_{cn} = \\frac{V_{ac}}{2} =\n 120\\, \\text{V}.\n" }, { "math_id": 2, "text": "V_{bn} = \\sqrt{{V_{ab}}^2 - {V_{an}}^2} \\approx 208\\, \\text{V}." }, { "math_id": 3, "text": "\\begin{align}\n &0 + 120 \\angle 0^\\circ + 240 \\angle 120^\\circ \\\\\n = {} &0 + 120 \\angle 0^\\circ + 240(-0.5)\\angle 0^\\circ + 240\\frac{\\sqrt{3}}{2} \\angle 90^\\circ \\\\\n = {} &0 + 120 \\angle 0^\\circ - 120 \\angle 0^\\circ + 240\\frac{\\sqrt{3}}{2} \\angle 90^\\circ \\\\\n = {} &240\\frac{\\sqrt{3}}{2} \\angle 90^\\circ = 120\\sqrt{3} \\angle 90^\\circ,\n\\end{align}" }, { "math_id": 4, "text": "0 + 120 \\sin(0^\\circ) + 240 \\sin(120^\\circ) = 0 + 0 + 240 \\frac{\\sqrt{3}}{2} \\approx 207.8." } ]
https://en.wikipedia.org/wiki?curid=8205082
820724
Direction finding
Measurement of the direction from which a received signal was transmitted Direction finding (DF), or radio direction finding (RDF), is the use of radio waves to determine the direction to a radio source. The source may be a cooperating radio transmitter or may be an inadvertent source, a naturally-occurring radio source, or an illicit or enemy system. Radio direction finding differs from radar in that only the direction is determined by any one receiver; a radar system usually also gives a distance to the object of interest, as well as direction. By triangulation, the location of a radio source can be determined by measuring its direction from two or more locations. Radio direction finding is used in radio navigation for ships and aircraft, to locate emergency transmitters for search and rescue, for tracking wildlife, and to locate illegal or interfering transmitters. During the Second World War, radio direction finding was used by both sides to locate and direct aircraft, surface ships, and submarines. RDF systems can be used with any radio source, although very long wavelengths (low frequencies) require very large antennas, and are generally used only on ground-based systems. These wavelengths are nevertheless used for marine radio navigation as they can travel very long distances "over the horizon", which is valuable for ships when the line-of-sight may be only a few tens of kilometres. For aerial use, where the horizon may extend to hundreds of kilometres, higher frequencies can be used, allowing the use of much smaller antennas. An automatic direction finder, which could be tuned to radio beacons called non-directional beacons or commercial AM radio broadcasters, was in the 20th century a feature of most aircraft, but is being phased out. For the military, RDF is a key tool of signals intelligence. The ability to locate the position of an enemy transmitter has been invaluable since World War I, and played a key role in World War II's Battle of the Atlantic. It is estimated that the UK's advanced "huff-duff" systems were directly or indirectly responsible for 24% of all U-boats sunk during the war. Modern systems often use phased array antennas to allow rapid beamforming for highly accurate results, and are part of a larger electronic warfare suite. Early radio direction finders used mechanically rotated antennas that compared signal strengths, and several electronic versions of the same concept followed. Modern systems use the comparison of phase or doppler techniques which are generally simpler to automate. Early British radar sets were referred to as RDF, which is often stated to have been a deception. In fact, the Chain Home systems used large RDF receivers to determine directions. Later radar systems generally used a single antenna for broadcast and reception, and determined direction from the direction the antenna was facing. History. Early mechanical systems. The earliest experiments in RDF were carried out in 1888 when Heinrich Hertz discovered the directionality of an open loop of wire used as an antenna. When the antenna was aligned so it pointed at the signal it produced maximum gain, and produced zero signal when face on. This meant there was always an ambiguity in the location of the signal: it would produce the same output whether the signal was in front of or behind the antenna. Later experimenters also used dipole antennas, which worked in the opposite sense, reaching maximum gain at right angles and zero when aligned.
RDF systems using mechanically swung loop or dipole antennas were common by the turn of the 20th century. Prominent examples were patented by John Stone Stone in 1902 (U.S. Patent 716,134) and Lee de Forest in 1904 (U.S. Patent 771,819), among many other examples. By the early 1900s, many experimenters were looking for ways to use this concept for locating the position of a transmitter. Early radio systems generally used medium wave and longwave signals. Longwave in particular had good long-distance transmission characteristics due to its limited interaction with the ground, and thereby provided excellent great circle route ground wave propagation that pointed directly to the transmitter. Methods of performing RDF on longwave signals were a major area of research during the 1900s and 1910s. Antennas are generally sensitive to signals only when they have a length that is a significant portion of the wavelength, or larger. Most antennas are at least 1⁄4 of the wavelength, more commonly 1⁄2 – the half-wave dipole is a very common design. For longwave use, this resulted in loop antennas tens of feet on a side, often with more than one loop connected together to improve the signal. Another solution to this problem was developed by the Marconi company in 1905. This consisted of a number of horizontal wires or rods arranged to point outward from a common center point. A movable switch could connect opposite pairs of these wires to form a dipole, and by rotating the switch the operator could hunt for the strongest signal. The US Navy overcame this problem, to a point, by mounting antennas on ships and sailing in circles. Such systems were unwieldy and impractical for many uses. Bellini–Tosi. A key improvement in the RDF concept was introduced by Ettore Bellini and Alessandro Tosi in 1909 (U.S. Patent 943,960). Their system used two such antennas, typically triangular loops, arranged at right angles. The signals from the antennas were sent into coils wrapped around a wooden frame about the size of a pop can, where the signals were re-created in the area between the coils. A separate loop antenna located in this area could then be used to hunt for the direction, without moving the main antennas. This made RDF so much more practical that it was soon being used for navigation on a wide scale, often as the first form of aerial navigation available, with ground stations homing in on the aircraft's radio set. Bellini–Tosi direction finders were widespread from the 1920s into the 1950s. Early RDF systems were useful largely for long wave signals. These signals are able to travel very long distances, which made them useful for long-range navigation. However, when the same technique was being applied to higher frequencies, unexpected difficulties arose due to the reflection of high frequency signals from the ionosphere. The RDF station might now receive the same signal from two or more locations, especially during the day, which caused serious problems trying to determine the location. This led to the 1919 introduction of the Adcock antenna (UK Patent 130,490), which consisted of four separate monopole antennas instead of two loops, eliminating the horizontal components and thus filtering out the sky waves being reflected down from the ionosphere. Adcock antennas were widely used with Bellini–Tosi detectors from the 1920s on.
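The antenna sizes discussed above follow from simple wavelength arithmetic. The following Python sketch (an added illustration; the example frequencies are arbitrary but typical of the bands mentioned) shows why longwave direction finding required very large antennas while VHF equipment can use small elements.

# Quarter-wave element length, c / (4*f), for a few representative frequencies
# (an added illustration with arbitrary but typical example values).
c = 299_792_458.0   # speed of light in m/s

for label, f_hz in (("longwave, 300 kHz", 300e3),
                    ("medium wave, 1 MHz", 1e6),
                    ("VHF, 120 MHz", 120e6)):
    print(f"{label}: quarter-wave element is about {c / (4 * f_hz):,.1f} m")
# longwave, 300 kHz: about 249.8 m
# medium wave, 1 MHz: about 74.9 m
# VHF, 120 MHz: about 0.6 m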
The US Army Air Corps in 1931 tested a primitive radio compass that used commercial stations as the beacon. Huff-duff. A major improvement in the RDF technique was introduced by Robert Watson-Watt as part of his experiments to locate lightning strikes as a method to indicate the direction of thunderstorms for sailors and airmen. He had long worked with conventional RDF systems, but these were difficult to use with the fleeting signals from the lightning. He had early on suggested the use of an oscilloscope to display these near instantly, but was unable to find one while working at the Met Office. When the office was moved, his new location at a radio research station provided him with both an Adcock antenna and a suitable oscilloscope, and he presented his new system in 1926. In spite of the system being presented publicly, and its measurements widely reported in the UK, its impact on the art of RDF seems to be strangely subdued. Development was limited until the mid-1930s, when the various British forces began widespread development and deployment of these "high-frequency direction finding", or "huff-duff" systems. To avoid RDF, the Germans had developed a method of broadcasting short messages under 30 seconds, less than the 60 seconds that a trained Bellini-Tosi operator would need to determine the direction. However, this was useless against huff-duff systems, which located the signal with reasonable accuracy in seconds. The Germans did not become aware of this problem until the middle of the war, and did not take any serious steps to address it until 1944. By that time huff-duff had helped in about one-quarter of all successful attacks on the U-boat fleet. Post-war systems. Several developments in electronics during and after the Second World War led to greatly improved methods of comparing the phase of signals. In addition, the phase-locked loop (PLL) allowed for easy tuning in of signals, which would not drift. Improved vacuum tubes and the introduction of the transistor allowed much higher frequencies to be used economically, which led to widespread use of VHF and UHF signals. All of these changes led to new methods of RDF, and its much more widespread use. In particular, the ability to compare the phase of signals led to phase-comparison RDF, which is perhaps the most widely used technique today. In this system the loop antenna is replaced with a single square-shaped ferrite core, with loops wound around two perpendicular sides. Signals from the loops are sent into a phase comparison circuit, whose output phase directly indicates the direction of the signal. By sending this to any manner of display, and locking the signal using PLL, the direction to the broadcaster can be continuously displayed. Operation consists solely of tuning in the station, and is so automatic that these systems are normally referred to as automatic direction finder. Other systems have been developed where more accuracy is required. Pseudo-doppler radio direction finder systems use a series of small dipole antennas arranged in a ring and use electronic switching to rapidly select dipoles to feed into the receiver. The resulting signal is processed and produces an audio tone. The phase of that audio tone, compared to the antenna rotation, depends on the direction of the signal. Doppler RDF systems have widely replaced the huff-duff system for location of fleeting signals. 21st century. 
The various procedures for radio direction finding to determine position at sea are no longer part of the maritime safety system GMDSS, which has been in force since 1999. The striking cross frame antenna with attached auxiliary antenna can only be found on the signal masts of some older ships because they do not interfere there and dismantling would be too expensive. Modern positioning methods such as GPS, DGPS, radar and the now-outdated Loran C have superseded radio direction finding, which is too imprecise for today's needs. Radio direction finding networks also no longer exist. However, rescue vessels, such as RNLI lifeboats in the UK, and Search and Rescue helicopters have direction finding receivers for marine VHF signals and the 121.5 MHz homing signals incorporated in EPIRB and PLB beacons, although modern GPS-EPIRBS and AIS beacons are slowly making these redundant. Equipment. A radio direction finder (RDF) is a device for finding the direction, or "bearing", to a radio source. The act of measuring the direction is known as radio direction finding or sometimes simply direction finding (DF). Using two or more measurements from different locations, the location of an unknown transmitter can be determined; alternately, using two or more measurements of known transmitters, the location of a vehicle can be determined. RDF is widely used as a radio navigation system, especially with boats and aircraft. RDF systems can be used with any radio source, although the size of the receiver antennas is a function of the wavelength of the signal; very long wavelengths (low frequencies) require very large antennas, and are generally used only on ground-based systems. These wavelengths are nevertheless very useful for marine navigation as they can travel very long distances and "over the horizon", which is valuable for ships when the line-of-sight may be only a few tens of kilometres. For aircraft, where the horizon at altitude may extend to hundreds of kilometres, higher frequencies can be used, allowing much smaller antennas. An automatic direction finder, often capable of being tuned to commercial AM radio transmitters, is a feature of almost all modern aircraft. For the military, RDF systems are a key component of signals intelligence systems and methodologies. The ability to locate the position of an enemy transmitter has been invaluable since World War I, and it played a key role in World War II's Battle of the Atlantic. It is estimated that the UK's advanced "huff-duff" systems were directly or indirectly responsible for 24% of all U-boats sunk during the war. Modern systems often use phased array antennas to allow rapid beam forming for highly accurate results. These are generally integrated into a wider electronic warfare suite. Several distinct generations of RDF systems have been used over time, following new developments in electronics. Early systems used mechanically rotated antennas that compared signal strengths from different directions, and several electronic versions of the same concept followed. Modern systems use the comparison of phase or doppler techniques which are generally simpler to automate. Modern pseudo-Doppler direction finder systems consist of a number of small antennas fixed to a circular card, with all of the processing performed by software. Early British radar sets were also referred to as RDF, which was a deception tactic.
However, the terminology was not inaccurate; the Chain Home systems used separate omnidirectional broadcasters and large RDF receivers to determine the location of the targets. Antennas. In one type of direction finding, a directional antenna is used which is more sensitive in certain directions than in others. Many antenna designs exhibit this property. For example, a Yagi antenna has quite pronounced directionality, so the source of a transmission can be determined by pointing it in the direction where the maximum signal level is obtained. Since the directional characteristics can be very broad, large antennas may be used to improve precision, or null techniques used to improve angular resolution. Null finding with loop antennas. A simple form of directional antenna is the loop aerial. This consists of an open loop of wire on an insulating frame, or a metal ring that forms the antenna's loop element itself; often the diameter of the loop is a tenth of a wavelength or smaller at the target frequency. Such an antenna will be "least" sensitive to signals that are perpendicular to its face and "most" responsive to those arriving edge-on. This is caused by the phase of the received signal: The difference in electrical phase along the rim of the loop at any instant causes a difference in the voltages induced on either side of the loop. Turning the plane of the loop to "face" the signal so that the arriving phases are identical around the entire rim will not induce any current flow in the loop. So simply turning the antenna to produce a "minimum" in the desired signal will establish two possible directions (front and back) from which the radio waves could be arriving. This is called a "null" in the signal, and it is used instead of the strongest signal direction, because small angular deflections of the loop aerial away from its null positions produce much more abrupt changes in received current than similar directional changes around the loop's strongest signal orientation. Since the null direction gives a clearer indication of the signal direction – the null is "sharper" than the max – with loop aerial the null direction is used to locate a signal source. A "sense antenna" is used to resolve the two direction possibilities; the sense aerial is a non-directional antenna configured to have the same sensitivity as the loop aerial. By adding the steady signal from the sense aerial to the alternating signal from the loop signal as it rotates, there is now only one position as the loop rotates 360° at which there is zero current. This acts as a phase reference point, allowing the correct null point to be identified, removing the 180° ambiguity. A dipole antenna exhibits similar properties, as a small loop, although its null direction is not as "sharp". Yagi antenna for higher frequencies. The Yagi-Uda antenna is familiar as the common VHF or UHF television aerial. A Yagi antenna uses multiple dipole elements, which include "reflector" and "director" dipole elements. The "reflector" is the longest dipole element and blocks nearly all the signal coming from behind it, hence a Yagi has no front vs. back directional ambiguity: The maximum signal only occurs when the narrowest end of the Yagi is aimed in the direction from which the radio waves are arriving. With a sufficient number of shorter "director" elements, a Yagi's maximum direction can be made to approach the sharpness of a small loop's null. Parabolic antennas for extremely high frequencies. 
For much higher frequencies still, such as millimeter waves and microwaves, parabolic antennas or "dish" antennas can be used. Dish antennas are highly directional, with the parabolic shape directing received signals from a very narrow angle into a small receiving element mounted at the focus of the parabola. Electronic analysis of two antennas' signals. More sophisticated techniques such as phased arrays are generally used for highly accurate direction finding systems. The modern systems are called goniometers by analogy to WW II directional circuits used to measure direction by comparing the differences in two or more matched reference antennas' received signals, used in old signals intelligence (SIGINT). A modern helicopter-mounted direction finding system was designed by ESL Incorporated for the U.S. Government as early as 1972. Time difference of arrival techniques compare the arrival time of a radio wave at two or more different antennas and deduce the direction of arrival from this timing information. This method can use mechanically simple non-moving omnidirectional antenna elements fed into a multiple channel receiver system. Operation. One form of radio direction finding works by comparing the signal strength of a directional antenna pointing in different directions. At first, this system was used by land and marine-based radio operators, using a simple rotatable loop antenna linked to a degree indicator. This system was later adopted for both ships and aircraft, and was widely used in the 1930s and 1940s. On pre-World War II aircraft, RDF antennas are easy to identify as the circular loops mounted above or below the fuselage. Later loop antenna designs were enclosed in an aerodynamic, teardrop-shaped fairing. In ships and small boats, RDF receivers first employed large metal loop antennas, similar to aircraft, but usually mounted atop a portable battery-powered receiver. In use, the RDF operator would first tune the receiver to the correct frequency, then manually turn the loop, either listening or watching an S meter to determine the direction of the "null" (the direction at which a given signal is weakest) of a long wave (LW) or medium wave (AM) broadcast beacon or station (listening for the null is easier than listening for a peak signal, and normally produces a more accurate result). This null was symmetrical, and thus identified both the correct degree heading marked on the radio's compass rose as well as its 180-degree opposite. While this information provided a baseline from the station to the ship or aircraft, the navigator still needed to know beforehand if he was to the east or west of the station in order to avoid plotting a course 180-degrees in the wrong direction. By taking bearings to two or more broadcast stations and plotting the intersecting bearings, the navigator could locate the relative position of his ship or aircraft. Later, RDF sets were equipped with rotatable ferrite loopstick antennas, which made the sets more portable and less bulky. Some were later partially automated by means of a motorized antenna (ADF). A key breakthrough was the introduction of a secondary vertical whip or 'sense' antenna that substantiated the correct bearing and allowed the navigator to avoid plotting a bearing 180 degrees opposite the actual heading. The U.S. Navy RDF model SE 995 which used a sense antenna was in use during World War I. 
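The effect of the sense antenna can be seen with idealized patterns. In the following Python sketch (an added illustration, not a model of any particular receiver; the angle is measured from the loop's maximum-response direction and the sense signal is assumed equal in amplitude to the loop's peak output), the loop alone has two nulls 180 degrees apart, while the combined loop-plus-sense output has a single null.

# Idealized patterns (an added illustration): a small loop responds as cos(theta),
# giving two nulls 180 degrees apart; adding an equal-amplitude omnidirectional
# "sense" signal gives the cardioid 1 + cos(theta), which has only one null.
import math

def loop(theta_deg):
    return math.cos(math.radians(theta_deg))

def loop_plus_sense(theta_deg):
    return 1.0 + loop(theta_deg)

for theta in (0, 90, 180, 270):
    print(f"theta = {theta:3d} deg: |loop| = {abs(loop(theta)):.2f}, "
          f"|loop + sense| = {abs(loop_plus_sense(theta)):.2f}")
# the loop alone is zero at both 90 and 270 degrees (two possible bearings);
# with the sense antenna added, only theta = 180 degrees gives zero output.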
After World War II, there were many small and large firms making direction finding equipment for mariners, including Apelco, Aqua Guide, Bendix, Gladding (and its marine division, Pearce-Simpson), Ray Jefferson, Raytheon, and Sperry. By the 1960s, many of these radios were actually made by Japanese electronics manufacturers, such as Panasonic, Fuji Onkyo, and Koden Electronics Co., Ltd. In aircraft equipment, Bendix and Sperry-Rand were two of the larger manufacturers of RDF radios and navigation instruments. Single-channel DF. Single-channel DF uses a multi-antenna array with a single channel radio receiver. This approach to DF offers some advantages and drawbacks. Since it only uses one receiver, mobility and lower power consumption are benefits. Without the ability to look at each antenna simultaneously (which would be the case if one were to use multiple receivers, also known as N-channel DF) more complex operations need to occur at the antenna in order to present the signal to the receiver. The two main categories that a single channel DF algorithm falls into are "amplitude comparison" and "phase comparison". Some algorithms can be hybrids of the two. Pseudo-doppler DF technique. The pseudo-doppler technique is a phase based DF method that produces a bearing estimate on the received signal by measuring the doppler shift induced on the signal by sampling around the elements of a circular array. The original method used a single antenna that physically moved in a circle but the modern approach uses a multi-antenna circular array with each antenna sampled in succession. Watson–Watt, or Adcock-antenna array. The Watson-Watt technique uses two antenna pairs to perform an amplitude comparison on the incoming signal. The popular Watson-Watt method uses an array of two orthogonal coils (magnetic dipoles) in the horizontal plane, often completed with an omnidirectional vertically polarized electric dipole to resolve 180° ambiguities. The Adcock antenna array uses a pair of monopole or dipole antennas that takes the vector difference of the received signal at each antenna so that there is only one output from each pair of antennas. Two of these pairs are co-located but perpendicularly oriented to produce what can be referred to as the N–S (North-South) and E–W (East-West) signals that will then be passed to the receiver. In the receiver, the bearing angle can then be computed by taking the arctangent of the ratio of the N–S to E–W signal. Correlative interferometer. The basic principle of the correlative interferometer consists in comparing the measured phase differences with the phase differences obtained for a DF antenna system of known configuration at a known wave angle (reference data set). For this, at least three antenna elements (with omnidirectional reception characteristics) must form a non-collinear basis. The comparison is made for different azimuth and elevation values of the reference data set. The bearing result is obtained from a correlative and stochastic evaluation for which the correlation coefficient is at a maximum. If the direction finding antenna elements have a directional antenna pattern, then the amplitude may be included in the comparison. Typically, the correlative interferometer DF system consists of more than five antenna elements. These are scanned one after the other via a specific switching matrix. In a multi-channel DF system n antenna elements are combined with m receiver channels to improve the DF-system performance. Applications. Radio navigation. 
"Radio direction finding", "radio direction finder", or "RDF", was once the primary aviation navigational aid. ("Range and Direction Finding" was the abbreviation used to describe the predecessor to radar.) Beacons were used to mark "airways" intersections and to define departure and approach procedures. Since the signal transmitted contains no information about bearing or distance, these beacons are referred to as "non-directional beacons", or "NDB" in the aviation world. Starting in the 1950s, these beacons were generally replaced by the VOR system, in which the bearing to the navigational aid is measured from the signal itself; therefore no specialized antenna with moving parts is required. Due to relatively low purchase, maintenance and calibration cost, NDBs are still used to mark locations of smaller aerodromes and important helicopter landing sites. Similar beacons located in coastal areas are also used for maritime radio navigation, as almost every ship was equipped with a direction finder (Appleyard 1988). Very few maritime radio navigation beacons remain active today (2008) as ships have abandoned navigation via RDF in favor of GPS navigation. In the United Kingdom a radio direction finding service is available on 121.5 MHz and 243.0 MHz to aircraft pilots who are in distress or are experiencing difficulties. The service is based on a number of radio DF units located at civil and military airports and certain HM Coastguard stations. These stations can obtain a "fix" of the aircraft and transmit it by radio to the pilot. Maritime and aircraft navigation. Radio transmitters for air and sea navigation are known as "beacons" and are the radio equivalent to a lighthouse. The transmitter sends a Morse Code transmission on a Long wave (150 – 400 kHz) or Medium wave (520 – 1720 kHz) frequency incorporating the station's identifier that is used to confirm the station and its operational status. Since these radio signals are broadcast in all directions (omnidirectional) during the day, the signal itself does not include direction information, and these beacons are therefore referred to as non-directional beacons, or NDBs. As the commercial medium wave broadcast band lies within the frequency capability of most RDF units, these stations and their transmitters can also be used for navigational fixes. While these commercial radio stations can be useful due to their high power and location near major cities, there may be several miles between the location of the station and its transmitter, which can reduce the accuracy of the 'fix' when approaching the broadcast city. A second factor is that some AM radio stations are omnidirectional during the day, and switch to a reduced power, directional signal at night. RDF was once the primary form of aircraft and marine navigation. Strings of beacons formed "airways" from airport to airport, while marine NDBs and commercial AM broadcast stations provided navigational assistance to small watercraft approaching a landfall. In the United States, commercial AM radio stations were required to broadcast their station identifier once per hour for use by pilots and mariners as an aid to navigation. In the 1950s, aviation NDBs were augmented by the VOR system, in which the direction to the beacon can be extracted from the signal itself, hence the distinction with non-directional beacons. Use of marine NDBs was largely supplanted in North America by the development of LORAN in the 1970s. 
Today many NDBs have been decommissioned in favor of faster and far more accurate GPS navigational systems. However the low cost of ADF and RDF systems, and the continued existence of AM broadcast stations (as well as navigational beacons in countries outside North America) has allowed these devices to continue to function, primarily for use in small boats, as an adjunct or backup to GPS. Location of illegal, secret or hostile transmitters – SIGINT. In World War II considerable effort was expended on identifying secret transmitters in the United Kingdom (UK) by direction finding. The work was undertaken by the Radio Security Service (RSS also MI8). Initially three U Adcock HF DF stations were set up in 1939 by the General Post Office. With the declaration of war, MI5 and RSS developed this into a larger network. One of the problems with providing coverage of an area the size of the UK was installing sufficient DF stations to cover the entire area to receive skywave signals reflected back from the ionised layers in the upper atmosphere. Even with the expanded network, some areas were not adequately covered and for this reason up to 1700 voluntary interceptors (radio amateurs) were recruited to detect illicit transmissions by ground wave. In addition to the fixed stations, RSS ran a fleet of mobile DF vehicles around the UK. If a transmitter was identified by the fixed DF stations or voluntary interceptors, the mobile units were sent to the area to home in on the source. The mobile units were HF Adcock systems. By 1941 only a couple of illicit transmitters had been identified in the UK; these were German agents that had been "turned" and were transmitting under MI5 control. Many illicit transmissions had been logged emanating from German agents in occupied and neutral countries in Europe. The traffic became a valuable source of intelligence, so the control of RSS was subsequently passed to MI6 who were responsible for secret intelligence originating from outside the UK. The direction finding and interception operation increased in volume and importance until 1945. The HF Adcock stations consisted of four 10m vertical antennas surrounding a small wooden operators hut containing a receiver and a radio-goniometer which was adjusted to obtain the bearing. MF stations were also used which used four guyed 30m lattice tower antennas. In 1941, RSS began experimenting with spaced loop direction finders, developed by the Marconi company and the UK National Physical Laboratories. These consisted of two parallel loops 1 to 2m square on the ends of a rotatable 3 to 8m beam. The angle of the beam was combined with results from a radiogoniometer to provide a bearing. The bearing obtained was considerably sharper than that obtained with the U Adcock system, but there were ambiguities which prevented the installation of 7 proposed S.L DF systems. The operator of an SL system was in a metal underground tank below the antennas. Seven underground tanks were installed, but only two SL systems were installed at Wymondham, Norfolk and Weaverthorp in Yorkshire. Problems were encountered resulting in the remaining five underground tanks being fitted with Adcock systems. The rotating SL antenna was turned by hand which meant successive measurements were a lot slower than turning the dial of a goniometer. Another experimental spaced loop station was built near Aberdeen in 1942 for the Air Ministry with a semi-underground concrete bunker. This, too, was abandoned because of operating difficulties. 
By 1944, a mobile version of the spaced loop had been developed and was used by RSS in France following the D-Day invasion of Normandy. The US military used a shore based version of the spaced loop DF in World War II called "DAB". The loops were placed at the ends of a beam, all of which was located inside a wooden hut with the electronics in a large cabinet with cathode ray tube display at the centre of the beam and everything being supported on a central axis. The beam was rotated manually by the operator. The Royal Navy introduced a variation on the shore based HF DF stations in 1944 to track U-boats in the North Atlantic. They built groups of five DF stations, so that bearings from individual stations in the group could be combined and a mean taken. Four such groups were built in Britain at Ford End, Essex, Goonhavern, Cornwall, Anstruther and Bowermadden in the Scottish Highlands. Groups were also built in Iceland, Nova Scotia and Jamaica. The anticipated improvements were not realised but later statistical work improved the system and the Goonhavern and Ford End groups continued to be used during the Cold War. The Royal Navy also deployed direction finding equipment on ships tasked to anti-submarine warfare in order to try to locate German submarines, e.g. Captain class frigates were fitted with a medium frequency direction finding antenna (MF/DF) (the antenna was fitted in front of the bridge) and high frequency direction finding (HF/DF, "Huffduff") Type FH 4 antenna (the antenna was fitted on top of the mainmast). A comprehensive reference on World War II wireless direction finding was written by Roland Keen, who was head of the engineering department of RSS at Hanslope Park. The DF systems mentioned here are described in detail in his 1947 book "Wireless Direction Finding". At the end of World War II a number of RSS DF stations continued to operate into the Cold War under the control of GCHQ the British SIGINT organisation. Most direction finding effort within the UK now (2009) is directed towards locating unauthorised "pirate" FM broadcast radio transmissions. A network of remotely operated VHF direction finders are used mainly located around the major cities. The transmissions from mobile telephone handsets are also located by a form of direction finding using the comparative signal strength at the surrounding local "cell" receivers. This technique is often offered as evidence in UK criminal prosecutions and, almost certainly, for SIGINT purposes. Emergency aid. Emergency position-indicating rescue beacons are widely deployed on civil aircraft and ships. Historically emergency location transmitters only sent a tone signal and relied on direction finding by search aircraft to locate the beacon. Modern emergency beacons transmit a unique identification signal that can include GPS location data that can aid in finding the exact location of the transmitter. Avalanche transceivers operate on a standard 457 kHz, and are designed to help locate people and equipment buried by avalanches. Since the power of the beacon is so low the directionality of the radio signal is dominated by small scale field effects and can be quite complicated to locate. Wildlife tracking. Location of radio-tagged animals by triangulation is a widely applied research technique for studying the movement of animals. The technique was first used in the early 1960s, when radio transmitters and batteries became small enough to attach to wildlife, and is now widely deployed for a variety of wildlife studies. 
Most tracking of wild animals that have been affixed with radio transmitter equipment is done by a field researcher using a handheld radio direction finding device. When the researcher wants to locate a particular animal, the location of the animal can be triangulated by determining the direction to the transmitter from several locations. Reconnaissance. Phased arrays and other advanced antenna techniques are utilized to track launches of rocket systems and their resulting trajectories. These systems can be used for defensive purposes and also to gain intelligence on operation of missiles belonging to other nations. These same techniques are used for detection and tracking of conventional aircraft. Astronomy. Earth-based receivers can detect radio signals emanating from distant stars or regions of ionized gas. Receivers in radio telescopes can detect the general direction of such naturally-occurring radio sources, sometimes correlating their location with objects visible with optical telescopes. Accurate measurement of the arrival time of radio impulses by two radio telescopes at different places on Earth, or the same telescope at different times in Earth's orbit around the Sun, may also allow estimation of the distance to a radio object. Sport. Events hosted by groups and organizations that involve the use of radio direction finding skills to locate transmitters at unknown locations have been popular since the end of World War II. Many of these events were first promoted in order to practice the use of radio direction finding techniques for disaster response and civil defense purposes, or to practice locating the source of radio frequency interference. The most popular form of the sport, worldwide, is known as Amateur Radio Direction Finding or by its international abbreviation ARDF. Another form of the activity, known as "transmitter hunting", "mobile T-hunting" or "fox hunting" takes place in a larger geographic area, such as the metropolitan area of a large city, and most participants travel in motor vehicles while attempting to locate one or more radio transmitters with radio direction-finding techniques. Direction finding at microwave frequencies. DF techniques for microwave frequencies were developed in the 1940s, in response to the growing numbers of transmitters operating at these higher frequencies. This required the design of new antennas and receivers for the DF systems. In Naval systems, the DF capability became part of the Electronic Support Measures suite (ESM), where the directional information obtained augments other signal identification processes. In aircraft, a DF system provides additional information for the Radar Warning Receiver (RWR). Over time, it became necessary to improve the performance of microwave DF systems in order to counter the evasive tactics being employed by some operators, such as low-probability-of-intercept radars and covert Data links. Brief history of microwave development. Earlier in the century, vacuum tubes (thermionic valves) were used extensively in transmitters and receivers, but their high frequency performance was limited by transit time effects. Even with special processes to reduce lead lengths, such as frame grid construction, as used in the EF50, and planar construction, very few tubes could operate above UHF. Intensive research work was carried out in the 1930s in order to develop transmitting tubes specifically for the microwave band which included, in particular, the klystron the cavity magnetron and the travelling wave tube (TWT). 
Following the successful development of these tubes, large scale production occurred in the following decade. The advantages of microwave operation. Microwave signals have short wavelengths, which results in greatly improved target resolution when compared to RF systems. This permits better identification of multiple targets and, also, gives improved directional accuracy. Also, the antennas are small so they can be assembled into compact arrays and, in addition, they can achieve well defined beam patterns which can provide the narrow beams with high gain favoured by radars and Data links. Other advantages of the newly available microwave band were the absence of fading (often a problem in the Shortwave radio (SW) band) and the great increase in signal spectrum, compared to the congested RF bands already in use. In addition to being able to accommodate many more signals, the use of Spread spectrum and frequency hopping techniques now became possible. Once microwave techniques had become established, there was rapid expansion into the band by both military and commercial users. Antennas for DF. Antennas for DF have to meet different requirements from those for a radar or communication link, where an antenna with a narrow beam and high gain is usually an advantage. However, when carrying out direction finding, the bearing of the source may be unknown, so antennas with wide beamwidths are usually chosen, even though they have lower antenna boresight gain. In addition, the antennas are required to cover a wide band of frequencies. The figure shows the normalized polar plot of a typical antenna gain characteristic, in the horizontal plane. The half-power beamwidth of the main beam is 2 × Ψ0. Preferably, when using amplitude comparison methods for direction finding, the main lobe should approximate to a Gaussian characteristic. Although the figure also shows the presence of sidelobes, these are not a major concern when antennas are used in a DF array. Typically, the boresight gain of an antenna is related to the beam width. For a rectangular horn, Gain ≈ 30000/(BWh·BWv), where BWh and BWv are the horizontal and vertical antenna beamwidths, respectively, in degrees. For a circular aperture, with beamwidth BWc, it is Gain ≈ 30000/BWc². Two antenna types, popular for DF, are cavity-backed spirals and horn antennas. Spiral antennas are capable of very wide bandwidths and have a nominal half-power beamwidth of about 70°, making them very suitable for antenna arrays containing 4, 5 or 6 antennas. For larger arrays, needing narrower beamwidths, horns may be used. The bandwidths of horn antennas may be increased by using double-ridged waveguide feeds and by using horns with internal ridges. Microwave receivers. Early receivers. Early microwave receivers were usually simple "crystal-video" receivers, which use a crystal detector followed by a video amplifier with a compressive characteristic to extend the dynamic range. Such a receiver was wideband but not very sensitive. However, this lack of sensitivity could be tolerated because of the "range advantage" enjoyed by the DF receiver (see below). Klystron and TWT preamplifiers. The klystron and TWT are linear devices and so, in principle, could be used as receiver preamplifiers. However, the klystron was quite unsuitable as it was a narrow-band device and extremely noisy, and the TWT, although potentially more suitable, has poor matching characteristics and large bulk, which made it unsuitable for multi-channel systems using a preamplifier per antenna. 
However, a system has been demonstrated in which a single TWT preamplifier selects signals from an antenna array. Transistor preamplifiers. Transistors suitable for microwave frequencies became available towards the end of the 1950s. The first of these was the metal oxide semiconductor field effect transistor (MOSFET). Others followed, for example, the metal-semiconductor field-effect transistor and the high electron mobility transistor (HEMT). Initially, discrete transistors were embedded in stripline or microstrip circuits, but microwave integrated circuits followed. With these new devices, low-noise receiver preamplifiers became possible, which greatly increased the sensitivity, and hence the detection range, of DF systems. Range advantage. The DF receiver enjoys a detection range advantage over that of the radar receiver. This is because the signal strength at the DF receiver, due to a radar transmission, is proportional to 1/R², whereas that at the radar receiver from the reflected return is proportional to σ/R⁴, where R is the range and σ is the radar cross-section of the DF system. This results in the signal strength at the radar receiver being very much smaller than that at the DF receiver. Consequently, in spite of its poor sensitivity, a simple crystal-video DF receiver is usually able to detect the signal transmission from a radar at a greater range than that at which the radar's own receiver is able to detect the presence of the DF system. In practice, the advantage is reduced by the ratio of antenna gains (typically 36 dB and 10 dB for the radar and ESM, respectively) and by the use of Spread spectrum techniques, such as Chirp compression, by the radar, to increase the processing gain of its receiver. On the other hand, the DF system can regain some advantage by using sensitive, low-noise receivers and by using Stealth practices to reduce its radar cross-section, as with Stealth aircraft and Stealth ships. The new demands on DF systems. The move to microwave frequencies meant a reappraisal of the requirements of a DF system. Now, the receiver could no longer rely on a continuous signal stream on which to carry out measurements. Radars with their narrow beams would only illuminate the antennas of the DF system infrequently. Furthermore, some radars wishing to avoid detection (those of smugglers, hostile ships and missiles) would radiate their signals infrequently and often at low power. Such a system is referred to as a low-probability-of-intercept radar. In other applications, such as microwave links, the transmitter's antenna may never point at the DF receiver at all, so reception is only possible by means of the signal leakage from antenna side lobes. In addition, covert Data links may only radiate a high data rate sequence very occasionally. In general, in order to cater for modern circumstances, a broadband microwave DF system is required to have high sensitivity, 360° coverage, the ability to detect single pulses (often called amplitude monopulse) and a high "Probability of Intercept" (PoI). DF by amplitude comparison. Amplitude comparison has been popular as a method for DF because systems are relatively simple to implement, have good sensitivity and, very importantly, a high probability of signal detection. Typically, an array of four, or more, squinted directional antennas is used to give 360 degree coverage. 
DF by phase comparison methods can give better bearing accuracy, but the processing is more complex. Systems using a single rotating dish antenna are more sensitive, small and relatively easy to implement, but have poor PoI. Usually, the signal amplitudes in two adjacent channels of the array are compared, to obtain the bearing of an incoming wavefront but, sometimes, three adjacent channels are used to give improved accuracy. Although the gains of the antennas and their amplifying chains have to be closely matched, careful design and construction and effective calibration procedures can compensate for shortfalls in the hardware. Overall bearing accuracies of 2° to 10° (rms) have been reported using the method. Two-channel DF. Two-channel DF, using two adjacent antennas of a circular array, is achieved by comparing the signal power of the largest signal with that of the second largest signal. The direction of an incoming signal, within the arc described by two antennas with a squint angle of Φ, may be obtained by comparing the relative powers of the signals received. When the signal is on the boresight of one of the antennas, the signal at the other antenna will be about 12 dB lower. When the signal direction is halfway between the two antennas, the signal levels will be equal and approximately 3 dB lower than the boresight value. At other bearing angles, φ, some intermediate ratio of the signal levels will give the direction. If the antenna main lobe patterns have a Gaussian characteristic, and the signal powers are described in logarithmic terms (e.g. decibels (dB) relative to the boresight value), then there is a linear relationship between the bearing angle φ and the power level difference P1(dB) - P2(dB), where P1(dB) and P2(dB) are the outputs of the two adjacent channels. The thumbnail shows a typical plot. To give 360° coverage, antennas of a circular array are chosen, in pairs, according to the signal levels received at each antenna. If there are N antennas in the array, at angular spacing (squint angle) Φ, then Φ = 2π/N radians (= 360/N degrees). Basic equations for two-port DF. If the main lobes of the antennas have a Gaussian characteristic, then the output P1(φ), as a function of bearing angle φ, is given by formula_0 where G0 is the antenna boresight gain (i.e. the gain when φ = 0), Ψ0 is one half the half-power beamwidth, and A = -ln(0.5), so that P1(φ)/P1(0) = 0.5 when φ = Ψ0; angles are in radians. The second antenna, squinted at Φ and with the same boresight gain G0, gives an output formula_1 Comparing signal levels, formula_2 The natural logarithm of the ratio is formula_3 Rearranging formula_4 This shows the linear relationship between the output level difference, expressed logarithmically, and the bearing angle φ. Natural logarithms can be converted to decibels (dB, referred to the boresight gain) by using ln(X) = X(dB)/(10·log10(e)), so the equation can be written formula_5 
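The two-channel relations above lend themselves to a quick numerical check. The following Python sketch is illustrative only: the beamwidth, squint angle and test bearing are arbitrary assumed values, not figures from any particular DF system. It models two squinted antennas with Gaussian main lobes and then inverts the decibel-form equation above to recover the bearing.

```python
import math

# Assumed example parameters, not taken from any real system
PSI0 = 35.0         # one half of the half-power beamwidth, in degrees
PHI = 60.0          # squint angle between adjacent antennas, in degrees (a 6-antenna array)
A = -math.log(0.5)  # the constant A = -ln(0.5) used in the text

def lobe_db(offset_deg):
    """Gaussian main-lobe response in dB relative to boresight."""
    return -10.0 * math.log10(math.e) * A * (offset_deg / PSI0) ** 2

def two_channel_bearing(p1_db, p2_db):
    """Bearing from the dB-form relation: phi = Psi0^2/(6.0202*Phi)*(P2dB - P1dB) + Phi/2."""
    return PSI0 ** 2 / (6.0202 * PHI) * (p2_db - p1_db) + PHI / 2.0

true_bearing = 22.0                  # degrees to the right of antenna 1 boresight
p1_db = lobe_db(true_bearing)        # channel 1 level (antenna 1)
p2_db = lobe_db(PHI - true_bearing)  # channel 2 level (antenna squinted at PHI)
print(f"P1 = {p1_db:.2f} dB, P2 = {p2_db:.2f} dB")
print(f"estimated bearing = {two_channel_bearing(p1_db, p2_db):.2f} deg (true: {true_bearing} deg)")
```

With the values shown, the recovered bearing matches the simulated one, as expected, because the same Gaussian lobe model is used both to generate and to invert the signal levels.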
Three-channel DF. Improvements in bearing accuracy may be achieved if amplitude data from a third antenna are included in the bearing processing. For three-channel DF, with three antennas squinted at angles Φ, the direction of the incoming signal is obtained by comparing the signal power of the channel containing the largest signal with the signal powers of the two adjacent channels, situated at each side of it. For the antennas in a circular array, three antennas are selected according to the signal levels received, with the largest signal present at the central channel. When the signal is on the boresight of Antenna 1 (φ = 0), the signals from the other two antennas will be equal and about 12 dB lower. When the signal direction is halfway between two antennas (φ = 30°), their signal levels will be equal and approximately 3 dB lower than the boresight value, with the third signal now about 24 dB lower. At other bearing angles, φ, some intermediate ratios of the signal levels will give the direction. Basic equations for three-port DF. For a signal incoming at a bearing φ, taken here to be to the right of boresight of Antenna 1: Channel 1 output is formula_6 Channel 2 output is formula_7 Channel 3 output is formula_8 where GT is the overall gain of each channel, including antenna boresight gain, and is assumed to be the same in all three channels. As before, in these equations, angles are in radians, Φ = 360/N degrees = 2π/N radians and A = -ln(0.5). As earlier, these can be expanded and combined to give: formula_9 formula_10 Eliminating A/Ψ0² and rearranging formula_11 where Δ1,3 = ln(P1) - ln(P3), Δ1,2 = ln(P1) - ln(P2) and Δ2,3 = ln(P2) - ln(P3). The difference values here are in nepers but could be in decibels. The bearing value obtained using this equation is independent of the antenna beamwidth (= 2Ψ0), so this value does not have to be known for accurate bearing results to be obtained. Also, there is a smoothing effect for bearing values near to the boresight of the middle antenna, so there is no discontinuity in bearing values there as an incoming signal moves from left to right (or vice versa) through boresight, as can occur with 2-channel processing. Bearing uncertainty due to noise. Many of the causes of bearing error, such as mechanical imperfections in the antenna structure, poor matching of receiver gains, or non-ideal antenna gain patterns, may be compensated by calibration procedures and corrective look-up tables, but thermal noise will always be a degrading factor. As all systems generate thermal noise, when the level of the incoming signal is low the signal-to-noise ratios in the receiver channels will be poor, and the accuracy of the bearing prediction will suffer. In general, a guide to the bearing uncertainty is given by formula_12 degrees for a signal at crossover, where SNR0 is the signal-to-noise ratio that would apply at boresight. To obtain more precise predictions at a given bearing, the actual S:N ratios of the signals of interest are used. (The results may be derived by assuming that noise-induced errors are approximated by relating differentials to uncorrelated noise.) For adjacent processing using, say, Channel 1 and Channel 2, the bearing uncertainty (angle noise), Δφ (rms), is given below. In these results, square-law detection is assumed and the SNR figures are for signals at video (baseband), for the bearing angle φ. formula_13 rads where SNR1 and SNR2 are the video (base-band) signal-to-noise values for the channels for Antenna 1 and Antenna 2, when square-law detection is used. In the case of 3-channel processing, an expression which is applicable when the S:N ratios in all three channels exceed unity (when ln(1 + 1/SNR) ≈ 1/SNR is true in all three channels) is formula_14 where SNR1, SNR2 and SNR3 are the video signal-to-noise values for Channel 1, Channel 2, and Channel 3 respectively, for the bearing angle φ. 
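The three-channel bearing expression can be checked in the same way. Again this is only a sketch with assumed values for the squint angle, beamwidth and test bearing; as noted above, the estimate should come out independent of the assumed beamwidth.

```python
import math

# Assumed example values; the estimate should not depend on the beamwidth PSI0
PHI = 60.0          # squint angle between adjacent antennas, degrees (6-antenna array)
PSI0 = 35.0         # one half of the half-power beamwidth, degrees
A = -math.log(0.5)

def channel_output(offset_deg):
    """Relative channel output for a Gaussian main lobe (linear scale, boresight = 1)."""
    return math.exp(-A * (offset_deg / PSI0) ** 2)

def three_channel_bearing(p1, p2, p3):
    """phi = Delta_2,3 / (Delta_1,2 + Delta_1,3) * Phi/2, with Delta_i,j = ln(Pi) - ln(Pj)."""
    d12 = math.log(p1) - math.log(p2)
    d13 = math.log(p1) - math.log(p3)
    d23 = math.log(p2) - math.log(p3)
    return d23 / (d12 + d13) * PHI / 2.0

true_bearing = 10.0                      # degrees to the right of the central antenna boresight
p1 = channel_output(true_bearing)        # central channel (largest signal)
p2 = channel_output(PHI - true_bearing)  # channel of the antenna squinted at +PHI
p3 = channel_output(PHI + true_bearing)  # channel of the antenna squinted at -PHI
print(f"estimated bearing = {three_channel_bearing(p1, p2, p3):.2f} deg (true: {true_bearing} deg)")
```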
A typical DF system with six antennas. A schematic of a possible DF system, employing six antennas, is shown in the figure. The signals received by the antennas are first amplified by a low-noise preamplifier before detection by detector-log-video-amplifiers (DLVAs). The signal levels from the DLVAs are compared to determine the angle of arrival. By considering the signal levels on a logarithmic scale, as provided by the DLVAs, a large dynamic range is achieved and, in addition, the direction finding calculations are simplified when the main lobes of the antenna patterns have a Gaussian characteristic, as shown earlier. A necessary part of the DF analysis is to identify the channel which contains the largest signal, and this is achieved by means of a fast comparator circuit. In addition to the DF process, other properties of the signal may be investigated, such as pulse duration, frequency, pulse repetition frequency (PRF) and modulation characteristics. The comparator operation usually includes hysteresis, to avoid jitter in the selection process when the bearing of the incoming signal is such that two adjacent channels contain signals of similar amplitude. Often, the wideband amplifiers are protected from local high power sources (as on a ship) by input limiters and/or filters. Similarly, the amplifiers might contain notch filters to remove known, but unwanted, signals which could impair the system's ability to process weaker signals. Some of these issues are covered in RF chain. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Bibliography. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": " P_1(\\phi)= G_0.\\exp \\Bigr [ -A. \\Big ( \\frac{\\phi}{\\Psi_0} \\Big )^2 \\Bigr ] " }, { "math_id": 1, "text": " P_2 = G_0 .\\exp \\Bigr [ -A. \\Big ( \\frac{\\Phi - \\phi}{\\Psi_0} \\Big )^2 \\Bigr ] " }, { "math_id": 2, "text": " \\frac{P_1}{P_2} = \\frac{\\exp \\big [-A.(\\phi/\\Psi_0)^2 \\big ]}{\\exp \\Big [-A \\big [ (\\Phi - \\phi)/ \\Psi_0 \\big ]^2 \\Big ]} = \\exp \\Big [ \\frac{A}{\\Psi_0^2}.(\\Phi^2 - 2 \\Phi \\phi) \\Big ] " }, { "math_id": 3, "text": "\\ln \\Big ( \\frac{P_1}{P_2} \\Big ) = \\ln(P_1) - \\ln(P_2) = \\frac{A}{\\Psi_0^2}.(\\Phi^2 - 2 \\Phi \\phi) " }, { "math_id": 4, "text": " \\phi = \\frac{\\Psi_0^2}{2A.\\Phi}. \\big [ \\ln(P_2) -\\ln(P_1) \\big ] + \\frac{\\Phi}{2} " }, { "math_id": 5, "text": " \\phi = \\frac{\\Psi_0^2}{6.0202 \\Phi} . \\big [ P_2(dB) - P_1(dB) \\big ] +\\frac{\\Phi}{2} " }, { "math_id": 6, "text": " P_1 = G_T .\\exp \\Bigr [ -A. \\Big ( \\frac{\\phi}{\\Psi_0} \\Big )^2 \\Bigr ] " }, { "math_id": 7, "text": " P_2 = G_T .\\exp \\Bigr [ -A. \\Big ( \\frac{\\Phi - \\phi}{\\Psi_0} \\Big )^2 \\Bigr ] " }, { "math_id": 8, "text": " P_3 = G_T .\\exp \\Bigr [ -A. \\Big ( \\frac{\\Phi + \\phi}{\\Psi_0} \\Big )^2 \\Bigr ] " }, { "math_id": 9, "text": " \\ln(P_1) - \\ln(P_2) = \\frac{A}{\\Psi_0^2}.(\\Phi^2 - 2 \\Phi \\phi) " }, { "math_id": 10, "text": " \\ln(P_1) - \\ln(P_3) = \\frac{A}{\\Psi_0^2}.(\\Phi^2 + 2 \\Phi \\phi) " }, { "math_id": 11, "text": " \\phi = \\frac{\\Delta_{1,2} -\\Delta_{1,3}}{\\Delta_{1,2} + \\Delta_{1,3}}.\\frac{\\Phi}{2} = \\frac{\\Delta_{2,3}}{\\Delta_{1,2} + \\Delta_{1,3}}.\\frac{\\Phi}{2} " }, { "math_id": 12, "text": " \\Delta \\phi_{RMS} = 0.724 \\frac{2. \\Psi_0}{ \\sqrt{SNR_0}} " }, { "math_id": 13, "text": " \\Delta \\phi_{RMS} = \\frac{\\Phi}{2}.\\frac{\\Psi_0^2}{-ln(0.5).\\Phi}.\\sqrt{\\frac{1}{SNR_1} + \\frac {1}{SNR_2}} " }, { "math_id": 14, "text": " \\Delta \\phi_{rms} = \\frac{1}{-2.ln(0.5)}. \\frac{\\Psi_0^2}{\\Phi^2}. \\sqrt { \\bigg ( \\phi + \\frac{\\Phi}{2} \\bigg ) ^2 .\\frac{1}{SNR_2} + \\frac{4. \\phi ^2}{SNR_1} + \\bigg ( \\phi - \\frac{\\Phi}{2} \\bigg ) ^2 .\\frac{1}{SNR_3}} " } ]
https://en.wikipedia.org/wiki?curid=820724
8208248
Sunflower (mathematics)
&lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: For any sunflower size, does every set of uniformly sized sets which is of cardinality greater than some exponential in the set size contain a sunflower? Collection of sets in which every two sets have the same intersection In the mathematical fields of set theory and extremal combinatorics, a sunflower or formula_0-system is a collection of sets in which all possible distinct pairs of sets share the same intersection. This common intersection is called the kernel of the sunflower. The naming arises from a visual similarity to the botanical sunflower, arising when a Venn diagram of a sunflower set is arranged in an intuitive way. Suppose the shared elements of a sunflower set are clumped together at the centre of the diagram, and the nonshared elements are distributed in a circular pattern around the shared elements. Then when the Venn diagram is completed, the lobe-shaped subsets, which encircle the common elements and one or more unique elements, take on the appearance of the petals of a flower. The main research question arising in relation to sunflowers is: under what conditions does there exist a "large" sunflower (a sunflower with many sets) in a given collection of sets? The formula_0-lemma, sunflower lemma, and the Erdős-Rado sunflower conjecture give successively weaker conditions which would imply the existence of a large sunflower in a given collection, with the latter being one of the most famous open problems of extremal combinatorics. Formal definition. Suppose formula_1 is a set system over formula_2, that is, a collection of subsets of a set formula_2. The collection formula_1 is a "sunflower" (or "formula_0-system") if there is a subset formula_3 of formula_2 such that for each distinct formula_4 and formula_5 in formula_1, we have formula_6. In other words, a set system or collection of sets formula_1 is a sunflower if all sets in formula_1 share the same common subset of elements. An element of formula_2 is either found in the common subset formula_3 or else appears in at most one of the sets of formula_1. No element of formula_2 is shared by just some of the sets in formula_1 but not the others. Note that this intersection, formula_3, may be empty; a collection of pairwise disjoint subsets is also a sunflower. Similarly, a collection of sets each containing the same elements is also trivially a sunflower. Sunflower lemma and conjecture. The study of sunflowers generally focuses on when set systems contain sunflowers, in particular, when a set system is sufficiently large to necessarily contain a sunflower. Specifically, researchers analyze the function formula_7 for nonnegative integers formula_8, which is defined to be the smallest nonnegative integer formula_9 such that, for any set system formula_1 such that every set formula_10 has cardinality at most formula_11, if formula_1 has more than formula_9 sets, then formula_1 contains a sunflower of formula_12 sets. Though it is not obvious that such an formula_9 must exist, a basic and simple result of Erdős and Rado, the Delta System Theorem, indicates that it does. Erdős-Rado Delta System Theorem (corollary of the Sunflower lemma): For each formula_13, formula_14, there is an integer formula_7 such that if a set system formula_15 of formula_11-sets is of cardinality greater than formula_7, then formula_15 contains a sunflower of size formula_12. 
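To make the definition concrete, here is a small illustrative Python sketch; the helper names and the example family are invented for this illustration and are not drawn from the literature. It checks whether a collection of sets forms a sunflower and searches a family by brute force for a sunflower of a given number of sets.

```python
from itertools import combinations

def is_sunflower(sets):
    """True if every pair of distinct sets has the same intersection (the kernel)."""
    sets = [frozenset(s) for s in sets]
    kernel = frozenset.intersection(*sets)
    return all(a & b == kernel for a, b in combinations(sets, 2))

def find_sunflower(family, r):
    """Brute-force search for a sunflower of r sets; returns one if it exists, else None."""
    family = [frozenset(s) for s in family]
    for candidate in combinations(family, r):
        if is_sunflower(candidate):
            return candidate
    return None

# A small example family of 2-element sets
family = [{1, 2}, {1, 3}, {1, 4}, {2, 3}]
print(find_sunflower(family, 3))  # {1,2}, {1,3}, {1,4} form a sunflower with kernel {1}
print(find_sunflower(family, 4))  # None: this family has no sunflower of 4 sets
```

The brute-force search is exponential in the size of the family, so it is only practical for toy examples; the lemma and conjecture discussed below concern how large a k-uniform family must be before such a search is guaranteed to succeed.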
In the literature, formula_1 is often assumed to be a set rather than a collection, so any set can appear in formula_1 at most once. By adding dummy elements, it suffices to only consider set systems formula_1 such that every set in formula_1 has cardinality formula_11, so often the sunflower lemma is equivalently phrased as holding for "formula_11-uniform" set systems. Sunflower lemma. Erdős and Rado proved the sunflower lemma, which states that formula_16 That is, if formula_11 and formula_12 are positive integers, then a set system formula_1 of cardinality greater than or equal to formula_17 of sets of cardinality formula_11 contains a sunflower with at least formula_12 sets. The Erdős-Rado sunflower lemma can be proved directly through induction. First, formula_18, since the set system formula_1 must be a collection of distinct sets of size one, and so formula_12 of these sets make a sunflower. In the general case, suppose formula_1 has no sunflower with formula_12 sets. Then consider formula_19 to be a maximal collection of pairwise disjoint sets (that is, formula_20 is the empty set unless formula_21, and every set in formula_1 intersects with some formula_22). Because we assumed that formula_1 had no sunflower of size formula_12, and a collection of pairwise disjoint sets is a sunflower, formula_23. Let formula_24. Since each formula_22 has cardinality formula_11, the cardinality of formula_4 is bounded by formula_25. Define formula_26 for some formula_27 to be formula_28 Then formula_26 is a set system, like formula_1, except that every element of formula_26 has formula_29 elements. Furthermore, every sunflower of formula_26 corresponds to a sunflower of formula_1, simply by adding back formula_30 to every set. This means that, by our assumption that formula_1 has no sunflower of size formula_12, the size of formula_26 must be bounded by formula_31. Since every set formula_10 intersects with one of the formula_22's, it intersects with formula_4, and so it corresponds to at least one of the sets in a formula_26: formula_32 Hence, if formula_33, then formula_1 contains a sunflower of formula_12 sets. Hence, formula_34 and the theorem follows. Erdős-Rado sunflower conjecture. The sunflower conjecture is one of several variations of the conjecture of Erdős and Rado that for each formula_35, formula_36 for some constant formula_37 depending only on formula_12. The conjecture remains wide open even for fixed low values of formula_12; for example formula_38; it is not known whether formula_39 for some formula_37. A 2021 paper by Alweiss, Lovett, Wu, and Zhang gives the best progress towards the conjecture, proving that formula_40 for formula_41. A month after the release of the first version of their paper, Rao sharpened the bound to formula_42; the current best-known bound is formula_43. Sunflower lower bounds. Erdős and Rado proved the following lower bound on formula_7. It is equivalent to the statement that the original sunflower lemma is optimal in formula_12. Theorem. formula_44 Proof. For formula_45, a collection of formula_46 distinct singletons contains no sunflower of formula_12 sets, since there are fewer than formula_12 sets in total. Let formula_47 denote the size of the largest family of formula_29-sets with no sunflower of formula_12 sets. Let formula_48 be such a family. Take formula_46 additional elements and, in each of formula_46 disjoint copies of formula_48, add one of these elements to every set in that copy. Take the union of the formula_46 disjoint copies with the elements added and denote this family formula_49. 
The copies of formula_48, each with an element added, form a partition of formula_49 into formula_46 parts. We have that formula_50. formula_49 is sunflower free, since any selection of formula_12 sets taken from a single part of the partition is sunflower free by the assumption that formula_48 is sunflower free. Otherwise, if formula_12 sets are selected from across multiple parts of the partition, then two of them must be selected from the same part, since there are only formula_46 parts. This implies that at least two of the selected sets, but not all of them, will have an added element in common, so the selection is not a sunflower of formula_12 sets. A stronger result is the following theorem: Theorem. formula_51 Proof. Let formula_15 and formula_52 be two sunflower free families. For each set formula_4 in formula_15, append every set in formula_52 to formula_4 to produce formula_53 many sets. Denote this family of sets formula_54. Take the union of formula_54 over all formula_4 in formula_15. This produces a family of formula_55 sets which is sunflower free. The best existing lower bound for the Erdős-Rado sunflower problem for formula_56 is formula_57, due to Abbott, Hansen, and Sauer. This bound has not been improved in over 50 years. Applications of the sunflower lemma. The sunflower lemma has numerous applications in theoretical computer science. For example, in 1986, Razborov used the sunflower lemma to prove that the Clique language requires monotone circuits of formula_58 (superpolynomial) size, a breakthrough result in circuit complexity theory at the time. Håstad, Jukna, and Pudlák used it to prove lower bounds on depth-formula_59 formula_60 circuits. It has also been applied in the parameterized complexity of the hitting set problem, to design fixed-parameter tractable algorithms for finding small sets of elements that contain at least one element from a given family of sets. Analogue for infinite collections of sets. A version of the formula_0-lemma which is essentially equivalent to the Erdős-Rado formula_0-system theorem states that a countably infinite collection of k-sets contains a countably infinite sunflower or formula_0-system. The formula_0-lemma states that every uncountable collection of finite sets contains an uncountable formula_0-system. The formula_0-lemma is a combinatorial set-theoretic tool used in proofs to impose an upper bound on the size of a collection of pairwise incompatible elements in a forcing poset. It may, for example, be used as one of the ingredients in a proof showing that it is consistent with Zermelo–Fraenkel set theory that the continuum hypothesis does not hold. It was introduced by Shanin (1946). If formula_1 is an formula_61-sized collection of countable subsets of formula_61, and if the continuum hypothesis holds, then there is an formula_61-sized formula_0-subsystem. Let formula_62 enumerate formula_1. For formula_63, let formula_64. By Fodor's lemma, fix formula_3 stationary in formula_61 such that formula_65 is constantly equal to formula_66 on formula_3. Build formula_67 of cardinality formula_61 such that whenever formula_68 are in formula_69 then formula_70. Using the continuum hypothesis, there are only formula_71-many countable subsets of formula_66, so by further thinning we may stabilize the kernel. References. &lt;templatestyles src="Refbegin/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Delta" }, { "math_id": 1, "text": "W" }, { "math_id": 2, "text": "U" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "A \\cap B = S" }, { "math_id": 7, "text": "f(k,r)" }, { "math_id": 8, "text": "k, r" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "S \\in W" }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "r" }, { "math_id": 13, "text": "k>0" }, { "math_id": 14, "text": "r>0" }, { "math_id": 15, "text": "F" }, { "math_id": 16, "text": "f(k,r)\\le k!(r-1)^k." }, { "math_id": 17, "text": "k!(r-1)^{k}" }, { "math_id": 18, "text": "f(1,r)\\le r-1" }, { "math_id": 19, "text": "A_1,A_2,\\ldots,A_t \\in W" }, { "math_id": 20, "text": "A_i \\cap A_j" }, { "math_id": 21, "text": "i = j" }, { "math_id": 22, "text": "A_i" }, { "math_id": 23, "text": "t < r" }, { "math_id": 24, "text": "A = A_1 \\cup A_2 \\cup \\cdots \\cup A_t" }, { "math_id": 25, "text": "kt \\leq k(r-1)" }, { "math_id": 26, "text": "W_a" }, { "math_id": 27, "text": "a \\in A" }, { "math_id": 28, "text": "W_a = \\{S \\setminus \\{a\\} \\mid a \\in S,\\, S \\in W\\}." }, { "math_id": 29, "text": "k-1" }, { "math_id": 30, "text": "a" }, { "math_id": 31, "text": "f(k-1,r)-1" }, { "math_id": 32, "text": "|W| \\leq \\sum_{a \\in A} |W_a| \\leq |A| (f(k-1, r)-1) \\leq k(r-1)f(k-1, r) - |A| \\leq k(r-1)f(k-1, r) - 1." }, { "math_id": 33, "text": "|W| \\ge k(r-1)f(k-1,r)" }, { "math_id": 34, "text": "f(k,r) \\le \nk(r-1)f(k-1,r)" }, { "math_id": 35, "text": "r>2" }, { "math_id": 36, "text": "f(k,r)\\le C^k" }, { "math_id": 37, "text": "C>0" }, { "math_id": 38, "text": "r=3" }, { "math_id": 39, "text": "f(k,3)\\le C^k" }, { "math_id": 40, "text": "f(k,r)\\le C^k " }, { "math_id": 41, "text": "C = O(r^3\\log(k)\\log\\log(k))" }, { "math_id": 42, "text": "C=O(r\\log(rk))" }, { "math_id": 43, "text": "C=O(r\\log k)" }, { "math_id": 44, "text": " (r-1)^k \\le f(k,r). " }, { "math_id": 45, "text": " k = 1 " }, { "math_id": 46, "text": "r-1" }, { "math_id": 47, "text": "h(k-1,r)" }, { "math_id": 48, "text": "H" }, { "math_id": 49, "text": "H^*" }, { "math_id": 50, "text": "(r-1)|H| \\le |H^*|" }, { "math_id": 51, "text": "f(a+b,r) \\ge (f(a,r)-1)(f(b,r)-1)" }, { "math_id": 52, "text": "F^*" }, { "math_id": 53, "text": "|F^*|" }, { "math_id": 54, "text": "F_A" }, { "math_id": 55, "text": "|F^*||F|" }, { "math_id": 56, "text": " r=3 " }, { "math_id": 57, "text": " 10^{{\\frac{k}{2}}} \\le f(k,3) " }, { "math_id": 58, "text": "n^{\\log(n)}" }, { "math_id": 59, "text": "3" }, { "math_id": 60, "text": "AC_0" }, { "math_id": 61, "text": "\\omega_2" }, { "math_id": 62, "text": "\\langle A_\\alpha:\\alpha<\\omega_2\\rangle" }, { "math_id": 63, "text": "\\operatorname{cf}(\\alpha)=\\omega_1" }, { "math_id": 64, "text": "f(\\alpha) = \\sup(A_\\alpha \\cap \\alpha)" }, { "math_id": 65, "text": "f" }, { "math_id": 66, "text": "\\beta" }, { "math_id": 67, "text": "S'\\subseteq S" }, { "math_id": 68, "text": "i < j" }, { "math_id": 69, "text": "S'" }, { "math_id": 70, "text": "A_i \\subseteq j" }, { "math_id": 71, "text": "\\omega_1" } ]
https://en.wikipedia.org/wiki?curid=8208248
8208352
Gabriel Pareyon
Gabriel Pareyon (born October 23, 1974, Zapopan, Jalisco) is a polymathic Mexican composer and musicologist who has published literature on topics of philosophy and semiotics. He has a Ph.D. in musicology from the University of Helsinki, where he studied with Solomon Marcus and Eero Tarasti (2006–2011). He received bachelor's and master's degrees in composition at the Royal Conservatoire, The Hague (2000–2004), where he studied with Clarence Barlow. He previously studied at the Composers’ Workshop of the National Conservatoire of Music, Mexico City (1995–1998), led by Mario Lavista. Composer. Pareyon's output is especially known for "Xochicuicatl cuecuechtli" (2011), the first modern opera in the Americas that exclusively uses a Native American language (Nahuatl in this case) as well as musical instruments native to Mexico. This work was awarded by UNESCO and the International Theatre Institute in 2015. More recently, his "Eight Songs in Nahuatl" ("Chicueyicuicatl"), for solo voice and percussion quartet, gained recognition simultaneously through an international live tour (awarded at the Classical:NEXT Festival Schauspiel Hannover, 2022) and through a series of screenings of its film version (best musical feature in an indigenous language, PARAI Festival, Chennai, India, 2022, and Wairoa Māori Film Festival, New Zealand, 2022). As a young composer (2006 and earlier), Pareyon had several works selected for the Thailand International Saxophone Competition for Composers (Bangkok, 2006, I Prize), the 2nd International Jurgenson Competition for young composers (Moscow, 2003, II Prize) and the 3rd Andrzej Panufnik International Composition Competition (Kraków, 2001, III Prize). His earlier production includes works for Classical instruments and ensembles. He also experimented with Mexican traditional instruments (such as huehuetl, teponaztli and a wide variety of woodwinds), as well as with metre and phonetics from Nahuatl and Hñähñu, also known as the Otomí language. His music also combines wider aspects of linguistics and human speech, mathematical models (series, patterns, algorithms, etc.), and models coming from bird vocalization and nonverbal communication. Musicologist. As a musicologist, Pareyon has contributed through his publications to the recognition of aspects of the new music from Mexico in his own country and abroad, e.g. in the explanation and extension of Julio Estrada's work (see McHard 2006, 2008:264). Accordingly, his work has been quoted, as early as 2000, by international compilations about the music of Mexico (see e.g. Olsen &amp; Sheehy 2000:108; Nattiez et al. 2006:125, 137, 1235) and by specialised literature (see e.g. Brenner 2000:177; Madrid &amp; Moore 2013:94, 126). The Preface to the book "Musicians' Migratory Patterns: American-Mexican Border Lands" starts with the following statement: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"“In Pareyon's "Diccionario Enciclopédico de Música en México" (2007), the phrase Estados Unidos de América appears 1,338 times. Pareyon's two-volume compendium totaling 1,134 pages is a paramount bibliographic recount of the lives and musical experiences of Mexican musicians and artists from preColumbian times to today. The artistic, academic, cultural, social, and economic ties of Mexican musicians to the United States cannot be simply understood by such a significant number; however, this mere statistical value no doubt informs us about the strong musical relationships between the two countries.”" Systematic Musicology. 
In the field of systematic musicology, Pareyon’s book "On Musical Self-Similarity" (Helsinki, 2011) anticipates the role of analogy as one of the central issues for future musicology and cognitive science, foreshadowing conclusions of Hofstadter &amp; Sander's "Surfaces and Essences" (2013). According to Curtis Roads (2015:316), "On Musical Self-Similarity" "is an intriguing treatise in which repetition is generalized to several modes of self-similarity that are ubiquitous in musical discourse." The book is frequently referenced in monographs, journals and dissertations, mainly in the fields of representation of temporal groups and semigroups, machine learning and human-machine hybrid composition, non-linear cognitive studies of musical processes, neural dynamic programming, and self-repetition algorithmic modelling. Textiles understood as Musical Patterns. The grandson of a textile worker from La Experiencia (Zapopan, Jalisco), Pareyon published the article “Traditional patterns and textures as values for meaningful automatization in music” in Finland in 2010; it is a seminal work proposing that textiles and traditional fabrics, generalized as frieze group patterns, may be, and indeed are, instructive as musical contents. This idea inspired a PhD dissertation from Durham University (2016), and contributed to a framework for systematizing the catalog of harmonic styles developed as an interactive immersion by the University of Science and Technology of China (USTC) (Huawei Technologies Co Ltd). A clarifying chapter in these terms, entitled “A matter of complementarity”, appears within "On Musical Self-Similarity", pages 458-461. Music as an Ecology. Another pioneering writing is “The Ecologic Foundations of Stylistics in Music and in Language”, published by the Aristotle University and the University of Edinburgh in 2009. There, the conclusions lead to conceiving culture as an intersection between the semiosphere and the ecological niche’s complexity: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"“In the wake of considering style as a result of dynamic relations in music and in language, it might be questioned whether its cycles are involved into greater systems of biological complexity. (...) This would explain at least general aspects of attraction, repulsion and bifurcation in musical constraints, preferences and correctness rules.”" The latter cannot be disengaged from the political and social dimensions of music, as Pareyon states at the end of another paper, “How Music Can Signify Politics in the Postmodern Era” (Helsinki, 2011): &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"“Musical nationalism, and postmodern corporatism equally play a role in colonisation and ecological deterioration, making trivial what was sacred or traditional in its original context. In musicology, a big question arises: can composers such as Chávez, Bartók, Sibelius, or Villa-Lobos, be judged as conquerors of local traditions, for the sake of the expansion of Classical music? 
Or are they rather cathartic agents of musical synthesis, attempting to save diversity in spite of an unavoidable, progressive unitarism as a process of cultural self-transformation?”" Finally, this idea of "diversity of music" is developed in a later book, "Resonancias del abismo como nación" (in Spanish, 2021), as follows (page 372): &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"“Faced with the disaster of monoculture, the metaphor of bioecological disaster is concomitant: the rapid increase in the rate of destruction of languages, traditions and traditional ways of life, quickly replaced by a single way of existence, usually called “progress” or “development”, and inspired by the North American model of egotistical and irrational consumption of resources, in the midst of the cultural desolation that derives from the reproduction of this system, leads humanity towards a form of generalized poverty never seen before.”" Semiotician. Pareyon’s output in the field of semiotics is significant mainly through his principal contributions of "polar semiotics", "intersemiotic continuum" and "intersemiotic synecdoche". Polar semiotics. Probably Pareyon’s most important contribution, both to semiotics and musicology, is his construction of Polar semiotics (also "Polar semiology") within the mathematical domain of Category theory. Thomas Sebeok’s famous statement "the sign is bifacial" (1976:117; with noticeable antecedent in Peircean semiotics) remained obscure in the context of interdisciplinary studies until Pareyon’s formal generalization, in a fashion that makes it possible to harmonize cultural semiotics within the range of Group theory. This theorization has an impact on the methods for social history, as a bond between the abstract and the "socially real" and the pathos, since, as Pareyon concludes: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;"“The more important a concept does appear in the human mind and in social history, the more heterogeneousness of thoughts around it. This, which is a "problem" for classical philosophy, is the greatest virtue for the concepts of philosophy behind polar semiotics. Yoneda lemma facilitates this recognition for philosophy in general, and for musicology in particular.”" Intersemiotic continuum. Pareyon’s theorization on the "intersemiotic continuum" is an elaboration on Lotman's (1984: 5-6) semiosphere and Sebeok's "semiotic continuum" (formula_0). The latter expressed that “no semiotic system can exist or function unless it is ‘immersed in the semiotic continuum’—which is what Lotman terms the "semiosphere"”. However, the formula_0 concept emphasizes the fact that there is no gap along or across the sign network and its interpretant (of any sign). This is deeply related to the semiotic quiddity "aliquid stat pro aliquo", conventionally translated and adapted to the terms: “[A sign is] everything that stands for something else”. Furthermore, Kotov and Kull (2011:183) specify that (The) “semiosphere can be described as a "semiotic continuum", a heterogeneous yet bounded space that is in constant interaction with other similar structures.” Congruently, the "intersemiotic continuum theory" (formula_1 theory), introduced in chapter 3.8.1. of Pareyon’s "On Musical Self-Similarity", expands this notion to the principle that “there is no any gap along or across the semiotic dimensions and its interpretants”. Subsumed within the field of formal categories, this theorization adopted the rule of satisfying the Snake lemma. 
Subsequently, this theorization strengthened the complementary concepts of "intersemiotic synecdoche" and "polar semiotics". Within the first years after the publication of these concepts in "On Musical Self-Similarity" (2011), the formula_1 theory was extended to several scientific disciplines, mainly in Eastern Europe and Russia. Intersemiotic synecdoche. The classical concept of synecdoche, in which a term for a part of something is used to refer to the whole, or vice versa, is here embedded into a multidimensional semiotic depth. Thus, whereas “classical synecdoche” dwells within rhetoric and speech-theoretical contexts, the intersemiotic synecdoche is the analogous operation, transversal to any number formula_2 of semiotic dimensions. It is also, necessarily, a subgroup of the "intersemiotic continuum" as a whole. Among other features, this framework expands the approach to abstract synesthesia in different conceptual domains, for instance, connecting partial codes or signs to complete codes or sign systems of potentially infinite semiotic varieties. A first order example would be as follows: let formula_3"formula_4" be part of pitch formula_5"formula_6", which in turn forms part of a chord formula_7"formula_8" existent with specific timbre formula_9"ω" (i.e. Fourier spectrum) that represents specific combinatorics for a Dirichlet L-function, formula_10. Thus, summarizing: formula_11 ∝ formula_12 Although merely substituting a symbol by another symbol or a code by another parallel code is obviously trivial, when embedding this sort of relation as connected morphisms (see: Category theory), semiotics can be understood as the realm of signs, symbols and associated operations, characterizable as the ‘visible display’ (i.e. perception of the signs and signic processes: the ‘color’ in the previous example), in contrast with its transversal constraints (‘invisible’ or hidden to the senses). Nevertheless, both "perceptible" and "imperceptible" plots of signs integrate the same intersemiotic continuum (the "pars pro toto" being ‘explicit’, and the "toto pro pars" ‘implicit’). References. &lt;templatestyles src="Reflist/styles.css" /&gt; See also. List of musicologists Mexicayotl
[ { "math_id": 0, "text": "SC" }, { "math_id": 1, "text": "IC" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "color" }, { "math_id": 4, "text": "k" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "j" }, { "math_id": 7, "text": "ch" }, { "math_id": 8, "text": "i" }, { "math_id": 9, "text": "Fs" }, { "math_id": 10, "text": "L" }, { "math_id": 11, "text": "L(Fs) = \\frac{ch(log)}{p(log)}" }, { "math_id": 12, "text": "color_k." } ]
https://en.wikipedia.org/wiki?curid=8208352
8211999
Eilenberg–Ganea conjecture
Conjecture in algebraic topology The Eilenberg–Ganea conjecture is a claim in algebraic topology. It was formulated by Samuel Eilenberg and Tudor Ganea in 1957, in a short, but influential paper. It states that if a group "G" has cohomological dimension 2, then it has a 2-dimensional Eilenberg–MacLane space formula_0. For "n" different from 2, a group "G" of cohomological dimension "n" has an "n"-dimensional Eilenberg–MacLane space. It is also known that a group of cohomological dimension 2 has a 3-dimensional Eilenberg−MacLane space. In 1997, Mladen Bestvina and Noel Brady constructed a group "G" so that either "G" is a counterexample to the Eilenberg–Ganea conjecture, or there must be a counterexample to the Whitehead conjecture; in other words, it is not possible for both conjectures to be true..
[ { "math_id": 0, "text": "K(G,1)" } ]
https://en.wikipedia.org/wiki?curid=8211999
8212425
Pennate muscle
Muscle with fascicles that attach obliquely to its tendon A pennate or pinnate muscle (also called a penniform muscle) is a type of skeletal muscle with fascicles that attach obliquely (in a slanting position) to its tendon. This type of muscle generally allows higher force production but a smaller range of motion. When a muscle contracts and shortens, the pennation angle increases. Etymology. The term "pennate" comes from the Latin "pinnātus" ("feathered, winged"), from "pinna" ("feather, wing"). Types of pennate muscle. In skeletal muscle tissue, 10-100 endomysium-sheathed muscle fibers are organized into perimysium-wrapped bundles known as fascicles. Each muscle is composed of a number of fascicles grouped together by a sleeve of connective tissue, known as an epimysium. In a pennate muscle, aponeuroses run along each side of the muscle and attach to the tendon. The fascicles attach to the aponeuroses and form an angle (the pennation angle) to the load axis of the muscle. If all the fascicles are on the same side of the tendon, the pennate muscle is called unipennate (Fig. 1A). Examples of this include certain muscles in the hand. If there are fascicles on both sides of the central tendon, the pennate muscle is called bipennate (Fig. 1B). The rectus femoris, a large muscle in the quadriceps, is typical. If the central tendon branches within a pennate muscle, the muscle is called multipennate (Fig. 1C), as seen in the deltoid muscle in the shoulder. Consequences of pennate muscle architecture. Physiological cross sectional area (PCSA). One advantage of pennate muscles is that more muscle fibers can be packed in parallel, thus allowing the muscle to produce more force, although the fiber angle to the direction of action means that the maximum force in that direction is somewhat less than the maximum force in the fiber direction. The muscle cross sectional area (blue line in figure 1, also known as anatomical cross section area, or ACSA) does not accurately represent the number of muscle fibers in the muscle. A better estimate is provided by the total area of the cross sections perpendicular to the muscle fibers (green lines in figure 1). This measure is known as the physiological cross sectional area (PCSA), and is commonly calculated and defined by the following formula (an alternative definition is provided in the main article): formula_0 where ρ is the density of the muscle: formula_1 PCSA increases with pennation angle, and with muscle length. In a pennate muscle, PCSA is always larger than ACSA. In a non-pennate muscle, it coincides with ACSA. Relationship between PCSA and muscle force. The total force exerted by the fibers along their oblique direction is proportional to PCSA. If the "specific tension" of the muscle fibers is known (force exerted by the fibers per unit of PCSA), it can be computed as follows: formula_2 However, only a component of that force can be used to pull the tendon in the desired direction. This component, which is the true "muscle force" (also called "tendon force"), is exerted along the direction of action of the muscle: formula_3 The other component, orthogonal to the direction of action of the muscle (Orthogonal force = Total force × sinΦ) is not exerted on the tendon, but simply squeezes the muscle, by pulling its aponeuroses toward each other. 
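As a rough numerical illustration of the relations above, the short Python sketch below uses assumed example values (plausible orders of magnitude, not measurements of any particular muscle) for mass, density, fiber length, specific tension and pennation angle, and computes the PCSA, the total fiber force and the component delivered along the tendon.

```python
import math

# Assumed example values, chosen only for illustration
muscle_mass_g = 150.0              # g
density_g_per_cm3 = 1.06           # muscle density often quoted near 1.06 g/cm^3 (assumed here)
fiber_length_cm = 6.0              # cm
specific_tension_n_per_cm2 = 22.5  # N/cm^2 of PCSA (assumed value)
pennation_angle_deg = 20.0         # degrees

pcsa_cm2 = muscle_mass_g / (density_g_per_cm3 * fiber_length_cm)
total_fiber_force_n = pcsa_cm2 * specific_tension_n_per_cm2
muscle_force_n = total_fiber_force_n * math.cos(math.radians(pennation_angle_deg))

print(f"PCSA = {pcsa_cm2:.1f} cm^2")
print(f"total fiber force = {total_fiber_force_n:.0f} N")
print(f"force along the tendon = {muscle_force_n:.0f} N")
```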
Notice that, although it is practically convenient to compute PCSA based on volume or mass and fiber length, PCSA (and therefore the total fiber force, which is proportional to PCSA) is not proportional to muscle mass or fiber length alone. Namely, the maximum (tetanic) force of a muscle fiber simply depends on its thickness (cross-section area) and type; it by no means depends on its mass or length alone. For instance, when muscle mass increases due to physical development during childhood, this may be only due to an increase in length of the muscle fibers, with no change in fiber thickness (PCSA) or fiber type. In this case, an increase in mass does not produce an increase in force. Lower velocity of shortening. In a pennate muscle, as a consequence of their arrangement, fibers are shorter than they would be if they ran from one end of the muscle to the other. This implies that each fiber is composed of a smaller number "N" of sarcomeres in series. Moreover, the larger the pennation angle is, the shorter the fibers are. The speed at which a muscle fiber can shorten is partly determined by the length of the muscle fiber (i.e., by "N"). Thus, a muscle with a large pennation angle will contract more slowly than a similar muscle with a smaller pennation angle. Architectural gear ratio. Architectural gear ratio, also called anatomical gear ratio (AGR), is a feature of pennate muscle defined by the ratio between the longitudinal strain of the muscle and muscle fiber strain. It is sometimes also defined as the ratio between muscle-shortening velocity and fiber-shortening velocity: AGR = εx/εf where εx = longitudinal strain (or muscle-shortening velocity) and εf is fiber strain (or fiber-shortening velocity). It was originally thought that the distance between aponeuroses did not change during the contraction of a pennate muscle, thus requiring the fibers to rotate as they shorten. However, recent work has shown this is false, and that the degree of fiber angle change varies under different loading conditions. This dynamic gearing automatically shifts in order to produce either maximal velocity under low loads or maximal force under high loads. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{PCSA} = {\\text{muscle volume} \\over \\text{fiber length}} =\n {\\text{muscle mass} \\over {\\rho \\cdot \\text{fiber length}}}," }, { "math_id": 1, "text": "\\rho = {\\text{muscle mass} \\over \\text{muscle volume}}." }, { "math_id": 2, "text": "\\text{Total force} = \\text{PCSA} \\cdot \\text{Specific tension}" }, { "math_id": 3, "text": "\\text{Muscle force} = \\text{Total force} \\cdot \\cos \\Phi" } ]
https://en.wikipedia.org/wiki?curid=8212425
8212957
Fractional excretion of sodium
Percentage of kidney-filtered sodium excreted in urine The fractional excretion of sodium (FENa) is the percentage of the sodium filtered by the kidney which is excreted in the urine. It is measured in terms of plasma and urine sodium, rather than by the interpretation of urinary sodium concentration alone, as urinary sodium concentrations can vary with water reabsorption. Therefore, the urinary and plasma concentrations of sodium must be compared to get an accurate picture of kidney clearance. In clinical use, the fractional excretion of sodium can be calculated as part of the evaluation of acute kidney failure in order to determine if hypovolemia or decreased effective circulating plasma volume is a contributor to the kidney failure. Calculation. FENa is calculated in two parts—figuring out how much sodium is excreted in the urine, and then finding its ratio to the total amount of sodium that passed through (aka "filtered by") the kidney. First, the actual amount of sodium excreted is calculated by multiplying the urine sodium concentration by the urinary flow rate (UFR). This is the numerator in the equation. The denominator is the total amount of sodium filtered by the kidneys. This is calculated by multiplying the plasma sodium concentration by the glomerular filtration rate (GFR) calculated using creatinine filtration. The flow rates then cancel out, simplifying to the standard equation: formula_0 (with sodium concentrations in mmol/L and creatinine concentrations in mg/dL). For ease of recall, one can just remember the fractional excretion of sodium is the clearance of sodium divided by the glomerular filtration rate (i.e. the "fraction" excreted). Interpretation. FENa can be useful in the evaluation of acute kidney failure in the context of low urine output. Low fractional excretion indicates sodium retention by the kidney, suggesting pathophysiology extrinsic to the urinary system such as volume depletion or decrease in effective circulating volume (e.g. low output heart failure). Higher values can suggest sodium wasting due to acute tubular necrosis or other causes of intrinsic kidney failure. The FENa may be affected or invalidated by diuretic use, since many diuretics act by altering the kidney's handling of sodium. Exceptions in children and neonates. While the above values are useful for older children and adults, the FENa must be interpreted more cautiously in younger pediatric patients due to the limited ability of immature tubules to reabsorb sodium maximally. Thus, in term neonates, a FENa of <3% represents volume depletion, and a FENa as high as 4% may represent maximal sodium conservation in critically ill preterm neonates. The FENa may also be spuriously elevated in children with adrenal insufficiency or pre-existing kidney disease (such as obstructive uropathy) due to salt wasting. Exceptions in adults. The FENa is generally less than 1% in patients with hepatorenal syndrome and acute glomerulonephropathy. Although often reliable at discriminating between prerenal azotemia and acute tubular necrosis, the FENa has been reported to be <1% occasionally with oliguric and nonoliguric acute tubular necrosis, urinary tract obstruction, acute glomerulonephritis, renal allograft rejection, sepsis, and drug-related alterations in renal hemodynamics. Therefore, the utility of the test is best when used in conjunction with other clinical data. Alternatives. Fractional excretion of other substances can be measured to determine kidney clearance including urea, uric acid, and lithium. 
These can be used in patients undergoing diuretic therapy, since diuretics induce a natriuresis. Thus, the urinary sodium concentration and FENa may be higher in patients receiving diuretics in spite of prerenal pathology. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
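To make the cancellation of the urinary flow rate concrete, here is a minimal Python sketch of the FENa formula described above. The input concentrations are invented for illustration and this is not clinical guidance.

```python
def fena_percent(urine_na, plasma_na, urine_creatinine, plasma_creatinine):
    """FENa (%) = 100 * (urine Na * plasma creatinine) / (plasma Na * urine creatinine).
    The urinary flow rate cancels, so only the four concentrations are needed."""
    return 100.0 * (urine_na * plasma_creatinine) / (plasma_na * urine_creatinine)

# Illustrative values: urine Na 20 mmol/L, plasma Na 140 mmol/L,
# urine creatinine 80 mg/dL, plasma creatinine 1.0 mg/dL
print(f"FENa = {fena_percent(20, 140, 80, 1.0):.2f}%")   # ~0.18%, a low fractional excretion
```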
[ { "math_id": 0, "text": "\\begin{align}\n\n \\text{FE}_\\ce{Na} &= 100 \\times\n \\frac{ [\\ce{Na}]_\\text{urinary} \\times \\text{UFR} }{ [\\ce{Na}]_\\text{plasma} \\times \\text{GFR} } \\\\[4pt]\n\n &= 100 \\times\n \\frac{ [\\ce{Na}]_\\text{urinary} \\times \\text{UFR} }{ [\\ce{Na}]_\\text{plasma} \\times \\left(\\frac{ [\\text{creatinine}]_\\text{urinary} \\times \\text{UFR} }{ [\\text{creatinine}]_\\text{plasma} }\\right) } \\\\[4pt]\n\n &= 100 \\times \\frac{ [\\ce{Na}]_\\text{urinary} \\times [\\text{creatinine}]_\\text{plasma} }{ [\\ce{Na}]_\\text{plasma} \\times [\\text{creatinine}]_\\text{urinary} }\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=8212957
8213034
Whitehead conjecture
The Whitehead conjecture (also known as the Whitehead asphericity conjecture) is a claim in algebraic topology. It was formulated by J. H. C. Whitehead in 1941. It states that every connected subcomplex of a two-dimensional aspherical CW complex is aspherical. A group presentation formula_0 is called "aspherical" if the two-dimensional CW complex formula_1 associated with this presentation is aspherical or, equivalently, if formula_2. The Whitehead conjecture is equivalent to the conjecture that every sub-presentation of an aspherical presentation is aspherical. In 1997, Mladen Bestvina and Noel Brady constructed a group "G" so that either "G" is a counterexample to the Eilenberg–Ganea conjecture, or there must be a counterexample to the Whitehead conjecture; in other words, it is not possible for both conjectures to be true.
[ { "math_id": 0, "text": "G=(S\\mid R)" }, { "math_id": 1, "text": "K(S\\mid R)" }, { "math_id": 2, "text": "\\pi_2(K(S\\mid R))=0" } ]
https://en.wikipedia.org/wiki?curid=8213034
8214
Decimal
Number in base-10 numeral system The decimal numeral system (also called the base-ten positional numeral system and denary or decanary) is the standard system for denoting integer and non-integer numbers. It is the extension to non-integer numbers ("decimal fractions") of the Hindu–Arabic numeral system. The way of denoting numbers in the decimal system is often referred to as "decimal notation". A decimal numeral (also often just "decimal" or, less correctly, "decimal number"), refers generally to the notation of a number in the decimal numeral system. Decimals may sometimes be identified by a decimal separator (usually "." or "," as in 25.9703 or 3,1415). "Decimal" may also refer specifically to the digits after the decimal separator, such as in "3.14 is the approximation of π to "two decimals"". Zero-digits after a decimal separator serve the purpose of signifying the precision of a value. The numbers that may be represented in the decimal system are the decimal fractions. That is, fractions of the form "a"/10"n", where "a" is an integer, and "n" is a non-negative integer. Decimal fractions also result from the addition of an integer and a "fractional part"; the resulting sum sometimes is called a "fractional number". Decimals are commonly used to approximate real numbers. By increasing the number of digits after the decimal separator, one can make the approximation errors as small as one wants, when one has a method for computing the new digits. Originally and in most uses, a decimal has only a finite number of digits after the decimal separator. However, the decimal system has been extended to "infinite decimals" for representing any real number, by using an infinite sequence of digits after the decimal separator (see decimal representation). In this context, the usual decimals, with a finite number of non-zero digits after the decimal separator, are sometimes called terminating decimals. A "repeating decimal" is an infinite decimal that, after some place, repeats indefinitely the same sequence of digits (e.g., 5.123144144144144... = 5.123144). An infinite decimal represents a rational number, the quotient of two integers, if and only if it is a repeating decimal or has a finite number of non-zero digits. Origin. Many numeral systems of ancient civilizations use ten and its powers for representing numbers, possibly because there are ten fingers on two hands and people started counting by using their fingers. Examples are firstly the Egyptian numerals, then the Brahmi numerals, Greek numerals, Hebrew numerals, Roman numerals, and Chinese numerals. Very large numbers were difficult to represent in these old numeral systems, and only the best mathematicians were able to multiply or divide large numbers. These difficulties were completely solved with the introduction of the Hindu–Arabic numeral system for representing integers. This system has been extended to represent some non-integer numbers, called "decimal fractions" or "decimal numbers", for forming the "decimal numeral system". Decimal notation. For writing numbers, the decimal system uses ten decimal digits, a decimal mark, and, for negative numbers, a minus sign "−". The decimal digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9; the decimal separator is the dot "." in many countries (mostly English-speaking), and a comma "," in other countries. For representing a non-negative number, a decimal numeral consists of formula_1. 
If "m" &gt; 0, that is, if the first sequence contains at least two digits, it is generally assumed that the first digit "a""m" is not zero. In some circumstances it may be useful to have one or more 0's on the left; this does not change the value represented by the decimal: for example, 3.14 = 03.14 = 003.14. Similarly, if the final digit on the right of the decimal mark is zero—that is, if "b""n" = 0—it may be removed; conversely, trailing zeros may be added after the decimal mark without changing the represented number; for example, 15 = 15.0 = 15.00 and 5.2 = 5.20 = 5.200. For representing a negative number, a minus sign is placed before "a""m". The numeral formula_1 represents the number formula_2. The "integer part" or "integral part" of a decimal numeral is the integer written to the left of the decimal separator (see also truncation). For a non-negative decimal numeral, it is the largest integer that is not greater than the decimal. The part from the decimal separator to the right is the "fractional part", which equals the difference between the numeral and its integer part. When the integral part of a numeral is zero, it may occur, typically in computing, that the integer part is not written (for example, .1234, instead of 0.1234). In normal writing, this is generally avoided, because of the risk of confusion between the decimal mark and other punctuation. In brief, the contribution of each digit to the value of a number depends on its position in the numeral. That is, the decimal system is a positional numeral system. Decimal fractions. Decimal fractions (sometimes called decimal numbers, especially in contexts involving explicit fractions) are the rational numbers that may be expressed as a fraction whose denominator is a power of ten. For example, the decimal expressions formula_3 represent the fractions , , , and , and therefore denote decimal fractions. An example of a fraction that cannot be represented by a decimal expression (with a finite number of digits) is , 3 not being a power of 10. More generally, a decimal with "n" digits after the separator (a point or comma) represents the fraction with denominator 10"n", whose numerator is the integer obtained by removing the separator. It follows that a number is a decimal fraction if and only if it has a finite decimal representation. Expressed as fully reduced fractions, the decimal numbers are those whose denominator is a product of a power of 2 and a power of 5. Thus the smallest denominators of decimal numbers are formula_4 Approximation using decimal numbers. Decimal numerals do not allow an exact representation for all real numbers. Nevertheless, they allow approximating every real number with any desired accuracy, e.g., the decimal 3.14159 approximates π, being less than 10−5 off; so decimals are widely used in science, engineering and everyday life. More precisely, for every real number x and every positive integer n, there are two decimals "L" and "u" with at most "n" digits after the decimal mark such that "L" ≤ "x" ≤ "u" and ("u" − "L") = 10−"n". Numbers are very often obtained as the result of measurement. As measurements are subject to measurement uncertainty with a known upper bound, the result of a measurement is well-represented by a decimal with "n" digits after the decimal mark, as soon as the absolute measurement error is bounded from above by 10−"n". In practice, measurement results are often given with a certain number of digits after the decimal point, which indicate the error bounds. 
For example, although 0.080 and 0.08 denote the same number, the decimal numeral 0.080 suggests a measurement with an error less than 0.001, while the numeral 0.08 indicates an absolute error bounded by 0.01. In both cases, the true value of the measured quantity could be, for example, 0.0803 or 0.0796 (see also significant figures). Infinite decimal expansion. For a real number x and an integer "n" ≥ 0, let ["x"]"n" denote the (finite) decimal expansion of the greatest number that is not greater than "x" that has exactly n digits after the decimal mark. Let "d""i" denote the last digit of ["x"]"i". It is straightforward to see that ["x"]"n" may be obtained by appending "d""n" to the right of ["x"]"n"−1. This way one has ["x"]"n" = ["x"]0."d"1"d"2..."d""n"−1"d""n", and the difference of ["x"]"n"−1 and ["x"]"n" amounts to formula_5, which is either 0, if "d""n" = 0, or gets arbitrarily small as "n" tends to infinity. According to the definition of a limit, "x" is the limit of ["x"]"n" when "n" tends to infinity. This is written as formula_6 or "x" = ["x"]0."d"1"d"2..."d""n"..., which is called an infinite decimal expansion of "x". Conversely, for any integer ["x"]0 and any sequence of digits formula_7, the (infinite) expression ["x"]0."d"1"d"2..."d""n"... is an "infinite decimal expansion" of a real number "x". This expansion is unique if neither all "d""n" are equal to 9 nor all "d""n" are equal to 0 for "n" large enough (for all "n" greater than some natural number N). If all "d""n" for "n" > "N" are equal to 9 and ["x"]"n" = ["x"]0."d"1"d"2..."d""n", the limit of the sequence formula_8 is the decimal fraction obtained by replacing the last digit that is not a 9, i.e.: "d""N", by "d""N" + 1, and replacing all subsequent 9s by 0s (see 0.999...). Any such decimal fraction, i.e.: "d""n" = 0 for "n" > "N", may be converted to its equivalent infinite decimal expansion by replacing "d""N" by "d""N" − 1 and replacing all subsequent 0s by 9s (see 0.999...). In summary, every real number that is not a decimal fraction has a unique infinite decimal expansion. Each decimal fraction has exactly two infinite decimal expansions, one containing only 0s after some place, which is obtained by the above definition of ["x"]"n", and the other containing only 9s after some place, which is obtained by defining ["x"]"n" as the greatest number that is "less" than x, having exactly "n" digits after the decimal mark. Rational numbers. Long division allows computing the infinite decimal expansion of a rational number. If the rational number is a decimal fraction, the division stops eventually, producing a decimal numeral, which may be extended into an infinite expansion by adding infinitely many zeros. If the rational number is not a decimal fraction, the division may continue indefinitely. However, as all successive remainders are less than the divisor, there are only a finite number of possible remainders, and after some place, the same sequence of digits must be repeated indefinitely in the quotient. That is, one has a "repeating decimal". For example, 1/81 = 0.012345679012... (with the group 012345679 indefinitely repeating). The converse is also true: if, at some point in the decimal representation of a number, the same string of digits starts repeating indefinitely, the number is rational. For example, if "x" = 0.4156156156..., then 10,000"x" = 4156.156156... and 10"x" = 4.156156..., so 10,000"x" − 10"x" = 9,990"x" = 4152; hence "x" = 4152/9990 or, dividing both numerator and denominator by 6, 692/1665. Decimal computation. 
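As a small illustration of two facts above: a reduced fraction has a terminating expansion exactly when its denominator is a product of powers of 2 and 5, and long division of any other rational number eventually repeats. Here is a minimal Python sketch; the parenthesis notation used to mark the repeating block is a convention of this example only.

```python
from math import gcd

def has_finite_decimal(numerator: int, denominator: int) -> bool:
    """A reduced fraction terminates in base 10 iff its denominator has the form 2^i * 5^j."""
    d = denominator // gcd(numerator, denominator)
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

def expansion(n: int, d: int, max_digits: int = 60) -> str:
    """Long division of the fractional part of n/d, marking a repeating block with parentheses."""
    digits, seen, r = [], {}, n % d
    while r and r not in seen and len(digits) < max_digits:
        seen[r] = len(digits)
        r *= 10
        digits.append(str(r // d))
        r %= d
    if r and r in seen:                      # a remainder recurred: repeating decimal
        i = seen[r]
        return "0." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"
    return "0." + "".join(digits)            # remainder reached 0: a decimal fraction

print(has_finite_decimal(1, 8), expansion(1, 8))              # True 0.125
print(has_finite_decimal(1, 81), expansion(1, 81))            # False 0.(012345679)
print(has_finite_decimal(4152, 9990), expansion(4152, 9990))  # False 0.4(156)
```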
Most modern computer hardware and software systems commonly use a binary representation internally (although many early computers, such as the ENIAC or the IBM 650, used decimal representation internally). For external use by computer specialists, this binary representation is sometimes presented in the related octal or hexadecimal systems. For most purposes, however, binary values are converted to or from the equivalent decimal values for presentation to or input from humans; computer programs express literals in decimal by default. (123.1, for example, is written as such in a computer program, even though many computer languages are unable to encode that number precisely.) Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data which are encoded using some variant of binary-coded decimal, especially in database implementations, but there are other decimal representations in use (including decimal floating point such as in newer revisions of the IEEE 754 Standard for Floating-Point Arithmetic). Decimal arithmetic is used in computers so that decimal fractional results of adding (or subtracting) values with a fixed length of their fractional part are always computed to this same length of precision. This is especially important for financial calculations, e.g., requiring in their results integer multiples of the smallest currency unit for bookkeeping purposes. This is not possible in binary, because the negative powers of formula_9 have no finite binary fractional representation; and is generally impossible for multiplication (or division). See Arbitrary-precision arithmetic for exact calculations. History. Many ancient cultures calculated with numerals based on ten, perhaps because two human hands have ten fingers. Standardized weights used in the Indus Valley Civilisation (c. 3300–1300 BCE) were based on the ratios: 1/20, 1/10, 1/5, 1/2, 1, 2, 5, 10, 20, 50, 100, 200, and 500, while their standardized ruler – the "Mohenjo-daro ruler" – was divided into ten equal parts. Egyptian hieroglyphs, in evidence since around 3000 BCE, used a purely decimal system, as did the Linear A script (c. 1800–1450 BCE) of the Minoans and the Linear B script (c. 1400–1200 BCE) of the Mycenaeans. The Únětice culture in central Europe (2300-1600 BC) used standardised weights and a decimal system in trade. The number system of classical Greece also used powers of ten, including an intermediate base of 5, as did Roman numerals. Notably, the polymath Archimedes (c. 287–212 BCE) invented a decimal positional system in his Sand Reckoner which was based on 10⁸. Hittite hieroglyphs (since 15th century BCE) were also strictly decimal. The Egyptian hieratic numerals, the Greek alphabet numerals, the Hebrew alphabet numerals, the Roman numerals, the Chinese numerals and early Indian Brahmi numerals are all non-positional decimal systems, and required large numbers of symbols. For instance, Egyptian numerals used different symbols for 10, 20 to 90, 100, 200 to 900, 1000, 2000, 3000, 4000, to 10,000. The world's earliest positional decimal system was the Chinese rod calculus. History of decimal fractions. Starting from the 2nd century BCE, some Chinese units for length were based on divisions into ten; by the 3rd century CE these metrological units were used to express decimal fractions of lengths, non-positionally. 
Calculations with decimal fractions of lengths were performed using positional counting rods, as described in the 3rd–5th century CE "Sunzi Suanjing". The 5th century CE mathematician Zu Chongzhi calculated a 7-digit approximation of π. Qin Jiushao's book "Mathematical Treatise in Nine Sections" (1247) explicitly writes a decimal fraction representing a number rather than a measurement, using counting rods. The number 0.96644 is denoted . Historians of Chinese science have speculated that the idea of decimal fractions may have been transmitted from China to the Middle East. Al-Khwarizmi introduced fractions to Islamic countries in the early 9th century CE, written with a numerator above and denominator below, without a horizontal bar. This form of fraction remained in use for centuries. Positional decimal fractions appear for the first time in a book by the Arab mathematician Abu'l-Hasan al-Uqlidisi written in the 10th century. The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350 but did not develop any notation to represent them. The Persian mathematician Jamshid al-Kashi used, and claimed to have discovered, decimal fractions in the 15th century. A forerunner of modern European decimal notation was introduced by Simon Stevin in the 16th century. Stevin's influential booklet "De Thiende" ("the art of tenths") was first published in Dutch in 1585 and translated into French as "La Disme". John Napier introduced using the period (.) to separate the integer part of a decimal number from the fractional part in his book on constructing tables of logarithms, published posthumously in 1620. Natural languages. A method of expressing every possible natural number using a set of ten symbols emerged in India. Several Indian languages show a straightforward decimal system. Dravidian languages have numbers between 10 and 20 expressed in a regular pattern of addition to 10. The Hungarian language also uses a straightforward decimal system. All numbers between 10 and 20 are formed regularly (e.g. 11 is expressed as "tizenegy" literally "one on ten"), as with those between 20 and 100 (23 as "huszonhárom" = "three on twenty"). A straightforward decimal rank system with a word for each order (10 , 100 , 1000 , 10,000 ), and in which 11 is expressed as "ten-one" and 23 as "two-ten-three", and 89,345 is expressed as 8 (ten thousands) 9 (thousand) 3 (hundred) 4 (tens) 5 is found in Chinese, and in Vietnamese with a few irregularities. Japanese, Korean, and Thai have imported the Chinese decimal system. Many other languages with a decimal system have special words for the numbers between 10 and 20, and decades. For example, in English 11 is "eleven" not "ten-one" or "one-teen". Incan languages such as Quechua and Aymara have an almost straightforward decimal system, in which 11 is expressed as "ten with one" and 23 as "two-ten with three". Some psychologists suggest irregularities of the English names of numerals may hinder children's counting ability. Other bases. Some cultures do, or did, use other bases of numbers. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "a_ma_{m-1}\\ldots a_0" }, { "math_id": 1, "text": "a_ma_{m-1}\\ldots a_0.b_1b_2\\ldots b_n" }, { "math_id": 2, "text": "a_m10^m+a_{m-1}10^{m-1}+\\cdots+a_{0}10^0+\\frac{b_1}{10^1}+\\frac{b_2}{10^2}+\\cdots+\\frac{b_n}{10^n}" }, { "math_id": 3, "text": "0.8, 14.89, 0.00079, 1.618, 3.14159" }, { "math_id": 4, "text": "1=2^0\\cdot 5^0, 2=2^1\\cdot 5^0, 4=2^2\\cdot 5^0, 5=2^0\\cdot 5^1, 8=2^3\\cdot 5^0, 10=2^1\\cdot 5^1, 16=2^4\\cdot 5^0, 20=2^2\\cdot5^1, 25=2^0\\cdot 5^2, \\ldots" }, { "math_id": 5, "text": "\\left\\vert \\left [ x \\right ]_n-\\left [ x \\right ]_{n-1} \\right\\vert=d_n\\cdot10^{-n}<10^{-n+1}" }, { "math_id": 6, "text": "\\; x = \\lim_{n\\rightarrow\\infty} [x]_n \\;" }, { "math_id": 7, "text": "\\;(d_n)_{n=1}^{\\infty}" }, { "math_id": 8, "text": "\\;([x]_n)_{n=1}^{\\infty}" }, { "math_id": 9, "text": "10" } ]
https://en.wikipedia.org/wiki?curid=8214
821611
Complex harmonic motion
Complicated realm of physics based on simple harmonic motion In physics, complex harmonic motion is a family of oscillations that builds on simple harmonic motion; the word "complex" covers several different situations. Unlike simple harmonic motion, which ignores air resistance, friction, and similar effects, complex harmonic motion often includes additional forces that dissipate the initial energy and reduce the speed and amplitude of the oscillation until the energy of the system is totally drained and the system comes to rest at its equilibrium point. Types. Damped harmonic motion. Introduction. Damped harmonic motion describes a real oscillation, such as an object hanging on a spring. Because of internal friction and air resistance, the system will over time experience a decrease in amplitude. The amplitude decreases because mechanical energy is converted into thermal energy. Damping occurs because the spring is not perfectly efficient at storing and releasing energy, so the energy of the oscillation gradually dies out. The damping force is proportional to the velocity of the object and acts in the direction opposite to the motion, so the object slows down quickly. Specifically, for a damped object, the damping force formula_0 is related to the velocity formula_1 by a coefficient formula_2: formula_3 The diagram shown on the right indicates three types of damped harmonic motion. Difference between damped and forced oscillation. In free (damped) oscillation, an object or a system oscillates at its own natural frequency without interference from an external periodic force. Forced oscillation differs in that a continuous, repeated external force drives the system; the two motions therefore have opposite results, with forcing sustaining the amplitude while damping diminishes it. Resonance. Introduction. Resonance occurs when the frequency of the applied external force is the same as the natural frequency (resonant frequency) of the system. When such a situation occurs, the external force always acts in the same direction as the motion of the oscillating object, with the result that the amplitude of the oscillation increases indefinitely, as shown in the adjacent diagram. At frequencies away from the resonant frequency, either greater or lesser, the resulting amplitude is smaller. In a set of driven pendulums with strings of different lengths, the pendulum whose string length matches that of the driver swings with the largest amplitude. Examples. See video: https://www.youtube.com/watch?v=aCocQa2Bcuc Double pendulum. Introduction. A double pendulum is a simple pendulum hanging under another one; it is the epitome of the compound pendulum system and shows abundant dynamic behavior. The motion of a double pendulum appears chaotic, following no obvious regular pattern, which makes it complicated. Varying lengths and masses of the two arms can make it hard to identify the centers of the two rods. Moreover, a double pendulum need not be restricted to a two-dimensional (usually vertical) plane; in other words, the double pendulum can move anywhere within the sphere whose radius is the total length of the two pendulums. However, for a small angle, the double pendulum acts similarly to the simple pendulum because the motion is again determined by sine and cosine functions. Examples. The image shows a marine clock with motor springs and double pendulum sheel. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
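A minimal numerical sketch of the damping law formula_3 above: it integrates m·a = −kx − cv with a simple time-stepping scheme and prints successive positive peaks, whose shrinking heights show the decay of the amplitude. All parameter values are arbitrary illustrations.

```python
def damped_peaks(m=1.0, k=10.0, c=0.5, x0=1.0, v0=0.0, dt=0.001, t_end=10.0):
    """Semi-implicit Euler integration of m*x'' = -k*x - c*v (damping force F = -c*v)."""
    x, v, t, prev_v, peaks = x0, v0, 0.0, v0, []
    while t < t_end:
        a = (-k * x - c * v) / m
        v += a * dt
        x += v * dt
        t += dt
        if prev_v > 0 and v <= 0:        # upward motion just stopped: a local maximum
            peaks.append((t, x))
        prev_v = v
    return peaks

for t, amp in damped_peaks()[:5]:
    print(f"t = {t:6.3f} s   peak = {amp:+.4f} m")   # each peak is smaller than the last
```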
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "v" }, { "math_id": 2, "text": "c" }, { "math_id": 3, "text": "F=-cv." } ]
https://en.wikipedia.org/wiki?curid=821611
8217750
Tanh-sinh quadrature
Tanh-sinh quadrature is a method for numerical integration introduced by Hidetoshi Takahashi and Masatake Mori in 1974. It is especially applied where singularities or infinite derivatives exist at one or both endpoints. The method uses hyperbolic functions in the change of variables formula_0 to transform an integral on the interval "x" ∈ (−1, 1) to an integral on the entire real line "t" ∈ (−∞, ∞), the two integrals having the same value. After this transformation, the integrand decays with a double exponential rate, and thus, this method is also known as the double exponential (DE) formula. For a given step size formula_1, the integral is approximated by the sum formula_2 with the abscissas formula_3 and the weights formula_4 Use. The Tanh-Sinh method is quite insensitive to endpoint behavior. Should singularities or infinite derivatives exist at one or both endpoints of the (−1, 1) interval, these are mapped to the (−∞,∞) endpoints of the transformed interval, forcing the endpoint singularities and infinite derivatives to vanish. This results in a great enhancement of the accuracy of the numerical integration procedure, which is typically performed by the Trapezoidal rule. In most cases, the transformed integrand displays a rapid roll-off (decay), enabling the numerical integrator to quickly achieve convergence. Like Gaussian quadrature, Tanh-Sinh quadrature is well suited for arbitrary-precision integration, where an accuracy of hundreds or even thousands of digits is desired. The convergence is exponential (in the discretization sense) for sufficiently well-behaved integrands: doubling the number of evaluation points roughly doubles the number of correct digits. However, Tanh-Sinh quadrature is not as efficient as Gaussian quadrature for smooth integrands; but unlike Gaussian quadrature, tends to work equally well with integrands having singularities or infinite derivatives at one or both endpoints of the integration interval as already noted. Furthermore, Tanh-Sinh quadrature can be implemented in a progressive manner, with the step size halved each time the rule level is raised, and reusing the function values calculated on previous levels. A further advantage is that the abscissas and weights are relatively simple to compute. The cost of calculating abscissa–weight pairs for "n"-digit accuracy is roughly "n"2 log2 "n" compared to "n"3 log "n" for Gaussian quadrature. Bailey and others have done extensive research on Tanh-Sinh quadrature, Gaussian quadrature and Error Function quadrature, as well as several of the classical quadrature methods, and found that the classical methods are not competitive with the first three methods, particularly when high-precision results are required. In a conference paper presented at RNC5 on Real Numbers and Computers (Sept 2003), when comparing Tanh-Sinh quadrature with Gaussian quadrature and Error Function quadrature, Bailey and Li found: "Overall, the Tanh-Sinh scheme appears to be the best. It combines uniformly excellent accuracy with fast run times. "It is the nearest we have to a truly all-purpose quadrature scheme at the present time."" Upon comparing the scheme to Gaussian quadrature and Error Function quadrature, Bailey et al. (2005) found that the Tanh-Sinh scheme "appears to be the best for integrands of the type most often encountered in experimental math research". 
Bailey (2006) found that: "The Tanh-Sinh quadrature scheme "is the fastest currently known high-precision quadrature scheme", particularly when one counts the time for computing abscissas and weights. It has been successfully employed for quadrature calculations of up to 20,000-digit precision." In summary, the Tanh-Sinh quadrature scheme is designed so that it gives the most accurate result for the minimum number of function evaluations. In practice, the Tanh-Sinh quadrature rule is almost invariably the best rule and is often the only effective rule when extended precision results are sought. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
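The rule is short enough to state directly in code. Below is a minimal double-precision Python sketch of the sum over the abscissas and weights defined above; the step size h and the truncation of the doubly infinite sum at ±k_max are choices made for this example rather than values taken from the cited papers.

```python
import math

def tanh_sinh(f, h=0.05, k_max=120):
    """Approximate the integral of f over (-1, 1) by the truncated sum of w_k * f(x_k)."""
    total = 0.0
    for k in range(-k_max, k_max + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        w = 0.5 * h * math.pi * math.cosh(t) / math.cosh(u) ** 2
        if abs(x) < 1.0 and w > 0.0:     # skip abscissas that round onto the endpoints
            total += w * f(x)
    return total

# The integrand 1/sqrt(1 - x^2) is singular at both endpoints, yet its integral is pi.
estimate = tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x))
print(estimate, abs(estimate - math.pi))   # agrees with pi to roughly 8 significant digits here
```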
[ { "math_id": 0, "text": "x = \\tanh\\left(\\frac{1}{2}\\pi\\sinh t\\right)\\," }, { "math_id": 1, "text": "h" }, { "math_id": 2, "text": "\\int_{-1}^1 f(x) \\, dx \\approx \\sum_{k=-\\infty}^\\infty w_k f(x_k)," }, { "math_id": 3, "text": "x_k = \\tanh\\left(\\frac{1}{2}\\pi\\sinh kh\\right)" }, { "math_id": 4, "text": "w_k = \\frac{\\frac{1}{2}h\\pi\\cosh kh}{\\cosh^2\\left(\\frac{1}{2}\\pi\\sinh kh\\right)}." } ]
https://en.wikipedia.org/wiki?curid=8217750
8218622
Jay Hambidge
American painter Jay Hambidge (1867–1924) was an American artist who formulated the theory of "dynamic symmetry", a system defining compositional rules, which was adopted by several notable American and Canadian artists in the early 20th century. Early life and theory. He was a pupil at the Art Students' League in New York and of William Merritt Chase, and a thorough student of classical art. He conceived the idea that the study of arithmetic with the aid of geometrical designs was the foundation of the proportion and symmetry in Greek architecture, sculpture and ceramics. Careful examination and measurements of classical buildings in Greece, among them the Parthenon, the temple of Apollo at Bassæ, of Zeus at Olympia and Athenæ at Ægina, prompted him to formulate the theory of "dynamic symmetry" as demonstrated in his works "Dynamic Symmetry: The Greek Vase" (1920) and "The Elements of Dynamic Symmetry" (1926). It created a great deal of discussion. He found a disciple in Dr. Lacey D. Caskey, the author of "Geometry of Greek Vases" (1922). In 1921, articles critical of Hambidge's theories were published by Edwin M. Blake in "Art Bulletin", and by Rhys Carpenter in "American Journal of Archaeology". Art historian Michael Quick says Blake and Carpenter "used different methods to expose the basic fallacy of Hambidge's use of his system on Greek art—that in its more complicated constructions, the system could describe any shape at all." In 1979 Lee Malone said Hambidge's theories were discredited, but that they had appealed to many American artists in the early 20th century because "he was teaching precisely the things that certain artists wanted to hear, especially those who had blazed so brief a trail in observing the American scene and now found themselves displaced by the force of contemporary European trends." He was married to the American weaver Mary Crovatt. Dynamic symmetry. Dynamic symmetry is a proportioning system and natural design methodology described in Hambidge's books. The system uses "dynamic rectangles", including "root rectangles" based on ratios such as √2, √3, √5, the golden ratio (φ = 1.618...), its square root (√φ = 1.272...), and its square (φ2 = 2.618...), and the silver ratio (formula_0). From the study of phyllotaxis and the related Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...), Hambidge says that "a much closer representation would be obtained by a substitute series such as 118, 191, 309, 500, 809, 1309, 2118, 3427, 5545, 8972, 14517, etc. One term of this series divided into the other equals 1.6180, which is the ratio needed to explain the plant design system." This substitute sequence is a generalization of the Fibonacci sequence that chooses 118 and 191 as the beginning numbers to generate the rest. In fact, the standard Fibonacci sequence provides the best possible rational approximations to the golden ratio for numbers of a given size. A number of notable American and Canadian artists have used dynamic symmetry in their painting, including George Bellows (1882–1925), Maxfield Parrish (1870–1966), The "New Yorker" cartoonist Helen Hokinson (1893–1949), Al Nestler (1900–1971), Kathleen Munn (1887–1974), the children's book illustrator and author Robert McCloskey (1914–2003), and Clay Wagstaff (b. 1964). Elizabeth Whiteley has used dynamic symmetry for works on paper. Applications. Photography. 
The application and psychology of Dynamic Symmetry in such a fast and modern medium such as photography, in particular Digital Photography, is challenging but not impossible. The Rule of Thirds has been the composition of choice for a majority of new and experienced photographers alike. Although this method is effective, Dynamic Symmetry can be applied to compositions to create a level of in depth creativity and control over the image. According to Bob Holmes, a photographer from National Geographic, a photographer must "be responsible for everything in the frame". Using diagonals to align subjects and the reciprocal diagonals associated to the size of the frame, one would be able to create a highly intricate work of fine art. For example, world renowned portrait photographer Annie Liebovitz used this method to create an image, among many others, for Vanity Fair Magazine. The image correctly posed each of the models to intersect the subject with a corresponding diagonal to draw the viewer to the main idea of the photograph. This powerful process was used regularly by French painter turned film photographer: Henri Cartier-Bresson. Using Dynamic Symmetry, Henri was able to create engaging and interesting photographs that he deemed were made with the idea of "The Decisive Moment", a photographic psychology that describes "when the visual and psychological elements of people in a real life scene to spontaneously and briefly come together in perfect resonance to express the essence of that situation". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
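To connect the "substitute series" quoted in the Dynamic symmetry section with the golden ratio, here is a small Python sketch; the seed values 118 and 191 are the ones given above, while the number of terms is an arbitrary choice.

```python
def fibonacci_like(a, b, terms=12):
    """Generate a Fibonacci-like series from two seeds; each term is the sum of the previous two."""
    seq = [a, b]
    while len(seq) < terms:
        seq.append(seq[-1] + seq[-2])
    return seq

hambidge = fibonacci_like(118, 191)
print(hambidge[:8])                           # [118, 191, 309, 500, 809, 1309, 2118, 3427]
print(round(hambidge[-1] / hambidge[-2], 4))  # 1.618, close to the golden ratio

fib = fibonacci_like(1, 1)
print(round(fib[-1] / fib[-2], 4))            # ordinary Fibonacci ratios approach the same limit
```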
[ { "math_id": 0, "text": "\\delta_s=2.414..." } ]
https://en.wikipedia.org/wiki?curid=8218622
821877
Inductively coupled plasma
Type of plasma source An inductively coupled plasma (ICP) or transformer coupled plasma (TCP) is a type of plasma source in which the energy is supplied by electric currents which are produced by electromagnetic induction, that is, by time-varying magnetic fields. Operation. There are three types of ICP geometries: planar (Fig. 3 (a)), cylindrical (Fig. 3 (b)), and half-toroidal (Fig. 3 (c)). In planar geometry, the electrode is a length of flat metal wound like a spiral (or coil). In cylindrical geometry, it is like a helical spring. In half-toroidal geometry, it is a toroidal solenoid cut along its main diameter to two equal halves. When a time-varying electric current is passed through the coil, it creates a time-varying magnetic field around it, with flux formula_0, where "r" is the distance to the center of coil (and of the quartz tube). According to the Faraday–Lenz's law of induction, this creates azimuthal electromotive force in the rarefied gas: formula_1, which corresponds to electric field strengths of formula_2, leading to the formation of the electron trajectories providing a plasma generation. The dependence on "r" suggests that the gas ion motion is most intense in the outer region of the flame, where the temperature is the greatest. In the real torch, the flame is cooled by the cooling gas from the outside , so the hottest outer part is at thermal equilibrium. Temperature there reaches 5 000 – 6 000 K. For more rigorous description, see Hamilton–Jacobi equation in electromagnetic fields. The frequency of alternating current used in the RLC circuit which contains the coil is usually 27–41 MHz. To induce plasma, a spark is produced at the electrodes at the gas outlet. Argon is one example of a commonly used rarefied gas. The high temperature of the plasma allows the atomization of molecules and thus determination of many elements, and in addition, for about 60 elements the degree of ionization in the torch exceeds 90%. The ICP torch consumes c. 1250–1550 W of power, and this depends on the element composition of the sample (due to different ionization energies). The ICPs have two operation modes, called capacitive (E) mode with low plasma density and inductive (H) mode with high plasma density. Transition from E to H heating mode occurs with external inputs. Applications. Plasma electron temperatures can range between ~6,000 K and ~10,000 K and are usually several orders of magnitude greater than the temperature of the neutral species. Temperatures of argon ICP plasma discharge are typically ~5,500 to 6,500 K and are therefore comparable to those reached at the surface (photosphere) of the sun (~4,500 K to ~6,000 K). ICP discharges are of relatively high electron density, on the order of 1015 cm−3. As a result, ICP discharges have wide applications wherever a high-density plasma (HDP) is needed. Another benefit of ICP discharges is that they are relatively free of contamination, because the electrodes are completely outside the reaction chamber. By contrast, in a capacitively coupled plasma (CCP), the electrodes are often placed inside the reactor chamber and are thus exposed to the plasma and to subsequent reactive chemical species. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
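The induced-field expression formula_2 above can be evaluated directly. The sketch below computes the peak value E_max = ω·r·H0/2; it treats the field amplitude as a magnetic flux density in tesla so that the result comes out in volts per metre, and every number in it is an illustrative assumption rather than data for a particular torch.

```python
import math

def peak_azimuthal_e_field(frequency_hz, radius_m, b0_tesla):
    """Peak of E = (omega * r * B0 / 2) * sin(omega * t), i.e. omega * r * B0 / 2."""
    omega = 2.0 * math.pi * frequency_hz
    return 0.5 * omega * radius_m * b0_tesla

# Illustrative inputs: 27 MHz drive frequency, 9 mm radius, 2 mT field amplitude
print(f"{peak_azimuthal_e_field(27e6, 0.009, 2e-3):.0f} V/m")   # ~1500 V/m
```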
[ { "math_id": 0, "text": "\\Phi=\\pi r^2 H=\\pi r^2 H_0 \\cos \\omega t" }, { "math_id": 1, "text": "U=-\\frac{d \\Phi}{dt}" }, { "math_id": 2, "text": "E=\\frac{U}{2 \\pi r}=\\frac{\\omega r H_0}{2} \\sin \\omega t" } ]
https://en.wikipedia.org/wiki?curid=821877
8218773
Brinkman number
The Brinkman number (Br) is a dimensionless number related to heat conduction from a wall to a flowing viscous fluid, commonly used in polymer processing. It is named after the Dutch mathematician and physicist Henri Brinkman. There are several definitions; one is formula_0 where μ is the dynamic viscosity, "u" is the flow velocity, κ is the thermal conductivity, "T"w is the wall temperature, "T"0 is the bulk fluid temperature, and Pr and Ec are the Prandtl and Eckert numbers. It is the ratio between heat produced by viscous dissipation and heat transported by molecular conduction, i.e., the ratio of viscous heat generation to external heating. The higher its value, the slower the conduction of heat produced by viscous dissipation and hence the larger the temperature rise. In, for example, a screw extruder, the energy supplied to the polymer melt comes primarily from two sources: viscous heat generated by shear within the melt, and heat conducted in from the barrel wall. The former is supplied by the motor turning the screw, the latter by heaters. The Brinkman number is a measure of the ratio of the two. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
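A minimal numeric illustration of the definition above; all property values are rough, made-up figures for a polymer melt, not data from the references.

```python
def brinkman_number(mu, u, kappa, t_wall, t_bulk):
    """Br = mu * u**2 / (kappa * (T_wall - T_bulk))."""
    return mu * u ** 2 / (kappa * (t_wall - t_bulk))

# Illustrative values: mu = 1000 Pa*s, u = 0.1 m/s, kappa = 0.2 W/(m*K),
# wall held 50 K above the bulk melt temperature
br = brinkman_number(mu=1000.0, u=0.1, kappa=0.2, t_wall=473.0, t_bulk=423.0)
print(f"Br = {br:.2f}")   # Br = 1.00: viscous heating comparable to heat conducted from the wall
```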
[ { "math_id": 0, "text": " \\mathrm{Br} = \\frac {\\mu u^2}{\\kappa(T_w-T_0)} = \\mathrm{Pr} \\, \\mathrm{Ec}" } ]
https://en.wikipedia.org/wiki?curid=8218773
8219318
Bühlmann decompression algorithm
Mathematical model of tissue inert gas uptake and release with pressure change The Bühlmann decompression model is a neo-Haldanian model which uses Haldane's or Schreiner's formula for inert gas uptake, a linear expression for tolerated inert gas pressure coupled with a simple parameterised expression for alveolar inert gas pressure and expressions for combining Nitrogen and Helium parameters to model the way inert gases enter and leave the human body as the ambient pressure and inspired gas changes. Different parameter sets are used to create decompression tables and in personal dive computers to compute no-decompression limits and decompression schedules for dives in real-time, allowing divers to plan the depth and duration for dives and the required decompression stops. The model (Haldane, 1908) assumes perfusion limited gas exchange and multiple parallel tissue compartments and uses an exponential formula for in-gassing and out-gassing, both of which are assumed to occur in the dissolved phase. Buhlmann, however, assumes that safe dissolved inert gas levels are defined by a critical difference instead of a critical ratio. Multiple sets of parameters were developed by Swiss physician Dr. Albert A. Bühlmann, who did research into decompression theory at the Laboratory of Hyperbaric Physiology at the University Hospital in Zürich, Switzerland. The results of Bühlmann's research that began in 1959 were published in a 1983 German book whose English translation was entitled "Decompression-Decompression Sickness". The book was regarded as the most complete public reference on decompression calculations and was used soon after in dive computer algorithms. Principles. Building on the previous work of John Scott Haldane (The Haldane model, Royal Navy, 1908) and Robert Workman (M-Values, US-Navy, 1965) and working off funding from Shell Oil Company, Bühlmann designed studies to establish the longest half-times of nitrogen and helium in human tissues. These studies were confirmed by the "Capshell" experiments in the Mediterranean Sea in 1966. Alveolar inert gas pressure. The Bühlmann model uses a simplified version of the alveolar gas equation to calculate alveolar inert gas pressure formula_0 Where formula_1 is the water vapour pressure at 37 degrees centigrade (conventionally defined as 0.0627 bar), formula_2 the carbon dioxide pressure (conventionally defined as 0.0534 bar), formula_3 the inspired inert gas fraction, and formula_4 the respiratory coefficient: the ratio of carbon dioxide production to oxygen consumption. The Buhlmann model sets formula_4 to 1, simplifying the equation to formula_5 Tissue inert gas exchange. Inert gas exchange in haldanian models is assumed to be perfusion limited and is governed by the ordinary differential equation formula_6 This equation can be solved for constant formula_7 to give the Haldane equation: formula_8 and for constant rate of change of alveolar gas pressure formula_9 to give the Schreiner equation: formula_10 Tissue inert gas limits. Similarly to Workman, the Bühlmann model specifies an affine relationship between ambient pressure and inert gas saturation limits. However, the Buhlmann model expresses this relationship in terms of absolute pressure formula_11 Where formula_12 is the inert gas saturation limit for a given tissue and formula_13 and formula_14 constants for that tissue and inert gas. 
The constants formula_13 and formula_14 were originally derived from the saturation half-time using the following expressions: formula_15 formula_16 The formula_14 values calculated do not precisely correspond to those used by Bühlmann for tissue compartments 4 (0.7825 instead of 0.7725) and 5 (0.8126 instead of 0.8125). Versions B and C have manually modified the coefficient formula_13. In addition to this formulation, the Bühlmann model also specifies how the constants for multiple inert gas saturation combine when both Nitrogen and Helium are present in a given tissue: formula_17 formula_18 where formula_19 and formula_20 are the tissue's formula_13 Nitrogen and Helium coefficients and formula_9 the ratio of dissolved Helium to total dissolved inert gas. Ascent rates. Ascent rate is intrinsically a variable, and may be selected by the programmer or user for table generation or simulations, and measured as real-time input in dive computer applications. The rate of ascent to the first stop is limited to 3 bar per minute for compartments 1 to 5, 2 bar per minute for compartments 6 and 7, and 1 bar per minute for compartments 8 to 16. Chamber decompression may be continuous, or if stops are preferred they may be done at intervals of 1 or 3 m. Applications. The Bühlmann model has been used within dive computers and to create tables. Tables. Since precomputed tables cannot take into account the actual diving conditions, Bühlmann specifies a number of initial values and recommendations. In addition, Bühlmann recommended that the calculations be based on a slightly deeper bottom depth. Dive computers. Bühlmann assumes no initial values and makes no other recommendations for the application of the model within dive computers, hence all pressures, depths and gas fractions are either read from the computer sensors or specified by the diver, and grouped dives do not require any special treatment. Versions. Several versions and extensions of the Bühlmann model have been developed, both by Bühlmann and by later workers. The naming convention used to identify the set of parameters is a code starting ZH-L, from Zürich (ZH), Linear (L), followed by the number of different (a,b) couples (ZH-L 12 and ZH-L 16) or the number of tissue compartments (ZH-L 6, ZH-L 8), and other unique identifiers: ZH-L 12 (1983), ZH-L 16 (1986), ZH-L 6 (1988), ZH-L 8 ADT (1992). References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt; External links. Many articles on the Bühlmann tables are available on the web.
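The core calculations are compact enough to sketch. The Python fragment below applies the Haldane equation and the tolerated-pressure relation quoted above to a single nitrogen compartment. The half-time, depth and timing are arbitrary illustrations, the a and b values come from the half-time expressions in the text rather than from Bühlmann's published tables, and none of this should be used to plan real dives.

```python
import math

WATER_VAPOUR = 0.0627   # bar, the conventional value given above
N2_FRACTION = 0.79      # inspired inert gas fraction for air

def alveolar_n2(p_ambient_bar):
    """Simplified alveolar inert gas pressure: (P_amb - P_H2O) * Q."""
    return (p_ambient_bar - WATER_VAPOUR) * N2_FRACTION

def haldane(p_tissue0, p_alv, half_time_min, minutes):
    """Exponential approach of the tissue tension toward the alveolar pressure."""
    k = math.log(2.0) / half_time_min
    return p_alv + (p_tissue0 - p_alv) * math.exp(-k * minutes)

def coefficients(half_time_min):
    """a and b computed from the half-time expressions quoted in the text."""
    a = 2.0 / half_time_min ** (1.0 / 3.0)
    b = 1.005 - 1.0 / math.sqrt(half_time_min)
    return a, b

def tolerated_ambient(p_tissue, a, b):
    """Rearranging P_tol = a + P_amb / b gives the lowest tolerated ambient pressure."""
    return (p_tissue - a) * b

# Illustrative single-compartment example: 30 minutes at 4.0 bar (about 30 m) breathing air
half_time = 12.5   # minutes, a mid-range compartment chosen for illustration
p_t = haldane(alveolar_n2(1.0), alveolar_n2(4.0), half_time, 30.0)
a, b = coefficients(half_time)
print(f"Tissue N2 tension after 30 min: {p_t:.2f} bar")
print(f"Lowest tolerated ambient pressure: {tolerated_ambient(p_t, a, b):.2f} bar")
```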
[ { "math_id": 0, "text": "P_{alv} = [P_{amb} - P_{H_{2}0} + \\frac{1 - RQ}{RQ} P_{CO_{2}}]\\cdot Q" }, { "math_id": 1, "text": "P_{H_{2}0}" }, { "math_id": 2, "text": "P_{CO_{2}}" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": "RQ" }, { "math_id": 5, "text": "P_{alv} = [P_{amb} - P_{H_{2}0}]\\cdot Q" }, { "math_id": 6, "text": "\\dfrac{\\mathrm{d}P_t}{\\mathrm{d}t} = k(P_{alv} - P_t)" }, { "math_id": 7, "text": "P_{alv}" }, { "math_id": 8, "text": "P_t(t) = P_{alv} + (P_{t}(0) - P_{alv}) \\cdot e^{-kt}" }, { "math_id": 9, "text": "R" }, { "math_id": 10, "text": "\nP_t(t) = P_{alv}(0) + R(t - \\dfrac{1}{k}) - (P_{alv}(0) - P_{t}(0) - \\dfrac{R}{k}) e^{-kt}\n" }, { "math_id": 11, "text": "P_{igtol} = a + \\frac{P_{amb}}{b}" }, { "math_id": 12, "text": "P_{igtol}" }, { "math_id": 13, "text": "a" }, { "math_id": 14, "text": "b" }, { "math_id": 15, "text": "a = \\frac{2\\,\\text{bar}}{\\sqrt[3]{t_{1/2}}}" }, { "math_id": 16, "text": "b = 1.005 - \\frac{1}{\\sqrt[2]{t_{1/2}}}" }, { "math_id": 17, "text": "a = a_{N_2} (1 - R) + a_{He} R" }, { "math_id": 18, "text": "b = b_{N_2} (1 - R) + b_{He} R" }, { "math_id": 19, "text": "a_{N_2}" }, { "math_id": 20, "text": "a_{He}" } ]
https://en.wikipedia.org/wiki?curid=8219318
8219348
Wilks's lambda distribution
Probability distribution used in multivariate hypothesis testing In statistics, Wilks' lambda distribution (named for Samuel S. Wilks) is a probability distribution used in multivariate hypothesis testing, especially with regard to the likelihood-ratio test and multivariate analysis of variance (MANOVA). Definition. Wilks' lambda distribution is defined from two independent Wishart distributed variables as the ratio distribution of their determinants, given formula_0 independent and with formula_1 formula_2 where "p" is the number of dimensions. In the context of likelihood-ratio tests "m" is typically the error degrees of freedom, and "n" is the hypothesis degrees of freedom, so that formula_3 is the total degrees of freedom. Approximations. Computations or tables of the Wilks' distribution for higher dimensions are not readily available and one usually resorts to approximations. One approximation, attributed to M. S. Bartlett, works for large "m" and allows Wilks' lambda to be approximated with a chi-squared distribution: formula_4 Another approximation is attributed to C. R. Rao. Properties. There is a symmetry among the parameters of the Wilks distribution, formula_5 Related distributions. The distribution can be related to a product of independent beta-distributed random variables formula_6 formula_7 As such it can be regarded as a multivariate generalization of the beta distribution. It follows directly that for a one-dimensional problem, when the Wishart distributions are one-dimensional with formula_8 (i.e., chi-squared-distributed), then the Wilks' distribution equals the beta-distribution with a certain parameter set, formula_9 From the relations between a beta and an F-distribution, Wilks' lambda can be related to the F-distribution when one of the parameters of the Wilks lambda distribution is either 1 or 2, e.g., formula_10 and formula_11
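A small Python sketch of Bartlett's approximation described above; it assumes SciPy is available for the chi-squared tail probability, and the input numbers are invented for illustration.

```python
import math
from scipy.stats import chi2

def bartlett_wilks(lmbda, p, m, n):
    """((p - n + 1)/2 - m) * ln(lambda) is approximately chi-squared with n*p degrees of freedom."""
    stat = ((p - n + 1) / 2.0 - m) * math.log(lmbda)
    df = n * p
    return stat, df, chi2.sf(stat, df)

# Illustrative values: p = 3 response variables, m = 40 error df, n = 2 hypothesis df
stat, df, p_value = bartlett_wilks(lmbda=0.72, p=3, m=40, n=2)
print(f"chi-squared approximation = {stat:.2f} on {df} df, p-value = {p_value:.4f}")
```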
[ { "math_id": 0, "text": "\\mathbf{A} \\sim W_p(\\Sigma, m) \\qquad \\mathbf{B} \\sim W_p(\\Sigma, n)" }, { "math_id": 1, "text": "m \\ge p" }, { "math_id": 2, "text": "\\lambda = \\frac{\\det(\\mathbf{A})}{\\det(\\mathbf{A+B})} = \\frac{1}{\\det(\\mathbf{I}+\\mathbf{A}^{-1}\\mathbf{B})} \\sim \\Lambda(p,m,n)" }, { "math_id": 3, "text": "n+m" }, { "math_id": 4, "text": "\\left(\\frac{p-n+1}{2}-m\\right)\\log \\Lambda(p,m,n) \\sim \\chi^2_{np}." }, { "math_id": 5, "text": "\\Lambda(p, m, n) \\sim \\Lambda(n, m + n - p, p)" }, { "math_id": 6, "text": "u_i \\sim B\\left(\\frac{m+i-p}{2},\\frac{p}{2}\\right)" }, { "math_id": 7, "text": "\\prod_{i=1}^n u_i \\sim \\Lambda(p,m,n)." }, { "math_id": 8, "text": "p=1" }, { "math_id": 9, "text": "\\Lambda(1,m,n) \\sim B\\left(\\frac{m}{2},\\frac{n}{2}\\right)." }, { "math_id": 10, "text": "\\frac{1 - \\Lambda(p, m, 1)}{\\Lambda(p, m, 1)} \\sim \\frac{p}{m - p + 1} F_{p, m-p+1}," }, { "math_id": 11, "text": "\\frac{1 - \\sqrt{\\Lambda(p, m, 2)}}{\\sqrt{\\Lambda(p, m, 2)}} \\sim \\frac{p}{m - p + 1} F_{2p, 2(m-p+1)}." } ]
https://en.wikipedia.org/wiki?curid=8219348
822043
Gerbe
In mathematics, a gerbe (; ) is a construct in homological algebra and topology. Gerbes were introduced by Jean Giraud following ideas of Alexandre Grothendieck as a tool for non-commutative cohomology in degree 2. They can be seen as an analogue of fibre bundles where the fibre is the classifying stack of a group. Gerbes provide a convenient, if highly abstract, language for dealing with many types of deformation questions especially in modern algebraic geometry. In addition, special cases of gerbes have been used more recently in differential topology and differential geometry to give alternative descriptions to certain cohomology classes and additional structures attached to them. "Gerbe" is a French (and archaic English) word that literally means wheat sheaf. Definitions. Gerbes on a topological space. A gerbe on a topological space formula_0 is a stack formula_1 of groupoids over formula_0 that is "locally non-empty" (each point formula_2 has an open neighbourhood formula_3 over which the section category formula_4 of the gerbe is not empty) and "transitive" (for any two objects formula_5 and formula_6 of formula_7 for any open set formula_8, there is an open covering formula_9 of formula_8 such that the restrictions of formula_5 and formula_6 to each formula_10 are connected by at least one morphism). A canonical example is the gerbe formula_11 of principal bundles with a fixed structure group formula_12: the section category over an open set formula_8 is the category of principal formula_12-bundles on formula_8 with isomorphism as morphisms (thus the category is a groupoid). As principal bundles glue together (satisfy the descent condition), these groupoids form a stack. The trivial bundle formula_13 shows that the local non-emptiness condition is satisfied, and finally as principal bundles are locally trivial, they become isomorphic when restricted to sufficiently small open sets; thus the transitivity condition is satisfied as well. Gerbes on a site. The most general definition of gerbes are defined over a site. Given a site formula_14 a formula_14-gerbe formula_15 is a category fibered in groupoids formula_16 such that Note that for a site formula_14 with a final object formula_21, a category fibered in groupoids formula_16 is a formula_14-gerbe admits a local section, meaning satisfies the first axiom, if formula_22. Motivation for gerbes on a site. One of the main motivations for considering gerbes on a site is to consider the following naive question: if the Cech cohomology group formula_23 for a suitable covering formula_24 of a space formula_25 gives the isomorphism classes of principal formula_15-bundles over formula_25, what does the iterated cohomology functor formula_26 represent? Meaning, we are gluing together the groups formula_27 via some one cocycle. Gerbes are a technical response for this question: they give geometric representations of elements in the higher cohomology group formula_28. It is expected this intuition should hold for higher gerbes. Cohomological classification. One of the main theorems concerning gerbes is their cohomological classification whenever they have automorphism groups given by a fixed sheaf of abelian groups formula_29, called a band. For a gerbe formula_1 on a site formula_14, an object formula_30, and an object formula_31, the automorphism group of a gerbe is defined as the automorphism group formula_32. Notice this is well defined whenever the automorphism group is always the same. 
Given a covering formula_33, there is an associated classformula_34representing the isomorphism class of the gerbe formula_1 banded by formula_35. For example, in topology, many examples of gerbes can be constructed by considering gerbes banded by the group formula_36. As the classifying space formula_37 is the second Eilenberg–Maclane space for the integers, a bundle gerbe banded by formula_36 on a topological space formula_25 is constructed from a homotopy class of maps informula_38,which is exactly the third singular homology group formula_39. It has been found that all gerbes representing torsion cohomology classes in formula_39 are represented by a bundle of finite dimensional algebras formula_40 for a fixed complex vector space formula_41. In addition, the non-torsion classes are represented as infinite-dimensional principal bundles formula_42 of the projective group of unitary operators on a fixed infinite dimensional separable Hilbert space formula_43. Note this is well defined because all separable Hilbert spaces are isomorphic to the space of square-summable sequences formula_44. The homotopy-theoretic interpretation of gerbes comes from looking at the homotopy fiber squareformula_45analogous to how a line bundle comes from the homotopy fiber squareformula_46where formula_47, giving formula_48 as the group of isomorphism classes of line bundles on formula_0. Examples. C*-algebras. There are natural examples of Gerbes that arise from studying the algebra of compactly supported complex valued functions on a paracompact space formula_25pg 3. Given a cover formula_49 of formula_25 there is the Cech groupoid defined asformula_50with source and target maps given by the inclusionsformula_51and the space of composable arrows is justformula_52Then a degree 2 cohomology class formula_53 is just a mapformula_54We can then form a non-commutative C*-algebra formula_55, which is associated to the set of compact supported complex valued functions of the spaceformula_56It has a non-commutative product given byformula_57where the cohomology class formula_58 twists the multiplication of the standard formula_59-algebra product. Algebraic geometry. Let formula_60 be a variety over an algebraically closed field formula_61, formula_15 an algebraic group, for example formula_62. Recall that a "G"-torsor over formula_60 is an algebraic space formula_63 with an action of formula_15 and a map formula_64, such that locally on formula_60 (in étale topology or fppf topology) formula_65 is a direct product formula_66. A G"-gerbe over "M may be defined in a similar way. It is an Artin stack formula_67 with a map formula_68, such that locally on "M" (in étale or fppf topology) formula_65 is a direct product formula_69. Here formula_70 denotes the classifying stack of formula_15, i.e. a quotient formula_71 of a point by a trivial formula_15-action. There is no need to impose the compatibility with the group structure in that case since it is covered by the definition of a stack. The underlying topological spaces of formula_67 and formula_60 are the same, but in formula_67 each point is equipped with a stabilizer group isomorphic to formula_15. From two-term complexes of coherent sheaves. Every two-term complex of coherent sheavesformula_72on a scheme formula_73 has a canonical sheaf of groupoids associated to it, where on an open subset formula_74 there is a two-term complex of formula_75-modulesformula_76giving a groupoid. 
It has objects given by elements formula_77 and a morphism formula_78 is given by an element formula_79 such thatformula_80In order for this stack to be a gerbe, the cohomology sheaf formula_81 must always have a section. This hypothesis implies the category constructed above always has objects. Note this can be applied to the situation of comodules over Hopf-algebroids to construct algebraic models of gerbes over affine or projective stacks (projectivity if a graded Hopf-algebroid is used). In addition, two-term spectra from the stabilization of the derived category of comodules of Hopf-algebroids formula_82 with formula_83 flat over formula_84 give additional models of gerbes that are non-strict. Moduli stack of stable bundles on a curve. Consider a smooth projective curve formula_85 over formula_61 of genus formula_86. Let formula_87 be the moduli stack of stable vector bundles on formula_85 of rank formula_88 and degree formula_89. It has a coarse moduli space formula_90, which is a quasiprojective variety. These two moduli problems parametrize the same objects, but the stacky version remembers automorphisms of vector bundles. For any stable vector bundle formula_91 the automorphism group formula_92 consists only of scalar multiplications, so each point in a moduli stack has a stabilizer isomorphic to formula_62. It turns out that the map formula_93 is indeed a formula_62-gerbe in the sense above. It is a trivial gerbe if and only if formula_88 and formula_89 are coprime. Root stacks. Another class of gerbes can be found using the construction of root stacks. Informally, the formula_88-th root stack of a line bundle formula_94 over a scheme is a space representing the formula_88-th root of formula_35 and is denotedformula_95pg 52 The formula_88-th root stack of formula_35 has the propertyformula_96as gerbes. It is constructed as the stackformula_97sending an formula_0-scheme formula_98 to the category whose objects are line bundles of the formformula_99and morphisms are commutative diagrams compatible with the isomorphisms formula_100. This gerbe is banded by the algebraic group of roots of unity formula_101, where on a cover formula_98 it acts on a point formula_102 by cyclically permuting the factors of formula_60 in formula_103. Geometrically, these stacks are formed as the fiber product of stacksformula_104where the vertical map of formula_105 comes from the Kummer sequenceformula_106This is because formula_107 is the moduli space of line bundles, so the line bundle formula_94 corresponds to an object of the category formula_108 (considered as a point of the moduli space). Root stacks with sections. There is another related construction of root stacks with sections. Given the data above, let formula_109 be a section. Then the formula_88-th root stack of the pair formula_110 is defined as the lax 2-functorformula_111sending an formula_0-scheme formula_98 to the category whose objects are line bundles of the formformula_112and morphisms are given similarly. These stacks can be constructed very explicitly, and are well understood for affine schemes. In fact, these form the affine models for root stacks with sections. Given an affine scheme formula_113, all line bundles are trivial, hence formula_114 and any section formula_115 is equivalent to taking an element formula_116. Then, the stack is given by the stack quotientformula_117withformula_118If formula_119 then this gives an infinitesimal extension of formula_120. Examples throughout algebraic geometry. 
These and more general kinds of gerbes arise in several contexts as both geometric spaces and as formal bookkeeping tools: History. Gerbes first appeared in the context of algebraic geometry. They were subsequently developed in a more traditional geometric framework by Brylinski . One can think of gerbes as being a natural step in a hierarchy of mathematical objects providing geometric realizations of integral cohomology classes. A more specialised notion of gerbe was introduced by Murray and called bundle gerbes. Essentially they are a smooth version of abelian gerbes belonging more to the hierarchy starting with principal bundles than sheaves. Bundle gerbes have been used in gauge theory and also string theory. Current work by others is developing a theory of non-abelian bundle gerbes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "\\mathcal{X}" }, { "math_id": 2, "text": "p \\in S" }, { "math_id": 3, "text": "U_p" }, { "math_id": 4, "text": "\\mathcal{X}(U_p)" }, { "math_id": 5, "text": "a" }, { "math_id": 6, "text": "b" }, { "math_id": 7, "text": "\\mathcal{X}(U)" }, { "math_id": 8, "text": "U" }, { "math_id": 9, "text": "\\mathcal{U} = \\{U_i \\}_{i \\in I}" }, { "math_id": 10, "text": "U_i" }, { "math_id": 11, "text": "BH" }, { "math_id": 12, "text": "H" }, { "math_id": 13, "text": "X \\times H \\to X" }, { "math_id": 14, "text": "\\mathcal{C}" }, { "math_id": 15, "text": "G" }, { "math_id": 16, "text": "G \\to \\mathcal{C}" }, { "math_id": 17, "text": "\\mathcal{C}'" }, { "math_id": 18, "text": "S \\in \\text{Ob}(\\mathcal{C}')" }, { "math_id": 19, "text": "G_S" }, { "math_id": 20, "text": "S \\in \\text{Ob}(\\mathcal{C})" }, { "math_id": 21, "text": "e" }, { "math_id": 22, "text": "\\text{Ob}(G_e) \\neq \\varnothing" }, { "math_id": 23, "text": "H^1(\\mathcal{U},G)" }, { "math_id": 24, "text": "\\mathcal{U} = \\{U_i\\}_{i \\in I}" }, { "math_id": 25, "text": "X" }, { "math_id": 26, "text": "H^1(-,H^1(-,G))" }, { "math_id": 27, "text": "H^1(U_i,G)" }, { "math_id": 28, "text": "H^2(\\mathcal{U},G)" }, { "math_id": 29, "text": "\\underline{L}" }, { "math_id": 30, "text": "U \\in \\text{Ob}(\\mathcal{C})" }, { "math_id": 31, "text": "x \\in \\text{Ob}(\\mathcal{X}(U))" }, { "math_id": 32, "text": "L = \\underline{\\text{Aut}}_{\\mathcal{X}(U)}(x)" }, { "math_id": 33, "text": "\\mathcal{U} = \\{U_i \\to X \\}_{i \\in I}" }, { "math_id": 34, "text": "c(\\underline{L}) \\in H^3(X,\\underline{L})" }, { "math_id": 35, "text": "L" }, { "math_id": 36, "text": "U(1)" }, { "math_id": 37, "text": "B(U(1)) = K(\\mathbb{Z},2)" }, { "math_id": 38, "text": "[X, B^2(U(1))] = [X,K(\\mathbb{Z},3)]" }, { "math_id": 39, "text": "H^3(X,\\mathbb{Z})" }, { "math_id": 40, "text": "\\text{End}(V)" }, { "math_id": 41, "text": "V" }, { "math_id": 42, "text": "PU(\\mathcal{H})" }, { "math_id": 43, "text": "\\mathcal{H}" }, { "math_id": 44, "text": "\\ell^2" }, { "math_id": 45, "text": "\\begin{matrix}\n\\mathcal{X} & \\to & * \\\\\n\\downarrow & & \\downarrow \\\\\nS & \\xrightarrow{f} & B^2U(1)\n\\end{matrix}" }, { "math_id": 46, "text": "\\begin{matrix}\nL & \\to & * \\\\\n\\downarrow & & \\downarrow \\\\\nS & \\xrightarrow{f} & BU(1)\n\\end{matrix}" }, { "math_id": 47, "text": "BU(1) \\simeq K(\\mathbb{Z},2)" }, { "math_id": 48, "text": "H^2(S,\\mathbb{Z})" }, { "math_id": 49, "text": "\\mathcal{U} = \\{U_i\\}" }, { "math_id": 50, "text": "\\mathcal{G} = \\left\\{ \\coprod_{i,j}U_{ij} \\rightrightarrows \\coprod U_i \\right\\} " }, { "math_id": 51, "text": "\\begin{align}\ns: U_{ij} \\hookrightarrow U_j \\\\\nt: U_{ij} \\hookrightarrow U_i\n\\end{align}" }, { "math_id": 52, "text": "\\coprod_{i,j,k}U_{ijk}" }, { "math_id": 53, "text": "\\sigma \\in H^2(X;U(1))" }, { "math_id": 54, "text": "\\sigma: \\coprod U_{ijk} \\to U(1)" }, { "math_id": 55, "text": "C_c(\\mathcal{G}(\\sigma))" }, { "math_id": 56, "text": "\\mathcal{G}_1 = \\coprod_{i,j}U_{ij}" }, { "math_id": 57, "text": "a* b(x,i,k) := \\sum_j a(x,i,j)b(x,j,k)\\sigma(x,i,j,k)" }, { "math_id": 58, "text": "\\sigma" }, { "math_id": 59, "text": "C^*" }, { "math_id": 60, "text": "M" }, { "math_id": 61, "text": "k" }, { "math_id": 62, "text": "\\mathbb{G}_m" }, { "math_id": 63, "text": "P" }, { "math_id": 64, "text": "\\pi:P\\to M" }, { "math_id": 65, "text": "\\pi" }, { "math_id": 66, "text": "\\pi|_U:G\\times U\\to U" }, { 
"math_id": 67, "text": "\\mathcal{M}" }, { "math_id": 68, "text": "\\pi\\colon\\mathcal{M} \\to M" }, { "math_id": 69, "text": "\\pi|_U\\colon \\mathrm{B}G \\times U \\to U" }, { "math_id": 70, "text": "BG" }, { "math_id": 71, "text": "[ * / G ]" }, { "math_id": 72, "text": "\\mathcal{E}^\\bullet = [\\mathcal{E}^{-1} \\xrightarrow{d} \\mathcal{E}^0]" }, { "math_id": 73, "text": "X \\in \\text{Sch}" }, { "math_id": 74, "text": "U \\subseteq X" }, { "math_id": 75, "text": "X(U)" }, { "math_id": 76, "text": "\\mathcal{E}^{-1}(U) \\xrightarrow{d} \\mathcal{E}^0(U)" }, { "math_id": 77, "text": "x \\in \\mathcal{E}^0(U)" }, { "math_id": 78, "text": "x \\to x'" }, { "math_id": 79, "text": "y \\in \\mathcal{E}^{-1}(U)" }, { "math_id": 80, "text": "dy + x = x' " }, { "math_id": 81, "text": "\\mathcal{H}^0(\\mathcal{E})" }, { "math_id": 82, "text": "(A,\\Gamma)" }, { "math_id": 83, "text": "\\Gamma" }, { "math_id": 84, "text": "A" }, { "math_id": 85, "text": "C" }, { "math_id": 86, "text": "g > 1" }, { "math_id": 87, "text": "\\mathcal{M}^s_{r, d}" }, { "math_id": 88, "text": "r" }, { "math_id": 89, "text": "d" }, { "math_id": 90, "text": "M^s_{r, d}" }, { "math_id": 91, "text": "E" }, { "math_id": 92, "text": "Aut(E)" }, { "math_id": 93, "text": "\\mathcal{M}^s_{r, d} \\to M^{s}_{r, d}" }, { "math_id": 94, "text": "L \\to S" }, { "math_id": 95, "text": "\\sqrt[r]{L/S}.\\," }, { "math_id": 96, "text": "\\bigotimes^r\\sqrt[{r}]{L/S} \\cong L" }, { "math_id": 97, "text": "\\sqrt[r]{L/S}: (\\operatorname{Sch}/S)^{op} \\to \\operatorname{Grpd}" }, { "math_id": 98, "text": "T \\to S" }, { "math_id": 99, "text": "\\left\\{\n(M \\to T,\\alpha_M) : \\alpha_M: M^{\\otimes r} \\xrightarrow{\\sim} L\\times_ST\n\\right\\}" }, { "math_id": 100, "text": "\\alpha_M" }, { "math_id": 101, "text": "\\mu_r" }, { "math_id": 102, "text": "(M\\to T,\\alpha_M)" }, { "math_id": 103, "text": "M^{\\otimes r}" }, { "math_id": 104, "text": "\\begin{matrix}\nX\\times_{B\\mathbb{G}_m} B\\mathbb{G}_m & \\to & B\\mathbb{G}_m \\\\\n\\downarrow & & \\downarrow \\\\\nX & \\to & B\\mathbb{G}_m\n\\end{matrix}" }, { "math_id": 105, "text": "B\\mathbb{G}_m \\to B\\mathbb{G}_m" }, { "math_id": 106, "text": "1 \\xrightarrow{} \\mu_r \\xrightarrow{} \\mathbb{G}_m \\xrightarrow{ (\\cdot)^r} \\mathbb{G}_m \\xrightarrow{} 1" }, { "math_id": 107, "text": "B\\mathbb{G}_m" }, { "math_id": 108, "text": "B\\mathbb{G}_m(S)" }, { "math_id": 109, "text": "s: S \\to L" }, { "math_id": 110, "text": "(L\\to S,s)" }, { "math_id": 111, "text": "\\sqrt[r]{(L,s)/S}: (\\operatorname{Sch}/S)^{op} \\to \\operatorname{Grpd}" }, { "math_id": 112, "text": "\\left\\{\n(M \\to T,\\alpha_M, t) :\n\\begin{align}\n&\\alpha_M: M^{\\otimes r} \\xrightarrow{\\sim} L\\times_ST \\\\\n& t \\in \\Gamma(T,M) \\\\\n&\\alpha_M(t^{\\otimes r}) = s\n\\end{align}\n\\right\\}" }, { "math_id": 113, "text": "S = \\text{Spec}(A)" }, { "math_id": 114, "text": "L \\cong \\mathcal{O}_S" }, { "math_id": 115, "text": "s" }, { "math_id": 116, "text": "s \\in A" }, { "math_id": 117, "text": "\\sqrt[r]{(L,s)/S} = [\\text{Spec}(B)/\\mu_r]" }, { "math_id": 118, "text": "B = \\frac{A[x]}{x^r - s}" }, { "math_id": 119, "text": "s = 0" }, { "math_id": 120, "text": "[\\text{Spec}(A)/\\mu_r]" }, { "math_id": 121, "text": "\\mathcal{O}_X^*" } ]
https://en.wikipedia.org/wiki?curid=822043
8220913
ADALINE
Early single-layer artificial neural network ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) is an early single-layer artificial neural network and the name of the physical device that implemented this network. It was developed by professor Bernard Widrow and his doctoral student Ted Hoff at Stanford University in 1960. It is based on the perceptron. It consists of weights, a bias and a summation function. The weights and biases were implemented by rheostats (as seen in the "knobby ADALINE"), and later, memistors. The difference between Adaline and the standard (McCulloch–Pitts) perceptron is in how they learn. Adaline unit weights are adjusted to match a teacher signal, before applying the Heaviside function (see figure), but the standard perceptron unit weights are adjusted to match the correct output, after applying the Heaviside function. A multilayer network of ADALINE units is a MADALINE. Definition. Adaline is a single layer neural network with multiple nodes where each node accepts multiple inputs and generates one output. Given the following variables: formula_0 is the input vector, formula_1 is the weight vector, formula_2 is the number of inputs, formula_3 is some constant (the bias), and formula_4 is the output of the model, then we find that the output is formula_5. If we further assume that formula_6 and formula_7, then the output further reduces to: formula_8 Learning rule. The learning rule used by ADALINE is the LMS ("least mean squares") algorithm, a special case of gradient descent. Define the following notations: formula_9 is the learning rate, formula_4 is (as above) the output of the model, formula_10 is the target (desired) output, and formula_11 is the square of the error. The LMS algorithm updates the weights by formula_12 This update rule minimizes formula_13, the square of the error, and is in fact the stochastic gradient descent update for linear regression. MADALINE. MADALINE (Many ADALINE) is a three-layer (input, hidden, output), fully connected, feed-forward artificial neural network architecture for classification that uses ADALINE units in its hidden and output layers; i.e., its activation function is the sign function. The three-layer network uses memistors. Three different training algorithms have been suggested for MADALINE networks, which cannot be trained using backpropagation because the sign function is not differentiable; they are called Rule I, Rule II and Rule III. Despite many attempts, Widrow and his colleagues never succeeded in training more than a single layer of weights in a MADALINE; this changed after Widrow saw the backpropagation algorithm at a 1985 Snowbird conference. MADALINE Rule 1 (MRI) - The first of these dates back to 1962. It consists of two layers. The first layer is made of ADALINE units. Let the output of the i-th ADALINE unit be formula_14. The second layer has two units. One is a majority-voting unit: it takes in all formula_14, and if there are more positives than negatives, then the unit outputs +1, and vice versa. Another is a "job assigner". Suppose the desired output is different from the majority-voted output, say the desired output is -1. Then the job assigner calculates the minimal number of ADALINE units that must change their outputs from positive to negative, picks those ADALINE units that are "closest" to being negative, and makes them update their weights according to the ADALINE learning rule. It was thought of as a form of the "minimal disturbance principle". The largest MADALINE machine built had 1000 weights, each implemented by a memistor. It was built in 1963 and used MRI for learning. Some MADALINE machines were demonstrated to perform inverted pendulum balancing, weather prediction, speech recognition, etc. MADALINE Rule 2 (MRII) - The second training algorithm improved on Rule I and was described in 1988.
The Rule II training algorithm is based on a principle called "minimal disturbance". It proceeds by looping over training examples; for each example whose output is wrong, it finds the hidden ADALINE unit with the lowest confidence in its prediction (the one whose weighted sum is closest to zero), tentatively flips the sign of that unit's output, and accepts the change, updating that unit's weights in the ADALINE fashion, only if the network's output error is reduced. Additionally, when flipping single units' signs does not drive the error to zero for a particular example, the training algorithm starts flipping pairs of units' signs, then triples of units, etc. MADALINE Rule 3 - The third "Rule" applied to a modified network with sigmoid activations instead of signum; it was later found to be equivalent to backpropagation. References. <templatestyles src="Reflist/styles.css" />
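For illustration, a minimal Python sketch of a single ADALINE unit trained with the LMS rule described above (the toy data, learning rate, and function names are invented for the example; this is not Widrow and Hoff's original implementation):

    import numpy as np

    def train_adaline(X, targets, lr=0.1, epochs=100):
        # Single ADALINE unit: linear output y = w . x, with the bias folded in
        # as x_0 = 1 so that w_0 plays the role of the constant theta. The weights
        # are adjusted toward the teacher signal before any thresholding, using
        # the LMS rule  w <- w + lr * (o - y) * x.
        X = np.hstack([np.ones((len(X), 1)), X])     # prepend x_0 = 1
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for x, o in zip(X, targets):
                y = np.dot(w, x)                     # linear output, no Heaviside step here
                w += lr * (o - y) * x                # least-mean-squares update
        return w

    def predict(w, X):
        X = np.hstack([np.ones((len(X), 1)), X])
        return np.where(X @ w >= 0.0, 1, -1)         # threshold applied only when classifying

    X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    targets = np.array([-1, -1, -1, 1])              # an AND-like teacher signal in {-1, +1}
    w = train_adaline(X, targets)
    print(predict(w, X))                             # expected: [-1 -1 -1  1]

The contrast with the standard perceptron is visible in the update line: the error (o - y) is computed on the linear output, before the sign/Heaviside step.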
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "w" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": "y" }, { "math_id": 5, "text": "y=\\sum_{j=1}^{n} x_j w_j + \\theta" }, { "math_id": 6, "text": " x_0 = 1" }, { "math_id": 7, "text": "w_0 = \\theta" }, { "math_id": 8, "text": "y=\\sum_{j=0}^{n} x_j w_j" }, { "math_id": 9, "text": "\\eta" }, { "math_id": 10, "text": "o" }, { "math_id": 11, "text": "E=(o - y)^2" }, { "math_id": 12, "text": "w \\leftarrow w + \\eta(o - y)x." }, { "math_id": 13, "text": "E" }, { "math_id": 14, "text": "o_i" } ]
https://en.wikipedia.org/wiki?curid=8220913
8221194
Tryptophan hydroxylase
Class of enzymes &lt;templatestyles src="Stack/styles.css"/&gt; Tryptophan hydroxylase (TPH) is an enzyme (EC 1.14.16.4) involved in the synthesis of the monoamine neurotransmitter serotonin. Tyrosine hydroxylase, phenylalanine hydroxylase, and tryptophan hydroxylase together constitute the family of biopterin-dependent aromatic amino acid hydroxylases. TPH catalyzes the following chemical reaction L-tryptophan + tetrahydrobiopterin + O2 formula_0 5-Hydroxytryptophan + dihydrobiopterin + H2O It employs one additional cofactor, iron. Function. It is responsible for addition of the -OH group (hydroxylation) to the 5 position to form the amino acid 5-hydroxytryptophan (5-HTP), which is the initial and rate-limiting step in the synthesis of the neurotransmitter serotonin. It is also the first enzyme in the synthesis of melatonin. Tryptophan hydroxylase (TPH), tyrosine hydroxylase (TH) and phenylalanine hydroxylase (PAH) are members of a superfamily of aromatic amino acid hydroxylases, catalyzing key steps in important metabolic pathways. Analogously to phenylalanine hydroxylase and tyrosine hydroxylase, this enzyme uses (6R)-L-erythro-5,6,7,8-tetrahydrobiopterin (BH4) and dioxygen as cofactors. In humans, the stimulation of serotonin production by administration of tryptophan has an antidepressant effect and inhibition of tryptophan hydroxylase (e.g. by p-Chlorophenylalanine) may precipitate depression. The activity of tryptophan hydroxylase (i.e. the rate at which it converts L-tryptophan into the serotonin precursor L-5-hydroxytryptophan) can be increased when it undergoes phosphorylation. Protein Kinase A, for example, can phosphorylate tryptophan hydroxylase, thus increasing its activity. Isoforms. In humans, as well as in other mammals, there are two distinct TPH genes. In humans, these genes are located on chromosomes 11 and 12 and encode two different homologous enzymes "TPH1" and "TPH2" (sequence identity 71%). References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=8221194
8221717
Information-based complexity
Information-based complexity (IBC) studies optimal algorithms and computational complexity for the continuous problems that arise in physical science, economics, engineering, and mathematical finance. IBC has studied such continuous problems as path integration, partial differential equations, systems of ordinary differential equations, nonlinear equations, integral equations, fixed points, and very-high-dimensional integration. All these problems involve functions (typically multivariate) of a real or complex variable. Since one can never obtain a closed-form solution to the problems of interest one has to settle for a numerical solution. Since a function of a real or complex variable cannot be entered into a digital computer, the solution of continuous problems involves "partial" information. To give a simple illustration, in the numerical approximation of an integral, only samples of the integrand at a finite number of points are available. In the numerical solution of partial differential equations the functions specifying the boundary conditions and the coefficients of the differential operator can only be sampled. Furthermore, this partial information can be expensive to obtain. Finally the information is often "contaminated" by noise. The goal of information-based complexity is to create a theory of computational complexity and optimal algorithms for problems with partial, contaminated and priced information, and to apply the results to answering questions in various disciplines. Examples of such disciplines include physics, economics, mathematical finance, computer vision, control theory, geophysics, medical imaging, weather forecasting and climate prediction, and statistics. The theory is developed over abstract spaces, typically Hilbert or Banach spaces, while the applications are usually for multivariate problems. Since the information is partial and contaminated, only approximate solutions can be obtained. IBC studies computational complexity and optimal algorithms for approximate solutions in various settings. Since the worst case setting often leads to negative results such as unsolvability and intractability, settings with weaker assurances such as average, probabilistic and randomized are also studied. A fairly new area of IBC research is continuous quantum computing. Overview. We illustrate some important concepts with a very simple example, the computation of formula_0 For most integrands we can't use the fundamental theorem of calculus to compute the integral analytically; we have to approximate it numerically. We compute the values of formula_1 at "n" points formula_2 The "n" numbers are the partial information about the true integrand formula_3 We combine these "n" numbers by a combinatory algorithm to compute an approximation to the integral. See the monograph Complexity and Information for particulars. Because we have only partial information we can use an "adversary argument" to tell us how large "n" has to be to compute an formula_4-approximation. Because of these information-based arguments we can often obtain tight bounds on the complexity of continuous problems. For discrete problems such as integer factorization or the travelling salesman problem we have to settle for conjectures about the complexity hierarchy. The reason is that the input is a number or a vector of numbers and can thus be entered into the computer. Thus there is typically no adversary argument at the information level and the complexity of a discrete problem is rarely known. 
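For illustration, a small Python sketch of the point just made (the integrand, reference value, and sample sizes are chosen only for the example). The algorithm sees nothing but n samples of the integrand, i.e. partial information, and combines them with a simple combinatory rule:

    import math

    def midpoint_rule(f, n):
        # The only information used about f is its value at n points.
        h = 1.0 / n
        return h * sum(f((i + 0.5) * h) for i in range(n))

    f = lambda x: math.exp(-x * x)      # no elementary closed-form antiderivative
    reference = 0.7468241328124271      # value of the integral over [0, 1], for comparison
    for n in (4, 16, 64, 256):
        approx = midpoint_rule(f, n)
        print(n, approx, abs(approx - reference))

An adversary argument runs along the same lines: whatever the n sample points are, one can construct two integrands from the class that agree at all of them yet have different integrals, which bounds from below the error of any algorithm using only that information.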
The univariate integration problem was for illustration only. Significant for many applications is multivariate integration. The number of variables is in the hundreds or thousands. The number of variables may even be infinite; we then speak of path integration. The reason that integrals are important in many disciplines is that they occur when we want to know the expected behavior of a continuous process. See, for example, the application to mathematical finance below. Assume we want to compute an integral in "d" dimensions (dimensions and variables are used interchangeably) and that we want to guarantee an error at most formula_4 for any integrand in some class. The computational complexity of the problem is known to be of order formula_5 (Here we are counting the number of function evaluations and the number of arithmetic operations, so this is the time complexity.) This would take many years for even modest values of formula_6 The exponential dependence on "d" is called the "curse of dimensionality". We say the problem is intractable. We stated the curse of dimensionality for integration. But exponential dependence on "d" occurs for almost every continuous problem that has been investigated. How can we try to vanquish the curse? There are two possibilities: we can weaken the worst case guarantee to a weaker assurance, such as the stochastic assurance of a randomized algorithm, or we can exploit domain knowledge about the particular class of problems. Both possibilities are illustrated by the following example. An example: mathematical finance. Very high dimensional integrals are common in finance. For example, computing expected cash flows for a collateralized mortgage obligation (CMO) requires the calculation of a number of formula_7 dimensional integrals, the formula_7 being the number of months in formula_8 years. Recall that if a worst case assurance is required the time is of order formula_9 time units. Even if the error is not small, say formula_10 this is formula_11 time units. People in finance have long been using the Monte Carlo method (MC), an instance of a randomized algorithm. Then in 1994 a research group at Columbia University (Papageorgiou, Paskov, Traub, Woźniakowski) discovered that the quasi-Monte Carlo (QMC) method using low discrepancy sequences beat MC by one to three orders of magnitude. The results were reported to a number of Wall Street firms and were met with considerable initial skepticism. The results were first published by Paskov and Traub, "Faster Valuation of Financial Derivatives", Journal of Portfolio Management 22, 113-120. Today QMC is widely used in the financial sector to value financial derivatives. These results are empirical; where does computational complexity come in? QMC is not a panacea for all high dimensional integrals. What is special about financial derivatives? Here's a possible explanation. The formula_7 dimensions in the CMO represent monthly future times. Due to the discounted value of money, variables representing times far in the future are less important than variables representing nearby times. Thus the integrals are non-isotropic. Sloan and Woźniakowski introduced the very powerful idea of weighted spaces, which is a formalization of the above observation. They were able to show that with this additional domain knowledge, high dimensional integrals satisfying certain conditions were tractable even in the worst case! In contrast, the Monte Carlo method gives only a stochastic assurance. See Sloan and Woźniakowski, "When are Quasi-Monte Carlo Algorithms Efficient for High Dimensional Integration?", J. Complexity 14, 1-33, 1998. For which classes of integrals is QMC superior to MC? This continues to be a major research problem. Brief history.
Precursors to IBC may be found in the 1950s by Kiefer, Sard, and Nikolskij. In 1959 Traub had the key insight that the optimal algorithm and the computational complexity of solving a continuous problem depended on the available information. He applied this insight to the solution of nonlinear equations, which started the area of optimal iteration theory. This research was published in the 1964 monograph "Iterative Methods for the Solution of Equations." The general setting for information-based complexity was formulated by Traub and Woźniakowski in 1980 in "A General Theory of Optimal Algorithms." For a list of more recent monographs and pointers to the extensive literature see To Learn More below. Prizes. There are a number of prizes for IBC research. References. Extensive bibliographies may be found in the monographs N (1988), TW (1980), TWW (1988) and TW (1998). The IBC website has a searchable data base of some 730 items.
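To complement the Monte Carlo versus quasi-Monte Carlo comparison in the mathematical finance example above, here is a hedged Python sketch (a toy product integrand with known value 1, not a CMO valuation; the dimension, sample size, and weighting are invented for illustration). The QMC points come from a Halton low-discrepancy sequence:

    import random

    def halton(index, base):
        # Radical-inverse (van der Corput) value of `index` in the given base.
        result, f = 0.0, 1.0 / base
        while index > 0:
            result += f * (index % base)
            index //= base
            f /= base
        return result

    PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]        # one prime base per dimension

    def integrand(x):
        # Each factor integrates to 1 over [0, 1], so the exact integral is 1.
        # Later coordinates are damped, echoing the weighted-spaces observation above.
        v = 1.0
        for j, xj in enumerate(x, start=1):
            v *= 1.0 + (xj - 0.5) / j
        return v

    def mc_estimate(d, n, rng):
        return sum(integrand([rng.random() for _ in range(d)]) for _ in range(n)) / n

    def qmc_estimate(d, n):
        return sum(integrand([halton(i + 1, PRIMES[j]) for j in range(d)]) for i in range(n)) / n

    d, n = 8, 4096
    rng = random.Random(0)
    print("MC  error:", abs(mc_estimate(d, n, rng) - 1.0))
    print("QMC error:", abs(qmc_estimate(d, n) - 1.0))

For smooth, non-isotropic integrands of this kind the QMC error is typically much smaller than the MC error at the same number of function evaluations, which is the empirical pattern the finance computations above exhibited; it is not a guarantee for arbitrary high dimensional integrands.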
[ { "math_id": 0, "text": "\\int_0^1 f(x)\\,dx. " }, { "math_id": 1, "text": " f" }, { "math_id": 2, "text": "[f(t_1),\\dots,f(t_n)]." }, { "math_id": 3, "text": "f(x)." }, { "math_id": 4, "text": "\\epsilon" }, { "math_id": 5, "text": "\\epsilon^{-d}." }, { "math_id": 6, "text": "d." }, { "math_id": 7, "text": "360" }, { "math_id": 8, "text": "30" }, { "math_id": 9, "text": "\\epsilon^{-d}" }, { "math_id": 10, "text": "\\epsilon=10^{-2}," }, { "math_id": 11, "text": "10^{720}" } ]
https://en.wikipedia.org/wiki?curid=8221717
82238
Articulatory phonetics
A branch of linguistics studying how humans make sounds The field of articulatory phonetics is a subfield of phonetics that studies articulation and ways that humans produce speech. Articulatory phoneticians explain how humans produce speech sounds via the interaction of different physiological structures. Generally, articulatory phonetics is concerned with the transformation of aerodynamic energy into acoustic energy. Aerodynamic energy refers to the airflow through the vocal tract. Its potential form is air pressure; its kinetic form is the actual dynamic airflow. Acoustic energy is variation in the air pressure that can be represented as sound waves, which are then perceived by the human auditory system as sound. Respiratory sounds can be produced by expelling air from the lungs. However, to vary the sound quality in a way useful for speaking, two speech organs normally move towards each other to contact each other to create an obstruction that shapes the air in a particular fashion. The point of maximum obstruction is called the "place of articulation", and the way the obstruction forms and releases is the "manner of articulation". For example, when making a "p" sound, the lips come together tightly, blocking the air momentarily and causing a buildup of air pressure. The lips then release suddenly, causing a burst of sound. The place of articulation of this sound is therefore called "bilabial", and the manner is called "stop" (also known as a "plosive"). Components. The vocal tract can be viewed through an aerodynamic-biomechanic model that includes three main components: Air cavities are containers of air molecules of specific volumes and masses. The main air cavities present in the articulatory system are the supraglottal cavity and the subglottal cavity. They are so-named because the glottis, the openable space between the vocal folds internal to the larynx, separates the two cavities. The supraglottal cavity or the orinasal cavity is divided into an oral subcavity (the cavity from the glottis to the lips excluding the nasal cavity) and a nasal subcavity (the cavity from the velopharyngeal port, which can be closed by raising the velum). The subglottal cavity consists of the trachea and the lungs. The atmosphere external to the articulatory stem may also be considered an air cavity whose potential connecting points with respect to the body are the nostrils and the lips. Pistons are initiators. The term "initiator" refers to the fact that they are used to initiate a change in the volumes of air cavities, and, by Boyle's Law, the corresponding air pressure of the cavity. The term "initiation" refers to the change. Since changes in air pressures between connected cavities lead to airflow between the cavities, initiation is also referred to as an "airstream mechanism". The three pistons present in the articulatory system are the larynx, the tongue body, and the physiological structures used to manipulate lung volume (in particular, the floor and the walls of the chest). The lung pistons are used to initiate a pulmonic airstream (found in all human languages). The larynx is used to initiate the glottalic airstream mechanism by changing the volume of the supraglottal and subglottal cavities via vertical movement of the larynx (with a closed glottis). Ejectives and implosives are made with this airstream mechanism. The tongue body creates a velaric airstream by changing the pressure within the oral cavity: the tongue body changes the mouth subcavity. 
Click consonants use the velaric airstream mechanism. Pistons are controlled by various muscles. Valves regulate airflow between cavities. Airflow occurs when an air valve is open and there is a pressure difference between the connecting cavities. When an air valve is closed, there is no airflow. The air valves are the vocal folds (the glottis), which regulate between the supraglottal and subglottal cavities, the velopharyngeal port, which regulates between the oral and nasal cavities, the tongue, which regulates between the oral cavity and the atmosphere, and the lips, which also regulate between the oral cavity and the atmosphere. Like the pistons, the air valves are also controlled by various muscles. Initiation. To produce any kind of sound, there must be movement of air. To produce sounds that people can interpret as spoken words, the movement of air must pass through the vocal folds, up through the throat and, into the mouth or nose to then leave the body. Different sounds are formed by different positions of the mouth—or, as linguists call it, "the oral cavity" (to distinguish it from the nasal cavity). Consonants. Consonants are speech sounds that are articulated with a complete or partial closure of the vocal tract. They are generally produced by the modification of an airstream exhaled from the lungs. The respiratory organs used to create and modify airflow are divided into three regions: the vocal tract (supralaryngeal), the larynx, and the subglottal system. The airstream can be either egressive (out of the vocal tract) or ingressive (into the vocal tract). In pulmonic sounds, the airstream is produced by the lungs in the subglottal system and passes through the larynx and vocal tract. Glottalic sounds use an airstream created by movements of the larynx without airflow from the lungs. Click consonants are articulated through the rarefaction of air using the tongue, followed by releasing the forward closure of the tongue. Place of articulation. Consonants are pronounced in the vocal tract, usually in the mouth. In order to describe the place of articulation, the active and passive articulator need to be known. In most cases, the active articulators are the lips and tongue. The passive articulator is the surface on which the constriction is created. Constrictions made by the lips are called labials. Constrictions can be made in several parts of the vocal tract, broadly classified into coronal, dorsal and radical places of articulation. Coronal articulations are made with the front of the tongue, dorsal articulations are made with the back of the tongue, and radical articulations are made in the pharynx. These divisions are not sufficient for distinguishing and describing all speech sounds. For example, in English the sounds and are both coronal, but they are produced in different places of the mouth. To account for this, more detailed places of articulation are needed based upon the area of the mouth in which the constriction occurs. Labial consonants. Articulations involving the lips can be made in three different ways: with both lips (bilabial), with one lip and the teeth (labiodental), and with the tongue and the upper lip (linguolabial). Depending on the definition used, some or all of these kinds of articulations may be categorized into the class of labial articulations. 
Ladefoged and Maddieson (1996) propose that linguolabial articulations be considered coronals rather than labials, but make clear this grouping, like all groupings of articulations, is equivocal and not cleanly divided. Linguolabials are included in this section as labials given their use of the lips as a place of articulation. Bilabial consonants are made with both lips. In producing these sounds the lower lip moves farthest to meet the upper lip, which also moves down slightly, though in some cases the force from air moving through the aperture (opening between the lips) may cause the lips to separate faster than they can come together. Unlike most other articulations, both articulators are made from soft tissue, and so bilabial stops are more likely to be produced with incomplete closures than articulations involving hard surfaces like the teeth or palate. Bilabial stops are also unusual in that an articulator in the upper section of the vocal tract actively moves downwards, as the upper lip shows some active downward movement. Labiodental consonants are made by the lower lip rising to the upper teeth. Labiodental consonants are most often fricatives while labiodental nasals are also typologically common. There is debate as to whether true labiodental plosives occur in any natural language, though a number of languages are reported to have labiodental plosives including Zulu, Tonga, and Shubi. Labiodental affricates are reported in Tsonga which would require the stop portion of the affricate to be a labiodental stop, though Ladefoged and Maddieson (1996) raise the possibility that labiodental affricates involve a bilabial closure like "pf" in German. Unlike plosives and affricates, labiodental nasals are common across languages. Linguolabial consonants are made with the blade of the tongue approaching or contacting the upper lip. Like in bilabial articulations, the upper lip moves slightly towards the more active articulator. Articulations in this group do not have their own symbols in the International Phonetic Alphabet, rather, they are formed by combining an apical symbol with a diacritic implicitly placing them in the coronal category. They exist in a number of languages indigenous to Vanuatu such as Tangoa, though early descriptions referred to them as apical-labial consonants. The name "linguolabial" was suggested by Floyd Lounsbury given that they are produced with the blade rather than the tip of the tongue. Coronal consonants. Coronal consonants are made with the tip or blade of the tongue and, because of the agility of the front of the tongue, represent a variety not only in place but in the posture of the tongue. The coronal places of articulation represent the areas of the mouth where the tongue contacts or makes a constriction, and include dental, alveolar, and post-alveolar locations. Tongue postures using the tip of the tongue can be apical if using the top of the tongue tip, laminal if made with the blade of the tongue, or sub-apical if the tongue tip is curled back and the bottom of the tongue is used. Coronals are unique as a group in that every manner of articulation is attested. Australian languages are well known for the large number of coronal contrasts exhibited within and across languages in the region. Dental consonants are made with the tip or blade of the tongue and the upper teeth. 
They are divided into two groups based upon the part of the tongue used to produce them: apical dental consonants are produced with the tongue tip touching the teeth; interdental consonants are produced with the blade of the tongue as the tip of the tongue sticks out in front of the teeth. No language is known to use both contrastively though they may exist allophonically. Alveolar consonants are made with the tip or blade of the tongue at the alveolar ridge just behind the teeth and can similarly be apical or laminal. Crosslinguistically, dental consonants and alveolar consonants are frequently contrasted leading to a number of generalizations of crosslinguistic patterns. The different places of articulation tend to also be contrasted in the part of the tongue used to produce them: most languages with dental stops have laminal dentals, while languages with alveolar stops usually have apical stops. Languages rarely have two consonants in the same place with a contrast in laminality, though Taa (ǃXóõ) is a counterexample to this pattern. If a language has only one of a dental stop or an alveolar stop, it will usually be laminal if it is a dental stop, and the stop will usually be apical if it is an alveolar stop, though for example Temne and Bulgarian do not follow this pattern. If a language has both an apical and laminal stop, then the laminal stop is more likely to be affricated like in Isoko, though Dahalo show the opposite pattern with alveolar stops being more affricated. Retroflex consonants have several different definitions depending on whether the position of the tongue or the position on the roof of the mouth is given prominence. In general, they represent a group of articulations in which the tip of the tongue is curled upwards to some degree. In this way, retroflex articulations can occur in several different locations on the roof of the mouth including alveolar, post-alveolar, and palatal regions. If the underside of the tongue tip makes contact with the roof of the mouth, it is sub-apical though apical post-alveolar sounds are also described as retroflex. Typical examples of sub-apical retroflex stops are commonly found in Dravidian languages, and in some languages indigenous to the southwest United States the contrastive difference between dental and alveolar stops is a slight retroflexion of the alveolar stop. Acoustically, retroflexion tends to affect the higher formants. Articulations taking place just behind the alveolar ridge, known as post-alveolar consonants, have been referred to using a number of different terms. Apical post-alveolar consonants are often called retroflex, while laminal articulations are sometimes called palato-alveolar; in the Australianist literature, these laminal stops are often described as 'palatal' though they are produced further forward than the palate region typically described as palatal. Because of individual anatomical variation, the precise articulation of palato-alveolar stops (and coronals in general) can vary widely within a speech community. Dorsal consonants. Dorsal consonants are those consonants made using the tongue body rather than the tip or blade. Palatal consonants are made using the tongue body against the hard palate on the roof of the mouth. They are frequently contrasted with velar or uvular consonants, though it is rare for a language to contrast all three simultaneously, with Jaqaru as a possible example of a three-way contrast. Velar consonants are made using the tongue body against the velum. 
They are incredibly common cross-linguistically; almost all languages have a velar stop. Because both velars and vowels are made using the tongue body, they are highly affected by coarticulation with vowels and can be produced as far forward as the hard palate or as far back as the uvula. These variations are typically divided into front, central, and back velars in parallel with the vowel space. They can be hard to distinguish phonetically from palatal consonants, though are produced slightly behind the area of prototypical palatal consonants. Uvular consonants are made by the tongue body contacting or approaching the uvula. They are rare, occurring in an estimated 19 percent of languages, and large regions of the Americas and Africa have no languages with uvular consonants. In languages with uvular consonants, stops are most frequent followed by continuants (including nasals). Radical consonants. Radical consonants either use the root of the tongue or the epiglottis during production. Pharyngeal consonants are made by retracting the root of the tongue far enough to touch the wall of the pharynx. Due to production difficulties, only fricatives and approximants can be produced this way. Epiglottal consonants are made with the epiglottis and the back wall of the pharynx. Epiglottal stops have been recorded in Dahalo. Voiced epiglottal consonants are not deemed possible due to the cavity between the glottis and epiglottis being too small to permit voicing. Glottal consonants. Glottal consonants are those produced using the vocal folds in the larynx. Because the vocal folds are the source of phonation and below the oro-nasal vocal tract, a number of glottal consonants are impossible such as a voiced glottal stop. Three glottal consonants are possible, a voiceless glottal stop and two glottal fricatives, and all are attested in natural languages. Glottal stops, produced by closing the vocal folds, are notably common in the world's languages. While many languages use them to demarcate phrase boundaries, some languages like Huatla Mazatec have them as contrastive phonemes. Additionally, glottal stops can be realized as laryngealization of the following vowel in this language. Glottal stops, especially between vowels, do usually not form a complete closure. True glottal stops normally occur only when they are geminated. Manner of articulation. Knowing the place of articulation is not enough to fully describe a consonant, the way in which the stricture happens is equally important. Manners of articulation describe how exactly the active articulator modifies, narrows or closes off the vocal tract. Stops (also referred to as plosives) are consonants where the airstream is completely obstructed. Pressure builds up in the mouth during the stricture, which is then released as a small burst of sound when the articulators move apart. The velum is raised so that air cannot flow through the nasal cavity. If the velum is lowered and allows for air to flow through the nose, the result in a nasal stop. However, phoneticians almost always refer to nasal stops as just "nasals".Affricates are a sequence of stops followed by a fricative in the same place. Fricatives are consonants where the airstream is made turbulent by partially, but not completely, obstructing part of the vocal tract. Sibilants are a special type of fricative where the turbulent airstream is directed towards the teeth, creating a high-pitched hissing sound. 
Nasals (sometimes referred to as nasal stops) are consonants in which there's a closure in the oral cavity and the velum is lowered, allowing air to flow through the nose. In an approximant, the articulators come close together, but not to such an extent that allows a turbulent airstream. Laterals are consonants in which the airstream is obstructed along the center of the vocal tract, allowing the airstream to flow freely on one or both sides. Laterals have also been defined as consonants in which the tongue is contracted in such a way that the airstream is greater around the sides than over the center of the tongue. The first definition does not allow for air to flow over the tongue. Trills are consonants in which the tongue or lips are set in motion by the airstream. The stricture is formed in such a way that the airstream causes a repeating pattern of opening and closing of the soft articulator(s). Apical trills typically consist of two or three periods of vibration. Taps and flaps are single, rapid, usually apical gestures where the tongue is thrown against the roof of the mouth, comparable to a very rapid stop. These terms are sometimes used interchangeably, but some phoneticians make a distinction. In a tap, the tongue contacts the roof in a single motion whereas in a flap the tongue moves tangentially to the roof of the mouth, striking it in passing. During a glottalic airstream mechanism, the glottis is closed, trapping a body of air. This allows for the remaining air in the vocal tract to be moved separately. An upward movement of the closed glottis will move this air out, resulting in it an ejective consonant. Alternatively, the glottis can lower, sucking more air into the mouth, which results in an implosive consonant. Clicks are stops in which tongue movement causes air to be sucked in the mouth, this is referred to as a velaric airstream. During the click, the air becomes rarefied between two articulatory closures, producing a loud 'click' sound when the anterior closure is released. The release of the anterior closure is referred to as the click influx. The release of the posterior closure, which can be velar or uvular, is the click efflux. Clicks are used in several African language families, such as the Khoisan and Bantu languages. Vowels. Vowels are produced by the passage of air through the larynx and the vocal tract. Most vowels are voiced (i.e. the vocal folds are vibrating). Except in some marginal cases, the vocal tract is open, so that the airstream is able to escape without generating fricative noise. Variation in vowel quality is produced by means of the following articulatory structures: Articulators. Glottis. The glottis is the opening between the vocal folds located in the larynx. Its position creates different vibration patterns to distinguish voiced and voiceless sounds. In addition, the pitch of the vowel is changed by altering the frequency of vibration of the vocal folds. In some languages there are contrasts among vowels with different phonation types. Pharynx. The pharynx is the region of the vocal tract below the velum and above the larynx. Vowels may be made pharyngealized (also "epiglottalized", "sphincteric" or "strident") by means of a retraction of the tongue root. Vowels may also be articulated with advanced tongue root. There is discussion of whether this vowel feature (ATR) is different from the Tense/Lax distinction in vowels. Velum. The velum—or soft palate—controls airflow through the nasal cavity. 
Nasals and nasalized sounds are produced by lowering the velum and allowing air to escape through the nose. Vowels are normally produced with the soft palate raised so that no air escapes through the nose. However, vowels may be nasalized as a result of lowering the soft palate. Many languages use nasalization contrastively. Tongue. The tongue is a highly flexible organ that is capable of being moved in many different ways. For vowel articulation the principal variations are vowel Height and the dimension of Backness and frontness. A less common variation in vowel quality can be produced by a change in the shape of the front of the tongue, resulting in a rhotic or rhotacized vowel. Lips. The lips play a major role in vowel articulation. It is generally believed that two major variables are in effect: lip-rounding (or labialization) and lip protrusion. Airflow. For all practical purposes, temperature can be treated as constant in the articulatory system. Thus, Boyle's Law can usefully be written as the following two equations. formula_0 formula_1 What the above equations express is that given an initial pressure "P"1 and volume "V"1 at time 1 the product of these two values will be equal to the product of the pressure "P"2 and volume "V"2 at a later time 2. This means that if there is an increase in the volume of cavity, there will be a corresponding decrease in pressure of that same cavity, and vice versa. In other words, volume and pressure are inversely proportional (or negatively correlated) to each other. As applied to a description of the subglottal cavity, when the lung pistons contract the lungs, the volume of the subglottal cavity decreases while the subglottal air pressure increases. Conversely, if the lungs are expanded, the pressure decreases. A situation can be considered where (1) the vocal fold valve is closed separating the supraglottal cavity from the subglottal cavity, (2) the mouth is open and, therefore, supraglottal air pressure is equal to atmospheric pressure, and (3) the lungs are contracted resulting in a subglottal pressure that has increased to a pressure that is greater than atmospheric pressure. If the vocal fold valve is subsequently opened, the previously two separate cavities become one unified cavity although the cavities will still be aerodynamically isolated because the glottic valve between them is relatively small and constrictive. Pascal's Law states that the pressure within a system must be equal throughout the system. When the subglottal pressure is greater than supraglottal pressure, there is a pressure inequality in the unified cavity. Since pressure is a force applied to a surface area by definition and a force is the product of mass and acceleration according to Newton's Second Law of Motion, the pressure inequality will be resolved by having part of the mass in air molecules found in the subglottal cavity move to the supraglottal cavity. This movement of mass is airflow. The airflow will continue until a pressure equilibrium is reached. Similarly, in an ejective consonant with a glottalic airstream mechanism, the lips or the tongue (i.e., the buccal or lingual valve) are initially closed and the closed glottis (the laryngeal piston) is raised decreasing the oral cavity volume behind the valve closure and increasing the pressure compared to the volume and pressure at a resting state. When the closed valve is opened, airflow will result from the cavity behind the initial closure outward until intraoral pressure is equal to atmospheric pressure. 
That is, air will flow from a cavity of higher pressure to a cavity of lower pressure until the equilibrium point; the pressure as potential energy is, thus, converted into airflow as kinetic energy. Sound sources. Sound sources refer to the conversion of aerodynamic energy into acoustic energy. There are two main types of sound sources in the articulatory system: periodic (or more precisely semi-periodic) and aperiodic. A periodic sound source is vocal fold vibration produced at the glottis found in vowels and voiced consonants. A less common periodic sound source is the vibration of an oral articulator like the tongue found in alveolar trills. Aperiodic sound sources are the turbulent noise of fricative consonants and the short-noise burst of plosive releases produced in the oral cavity. Voicing is a common period sound source in spoken language and is related to how closely the vocal cords are placed together. In English there are only two possibilities, "voiced" and "unvoiced". Voicing is caused by the vocal cords held close by each other, so that air passing through them makes them vibrate. All normally spoken vowels are voiced, as are all other sonorants except "h", as well as some of the remaining sounds ("b", "d", "g", "v", "z", "zh", "j", and the "th" sound in "this"). All the rest are voiceless sounds, with the vocal cords held far enough apart that there is no vibration; however, there is still a certain amount of audible friction, as in the sound "h". Voiceless sounds are not very prominent unless there is some turbulence, as in the stops, fricatives, and affricates; this is why sonorants in general only occur voiced. The exception is during whispering, when all sounds pronounced are voiceless. Experimental techniques. Palatography. In order to understand how sounds are made, experimental procedures are often adopted. Palatography is one of the oldest instrumental phonetic techniques used to record data regarding articulators. In traditional, static palatography, a speaker's palate is coated with a dark powder. The speaker then produces a word, usually with a single consonant. The tongue wipes away some of the powder at the place of articulation. The experimenter can then use a mirror to photograph the entire upper surface of the speaker's mouth. This photograph, in which the place of articulation can be seen as the area where the powder has been removed, is called a palatogram. Technology has since made possible electropalatography (or EPG). In order to collect EPG data, the speaker is fitted with a special prosthetic palate, which contains a number of electrodes. The way in which the electrodes are "contacted" by the tongue during speech provides phoneticians with important information, such as how much of the palate is contacted in different speech sounds, or which regions of the palate are contacted, or what the duration of the contact is. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Refbegin/styles.css" /&gt; External links. &lt;templatestyles src="IPA common/styles.css" /&gt;&lt;templatestyles src="IPA pulmonic consonants/styles.css" /&gt;&lt;templatestyles src="IPA co-articulated consonants/styles.css" /&gt;&lt;templatestyles src="IPA vowels/styles.css" /&gt;
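As a small numerical illustration of the Boyle's Law relation used in the Airflow section above (the volumes and the resting pressure are invented round numbers chosen only to show the direction of the effect, not measured physiological values):

    # Boyle's Law at (approximately) constant temperature: P1 * V1 = P2 * V2
    P1 = 101325.0    # assumed resting subglottal pressure, taken equal to atmospheric pressure (Pa)
    V1 = 3.0         # assumed subglottal (lung) volume in litres with the glottis closed
    V2 = 2.9         # volume after a slight compression by the respiratory muscles

    P2 = P1 * V1 / V2
    print(P2)        # about 104819 Pa: the pressure rises as the volume falls
    print(P2 - P1)   # about 3494 Pa of overpressure, which will drive airflow once a valve opens

The sign of the effect is the point: compressing the cavity with all valves closed raises its pressure above atmospheric pressure, and opening the glottis then produces the outward airflow described above.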
[ { "math_id": 0, "text": "P_1 V_1 = P_2 V_2 \\," }, { "math_id": 1, "text": "\\frac{V_1}{(V_1+\\Delta V)}=\\frac{(P_1+\\Delta P)}{P_1}" } ]
https://en.wikipedia.org/wiki?curid=82238
82269
Focal length
Measure of how strongly an optical system converges or diverges light The focal length of an optical system is a measure of how strongly the system converges or diverges light; it is the inverse of the system's optical power. A positive focal length indicates that a system converges light, while a negative focal length indicates that the system diverges light. A system with a shorter focal length bends the rays more sharply, bringing them to a focus in a shorter distance or diverging them more quickly. For the special case of a thin lens in air, a positive focal length is the distance over which initially collimated (parallel) rays are brought to a focus, or alternatively a negative focal length indicates how far in front of the lens a point source must be located to form a collimated beam. For more general optical systems, the focal length has no intuitive meaning; it is simply the inverse of the system's optical power. In most photography and all telescopy, where the subject is essentially infinitely far away, longer focal length (lower optical power) leads to higher magnification and a narrower angle of view; conversely, shorter focal length or higher optical power is associated with lower magnification and a wider angle of view. On the other hand, in applications such as microscopy in which magnification is achieved by bringing the object close to the lens, a shorter focal length (higher optical power) leads to higher magnification because the subject can be brought closer to the center of projection. Thin lens approximation. For a thin lens in air, the focal length is the distance from the center of the lens to the principal foci (or "focal points") of the lens. For a converging lens (for example a convex lens), the focal length is positive and is the distance at which a beam of collimated light will be focused to a single spot. For a diverging lens (for example a concave lens), the focal length is negative and is the distance to the point from which a collimated beam appears to be diverging after passing through the lens. When a lens is used to form an image of some object, the distance from the object to the lens "u", the distance from the lens to the image "v", and the focal length "f" are related by formula_0 The focal length of a thin "convex" lens can be easily measured by using it to form an image of a distant light source on a screen. The lens is moved until a sharp image is formed on the screen. In this case is negligible, and the focal length is then given by formula_1 Determining the focal length of a "concave" lens is somewhat more difficult. The focal length of such a lens is defined as the point at which the spreading beams of light meet when they are extended backwards. No image is formed during such a test, and the focal length must be determined by passing light (for example, the light of a laser beam) through the lens, examining how much that light becomes dispersed/ bent, and following the beam of light backwards to the lens's focal point. General optical systems. For a "thick" lens (one which has a non-negligible thickness), or an imaging system consisting of several lenses or mirrors (e.g. a photographic lens or a telescope), there are several related concepts that are referred to as focal lengths: For an optical system in air the effective focal length, front focal length, and rear focal length are all the same and may be called simply "focal length". 
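As a short illustration of the thin-lens relation given above for a lens in air (the numbers and helper names are invented for the example, using the sign convention of the formula above):

    def image_distance(f_mm, u_mm):
        # Solve 1/f = 1/u + 1/v for the image distance v (thin lens in air).
        return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

    def focal_length(u_mm, v_mm):
        # Recover f from a measured object distance u and image distance v.
        return 1.0 / (1.0 / u_mm + 1.0 / v_mm)

    # A very distant source focuses essentially at f, which is the screen-based
    # measurement of a convex lens's focal length described above:
    print(image_distance(50.0, 10_000_000.0))   # about 50.0 mm
    # Symmetric 1:1 imaging occurs when object and image are both at 2f:
    print(focal_length(200.0, 200.0))           # 100.0 mm

The reciprocal form of the relation is also what makes optical power, treated later in the article, convenient: the reciprocals of the object and image distances add up to the power 1/f.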
For an optical system in a medium other than air or vacuum, the front and rear focal lengths are equal to the EFL times the refractive index of the medium in front of or behind the lens ("n"1 and "n"2, respectively). The term "focal length" by itself is ambiguous in this case. The historical usage was to define the "focal length" as the EFL times the index of refraction of the medium. For a system with different media on both sides, such as the human eye, the front and rear focal lengths are not equal to one another, and convention may dictate which one is called "the focal length" of the system. Some modern authors avoid this ambiguity by instead defining "focal length" to be a synonym for EFL. The distinction between front/rear focal length and EFL is important for studying the human eye. The eye can be represented by an equivalent thin lens at an air/fluid boundary with front and rear focal lengths equal to those of the eye, or it can be represented by a different equivalent thin lens that is totally in air, with focal length equal to the eye's EFL. For the case of a lens of thickness d in air ("n"1 = "n"2 = 1), and surfaces with radii of curvature "R"1 and "R"2, the effective focal length f is given by the Lensmaker's equation: formula_2 where n is the refractive index of the lens medium. The quantity 1/"f" is also known as the optical power of the lens. The corresponding front focal distance is: formula_3 and the back focal distance: formula_4 In the sign convention used here, the value of "R"1 will be positive if the first lens surface is convex, and negative if it is concave. The value of "R"2 is negative if the second surface is convex, and positive if concave. Sign conventions vary between different authors, which results in different forms of these equations depending on the convention used. For a spherically-curved mirror in air, the magnitude of the focal length is equal to the radius of curvature of the mirror divided by two. The focal length is positive for a concave mirror, and negative for a convex mirror. In the sign convention used in optical design, a concave mirror has negative radius of curvature, so formula_5 where R is the radius of curvature of the mirror's surface. See Radius of curvature (optics) for more information on the sign convention for radius of curvature used here. In photography. Camera lens focal lengths are usually specified in millimetres (mm), but some older lenses are marked in centimetres (cm) or inches. Focal length (f) and field of view (FOV) of a lens are inversely related. For a standard rectilinear lens, formula_6, where x is the width of the film or imaging sensor. When a photographic lens is set to "infinity", its rear principal plane is separated from the sensor or film, which is then situated at the focal plane, by the lens's focal length. Objects far away from the camera then produce sharp images on the sensor or film, which is also at the image plane. To render closer objects in sharp focus, the lens must be adjusted to increase the distance between the rear principal plane and the film, to put the film at the image plane. The focal length f, the distance from the front principal plane to the object to photograph "s"1, and the distance from the rear principal plane to the image plane "s"2 are then related by: formula_7 As "s"1 is decreased, "s"2 must be increased. For example, consider a normal lens for a 35 mm camera with a focal length of "f" = 50 mm. 
To focus a distant object ("s"1 ≈ ∞), the rear principal plane of the lens must be located a distance "s"2 = 50 mm from the film plane, so that it is at the location of the image plane. To focus an object 1 m away ("s"1 = 1,000 mm), the lens must be moved 2.6 mm farther away from the film plane, to "s"2 = 52.6 mm. The focal length of a lens determines the magnification at which it images distant objects. It is equal to the distance between the image plane and a pinhole that images distant objects the same size as the lens in question. For rectilinear lenses (that is, with no image distortion), the imaging of distant objects is well modelled as a pinhole camera model. This model leads to the simple geometric model that photographers use for computing the angle of view of a camera; in this case, the angle of view depends only on the ratio of focal length to film size. In general, the angle of view depends also on the distortion. A lens with a focal length about equal to the diagonal size of the film or sensor format is known as a normal lens; its angle of view is similar to the angle subtended by a large-enough print viewed at a typical viewing distance of the print diagonal, which therefore yields a normal perspective when viewing the print; this angle of view is about 53 degrees diagonally. For full-frame 35 mm-format cameras, the diagonal is 43 mm and a typical "normal" lens has a 50 mm focal length. A lens with a focal length shorter than normal is often referred to as a wide-angle lens (typically 35 mm and less, for 35 mm-format cameras), while a lens significantly longer than normal may be referred to as a telephoto lens (typically 85 mm and more, for 35 mm-format cameras). Technically, long focal length lenses are only "telephoto" if the focal length is longer than the physical length of the lens, but the term is often used to describe any long focal length lens. Due to the popularity of the 35 mm standard, camera–lens combinations are often described in terms of their 35 mm-equivalent focal length, that is, the focal length of a lens that would have the same angle of view, or field of view, if used on a full-frame 35 mm camera. Use of a 35 mm-equivalent focal length is particularly common with digital cameras, which often use sensors smaller than 35 mm film, and so require correspondingly shorter focal lengths to achieve a given angle of view, by a factor known as the crop factor. Optical power. The optical power of a lens or curved mirror is a physical quantity equal to the reciprocal of the focal length, expressed in metres. A dioptre is its unit of measurement with dimension of reciprocal length, equivalent to one reciprocal metre, 1 dioptre = 1 m−1. For example, a 2-dioptre lens brings parallel rays of light to focus at 1⁄2 metre. A flat window has an optical power of zero dioptres, as it does not cause light to converge or diverge. The main benefit of using optical power rather than focal length is that the thin lens formula has the object distance, image distance, and focal length all as reciprocals. Additionally, when relatively thin lenses are placed close together their powers approximately add. Thus, a thin 2.0-dioptre lens placed close to a thin 0.5-dioptre lens yields almost the same focal length as a single 2.5-dioptre lens. References. <templatestyles src="Reflist/styles.css" />
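As a numeric check of the focusing example and the angle-of-view formula above, here is a short Python sketch (illustrative only; it simply re-derives the figures quoted in the text from the stated relations).

```python
import math

# Illustrative sketch re-deriving the worked figures above.
# Focusing: 1/s1 + 1/s2 = 1/f, with f = 50 mm and an object 1 m away.
f = 50.0      # focal length in mm
s1 = 1000.0   # object distance in mm
s2 = 1.0 / (1.0 / f - 1.0 / s1)
print(f"s2 = {s2:.1f} mm, extension = {s2 - f:.1f} mm")  # s2 = 52.6 mm, extension = 2.6 mm

# Diagonal angle of view of a rectilinear lens whose focal length equals the 43 mm
# diagonal of the full-frame 35 mm format: FOV = 2*arctan(x / (2*f)).
x = 43.0
fov = 2.0 * math.degrees(math.atan(x / (2.0 * 43.0)))
print(f"diagonal angle of view = {fov:.0f} degrees")     # about 53 degrees
```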
[ { "math_id": 0, "text": "\\frac{1}{f} =\\frac{1}{u}+\\frac{1}{v}\\ ." }, { "math_id": 1, "text": "f \\approx v\\ ." }, { "math_id": 2, "text": "\\frac{1}{f} = (n-1) \\left( \\frac{1}{R_1} - \\frac{1}{R_2} + \\frac{(n-1)d}{n R_1 R_2} \\right)," }, { "math_id": 3, "text": "\\mbox{FFD} = f \\left( 1 + \\frac{ (n-1) d}{n R_2} \\right), " }, { "math_id": 4, "text": "\\mbox{BFD} = f \\left( 1 - \\frac{ (n-1) d}{n R_1} \\right). " }, { "math_id": 5, "text": "f = -{R \\over 2}," }, { "math_id": 6, "text": "\\mathrm{FOV} = 2\\arctan{\\left({x\\over2f}\\right)}" }, { "math_id": 7, "text": "\\frac{1}{s_1} + \\frac{1}{s_2} = \\frac{1}{f}\\,." } ]
https://en.wikipedia.org/wiki?curid=82269
8227451
Symmetrization
In mathematics, symmetrization is a process that converts any function in formula_0 variables to a symmetric function in formula_0 variables. Similarly, antisymmetrization converts any function in formula_0 variables into an antisymmetric function. Two variables. Let formula_1 be a set and formula_2 be an additive abelian group. A map formula_3 is called a symmetric map if formula_4 It is called an antisymmetric map if instead formula_5 The symmetrization of a map formula_3 is the map formula_6 Similarly, the antisymmetrization or skew-symmetrization of a map formula_3 is the map formula_7 The sum of the symmetrization and the antisymmetrization of a map formula_8 is formula_9 Thus, away from 2, meaning if 2 is invertible, such as for the real numbers, one can divide by 2 and express every function as a sum of a symmetric function and an anti-symmetric function. The symmetrization of a symmetric map is its double, while the symmetrization of an alternating map is zero; similarly, the antisymmetrization of a symmetric map is zero, while the antisymmetrization of an anti-symmetric map is its double. Bilinear forms. The symmetrization and antisymmetrization of a bilinear map are bilinear; thus away from 2, every bilinear form is a sum of a symmetric form and a skew-symmetric form, and there is no difference between a symmetric form and a quadratic form. At 2, not every form can be decomposed into a symmetric form and a skew-symmetric form. For instance, over the integers, the associated symmetric form (over the rationals) may take half-integer values, while over formula_10 a function is skew-symmetric if and only if it is symmetric (as formula_11). This leads to the notion of ε-quadratic forms and ε-symmetric forms. Representation theory. In terms of representation theory: exchanging the two variables gives a representation of the symmetric group of order two on the space of functions in two variables, and the symmetrization and antisymmetrization pick out the components corresponding to the trivial representation and the sign representation. As the symmetric group of order two equals the cyclic group of order two (formula_12), this corresponds to the discrete Fourier transform of order two. "n" variables. More generally, given a function in formula_0 variables, one can symmetrize by taking the sum over all formula_13 permutations of the variables, or antisymmetrize by taking the sum over all formula_14 even permutations and subtracting the sum over all formula_14 odd permutations (except that when formula_15 the only permutation is even). Here symmetrizing a symmetric function multiplies by formula_13 – thus if formula_13 is invertible, such as when working over a field of characteristic formula_16 or formula_17, then these yield projections when divided by formula_18 In terms of representation theory, these only yield the subrepresentations corresponding to the trivial and sign representation, but for formula_19 there are others – see representation theory of the symmetric group and symmetric polynomials. Bootstrapping. Given a function in formula_20 variables, one can obtain a symmetric function in formula_0 variables by taking the sum over formula_20-element subsets of the variables. In statistics, this is referred to as bootstrapping, and the associated statistics are called U-statistics. Notes. <templatestyles src="Reflist/styles.css" />
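As an illustration of the "n"-variable definitions above, the following Python sketch (illustrative only; the example function is an arbitrary assumption) symmetrizes and antisymmetrizes a function by summing over all permutations of its arguments, using the sign of each permutation for the antisymmetric part.

```python
# Illustrative sketch of n-variable (anti)symmetrization by summing over permutations.
from itertools import permutations

def permutation_sign(perm):
    """Sign of a permutation of (0..n-1), computed from its inversion count."""
    inversions = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def symmetrize(f):
    """Sum of f over all permutations of its arguments."""
    def sym(*args):
        return sum(f(*(args[i] for i in p)) for p in permutations(range(len(args))))
    return sym

def antisymmetrize(f):
    """Signed sum of f over all permutations of its arguments."""
    def antisym(*args):
        return sum(permutation_sign(p) * f(*(args[i] for i in p))
                   for p in permutations(range(len(args))))
    return antisym

# Two-variable example f(x, y) = x * y**2, which is neither symmetric nor antisymmetric.
f = lambda x, y: x * y**2
print(symmetrize(f)(2, 3))      # f(2,3) + f(3,2) = 18 + 12 = 30
print(antisymmetrize(f)(2, 3))  # f(2,3) - f(3,2) = 18 - 12 = 6
print(antisymmetrize(f)(3, 3))  # 0: the antisymmetrization vanishes on equal arguments
```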
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "A" }, { "math_id": 3, "text": "\\alpha : S \\times S \\to A" }, { "math_id": 4, "text": "\\alpha(s,t) = \\alpha(t,s) \\quad \\text{ for all } s, t \\in S." }, { "math_id": 5, "text": "\\alpha(s,t) = - \\alpha(t,s) \\quad \\text{ for all } s, t \\in S." }, { "math_id": 6, "text": "(x,y) \\mapsto \\alpha(x,y) + \\alpha(y,x)." }, { "math_id": 7, "text": "(x,y) \\mapsto \\alpha(x,y) - \\alpha(y,x)." }, { "math_id": 8, "text": "\\alpha" }, { "math_id": 9, "text": "2 \\alpha." }, { "math_id": 10, "text": "\\Z / 2\\Z," }, { "math_id": 11, "text": "1 = - 1" }, { "math_id": 12, "text": "\\mathrm{S}_2 = \\mathrm{C}_2" }, { "math_id": 13, "text": "n!" }, { "math_id": 14, "text": "n!/2" }, { "math_id": 15, "text": "n \\leq 1," }, { "math_id": 16, "text": "0" }, { "math_id": 17, "text": "p > n," }, { "math_id": 18, "text": "n!." }, { "math_id": 19, "text": "n > 2" }, { "math_id": 20, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=8227451
8227721
Geothermobarometry
History of rock pressure and temperature Geothermobarometry is the methodology for estimating the pressure and temperature history of rocks (metamorphic, igneous or sedimentary). Geothermobarometry is a combination of "geobarometry", where the pressure attained (and retained) by a mineral assemblage is estimated, and "geothermometry", where the temperature attained (and retained) by a mineral assemblage is estimated. Methodology. Geothermobarometry relies upon understanding the temperature and pressure of the formation of minerals within rocks. There are several methods of measuring the temperature or pressure of mineral formation or re-equilibration, relying for example on chemical equilibrium between minerals, on the chemical composition and/or crystal-chemical state of order of individual minerals, on the residual stresses in solid inclusions, or on the densities of fluid inclusions. "Classic" (thermodynamic) thermobarometry relies upon the attainment of thermodynamic equilibrium between mineral pairs/assemblages that vary their compositions as a function of temperature and pressure. The distribution of component elements between the mineral assemblages is then analysed using a variety of analytical techniques, such as the electron microprobe (EM), the scanning electron microscope (SEM) and mass spectrometry (MS). There are numerous extra factors to consider, such as oxygen fugacity and water activity (roughly, the same as concentration), that must be accounted for using the appropriate methodological and analytical approach (e.g. Mössbauer spectroscopy, micro-Raman spectroscopy, infrared spectroscopy, etc.). Geobarometers are typically net-transfer reactions, which are sensitive to pressure but change little with temperature, such as the "garnet-plagioclase-muscovite-biotite" reaction, which involves a significant volume reduction at high pressure: formula_0 Since mineral assemblages at equilibrium are dependent on pressures and temperatures, by measuring the composition of the coexisting minerals, together with using suitable activity models, the P-T conditions experienced by the rock can be determined. Since the equilibria of different mineral assemblages plot as lines with different slopes in the P-T diagram, the P-T condition of the specimen can be obtained by finding the intersection of at least two such lines. Despite the usefulness of geothermobarometry, special attention should be paid to whether the mineral assemblages represent an equilibrium, to any occurrence of retrograde equilibrium in the rock, and to the appropriateness of the calibration of the results. Elastic thermobarometry is a method of determining the equilibrium pressure and temperature attained by a host mineral and its inclusion during the rock's history, from the excess pressures exhibited by mineral inclusions trapped inside host minerals. Upon exhumation and cooling, contrasting compressibilities and thermal expansivities induce differential strains (volume mismatches) between a host crystal and its inclusions. These strains can be quantified in situ using Raman spectroscopy or X-ray diffraction. Knowing the equations of state and elastic properties of the minerals, elastic thermobarometry inverts the measured strains to calculate the pressure-temperature conditions under which the stress state was uniform in the host and inclusion. These are commonly interpreted to represent the conditions of inclusion entrapment or the last elastic equilibration of the pair. 
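To illustrate the line-intersection approach described above for "classic" thermobarometry, the following Python sketch intersects two linear equilibria in P-T space. It is purely illustrative: the slopes and intercepts are invented assumptions and do not represent calibrated geothermometers or geobarometers.

```python
# Illustrative sketch only: intersect two equilibrium lines in P-T space.
# The coefficients are invented for the demonstration and do not represent any
# calibrated thermobarometer.

def intersect(a1, b1, a2, b2):
    """Intersection of P = a1 + b1*T and P = a2 + b2*T; returns (T, P)."""
    if b1 == b2:
        raise ValueError("Parallel lines do not constrain a unique P-T point.")
    T = (a2 - a1) / (b1 - b2)
    return T, a1 + b1 * T

# Two assumed equilibria with different slopes (P in kilobars, T in degrees Celsius).
T, P = intersect(a1=2.0, b1=0.010,    # equilibrium 1: P = 2.0 + 0.010*T
                 a2=-4.0, b2=0.020)   # equilibrium 2: P = -4.0 + 0.020*T
print(f"T = {T:.0f} C, P = {P:.1f} kbar")  # T = 600 C, P = 8.0 kbar
```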
Data on geothermometers and geobarometers are derived both from laboratory studies on synthetic (artificial) mineral assemblages and from natural systems for which other constraints are available. For example, one of the best known and most widely applicable geothermometers is the garnet-biotite relationship, where the relative proportions of Fe and Mg in garnet and biotite change with increasing temperature, so measurement of the compositions of these minerals to give the Fe-Mg distribution between them allows the temperature of crystallization to be calculated, given some assumptions. Assumptions in thermodynamic thermobarometry. In natural systems, the chemical reactions occur in open systems with unknown geological and chemical histories, and application of geothermobarometers relies on several assumptions that must hold in order for the laboratory data and natural compositions to relate in a valid fashion, chiefly that equilibrium was attained and subsequently preserved, and that the calibration used is appropriate to the rock in question. Assumptions in elastic thermobarometry. In natural systems, the elastic behaviour of minerals can easily be perturbed by high-temperature re-equilibration or by plastic or brittle deformation, leading to an irreversible change beyond the elastic regime that will prevent reconstruction of the "elastic history" of the pair. Techniques. Some commonly used techniques are described below. Geothermometers. Note that the Fe-Mg exchange thermometers are empirical (laboratory tested and calibrated) as well as calculated based on a theoretical thermodynamic understanding of the components and phases involved. The Ti-in-biotite thermometer is solely empirical and not well understood thermodynamically. Geobarometers. Various mineral assemblages depend more strongly upon pressure than temperature; for example, reactions which involve a large volume change. At high pressure, specific minerals assume lower volumes (therefore density increases, as the mass does not change); it is these minerals which are good indicators of paleo-pressure. Software. Software packages are available both for "classic" thermobarometry and for elastic thermobarometry. Clinopyroxene thermobarometry. The mineral clinopyroxene is used for temperature and pressure calculations of the magma that produced igneous rock containing this mineral. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathsf{\\underbrace\\ce{ Fe3Al2Si3O12 }_{\\text{Fe-Al garnet}}+\\underbrace\\ce{Ca3Al2Si3O12}_{\\text{Ca-Al garnet}}+\\underbrace\\ce{KAl3Si3O10(OH)2}_{\\text{muscovite}}\\ \\ce{<=>} \\ \\underbrace\\ce{3CaAl2Si2O8}_{\\text{plagioclase}}+\\underbrace\\ce{KFe3AlSi3O10(OH)2}_{\\text{biotite}}}" } ]
https://en.wikipedia.org/wiki?curid=8227721
822778
Baum–Welch algorithm
Algorithm in mathematics In electrical engineering, statistical computing and bioinformatics, the Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the unknown parameters of a hidden Markov model (HMM). It makes use of the forward-backward algorithm to compute the statistics for the expectation step. History. The Baum–Welch algorithm was named after its inventors Leonard E. Baum and Lloyd R. Welch. The algorithm and hidden Markov models were first described in a series of articles by Baum and his peers at the IDA Center for Communications Research, Princeton in the late 1960s and early 1970s. One of the first major applications of HMMs was to the field of speech processing. In the 1980s, HMMs were emerging as a useful tool in the analysis of biological systems and information, and in particular genetic information. They have since become an important tool in the probabilistic modeling of genomic sequences. Description. A hidden Markov model describes the joint probability of a collection of "hidden" and observed discrete random variables. It relies on the assumption that the "i"-th hidden variable given the ("i" − 1)-th hidden variable is independent of previous hidden variables, and the current observation variables depend only on the current hidden state. The Baum–Welch algorithm uses the well-known EM algorithm to find the maximum likelihood estimate of the parameters of a hidden Markov model given a set of observed feature vectors. Let formula_0 be a discrete hidden random variable with formula_1 possible values (i.e., we assume there are formula_1 states in total). We assume that formula_2 is independent of time formula_3, which leads to the definition of the time-independent stochastic transition matrix formula_4 The initial state distribution (i.e. when formula_5) is given by formula_6 The observation variables formula_7 can take one of formula_8 possible values. We also assume that the observation given the "hidden" state is time-independent. The probability of a certain observation formula_9 at time formula_3 for state formula_10 is given by formula_11 Taking into account all the possible values of formula_7 and formula_0, we obtain the formula_12 matrix formula_13 where formula_14 belongs to all the possible states and formula_9 belongs to all the observations. An observation sequence is given by formula_15. Thus we can describe a hidden Markov chain by formula_16. The Baum–Welch algorithm finds a local maximum for formula_17 (i.e. the HMM parameters formula_18 that maximize the probability of the observation). Algorithm. Set formula_19 with random initial conditions. They can also be set using prior information about the parameters if it is available; this can speed up the algorithm and also steer it toward the desired local maximum. Forward procedure. Let formula_20, the probability of seeing the observations formula_21 and being in state formula_22 at time formula_3. This is found recursively: formula_23 formula_24 Since this series converges exponentially to zero, the algorithm will numerically underflow for longer sequences. However, this can be avoided in a slightly modified algorithm by scaling formula_25 in the forward and formula_26 in the backward procedure below. Backward procedure. Let formula_27, that is, the probability of the ending partial sequence formula_28 given starting state formula_22 at time formula_3. We calculate formula_29 as: formula_30 formula_31 Update. 
We can now calculate the temporary variables, according to Bayes' theorem: formula_32 which is the probability of being in state formula_22 at time formula_3 given the observed sequence formula_33 and the parameters formula_18, and formula_34 which is the probability of being in state formula_22 and formula_35 at times formula_3 and formula_36 respectively, given the observed sequence formula_33 and parameters formula_18. The denominators of formula_37 and formula_38 are the same; they represent the probability of making the observation formula_33 given the parameters formula_18. The parameters of the hidden Markov model formula_18 can now be updated: formula_39 which is the expected frequency spent in state formula_22 at time formula_40. formula_41 which is the expected number of transitions from state "i" to state "j" compared to the expected total number of transitions away from state "i". To clarify, the number of transitions away from state "i" does not mean transitions to a different state "j", but to any state including itself. This is equivalent to the number of times state "i" is observed in the sequence from "t" = 1 to "t" = "T" − 1. formula_42 where formula_43 is an indicator function, and formula_44 is the expected number of times the output observations have been equal to formula_45 while in state formula_22 over the expected total number of times in state formula_22. These steps are now repeated iteratively until a desired level of convergence. Note: It is possible to over-fit a particular data set. That is, formula_46. The algorithm also does not guarantee a global maximum. Multiple sequences. The algorithm described thus far assumes a single observed sequence formula_47. However, in many situations, there are several sequences observed: formula_48. In this case, the information from all of the observed sequences must be used in the update of the parameters formula_49, formula_50, and formula_51. Assuming that you have computed formula_52 and formula_53 for each sequence formula_54, the parameters can now be updated: formula_55 formula_56 formula_57 where formula_58 is an indicator function. Example. Suppose we have a chicken from which we collect eggs at noon every day. Now whether or not the chicken has laid eggs for collection depends on some unknown factors that are hidden. We can however (for simplicity) assume that the chicken is always in one of two states, S1 and S2, that influence whether the chicken lays eggs, and that this state only depends on the state on the previous day. Now we don't know the state at the initial starting point, we don't know the transition probabilities between the two states and we don't know the probability that the chicken lays an egg given a particular state. To start we first guess the transition and emission matrices. We then take a set of observations (E = eggs, N = no eggs): N, N, N, N, N, E, E, N, N, N This gives us a set of observed transitions between days: NN, NN, NN, NN, NE, EE, EN, NN, NN The next step is to estimate a new transition matrix. For example, the probability of the sequence NN with the hidden state being S1 and then S2 is given by the following, formula_59 Thus the new estimate for the S1 to S2 transition is now formula_60 (such quantities are referred to as "pseudo probabilities"). 
We then calculate the remaining transition probabilities (S2 to S1, S2 to S2 and S1 to S1) in the same way and normalize so that they add to 1. This gives us the updated transition matrix. Next, we want to estimate a new emission matrix. The new estimate for the E emission coming from S1 is now formula_61. This allows us to calculate the emission matrix as described above in the algorithm, by adding up the probabilities for the respective observed sequences. We then repeat for the case that N came from S1, and for the cases that N and E came from S2, and normalize. To estimate the initial probabilities we assume all sequences start with the hidden state S1 and calculate the highest probability, and then repeat for S2. Again we then normalize to give an updated initial vector. Finally we repeat these steps until the resulting probabilities converge satisfactorily. Applications. Speech recognition. Hidden Markov Models were first applied to speech recognition by James K. Baker in 1975. Continuous speech recognition occurs by the following steps, modeled by an HMM. Feature analysis is first undertaken on temporal and/or spectral features of the speech signal. This produces an observation vector. The feature is then compared to all sequences of the speech recognition units. These units could be phonemes, syllables, or whole-word units. A lexicon decoding system is applied to constrain the paths investigated, so only words in the system's lexicon (word dictionary) are investigated. Similar to the lexicon decoding, the system path is further constrained by the rules of grammar and syntax. Finally, semantic analysis is applied and the system outputs the recognized utterance. A limitation of many HMM applications to speech recognition is that the current state only depends on the state at the previous time-step, which is unrealistic for speech as dependencies are often several time-steps in duration. The Baum–Welch algorithm also has extensive applications in solving HMMs used in the field of speech synthesis. Cryptanalysis. The Baum–Welch algorithm is often used to estimate the parameters of HMMs in deciphering hidden or noisy information and consequently is often used in cryptanalysis. In data security an observer would like to extract information from a data stream without knowing all the parameters of the transmission. This can involve reverse engineering a channel encoder. HMMs, and as a consequence the Baum–Welch algorithm, have also been used to identify spoken phrases in encrypted VoIP calls. In addition, HMM cryptanalysis is an important tool for automated investigations of cache-timing data. It allows for the automatic discovery of critical algorithm state, for example key values. Applications in bioinformatics. Finding genes. Prokaryotic. The GLIMMER (Gene Locator and Interpolated Markov ModelER) software was an early gene-finding program used for the identification of coding regions in prokaryotic DNA. GLIMMER uses Interpolated Markov Models (IMMs) to identify the coding regions and distinguish them from the noncoding DNA. 
The latest release (GLIMMER3) has been shown to have increased specificity and accuracy compared with its predecessors with regard to predicting translation initiation sites, demonstrating an average 99% accuracy in locating 3' locations compared to confirmed genes in prokaryotes. Eukaryotic. The GENSCAN webserver is a gene locator capable of analyzing eukaryotic sequences up to one million base-pairs (1 Mbp) long. GENSCAN utilizes a general inhomogeneous, three-periodic, fifth-order Markov model of DNA coding regions. Additionally, this model accounts for differences in gene density and structure (such as intron lengths) that occur in different isochores. While most integrated gene-finding software (at the time of GENSCAN's release) assumed input sequences contained exactly one gene, GENSCAN solves a general case where partial, complete, or multiple genes (or even no gene at all) are present. GENSCAN was shown to predict exon location exactly, with 90% accuracy and 80% specificity, compared to an annotated database. Copy-number variation detection. Copy-number variations (CNVs) are an abundant form of genome structure variation in humans. A discrete-valued bivariate HMM (dbHMM) was used, assigning chromosomal regions to seven distinct states: unaffected regions, deletions, duplications and four transition states. Solving this model using Baum-Welch demonstrated the ability to predict the location of CNV breakpoints to approximately 300 bp from micro-array experiments. This magnitude of resolution enables more precise correlations between different CNVs and across populations than previously possible, allowing the study of CNV population frequencies. It also demonstrated a direct inheritance pattern for a particular CNV. References. <templatestyles src="Reflist/styles.css" />
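To make the forward, backward and update equations described above concrete, here is a compact NumPy sketch of a single Baum–Welch re-estimation step. It is illustrative only: the toy observation sequence and starting parameters are assumptions, and it is not a substitute for a tested HMM library.

```python
# Illustrative NumPy sketch of one Baum-Welch (EM) step for a small discrete HMM,
# following the forward/backward/update equations described above.
import numpy as np

def baum_welch_step(A, B, pi, obs):
    """One EM update of (A, B, pi) for an observation index sequence `obs`."""
    N, K = B.shape                      # N hidden states, K observation symbols
    T = len(obs)

    # Forward pass: alpha[t, i] = P(y_1..y_t, X_t = i)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)

    # Backward pass: beta[t, i] = P(y_{t+1}..y_T | X_t = i)
    beta = np.zeros((T, N))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

    likelihood = alpha[-1].sum()        # P(Y | theta)

    # gamma[t, i] = P(X_t = i | Y); xi[t, i, j] = P(X_t = i, X_{t+1} = j | Y)
    gamma = alpha * beta / likelihood
    xi = (alpha[:-1, :, None] * A[None, :, :]
          * B[:, obs[1:]].T[:, None, :] * beta[1:, None, :]) / likelihood

    # Re-estimation (the update equations above)
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(K):
        new_B[:, k] = gamma[np.array(obs) == k].sum(axis=0) / gamma.sum(axis=0)
    return new_A, new_B, new_pi, likelihood

# Toy two-state, two-symbol example (0 = "no egg", 1 = "egg"); all values are assumed.
A = np.array([[0.5, 0.5], [0.3, 0.7]])
B = np.array([[0.3, 0.7], [0.8, 0.2]])
pi = np.array([0.2, 0.8])
obs = [0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
for _ in range(3):
    A, B, pi, ll = baum_welch_step(A, B, pi, obs)
print("current data likelihood P(Y | theta):", round(ll, 6))
```

In practice, the rescaling of the forward and backward variables mentioned above would also be added to avoid numerical underflow on long sequences.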
[ { "math_id": 0, "text": "X_t" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "P(X_t\\mid X_{t-1})" }, { "math_id": 3, "text": "t" }, { "math_id": 4, "text": "A=\\{a_{ij}\\}=P(X_t=j\\mid X_{t-1}=i)." }, { "math_id": 5, "text": "t=1" }, { "math_id": 6, "text": "\\pi_i=P(X_1 = i)." }, { "math_id": 7, "text": "Y_t" }, { "math_id": 8, "text": "K" }, { "math_id": 9, "text": "y_i" }, { "math_id": 10, "text": "X_t = j" }, { "math_id": 11, "text": "b_j(y_i)=P(Y_t=y_i\\mid X_t=j)." }, { "math_id": 12, "text": "N \\times K" }, { "math_id": 13, "text": "B=\\{b_j(y_i)\\}" }, { "math_id": 14, "text": "b_j" }, { "math_id": 15, "text": "Y= (Y_1=y_1,Y_2=y_2,\\ldots,Y_T=y_T)" }, { "math_id": 16, "text": "\\theta = (A,B,\\pi)" }, { "math_id": 17, "text": "\\theta^* = \\operatorname{arg\\,max}_\\theta P(Y\\mid\\theta)" }, { "math_id": 18, "text": "\\theta" }, { "math_id": 19, "text": "\\theta = (A, B, \\pi)" }, { "math_id": 20, "text": "\\alpha_i(t)=P(Y_1=y_1,\\ldots,Y_t=y_t,X_t=i\\mid\\theta)" }, { "math_id": 21, "text": "y_1,y_2,\\ldots,y_t" }, { "math_id": 22, "text": "i" }, { "math_id": 23, "text": "\\alpha_i(1)=\\pi_i b_i(y_1)," }, { "math_id": 24, "text": "\\alpha_i(t+1)=b_i(y_{t+1}) \\sum_{j=1}^N \\alpha_j(t) a_{ji}." }, { "math_id": 25, "text": "\\alpha" }, { "math_id": 26, "text": "\\beta" }, { "math_id": 27, "text": "\\beta_i(t)=P(Y_{t+1}=y_{t+1},\\ldots,Y_T=y_{T}\\mid X_t=i,\\theta)" }, { "math_id": 28, "text": "y_{t+1},\\ldots,y_T" }, { "math_id": 29, "text": "\\beta_i(t)" }, { "math_id": 30, "text": "\\beta_i(T)=1," }, { "math_id": 31, "text": "\\beta_i(t)=\\sum_{j=1}^N \\beta_j(t+1) a_{ij} b_j(y_{t+1})." }, { "math_id": 32, "text": "\\gamma_i(t)=P(X_t=i\\mid Y,\\theta) = \\frac{P(X_t=i,Y\\mid\\theta)}{P(Y\\mid\\theta)} = \\frac{\\alpha_i(t)\\beta_i(t)}{\\sum_{j=1}^N \\alpha_j(t)\\beta_j(t)}," }, { "math_id": 33, "text": "Y" }, { "math_id": 34, "text": "\\xi_{ij}(t)=P(X_t=i,X_{t+1}=j\\mid Y,\\theta) = \\frac{P(X_t=i,X_{t+1}=j,Y\\mid\\theta)}{P(Y\\mid\\theta)} = \\frac{\\alpha_i(t) a_{ij} \\beta_j(t+1) b_j(y_{t+1})}{\\sum_{k=1}^N \\sum_{w=1}^N \\alpha_k(t) a_{kw} \\beta_w(t+1) b_w(y_{t+1}) }, " }, { "math_id": 35, "text": "j" }, { "math_id": 36, "text": "t+1" }, { "math_id": 37, "text": "\\gamma_i(t)" }, { "math_id": 38, "text": "\\xi_{ij}(t)" }, { "math_id": 39, "text": "\\pi_i^* = \\gamma_i(1)," }, { "math_id": 40, "text": "1" }, { "math_id": 41, "text": "a_{ij}^*=\\frac{\\sum^{T-1}_{t=1}\\xi_{ij}(t)}{\\sum^{T-1}_{t=1}\\gamma_i(t)}," }, { "math_id": 42, "text": "b_i^*(v_k)=\\frac{\\sum^T_{t=1} 1_{y_t=v_k} \\gamma_i(t)}{\\sum^T_{t=1} \\gamma_i(t)}," }, { "math_id": 43, "text": "\n1_{y_t=v_k}=\n\\begin{cases}\n1 & \\text{if } y_t=v_k,\\\\\n0 & \\text{otherwise}\n\\end{cases}\n" }, { "math_id": 44, "text": "b_i^*(v_k)" }, { "math_id": 45, "text": "v_k" }, { "math_id": 46, "text": "P(Y\\mid\\theta_\\text{final}) > P(Y \\mid \\theta_\\text{true}) " }, { "math_id": 47, "text": "Y = y_1, \\ldots, y_N" }, { "math_id": 48, "text": "Y_1, \\ldots, Y_R" }, { "math_id": 49, "text": "A" }, { "math_id": 50, "text": "\\pi" }, { "math_id": 51, "text": "b" }, { "math_id": 52, "text": "\\gamma_{ir}(t)" }, { "math_id": 53, "text": "\\xi_{ijr}(t)" }, { "math_id": 54, "text": "y_{1,r},\\ldots,y_{N_r,r}" }, { "math_id": 55, "text": "\\pi_i^* = \\frac{\\sum_{r=1}^{R}\\gamma_{ir}(1)}{R}" }, { "math_id": 56, "text": "a_{ij}^*=\\frac{\\sum_{r=1}^{R} \\sum^{T-1}_{t=1}\\xi_{ijr}(t)}{\\sum_{r=1}^{R} \\sum^{T-1}_{t=1}\\gamma_{ir}(t)}," }, { "math_id": 57, "text": "b_i^*(v_k)=\\frac{\\sum_{r=1}^{R} \\sum^T_{t=1} 
1_{y_{tr}=v_k} \\gamma_{ir}(t)}{\\sum_{r=1}^{R} \\sum^T_{t=1} \\gamma_{ir}(t)}," }, { "math_id": 58, "text": "\n1_{y_{tr}=v_k}=\n\\begin{cases}\n1 & \\text{if } y_{t,r}=v_k,\\\\\n0 & \\text{otherwise}\n\\end{cases}\n" }, { "math_id": 59, "text": "P(S_1) \\cdot P(N|S_1) \\cdot P(S_1 \\rightarrow S_2) \\cdot P(N|S_2)." }, { "math_id": 60, "text": "\\frac{0.22}{2.4234}=0.0908" }, { "math_id": 61, "text": "\\frac{0.2394}{0.2730}=0.8769" } ]
https://en.wikipedia.org/wiki?curid=822778
82285
Mathematical proof
Reasoning for mathematical statements A mathematical proof is a deductive argument for a mathematical statement, showing that the stated assumptions logically guarantee the conclusion. The argument may use other previously established statements, such as theorems; but every proof can, in principle, be constructed using only certain basic or original assumptions known as axioms, along with the accepted rules of inference. Proofs are examples of exhaustive deductive reasoning which establish logical certainty, to be distinguished from empirical arguments or non-exhaustive inductive reasoning which establish "reasonable expectation". Presenting many cases in which the statement holds is not enough for a proof, which must demonstrate that the statement is true in "all" possible cases. A proposition that has not been proved but is believed to be true is known as a conjecture, or a hypothesis if frequently used as an assumption for further mathematical work. Proofs employ logic expressed in mathematical symbols, along with natural language which usually admits some ambiguity. In most mathematical literature, proofs are written in terms of rigorous informal logic. Purely formal proofs, written fully in symbolic language without the involvement of natural language, are considered in proof theory. The distinction between formal and informal proofs has led to much examination of current and historical mathematical practice, quasi-empiricism in mathematics, and so-called folk mathematics, oral traditions in the mainstream mathematical community or in other cultures. The philosophy of mathematics is concerned with the role of language and logic in proofs, and mathematics as a language. History and etymology. The word "proof" comes from the Latin "probare" (to test). Related modern words are English "probe", "probation", and "probability", Spanish "probar" (to smell or taste, or sometimes touch or test), Italian "provare" (to try), and German "probieren" (to try). The legal term "probity" means authority or credibility, the power of testimony to prove facts when given by persons of reputation or status. Plausibility arguments using heuristic devices such as pictures and analogies preceded strict mathematical proof. It is likely that the idea of demonstrating a conclusion first arose in connection with geometry, which originated in practical problems of land measurement. The development of mathematical proof is primarily the product of ancient Greek mathematics, and one of its greatest achievements. Thales (624–546 BCE) and Hippocrates of Chios (c. 470–410 BCE) gave some of the first known proofs of theorems in geometry. Eudoxus (408–355 BCE) and Theaetetus (417–369 BCE) formulated theorems but did not prove them. Aristotle (384–322 BCE) said definitions should describe the concept being defined in terms of other concepts already known. Mathematical proof was revolutionized by Euclid (300 BCE), who introduced the axiomatic method still in use today. It starts with undefined terms and axioms, propositions concerning the undefined terms which are assumed to be self-evidently true (from Greek "axios", something worthy). From this basis, the method proves theorems using deductive logic. Euclid's book, the "Elements", was read by anyone who was considered educated in the West until the middle of the 20th century. 
In addition to theorems of geometry, such as the Pythagorean theorem, the "Elements" also covers number theory, including a proof that the square root of two is irrational and a proof that there are infinitely many prime numbers. Further advances also took place in medieval Islamic mathematics. In the 10th century CE, the Iraqi mathematician Al-Hashimi worked with numbers as such, called "lines" but not necessarily considered as measurements of geometric objects, to prove algebraic propositions concerning multiplication, division, etc., including the existence of irrational numbers. An inductive proof for arithmetic sequences was introduced in the "Al-Fakhri" (1000) by Al-Karaji, who used it to prove the binomial theorem and properties of Pascal's triangle. Modern proof theory treats proofs as inductively defined data structures, not requiring an assumption that axioms are "true" in any sense. This allows parallel mathematical theories as formal models of a given intuitive concept, based on alternate sets of axioms, for example Axiomatic set theory and Non-Euclidean geometry. Nature and purpose. As practiced, a proof is expressed in natural language and is a rigorous argument intended to convince the audience of the truth of a statement. The standard of rigor is not absolute and has varied throughout history. A proof can be presented differently depending on the intended audience. To gain acceptance, a proof has to meet communal standards of rigor; an argument considered vague or incomplete may be rejected. The concept of proof is formalized in the field of mathematical logic. A formal proof is written in a formal language instead of natural language. A formal proof is a sequence of formulas in a formal language, starting with an assumption, and with each subsequent formula a logical consequence of the preceding ones. This definition makes the concept of proof amenable to study. Indeed, the field of proof theory studies formal proofs and their properties, the most famous and surprising being that almost all axiomatic systems can generate certain undecidable statements not provable within the system. The definition of a formal proof is intended to capture the concept of proofs as written in the practice of mathematics. The soundness of this definition amounts to the belief that a published proof can, in principle, be converted into a formal proof. However, outside the field of automated proof assistants, this is rarely done in practice. A classic question in philosophy asks whether mathematical proofs are analytic or synthetic. Kant, who introduced the analytic–synthetic distinction, believed mathematical proofs are synthetic, whereas Quine argued in his 1951 "Two Dogmas of Empiricism" that such a distinction is untenable. Proofs may be admired for their mathematical beauty. The mathematician Paul Erdős was known for describing proofs which he found to be particularly elegant as coming from "The Book", a hypothetical tome containing the most beautiful method(s) of proving each theorem. The book "Proofs from THE BOOK", published in 2003, is devoted to presenting 32 proofs its editors find particularly pleasing. Methods of proof. Direct proof. In direct proof, the conclusion is established by logically combining the axioms, definitions, and earlier theorems. For example, direct proof can be used to prove that the sum of two even integers is always even: Consider two even integers "x" and "y". 
Since they are even, they can be written as "x" = 2"a" and "y" = 2"b", respectively, for some integers "a" and "b". Then the sum is "x" + "y" = 2"a" + 2"b" = 2("a"+"b"). Therefore "x"+"y" has 2 as a factor and, by definition, is even. Hence, the sum of any two even integers is even. This proof uses the definition of even integers, the integer properties of closure under addition and multiplication, and the distributive property. Proof by mathematical induction. Despite its name, mathematical induction is a method of deduction, not a form of inductive reasoning. In proof by mathematical induction, a single "base case" is proved, and an "induction rule" is proved that establishes that any arbitrary case implies the next case. Since in principle the induction rule can be applied repeatedly (starting from the proved base case), it follows that all (usually infinitely many) cases are provable. This avoids having to prove each case individually. A variant of mathematical induction is proof by infinite descent, which can be used, for example, to prove the irrationality of the square root of two. A common application of proof by mathematical induction is to prove that a property known to hold for one number holds for all natural numbers: Let N = {1, 2, 3, 4, ...} be the set of natural numbers, and let "P"("n") be a mathematical statement involving the natural number "n" belonging to N such that (i) "P"(1) is true, and (ii) "P"("n"+1) is true whenever "P"("n") is true; then "P"("n") is true for all natural numbers "n". For example, we can prove by induction that all positive integers of the form 2"n" − 1 are odd. Let "P"("n") represent "2"n" − 1 is odd": (i) For "n" = 1, 2"n" − 1 = 2(1) − 1 = 1, and 1 is odd, since it leaves a remainder of 1 when divided by 2. Thus "P"(1) is true. (ii) For any "n", if 2"n" − 1 is odd ("P"("n")), then (2"n" − 1) + 2 must also be odd, because adding 2 to an odd number results in an odd number. But (2"n" − 1) + 2 = 2"n" + 1 = 2("n"+1) − 1, so 2("n"+1) − 1 is odd ("P"("n"+1)). So "P"("n") implies "P"("n"+1). Thus 2"n" − 1 is odd, for all positive integers "n". The shorter phrase "proof by induction" is often used instead of "proof by mathematical induction". Proof by contraposition. Proof by contraposition infers the statement "if "p" then "q"" by establishing the logically equivalent contrapositive statement: "if "not q" then "not p"". For example, contraposition can be used to establish that, given an integer formula_0, if formula_1 is even, then formula_0 is even: Suppose formula_0 is not even. Then formula_0 is odd. The product of two odd numbers is odd, hence formula_2 is odd. Thus formula_1 is not even. Thus, if formula_1 "is" even, the supposition must be false, so formula_3 has to be even. Proof by contradiction. In proof by contradiction, also known by the Latin phrase "reductio ad absurdum" (by reduction to the absurd), it is shown that if some statement is assumed true, a logical contradiction occurs, hence the statement must be false. A famous example involves the proof that formula_4 is an irrational number: Suppose that formula_4 were a rational number. Then it could be written in lowest terms as formula_5 where "a" and "b" are non-zero integers with no common factor. Thus, formula_6. Squaring both sides yields 2"b"2 = "a"2. Since the expression on the left is an integer multiple of 2, the right expression is by definition divisible by 2. That is, "a"2 is even, which implies that "a" must also be even, as seen in the proposition above (in the section on proof by contraposition). So we can write "a" = 2"c", where "c" is also an integer. 
Substitution into the original equation yields 2"b"2 = (2"c")2 = 4"c"2. Dividing both sides by 2 yields "b"2 = 2"c"2. But then, by the same argument as before, 2 divides "b"2, so "b" must be even. However, if "a" and "b" are both even, they have 2 as a common factor. This contradicts our previous statement that "a" and "b" have no common factor, so we must conclude that formula_4 is an irrational number. To paraphrase: if one could write formula_4 as a fraction, this fraction could never be written in lowest terms, since 2 could always be factored from numerator and denominator. Proof by construction. Proof by construction, or proof by example, is the construction of a concrete example with a property to show that something having that property exists. Joseph Liouville, for instance, proved the existence of transcendental numbers by constructing an explicit example. It can also be used to construct a counterexample to disprove a proposition that all elements have a certain property. Proof by exhaustion. In proof by exhaustion, the conclusion is established by dividing it into a finite number of cases and proving each one separately. The number of cases sometimes can become very large. For example, the first proof of the four color theorem was a proof by exhaustion with 1,936 cases. This proof was controversial because the majority of the cases were checked by a computer program, not by hand. Closed chain inference. "Main article: Closed chain inference" A closed chain inference shows that a collection of statements are pairwise equivalent. In order to prove that the statements formula_7 are each pairwise equivalent, proofs are given for the implications formula_8, formula_9, formula_10, formula_11 and formula_12. The pairwise equivalence of the statements then results from the transitivity of the material conditional. Probabilistic proof. A probabilistic proof is one in which an example is shown to exist, with certainty, by using methods of probability theory. Probabilistic proof, like proof by construction, is one of many ways to prove existence theorems. In the probabilistic method, one seeks an object having a given property, starting with a large set of candidates. One assigns a certain probability for each candidate to be chosen, and then proves that there is a non-zero probability that a chosen candidate will have the desired property. This does not specify which candidates have the property, but the probability could not be positive without at least one. A probabilistic proof is not to be confused with an argument that a theorem is 'probably' true, a 'plausibility argument'. The work toward the Collatz conjecture shows how far plausibility is from genuine proof, as does the disproof of the Mertens conjecture. While most mathematicians do not think that probabilistic evidence for the properties of a given object counts as a genuine mathematical proof, a few mathematicians and philosophers have argued that at least some types of probabilistic evidence (such as Rabin's probabilistic algorithm for testing primality) are as good as genuine mathematical proofs. Combinatorial proof. A combinatorial proof establishes the equivalence of different expressions by showing that they count the same object in different ways. Often a bijection between two sets is used to show that the expressions for their two sizes are equal. Alternatively, a double counting argument provides two different expressions for the size of a single set, again showing that the two expressions are equal. 
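The direct-proof example given earlier (the sum of two even integers is even) can also be written as a formal, machine-checkable proof of the kind handled by automated proof assistants. The following Lean 4 sketch is illustrative only; it assumes the Mathlib library for the "Even" predicate and the "ring" tactic.

```lean
-- Illustrative sketch (assumes Mathlib): a machine-checked version of the
-- direct-proof example that the sum of two even integers is even.
import Mathlib

example (x y : ℤ) (hx : Even x) (hy : Even y) : Even (x + y) := by
  obtain ⟨a, ha⟩ := hx   -- ha : x = a + a
  obtain ⟨b, hb⟩ := hy   -- hb : y = b + b
  -- The witness a + b plays the same role as the integer a + b in the prose proof.
  exact ⟨a + b, by rw [ha, hb]; ring⟩
```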
Nonconstructive proof. A nonconstructive proof establishes that a mathematical object with a certain property exists—without explaining how such an object can be found. Often, this takes the form of a proof by contradiction in which the nonexistence of the object is proved to be impossible. In contrast, a constructive proof establishes that a particular object exists by providing a method of finding it. The following famous example of a nonconstructive proof shows that there exist two irrational numbers "a" and "b" such that formula_13 is a rational number. This proof uses that formula_4 is irrational (an easy proof is known since Euclid), but not that formula_14 is irrational (this is true, but the proof is not elementary). Either formula_14 is a rational number and we are done (take formula_15), or formula_14 is irrational so we can write formula_16 and formula_17. This then gives formula_18, which is thus a rational number of the form formula_19 Statistical proofs in pure mathematics. The expression "statistical proof" may be used technically or colloquially in areas of pure mathematics, such as involving cryptography, chaotic series, and probabilistic number theory or analytic number theory. It is less commonly used to refer to a mathematical proof in the branch of mathematics known as mathematical statistics. See also the "Statistical proof using data" section below. Computer-assisted proofs. Until the twentieth century it was assumed that any proof could, in principle, be checked by a competent mathematician to confirm its validity. However, computers are now used both to prove theorems and to carry out calculations that are too long for any human or team of humans to check; the first proof of the four color theorem is an example of a computer-assisted proof. Some mathematicians are concerned that the possibility of an error in a computer program or a run-time error in its calculations calls the validity of such computer-assisted proofs into question. In practice, the chances of an error invalidating a computer-assisted proof can be reduced by incorporating redundancy and self-checks into calculations, and by developing multiple independent approaches and programs. Errors can never be completely ruled out in case of verification of a proof by humans either, especially if the proof contains natural language and requires deep mathematical insight to uncover the potential hidden assumptions and fallacies involved. Undecidable statements. A statement that is neither provable nor disprovable from a set of axioms is called undecidable (from those axioms). One example is the parallel postulate, which is neither provable nor refutable from the remaining axioms of Euclidean geometry. Mathematicians have shown there are many statements that are neither provable nor disprovable in Zermelo–Fraenkel set theory with the axiom of choice (ZFC), the standard system of set theory in mathematics (assuming that ZFC is consistent); see List of statements undecidable in ZFC. Gödel's (first) incompleteness theorem shows that many axiom systems of mathematical interest will have undecidable statements. Heuristic mathematics and experimental mathematics. While early mathematicians such as Eudoxus of Cnidus did not use proofs, from Euclid to the foundational mathematics developments of the late 19th and 20th centuries, proofs were an essential part of mathematics. 
With the increase in computing power in the 1960s, significant work began to be done investigating mathematical objects beyond the proof-theorem framework, in experimental mathematics. Early pioneers of these methods intended the work ultimately to be resolved into a classical proof-theorem framework, e.g. the early development of fractal geometry, which was ultimately so resolved. Related concepts. Visual proof. Although not a formal proof, a visual demonstration of a mathematical theorem is sometimes called a "proof without words". A classic example is a historic visual proof of the Pythagorean theorem in the case of the (3,4,5) triangle. Some illusory visual proofs, such as the missing square puzzle, can be constructed in a way that appears to prove a supposed mathematical fact but only does so by neglecting tiny errors (for example, supposedly straight lines which actually bend slightly) which are unnoticeable until the entire picture is closely examined, with lengths and angles precisely measured or calculated. Elementary proof. An elementary proof is a proof which only uses basic techniques. More specifically, the term is used in number theory to refer to proofs that make no use of complex analysis. For some time it was thought that certain theorems, like the prime number theorem, could only be proved using "higher" mathematics. However, over time, many of these results have been reproved using only elementary techniques. Two-column proof. A particular way of organising a proof using two parallel columns is often used as a mathematical exercise in elementary geometry classes in the United States. The proof is written as a series of lines in two columns. In each line, the left-hand column contains a proposition, while the right-hand column contains a brief explanation of how the corresponding proposition in the left-hand column is either an axiom, a hypothesis, or can be logically derived from previous propositions. The left-hand column is typically headed "Statements" and the right-hand column is typically headed "Reasons". Colloquial use of "mathematical proof". The expression "mathematical proof" is used by lay people to refer to using mathematical methods or arguing with mathematical objects, such as numbers, to demonstrate something about everyday life, or when data used in an argument is numerical. It is sometimes also used to mean a "statistical proof" (below), especially when used to argue from data. Statistical proof using data. "Statistical proof" from data refers to the application of statistics, data analysis, or Bayesian analysis to infer propositions regarding the probability of data. While "using" mathematical proof to establish theorems in statistics, it is usually not a mathematical proof in that the "assumptions" from which probability statements are derived require empirical evidence from outside mathematics to verify. In physics, in addition to statistical methods, "statistical proof" can refer to the specialized "mathematical methods of physics" applied to analyze data in a particle physics experiment or observational study in physical cosmology. "Statistical proof" may also refer to raw data or a convincing diagram involving data, such as scatter plots, when the data or diagram is adequately convincing without further analysis. Inductive logic proofs and Bayesian analysis. 
Proofs using inductive logic, while considered mathematical in nature, seek to establish propositions with a degree of certainty, which acts in a similar manner to probability, and may be less than full certainty. Inductive logic should not be confused with mathematical induction. Bayesian analysis uses Bayes' theorem to update a person's assessment of likelihoods of hypotheses when new evidence or information is acquired. Proofs as mental objects. Psychologism views mathematical proofs as psychological or mental objects. Mathematician philosophers, such as Leibniz, Frege, and Carnap, have variously criticized this view and attempted to develop a semantics for what they considered to be the language of thought, whereby standards of mathematical proof might be applied to empirical science. Influence of mathematical proof methods outside mathematics. Philosopher-mathematicians such as Spinoza have attempted to formulate philosophical arguments in an axiomatic manner, whereby mathematical proof standards could be applied to argumentation in general philosophy. Other mathematician-philosophers have tried to use standards of mathematical proof and reason, without empiricism, to arrive at statements outside of mathematics, but having the certainty of propositions deduced in a mathematical proof, such as Descartes' "cogito" argument. Ending a proof. Sometimes, the abbreviation "Q.E.D." is written to indicate the end of a proof. This abbreviation stands for "quod erat demonstrandum", which is Latin for "that which was to be demonstrated". A more common alternative is to use a square or a rectangle, such as □ or ∎, known as a "tombstone" or "halmos" after its eponym Paul Halmos. Often, "which was to be shown" is verbally stated when writing "QED", "□", or "∎" during an oral presentation. Unicode explicitly provides the "end of proof" character, U+220E (∎). References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": " x^2 " }, { "math_id": 2, "text": " x^2 = x\\cdot x " }, { "math_id": 3, "text": " x " }, { "math_id": 4, "text": "\\sqrt{2}" }, { "math_id": 5, "text": "\\sqrt{2} = {a\\over b}" }, { "math_id": 6, "text": "b\\sqrt{2} = a" }, { "math_id": 7, "text": "\\varphi_1,\\ldots,\\varphi_n" }, { "math_id": 8, "text": "\\varphi_1\\Rightarrow\\varphi_2" }, { "math_id": 9, "text": "\\varphi_2\\Rightarrow\\varphi_3" }, { "math_id": 10, "text": "\\dots" }, { "math_id": 11, "text": "\\varphi_{n-1}\\Rightarrow\\varphi_n" }, { "math_id": 12, "text": "\\varphi_{n}\\Rightarrow\\varphi_1" }, { "math_id": 13, "text": "a^b" }, { "math_id": 14, "text": "\\sqrt{2}^{\\sqrt{2}}" }, { "math_id": 15, "text": "a=b=\\sqrt{2}" }, { "math_id": 16, "text": "a=\\sqrt{2}^{\\sqrt{2}}" }, { "math_id": 17, "text": "b=\\sqrt{2}" }, { "math_id": 18, "text": "\\left (\\sqrt{2}^{\\sqrt{2}}\\right )^{\\sqrt{2}}=\\sqrt{2}^{2}=2" }, { "math_id": 19, "text": "a^b." } ]
https://en.wikipedia.org/wiki?curid=82285
82289
Composite number
Integer having a non-trivial divisor A composite number is a positive integer that can be formed by multiplying two smaller positive integers. Equivalently, it is a positive integer that has at least one divisor other than 1 and itself. Every positive integer is composite, prime, or the unit 1, so the composite numbers are exactly the numbers that are not prime and not a unit. For example, the integer 14 is a composite number because it is the product of the two smaller integers 2 × 7. In contrast, the integers 2 and 3 are not composite numbers because each of them can only be divided by one and itself. The composite numbers up to 150 are: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28, 30, 32, 33, 34, 35, 36, 38, 39, 40, 42, 44, 45, 46, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 60, 62, 63, 64, 65, 66, 68, 69, 70, 72, 74, 75, 76, 77, 78, 80, 81, 82, 84, 85, 86, 87, 88, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100, 102, 104, 105, 106, 108, 110, 111, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 128, 129, 130, 132, 133, 134, 135, 136, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 150. (sequence in the OEIS) Every composite number can be written as the product of two or more (not necessarily distinct) primes. For example, the composite number 299 can be written as 13 × 23, and the composite number 360 can be written as 2³ × 3² × 5; furthermore, this representation is unique up to the order of the factors. This fact is called the fundamental theorem of arithmetic. There are several known primality tests that can determine whether a number is prime or composite, without necessarily revealing the factorization of a composite input. Types. One way to classify composite numbers is by counting the number of prime factors. A composite number with two prime factors is a semiprime or 2-almost prime (the factors need not be distinct, hence squares of primes are included). A composite number with three distinct prime factors is a sphenic number. In some applications, it is necessary to differentiate between composite numbers with an odd number of distinct prime factors and those with an even number of distinct prime factors. For the latter formula_0 (where μ is the Möbius function and "x" is half the total of prime factors), while for the former formula_1 However, for prime numbers, the function also returns −1 and formula_2. For a number "n" with one or more repeated prime factors, formula_3. If "all" the prime factors of a number are repeated it is called a powerful number (all perfect powers are powerful numbers). If "none" of its prime factors are repeated, it is called squarefree. (All prime numbers and 1 are squarefree.) For example, in 72 = 2³ × 3² all the prime factors are repeated, so 72 is a powerful number, while in 42 = 2 × 3 × 7 none of the prime factors are repeated, so 42 is squarefree. Another way to classify composite numbers is by counting the number of divisors. All composite numbers have at least three divisors. In the case of squares of primes, those divisors are formula_4. A number "n" that has more divisors than any "x" < "n" is a highly composite number (though the first two such numbers are 1 and 2). Composite numbers have also been called "rectangular numbers", but that name can also refer to the pronic numbers, numbers that are the product of two consecutive integers. Yet another way to classify composite numbers is to determine whether all prime factors are either all below or all above some fixed (prime) number. 
Such numbers are called smooth numbers and rough numbers, respectively. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
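The classifications described above are easy to check mechanically. A minimal Python sketch (an illustration only, using naive trial division, so it is practical only for small inputs; the function names are ad hoc):

def prime_factors(n):
    """Return the prime factorization of n as a dictionary {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def classify(n):
    """Return the labels from the article that apply to n."""
    f = prime_factors(n)
    total = sum(f.values())                  # prime factors counted with multiplicity
    labels = []
    if total >= 2:
        labels.append("composite")
        if total == 2:
            labels.append("semiprime")       # two prime factors, not necessarily distinct
        if total == 3 and len(f) == 3:
            labels.append("sphenic")         # three distinct prime factors
    if f and all(e >= 2 for e in f.values()):
        labels.append("powerful")            # every prime factor repeated
    if all(e == 1 for e in f.values()):
        labels.append("squarefree")          # no prime factor repeated
    return labels

print(classify(72))   # ['composite', 'powerful']               since 72 = 2³ × 3²
print(classify(42))   # ['composite', 'sphenic', 'squarefree']  since 42 = 2 × 3 × 7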
[ { "math_id": 0, "text": "\\mu(n) = (-1)^{2x} = 1" }, { "math_id": 1, "text": "\\mu(n) = (-1)^{2x + 1} = -1." }, { "math_id": 2, "text": "\\mu(1) = 1" }, { "math_id": 3, "text": "\\mu(n) = 0" }, { "math_id": 4, "text": "\\{1, p, p^2\\}" } ]
https://en.wikipedia.org/wiki?curid=82289
8231140
Methanol reformer
A methanol reformer is a device used in chemical engineering, especially in the area of fuel cell technology, which can produce pure hydrogen gas and carbon dioxide by reacting a methanol and water (steam) mixture. formula_0 Technology. A mixture of water and methanol with a molar concentration ratio (water:methanol) of 1.0 - 1.5 is pressurized to approximately 20 bar, vaporized and heated to a temperature of 250 - 360 °C. The hydrogen that is created is separated through the use of Pressure swing adsorption or a hydrogen-permeable membrane made of polymer or a palladium alloy. There are two basic methods of conducting this process. With either design, not all of the hydrogen is removed from the product gases (raffinate). Since the remaining gas mixture still contains a significant amount of chemical energy, it is often mixed with air and burned to provide heat for the endothermic reforming reaction. Advantages and disadvantages. Methanol reformers are used as a component of stationary fuel cell systems or hydrogen fuel cell-powered vehicles (see Reformed methanol fuel cell). A prototype car, the NECAR 5, was introduced by Daimler-Chrysler in the year 2000. The primary advantage of a vehicle with a reformer is that it does not need a pressurized gas tank to store hydrogen fuel; instead methanol is stored as a liquid. The logistic implications of this are great; pressurized hydrogen is difficult to store and produce. Also, this could help ease the public's concern over the danger of hydrogen and thereby make fuel cell-powered vehicles more attractive. However, methanol, like gasoline, is toxic and (of course) flammable. The cost of the PdAg membrane and its susceptibility to damage by temperature changes provide obstacles to adoption. While hydrogen power produces energy without CO2, a methanol reformer creates the gas as a byproduct. Methanol (prepared from natural gas) that is used in an efficient fuel cell, however, releases less CO2 in the atmosphere than gasoline, in a net analysis.
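As a rough illustration of the stoichiometry of the reforming reaction above, the ideal hydrogen yield per kilogram of methanol can be estimated with a back-of-the-envelope Python sketch; it assumes complete conversion and ignores separation losses and the raffinate burned to heat the reformer, so real yields are lower:

# Ideal yield of CH3OH + H2O -> CO2 + 3 H2, per kilogram of methanol
M_CH3OH = 32.04     # g/mol, molar mass of methanol
M_H2 = 2.016        # g/mol, molar mass of hydrogen

mol_methanol = 1000.0 / M_CH3OH      # moles of CH3OH in 1 kg
mol_h2 = 3 * mol_methanol            # 3 mol H2 per mol CH3OH
kg_h2 = mol_h2 * M_H2 / 1000.0

print(round(mol_h2, 1), "mol H2, about", round(kg_h2, 3), "kg H2 per kg CH3OH")
# roughly 93.6 mol H2, i.e. about 0.19 kg of hydrogen per kilogram of methanol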
[ { "math_id": 0, "text": "\\mathrm{ CH_3OH_{(g)} + H_2O_{(g)} \\;\\longrightarrow\\; CO_2 + 3\\ H_2 \\qquad} \\Delta H_{R\\ 298}^0 = 49.2\\ \\mathrm{kJ/mol} " } ]
https://en.wikipedia.org/wiki?curid=8231140
8232682
Robust optimization
Robust optimization is a field of mathematical optimization theory that deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the value of the parameters of the problem itself and/or its solution. It is related to, but often distinguished from, probabilistic optimization methods such as chance-constrained optimization. History. The origins of robust optimization date back to the establishment of modern decision theory in the 1950s and the use of worst case analysis and Wald's maximin model as a tool for the treatment of severe uncertainty. It became a discipline of its own in the 1970s with parallel developments in several scientific and technological fields. Over the years, it has been applied in statistics, but also in operations research, electrical engineering, control theory, finance, portfolio management logistics, manufacturing engineering, chemical engineering, medicine, and computer science. In engineering problems, these formulations often take the name of "Robust Design Optimization", RDO or "Reliability Based Design Optimization", RBDO. Example 1. Consider the following linear programming problem formula_0 where formula_1 is a given subset of formula_2. What makes this a 'robust optimization' problem is the formula_3 clause in the constraints. Its implication is that for a pair formula_4 to be admissible, the constraint formula_5 must be satisfied by the worst formula_6 pertaining to formula_4, namely the pair formula_6 that maximizes the value of formula_7 for the given value of formula_4. If the parameter space formula_1 is finite (consisting of finitely many elements), then this robust optimization problem itself is a linear programming problem: for each formula_6 there is a linear constraint formula_5. If formula_1 is not a finite set, then this problem is a linear semi-infinite programming problem, namely a linear programming problem with finitely many (2) decision variables and infinitely many constraints. Classification. There are a number of classification criteria for robust optimization problems/models. In particular, one can distinguish between problems dealing with local and global models of robustness; and between probabilistic and non-probabilistic models of robustness. Modern robust optimization deals primarily with non-probabilistic models of robustness that are worst case oriented and as such usually deploy Wald's maximin models. Local robustness. There are cases where robustness is sought against small perturbations in a nominal value of a parameter. A very popular model of local robustness is the radius of stability model: formula_8 where formula_9 denotes the nominal value of the parameter, formula_10 denotes a ball of radius formula_11 centered at formula_9 and formula_12 denotes the set of values of formula_13 that satisfy given stability/performance conditions associated with decision formula_14. In words, the robustness (radius of stability) of decision formula_14 is the radius of the largest ball centered at formula_9 all of whose elements satisfy the stability requirements imposed on formula_14. The picture is this: where the rectangle formula_15 represents the set of all the values formula_13 associated with decision formula_14. Global robustness. Consider the simple abstract robust optimization problem formula_16 where formula_17 denotes the set of all "possible" values of formula_13 under consideration. 
This is a "global" robust optimization problem in the sense that the robustness constraint formula_18 represents all the "possible" values of formula_13. The difficulty is that such a "global" constraint can be too demanding in that there is no formula_19 that satisfies this constraint. But even if such an formula_19 exists, the constraint can be too "conservative" in that it yields a solution formula_19 that generates a very small payoff formula_20 that is not representative of the performance of other decisions in formula_21. For instance, there could be an formula_22 that only slightly violates the robustness constraint but yields a very large payoff formula_23. In such cases it might be necessary to relax a bit the robustness constraint and/or modify the statement of the problem. Example 2. Consider the case where the objective is to satisfy a constraint formula_24. where formula_19 denotes the decision variable and formula_13 is a parameter whose set of possible values in formula_17. If there is no formula_19 such that formula_25, then the following intuitive measure of robustness suggests itself: formula_26 where formula_27 denotes an appropriate measure of the "size" of set formula_28. For example, if formula_17 is a finite set, then formula_27 could be defined as the cardinality of set formula_28. In words, the robustness of decision is the size of the largest subset of formula_17 for which the constraint formula_29 is satisfied for each formula_13 in this set. An optimal decision is then a decision whose robustness is the largest. This yields the following robust optimization problem: formula_30 This intuitive notion of global robustness is not used often in practice because the robust optimization problems that it induces are usually (not always) very difficult to solve. Example 3. Consider the robust optimization problem formula_31 where formula_32 is a real-valued function on formula_33, and assume that there is no feasible solution to this problem because the robustness constraint formula_18 is too demanding. To overcome this difficulty, let formula_34 be a relatively small subset of formula_17 representing "normal" values of formula_13 and consider the following robust optimization problem: formula_35 Since formula_34 is much smaller than formula_17, its optimal solution may not perform well on a large portion of formula_17 and therefore may not be robust against the variability of formula_13 over formula_17. One way to fix this difficulty is to relax the constraint formula_29 for values of formula_13 outside the set formula_34 in a controlled manner so that larger violations are allowed as the distance of formula_13 from formula_34 increases. For instance, consider the relaxed robustness constraint formula_36 where formula_37 is a control parameter and formula_38 denotes the distance of formula_13 from formula_34. Thus, for formula_39 the relaxed robustness constraint reduces back to the original robustness constraint. This yields the following (relaxed) robust optimization problem: formula_40 The function formula_41 is defined in such a manner that formula_42 and formula_43 and therefore the optimal solution to the relaxed problem satisfies the original constraint formula_29 for all values of formula_13 in formula_34. It also satisfies the relaxed constraint formula_44 outside formula_34. Non-probabilistic robust optimization models. 
The dominating paradigm in this area of robust optimization is Wald's maximin model, namely formula_45 where the formula_46 represents the decision maker, the formula_47 represents Nature, namely uncertainty, formula_21 represents the decision space and formula_15 denotes the set of possible values of formula_13 associated with decision formula_14. This is the "classic" format of the generic model, and is often referred to as "minimax" or "maximin" optimization problem. The non-probabilistic (deterministic) model has been and is being extensively used for robust optimization especially in the field of signal processing. The equivalent mathematical programming (MP) of the classic format above is formula_48 Constraints can be incorporated explicitly in these models. The generic constrained classic format is formula_49 The equivalent constrained MP format is defined as: formula_50 Probabilistically robust optimization models. These models quantify the uncertainty in the "true" value of the parameter of interest by probability distribution functions. They have been traditionally classified as stochastic programming and stochastic optimization models. Recently, probabilistically robust optimization has gained popularity by the introduction of rigorous theories such as scenario optimization able to quantify the robustness level of solutions obtained by randomization. These methods are also relevant to data-driven optimization methods. Robust counterpart. The solution method to many robust program involves creating a deterministic equivalent, called the robust counterpart. The practical difficulty of a robust program depends on if its robust counterpart is computationally tractable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
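As a concrete illustration of the robust counterpart idea, when the uncertainty set of Example 1 above is finite, the robust counterpart is an ordinary linear program with one constraint per element of the set. A minimal Python sketch, assuming SciPy is available and using an arbitrarily chosen three-element uncertainty set (the particular values in P below are made up for illustration):

# Robust counterpart of Example 1 for a finite uncertainty set P:
# maximize 3x + 2y subject to x, y >= 0 and c*x + d*y <= 10 for every (c, d) in P.
from scipy.optimize import linprog

P = [(1.0, 1.0), (1.5, 0.5), (0.5, 2.0)]   # hypothetical uncertainty set

c_obj = [-3.0, -2.0]                  # linprog minimizes, so negate the objective
A_ub = [[c, d] for (c, d) in P]       # one linear constraint per scenario
b_ub = [10.0] * len(P)

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("robust solution:", res.x, "objective value:", -res.fun)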
[ { "math_id": 0, "text": " \\max_{x,y} \\ \\{3x + 2y\\} \\ \\ \\mathrm { subject \\ to }\\ \\ x,y\\ge 0; cx + dy \\le 10, \\forall (c,d)\\in P " }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "\\mathbb{R}^{2}" }, { "math_id": 3, "text": "\\forall (c,d)\\in P" }, { "math_id": 4, "text": "(x,y)" }, { "math_id": 5, "text": "cx + dy \\le 10" }, { "math_id": 6, "text": "(c,d)\\in P" }, { "math_id": 7, "text": "cx + dy" }, { "math_id": 8, "text": "\\hat{\\rho}(x,\\hat{u}):= \\max_{\\rho\\ge 0}\\ \\{\\rho: u\\in S(x), \\forall u\\in B(\\rho,\\hat{u})\\}" }, { "math_id": 9, "text": "\\hat{u}" }, { "math_id": 10, "text": "B(\\rho,\\hat{u})" }, { "math_id": 11, "text": "\\rho" }, { "math_id": 12, "text": "S(x)" }, { "math_id": 13, "text": "u" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "U(x)" }, { "math_id": 16, "text": "\\max_{x\\in X}\\ \\{f(x): g(x,u)\\le b, \\forall u\\in U\\}" }, { "math_id": 17, "text": "U" }, { "math_id": 18, "text": "g(x,u)\\le b, \\forall u\\in U" }, { "math_id": 19, "text": "x\\in X" }, { "math_id": 20, "text": "f(x)" }, { "math_id": 21, "text": "X" }, { "math_id": 22, "text": "x'\\in X" }, { "math_id": 23, "text": "f(x')" }, { "math_id": 24, "text": "g(x,u)\\le b," }, { "math_id": 25, "text": "g(x,u)\\le b,\\forall u\\in U" }, { "math_id": 26, "text": "\\rho(x):= \\max_{Y\\subseteq U} \\ \\{size(Y): g(x,u)\\le b, \\forall u\\in Y\\} \\ , \\ x\\in X" }, { "math_id": 27, "text": "size(Y)" }, { "math_id": 28, "text": "Y" }, { "math_id": 29, "text": "g(x,u)\\le b" }, { "math_id": 30, "text": "\\max_{x\\in X, Y\\subseteq U} \\ \\{size(Y): g(x,u) \\le b, \\forall u\\in Y\\}" }, { "math_id": 31, "text": "z(U):= \\max_{x\\in X}\\ \\{f(x): g(x,u)\\le b, \\forall u\\in U\\}" }, { "math_id": 32, "text": "g" }, { "math_id": 33, "text": "X\\times U" }, { "math_id": 34, "text": "\\mathcal{N}" }, { "math_id": 35, "text": "z(\\mathcal{N}):= \\max_{x\\in X}\\ \\{f(x): g(x,u)\\le b, \\forall u\\in \\mathcal{N}\\}" }, { "math_id": 36, "text": "g(x,u) \\le b + \\beta \\cdot dist(u,\\mathcal{N}) \\ , \\ \\forall u\\in U" }, { "math_id": 37, "text": "\\beta \\ge 0" }, { "math_id": 38, "text": "dist(u,\\mathcal{N})" }, { "math_id": 39, "text": "\\beta =0" }, { "math_id": 40, "text": "z(\\mathcal{N},U):= \\max_{x\\in X}\\ \\{f(x): g(x,u)\\le b + \\beta \\cdot dist(u,\\mathcal{N}) \\ , \\ \\forall u\\in U\\}" }, { "math_id": 41, "text": "dist" }, { "math_id": 42, "text": "dist(u,\\mathcal{N})\\ge 0,\\forall u\\in U" }, { "math_id": 43, "text": "dist(u,\\mathcal{N})= 0,\\forall u\\in \\mathcal{N}" }, { "math_id": 44, "text": "g(x,u)\\le b + \\beta \\cdot dist(u,\\mathcal{N})" }, { "math_id": 45, "text": "\\max_{x\\in X}\\min_{u\\in U(x)} f(x,u)" }, { "math_id": 46, "text": "\\max" }, { "math_id": 47, "text": "\\min" }, { "math_id": 48, "text": "\\max_{x\\in X,v\\in \\mathbb{R}} \\ \\{v: v\\le f(x,u), \\forall u\\in U(x)\\}" }, { "math_id": 49, "text": "\\max_{x\\in X}\\min_{u\\in U(x)} \\ \\{f(x,u): g(x,u)\\le b,\\forall u\\in U(x)\\}" }, { "math_id": 50, "text": "\\max_{x\\in X,v\\in \\mathbb{R}} \\ \\{v: v\\le f(x,u), g(x,u)\\le b, \\forall u\\in U(x)\\}" } ]
https://en.wikipedia.org/wiki?curid=8232682
82330
Electric generator
Device that converts other energy to electrical energy In electricity generation, a generator is a device that converts motion-based power (potential and kinetic energy) or fuel-based power (chemical energy) into electric power for use in an external circuit. Sources of mechanical energy include steam turbines, gas turbines, water turbines, internal combustion engines, wind turbines and even hand cranks. The first electromagnetic generator, the Faraday disk, was invented in 1831 by British scientist Michael Faraday. Generators provide nearly all the power for electrical grids. In addition to electricity- and motion-based designs, photovoltaic and fuel cell powered generators use solar power and hydrogen-based fuels, respectively, to generate electrical output. The reverse conversion of electrical energy into mechanical energy is done by an electric motor, and motors and generators are very similar. Many motors can generate electricity from mechanical energy. Terminology. Electromagnetic generators fall into one of two broad categories, dynamos and alternators. Mechanically, a generator consists of a rotating part and a stationary part which together form a magnetic circuit: One of these parts generates a magnetic field, the other has a wire winding in which the changing field induces an electric current: The armature can be on either the rotor or the stator, depending on the design, with the field coil or magnet on the other part. History. Before the connection between magnetism and electricity was discovered, electrostatic generators were invented. They operated on electrostatic principles, by using moving electrically charged belts, plates and disks that carried charge to a high potential electrode. The charge was generated using either of two mechanisms: electrostatic induction or the triboelectric effect. Such generators generated very high voltage and low current. Because of their inefficiency and the difficulty of insulating machines that produced very high voltages, electrostatic generators had low power ratings, and were never used for generation of commercially significant quantities of electric power. Their only practical applications were to power early X-ray tubes, and later in some atomic particle accelerators. Faraday disk generator. The operating principle of electromagnetic generators was discovered in the years of 1831–1832 by Michael Faraday. The principle, later called Faraday's law, is that an electromotive force is generated in an electrical conductor which encircles a varying magnetic flux. Faraday also built the first electromagnetic generator, called the Faraday disk; a type of homopolar generator, using a copper disc rotating between the poles of a horseshoe magnet. It produced a small DC voltage. This design was inefficient, due to self-cancelling counterflows of current in regions of the disk that were not under the influence of the magnetic field. While current was induced directly underneath the magnet, the current would circulate backwards in regions that were outside the influence of the magnetic field. This counterflow limited the power output to the pickup wires and induced waste heating of the copper disc. Later homopolar generators would solve this problem by using an array of magnets arranged around the disc perimeter to maintain a steady field effect in one current-flow direction. Another disadvantage was that the output voltage was very low, due to the single current path through the magnetic flux. 
Experimenters found that using multiple turns of wire in a coil could produce higher, more useful voltages. Since the output voltage is proportional to the number of turns, generators could be easily designed to produce any desired voltage by varying the number of turns. Wire windings became a basic feature of all subsequent generator designs. Jedlik and the self-excitation phenomenon. Independently of Faraday, Ányos Jedlik started experimenting in 1827 with the electromagnetic rotating devices which he called electromagnetic self-rotors. In the prototype of the single-pole electric starter (finished between 1852 and 1854) both the stationary and the revolving parts were electromagnetic. It was also the discovery of the principle of dynamo self-excitation, which replaced permanent magnet designs. He also may have formulated the concept of the dynamo in 1861 (before Siemens and Wheatstone) but did not patent it as he thought he was not the first to realize this. Direct current generators. A coil of wire rotating in a magnetic field produces a current which changes direction with each 180° rotation, an alternating current (AC). However many early uses of electricity required direct current (DC). In the first practical electric generators, called "dynamos", the AC was converted into DC with a "commutator", a set of rotating switch contacts on the armature shaft. The commutator reversed the connection of the armature winding to the circuit every 180° rotation of the shaft, creating a pulsing DC current. One of the first dynamos was built by Hippolyte Pixii in 1832. The dynamo was the first electrical generator capable of delivering power for industry. The Woolrich Electrical Generator of 1844, now in Thinktank, Birmingham Science Museum, is the earliest electrical generator used in an industrial process. It was used by the firm of Elkingtons for commercial electroplating. The modern dynamo, fit for use in industrial applications, was invented independently by Sir Charles Wheatstone, Werner von Siemens and Samuel Alfred Varley. Varley took out a patent on 24 December 1866, while Siemens and Wheatstone both announced their discoveries on 17 January 1867, the latter delivering a paper on his discovery to the Royal Society. The "dynamo-electric machine" employed self-powering electromagnetic field coils rather than permanent magnets to create the stator field. Wheatstone's design was similar to Siemens', with the difference that in the Siemens design the stator electromagnets were in series with the rotor, but in Wheatstone's design they were in parallel. The use of electromagnets rather than permanent magnets greatly increased the power output of a dynamo and enabled high power generation for the first time. This invention led directly to the first major industrial uses of electricity. For example, in the 1870s Siemens used electromagnetic dynamos to power electric arc furnaces for the production of metals and other materials. The dynamo machine that was developed consisted of a stationary structure, which provides the magnetic field, and a set of rotating windings which turn within that field. On larger machines the constant magnetic field is provided by one or more electromagnets, which are usually called field coils. Large power generation dynamos are now rarely seen due to the now nearly universal use of alternating current for power distribution. Before the adoption of AC, very large direct-current dynamos were the only means of power generation and distribution. 
AC has come to dominate due to the ability of AC to be easily transformed to and from very high voltages to permit low losses over large distances. Synchronous generators (alternating current generators). Through a series of discoveries, the dynamo was succeeded by many later inventions, especially the AC alternator, which was capable of generating alternating current. These machines are commonly known as synchronous generators (SGs). Synchronous machines are directly connected to the grid and need to be properly synchronized during startup. Moreover, they are excited with special control to enhance the stability of the power system. Alternating current generating systems were known in simple forms from Michael Faraday's original discovery of the magnetic induction of electric current. Faraday himself built an early alternator. His machine was a "rotating rectangle", whose operation was "heteropolar": each active conductor passed successively through regions where the magnetic field was in opposite directions. Large two-phase alternating current generators were built by a British electrician, J. E. H. Gordon, in 1882. The first public demonstration of an "alternator system" was given by William Stanley Jr., an employee of Westinghouse Electric in 1886. Sebastian Ziani de Ferranti established "Ferranti, Thompson and Ince" in 1882, to market his "Ferranti-Thompson Alternator", invented with the help of renowned physicist Lord Kelvin. His early alternators produced frequencies between 100 and 300 Hz. Ferranti went on to design the Deptford Power Station for the London Electric Supply Corporation in 1887 using an alternating current system. On its completion in 1891, it was the first truly modern power station, supplying high-voltage AC power that was then "stepped down" for consumer use on each street. This basic system remains in use today around the world. After 1891, polyphase alternators were introduced to supply currents of multiple differing phases. Later alternators were designed for varying alternating-current frequencies between sixteen and about one hundred hertz, for use with arc lighting, incandescent lighting and electric motors. Self-excitation. As the requirements for larger scale power generation increased, a new limitation arose: the magnetic fields available from permanent magnets. Diverting a small amount of the power generated by the generator to an electromagnetic field coil allowed the generator to produce substantially more power. This concept was dubbed self-excitation. The field coils are connected in series or parallel with the armature winding. When the generator first starts to turn, the small amount of remanent magnetism present in the iron core provides a magnetic field to get it started, generating a small current in the armature. This flows through the field coils, creating a larger magnetic field which generates a larger armature current. This "bootstrap" process continues until the magnetic field in the core levels off due to saturation and the generator reaches a steady state power output. Very large power station generators often utilize a separate smaller generator to excite the field coils of the larger. In the event of a severe widespread power outage where islanding of power stations has occurred, the stations may need to perform a black start to excite the fields of their largest generators, in order to restore customer power service. Specialised types of generator. Direct current (DC). A dynamo uses commutators to produce direct current. 
It is self-excited, i.e. its field electromagnets are powered by the machine's own output. Other types of DC generators use a separate source of direct current to energise their field magnets. Homopolar generator. A homopolar generator is a DC electrical generator comprising an electrically conductive disc or cylinder rotating in a plane perpendicular to a uniform static magnetic field. A potential difference is created between the center of the disc and the rim (or ends of the cylinder), the electrical polarity depending on the direction of rotation and the orientation of the field. It is also known as a unipolar generator, acyclic generator, disk dynamo, or Faraday disc. The voltage is typically low, on the order of a few volts in the case of small demonstration models, but large research generators can produce hundreds of volts, and some systems have multiple generators in series to produce an even larger voltage. They are unusual in that they can produce tremendous electric current, some more than a million amperes, because the homopolar generator can be made to have very low internal resistance. Magnetohydrodynamic (MHD) generator. A magnetohydrodynamic generator directly extracts electric power from moving hot gases through a magnetic field, without the use of rotating electromagnetic machinery. MHD generators were originally developed because the output of a plasma MHD generator is a flame, well able to heat the boilers of a steam power plant. The first practical design was the AVCO Mk. 25, developed in 1965. The U.S. government funded substantial development, culminating in a 25 MW demonstration plant in 1987. In the Soviet Union from 1972 until the late 1980s, the MHD plant U 25 was in regular utility operation on the Moscow power system with a rating of 25 MW, the largest MHD plant rating in the world at that time. MHD generators operated as a topping cycle are currently (2007) less efficient than combined cycle gas turbines. Alternating current (AC). Induction generator. Induction AC motors may be used as generators, turning mechanical energy into electric current. Induction generators operate by mechanically turning their rotor faster than the synchronous speed, giving negative slip. A regular AC asynchronous motor usually can be used as a generator, without any changes to its parts. Induction generators are useful in applications like minihydro power plants, wind turbines, or in reducing high-pressure gas streams to lower pressure, because they can recover energy with relatively simple controls. They do not require a separate exciter circuit, because the rotating magnetic field is provided by induction from the circuit to which they are connected. They also do not require speed governor equipment as they inherently operate at the connected grid frequency. An induction generator must be powered with a leading voltage; this is usually done by connection to an electrical grid, or by powering themselves with phase correcting capacitors. Linear electric generator. In the simplest form of linear electric generator, a sliding magnet moves back and forth through a solenoid, a coil of copper wire. An alternating current is induced in the wire, or loops of wire, by Faraday's law of induction each time the magnet slides through. This type of generator is used in the Faraday flashlight. Larger linear electricity generators are used in wave power schemes. Variable-speed constant-frequency generators. Grid-connected generators deliver power at a constant frequency. 
For generators of the synchronous or induction type, the primer mover speed turning the generator shaft must be at a particular speed (or narrow range of speed) to deliver power at the required utility frequency. Mechanical speed-regulating devices may waste a significant fraction of the input energy to maintain a required fixed frequency. Where it is impractical or undesired to tightly regulate the speed of the prime mover, doubly fed electric machines may be used as generators. With the assistance of power electronic devices, these can regulate the output frequency to a desired value over a wider range of generator shaft speeds. Alternatively, a standard generator can be used with no attempt to regulate frequency, and the resulting power converted to the desired output frequency with a rectifier and converter combination. Allowing a wider range of prime mover speeds can improve the overall energy production of an installation, at the cost of more complex generators and controls. For example, where a wind turbine operating at fixed frequency might be required to spill energy at high wind speeds, a variable speed system can allow recovery of energy contained during periods of high wind speed. Common use cases. Power station. A "power station", also known as a "power plant" or "powerhouse" and sometimes "generating station" or "generating plant", is an industrial facility that generates electricity. Most power stations contain one or more generators, or spinning machines converting mechanical power into three-phase electrical power. The relative motion between a magnetic field and a conductor creates an electric current. The energy source harnessed to turn the generator varies widely. Most power stations in the world burn fossil fuels such as coal, oil, and natural gas to generate electricity. Cleaner sources include nuclear power, and increasingly use renewables such as the sun, wind, waves and running water. Vehicular generators. Roadway vehicles. Motor vehicles require electrical energy to power their instrumentation, keep the engine itself operating, and recharge their batteries. Until about the 1960s motor vehicles tended to use DC generators (dynamos) with electromechanical regulators. Following the historical trend above and for many of the same reasons, these have now been replaced by alternators with built-in rectifier circuits. Bicycles. Bicycles require energy to power running lights and other equipment. There are two common kinds of generator in use on bicycles: bottle dynamos which engage the bicycle's tire on an as-needed basis, and hub dynamos which are directly attached to the bicycle's drive train. The name is conventional as they are small permanent-magnet alternators, not self-excited DC machines as are dynamos. Some electric bicycles are capable of regenerative braking, where the drive motor is used as a generator to recover some energy during braking. Sailboats. Sailing boats may use a water- or wind-powered generator to trickle-charge the batteries. A small propeller, wind turbine or turbine is connected to a low-power generator to supply currents at typical wind or cruising speeds. Recreational vehicles. Recreational vehicles need an extra power supply to power their onboard accessories, including air conditioning units, and refrigerators. An RV power plug is connected to the electric generator to obtain a stable power supply. Electric scooters. Electric scooters with regenerative braking have become popular all over the world. 
Engineers use kinetic energy recovery systems on the scooter to reduce energy consumption and increase its range up to 40-60% by simply recovering energy using the magnetic brake, which generates electric energy for further use. Modern vehicles reach speed up to 25–30 km/h and can run up to 35–40 km. Genset. An "engine-generator" is the combination of an electrical generator and an engine (prime mover) mounted together to form a single piece of self-contained equipment. The engines used are usually piston engines, but gas turbines can also be used, and there are even hybrid diesel-gas units, called dual-fuel units. Many different versions of engine-generators are available - ranging from very small portable petrol powered sets to large turbine installations. The primary advantage of engine-generators is the ability to independently supply electricity, allowing the units to serve as backup power sources. Human powered electrical generators. A generator can also be driven by human muscle power (for instance, in field radio station equipment). Human powered electric generators are commercially available, and have been the project of some DIY enthusiasts. Typically operated by means of pedal power, a converted bicycle trainer, or a foot pump, such generators can be practically used to charge batteries, and in some cases are designed with an integral inverter. An average "healthy human" can produce a steady 75 watts (0.1 horsepower) for a full eight hour period, while a "first class athlete" can produce approximately 298 watts (0.4 horsepower) for a similar period, at the end of which an undetermined period of rest and recovery will be required. At 298 watts, the average "healthy human" becomes exhausted within 10 minutes. The net electrical power that can be produced will be less, due to the efficiency of the generator. Portable radio receivers with a crank are made to reduce battery purchase requirements, see clockwork radio. During the mid 20th century, pedal powered radios were used throughout the Australian outback, to provide schooling (School of the Air), medical and other needs in remote stations and towns. Mechanical measurement. A tachogenerator is an electromechanical device which produces an output voltage proportional to its shaft speed. It may be used for a speed indicator or in a feedback speed control system. Tachogenerators are frequently used to power tachometers to measure the speeds of electric motors, engines, and the equipment they power. Generators generate voltage roughly proportional to shaft speed. With precise construction and design, generators can be built to produce very precise voltages for certain ranges of shaft speeds. Equivalent circuit. An equivalent circuit of a generator and load is shown in the adjacent diagram. The generator is represented by an abstract generator consisting of an ideal voltage source and an internal impedance. The generator's formula_0 and formula_1 parameters can be determined by measuring the winding resistance (corrected to operating temperature), and measuring the open-circuit and loaded voltage for a defined current load. This is the simplest model of a generator, further elements may need to be added for an accurate representation. In particular, inductance can be added to allow for the machine's windings and magnetic leakage flux, but a full representation can become much more complex than this. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
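The parameter determination described in the equivalent-circuit section above amounts to two simple measurements; a small Python sketch with made-up readings (the numerical values are purely illustrative):

# Estimate V_G and R_G from an open-circuit reading and one loaded reading.
V_open = 13.8     # volts with no load: equals the internal source voltage V_G
V_loaded = 12.9   # volts measured while delivering the load current I_load
I_load = 15.0     # amperes drawn by the defined current load

V_G = V_open
R_G = (V_open - V_loaded) / I_load    # internal voltage drop divided by load current

print("V_G =", V_G, "V, R_G =", round(R_G, 3), "ohm")   # R_G = 0.06 ohm here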
[ { "math_id": 0, "text": "V_\\text{G}" }, { "math_id": 1, "text": "R_\\text{G}" } ]
https://en.wikipedia.org/wiki?curid=82330
823302
Littlewood's law
Statistical law Littlewood's law states that a person can expect to experience events with odds of one in a million (referred to as a "miracle") at the rate of about one per month. It is named after the British mathematician John Edensor Littlewood. It seeks, among other things, to debunk one element of supposed supernatural phenomenology and is related to the more general law of truly large numbers, which states that with a sample size large enough, any outrageous (in terms of probability model of single sample) thing is likely to happen. History. An early formulation of the law appears in the 1953 collection of Littlewood's work, "A Mathematician's Miscellany". In the chapter "Large Numbers", Littlewood states: Improbabilities are apt to be overestimated. It is true that I should have been surprised in the past to learn that Professor Hardy [an atheist] had joined the Oxford Group [a Christian organization]. But one could not say the adverse chance was 10⁶ : 1. Mathematics is a dangerous profession; an appreciable proportion of us go mad, and then this particular event would be quite likely. [...] I sometimes ask the question: what is the most remarkable coincidence you have experienced, and is it, for "the" most remarkable one, remarkable? (With a lifetime to choose from, 10⁶ : 1 is a mere trifle.) Littlewood uses these remarks to illustrate that seemingly unlikely coincidences can be expected over long periods. He provides several anecdotes about improbable events that, given enough time, are likely to occur. For example, in the game of bridge, the probability that a player will be dealt 13 cards of the same suit is extremely low (Littlewood calculates it as formula_0). While such a deal might seem miraculous, if one estimates that formula_1 people in England each play an average of 30 bridge hands a week, it becomes quite expected that such a "miracle" would happen approximately once per year. This statement was later reformulated as Littlewood's law of miracles by Freeman Dyson, in a 2004 review of the book "Debunked! ESP, Telekinesis, and Other Pseudoscience", published in the "New York Review of Books": The paradoxical feature of the laws of probability is that they make unlikely events happen unexpectedly often. A simple way to state the paradox is Littlewood’s law of miracles. John Littlewood [...] defined a miracle as an event that has special importance when it occurs, but occurs with a probability of one in a million. This definition agrees with our commonsense understanding of the word “miracle.” Littlewood’s law of miracles states that in the course of any normal person’s life, miracles happen at a rate of roughly one per month. The proof of the law is simple. During the time that we are awake and actively engaged in living our lives, roughly for 8 hours each day, we see and hear things happening at a rate of about one per second. So the total number of events that happen to us is about 30,000 per day, or about a million per month. With few exceptions, these events are not miracles because they are insignificant. The chance of a miracle is about one per million events. Therefore we should expect about one miracle to happen, on the average, every month. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
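The arithmetic in Dyson's argument is easy to reproduce with his stated figures (one event per second, eight active hours per day, a one-in-a-million chance per event; the 30-day month is an approximation):

# Reproduce the estimate behind Littlewood's law of miracles.
events_per_second = 1
active_hours_per_day = 8
events_per_day = events_per_second * active_hours_per_day * 3600    # 28,800 ("about 30,000")
events_per_month = events_per_day * 30                              # 864,000 ("about a million")
miracle_probability = 1e-6

expected_miracles_per_month = events_per_month * miracle_probability
print(events_per_day, events_per_month, expected_miracles_per_month)  # 28800 864000 0.864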
[ { "math_id": 0, "text": "2.4 \\cdot 10^{-9}" }, { "math_id": 1, "text": "2 \\cdot 10^6" } ]
https://en.wikipedia.org/wiki?curid=823302
8233045
Loomis–Whitney inequality
Result in geometry In mathematics, the Loomis–Whitney inequality is a result in geometry, which in its simplest form, allows one to estimate the "size" of a formula_0-dimensional set by the sizes of its formula_1-dimensional projections. The inequality has applications in incidence geometry, the study of so-called "lattice animals", and other areas. The result is named after the American mathematicians Lynn Harold Loomis and Hassler Whitney, and was published in 1949. Statement of the inequality. Fix a dimension formula_2 and consider the projections formula_3 formula_4 For each 1 ≤ "j" ≤ "d", let formula_5 formula_6 Then the Loomis–Whitney inequality holds: formula_7 Equivalently, taking formula_8 we have formula_9 formula_10 implying formula_11 A special case. The Loomis–Whitney inequality can be used to relate the Lebesgue measure of a subset of Euclidean space formula_12 to its "average widths" in the coordinate directions. This is in fact the original version published by Loomis and Whitney in 1949 (the above is a generalization). Let "E" be some measurable subset of formula_12 and let formula_13 be the indicator function of the projection of "E" onto the "j"th coordinate hyperplane. It follows that for any point "x" in "E", formula_14 Hence, by the Loomis–Whitney inequality, formula_15 and hence formula_16 The quantity formula_17 can be thought of as the average width of formula_18 in the formula_19th coordinate direction. This interpretation of the Loomis–Whitney inequality also holds if we consider a finite subset of Euclidean space and replace Lebesgue measure by counting measure. The following proof is the original one &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof Overview: We prove it for unions of unit cubes on the integer grid, then take the continuum limit. When formula_20, it is obvious. Now induct on formula_21. The only trick is to use Hölder's inequality for counting measures. Enumerate the dimensions of formula_22 as formula_23. Given formula_24 unit cubes on the integer grid in formula_22, with their union being formula_25, we project them to the 0-th coordinate. Each unit cube projects to an integer unit interval on formula_26. Now define the following: By induction on each slice of formula_28, we have formula_41 Multiplying by formula_35, we have formula_42 Thus formula_43 Now, the sum-product can be written as an integral over counting measure, allowing us to perform Holder's inequality: formula_44 Plugging in formula_40, we get formula_45 Corollary. Since formula_46, we get a loose isoperimetric inequality: formula_47Iterating the theorem yields formula_48 and more generallyformula_49where formula_50 enumerates over all projections of formula_51 to its formula_52 dimensional subspaces. Generalizations. The Loomis–Whitney inequality is a special case of the Brascamp–Lieb inequality, in which the projections "πj" above are replaced by more general linear maps, not necessarily all mapping onto spaces of the same dimension. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
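The counting-measure form of the special case is easy to check numerically. A short Python sketch for d = 3, using an arbitrarily chosen finite set of lattice points (the set E below is made up for illustration):

# Check |E|^(d-1) <= |pi_1(E)| * |pi_2(E)| * |pi_3(E)| for a finite E in Z^3
# with counting measure; each projection drops one coordinate.
from itertools import product

E = {(x, y, z) for x, y, z in product(range(3), range(2), range(4)) if x + y + z <= 4}

d = 3
projections = [{tuple(p[k] for k in range(d) if k != j) for p in E} for j in range(d)]

lhs = len(E) ** (d - 1)
rhs = 1
for proj in projections:
    rhs *= len(proj)

print(lhs, rhs, lhs <= rhs)   # 400 528 True: the Loomis-Whitney bound holds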
[ { "math_id": 0, "text": "d" }, { "math_id": 1, "text": "(d-1)" }, { "math_id": 2, "text": "d\\ge 2" }, { "math_id": 3, "text": "\\pi_{j} : \\mathbb{R}^{d} \\to \\mathbb{R}^{d - 1}," }, { "math_id": 4, "text": "\\pi_{j} : x = (x_{1}, \\dots, x_{d}) \\mapsto \\hat{x}_{j} = (x_{1}, \\dots, x_{j - 1}, x_{j + 1}, \\dots, x_{d})." }, { "math_id": 5, "text": "g_{j} : \\mathbb{R}^{d - 1} \\to [0, + \\infty)," }, { "math_id": 6, "text": "g_{j} \\in L^{d - 1} (\\mathbb{R}^{d -1})." }, { "math_id": 7, "text": "\\left\\|\\prod_{j=1}^d g_j \\circ \\pi_j\\right\\|_{L^{1} (\\mathbb{R}^{d })}\n = \\int_{\\mathbb{R}^{d}} \\prod_{j = 1}^{d} g_{j} ( \\pi_{j} (x) ) \\, \\mathrm{d} x \\leq \\prod_{j = 1}^{d} \\| g_{j} \\|_{L^{d - 1} (\\mathbb{R}^{d - 1})}." }, { "math_id": 8, "text": "f_{j} (x) = g_{j} (x)^{d - 1}," }, { "math_id": 9, "text": "f_{j} : \\mathbb{R}^{d - 1} \\to [0, + \\infty)," }, { "math_id": 10, "text": "f_{j} \\in L^{1} (\\mathbb{R}^{d -1})" }, { "math_id": 11, "text": "\\int_{\\mathbb{R}^{d}} \\prod_{j = 1}^{d} f_{j} ( \\pi_{j} (x) )^{1 / (d - 1)} \\, \\mathrm{d} x \\leq \\prod_{j = 1}^{d} \\left( \\int_{\\mathbb{R}^{d - 1}} f_{j} (\\hat{x}_{j}) \\, \\mathrm{d} \\hat{x}_{j} \\right)^{1 / (d - 1)}." }, { "math_id": 12, "text": "\\mathbb{R}^{d}" }, { "math_id": 13, "text": "f_{j} = \\mathbf{1}_{\\pi_{j} (E)}" }, { "math_id": 14, "text": "\\prod_{j = 1}^{d} f_{j} (\\pi_{j} (x))^{1 / (d - 1)} = \\prod_{j = 1}^{d} 1 = 1." }, { "math_id": 15, "text": "\\int_{\\mathbb{R}^{d}} \\mathbf 1_E(x) \\, \\mathrm{d} x = | E | \\leq \\prod_{j = 1}^{d} | \\pi_{j} (E) |^{1 / (d - 1)}," }, { "math_id": 16, "text": "| E | \\geq \\prod_{j = 1}^{d} \\frac{| E |}{| \\pi_{j} (E) |}." }, { "math_id": 17, "text": "\\frac{| E |}{| \\pi_{j} (E) |}" }, { "math_id": 18, "text": "E" }, { "math_id": 19, "text": "j" }, { "math_id": 20, "text": "d=1, 2" }, { "math_id": 21, "text": "d+1" }, { "math_id": 22, "text": "\\R^{d+1}" }, { "math_id": 23, "text": "0, 1, ..., d" }, { "math_id": 24, "text": "N" }, { "math_id": 25, "text": "T" }, { "math_id": 26, "text": "\\R" }, { "math_id": 27, "text": "I_1, ..., I_k" }, { "math_id": 28, "text": "T_i" }, { "math_id": 29, "text": "I_i" }, { "math_id": 30, "text": "N_j" }, { "math_id": 31, "text": "\\pi_j(T)" }, { "math_id": 32, "text": "j = 0, 1, ..., d" }, { "math_id": 33, "text": "a_i" }, { "math_id": 34, "text": "\\sum_i a_i = N" }, { "math_id": 35, "text": "a_i \\leq N_0" }, { "math_id": 36, "text": "T_{ij}" }, { "math_id": 37, "text": "\\pi_j(T_i)" }, { "math_id": 38, "text": "j = 1, ..., d" }, { "math_id": 39, "text": "a_{ij}" }, { "math_id": 40, "text": "\\sum_i a_{ij} = N_j" }, { "math_id": 41, "text": "a_i^{d-1}\\leq \\prod_{j=1}^d a_{ij}" }, { "math_id": 42, "text": "a_i^{d}\\leq N_0\\prod_{j=1}^d a_{ij}" }, { "math_id": 43, "text": "N = \\sum_i a_i \\leq \\sum_i N_0^{1/d} \\prod_{j=1}^d a_{ij}^{1/d} = N_0^{1/d} \\sum_{i=1}^k\\prod_{j=1}^d a_{ij}^{1/d}" }, { "math_id": 44, "text": "\\sum_{i=1}^k\\prod_{j=1}^d a_{ij}^{1/d} = \\int_i \\prod_{j=1}^d a_{ij}^{1/d} = \\left\\|\\prod_{j=1}^d a_{\\cdot, j}^{1/d}\\right\\|_1 \\leq \\prod_j \\|a_{\\cdot, j}^{1/d}\\|_d=\\prod_{j=1}^d \\left(\\sum_{i=1}^k a_{ij}\\right)^{1/d}" }, { "math_id": 45, "text": "N^d \\leq \\prod_{j=0}^d N_j" }, { "math_id": 46, "text": "2 |\\pi_j(E)| \\leq |\\partial E|" }, { "math_id": 47, "text": "|E|^{d-1}\\leq 2^{-d}|\\partial E|^d" }, { "math_id": 48, "text": "| E | \\leq \\prod_{1 \\leq j < k \\leq d} | \\pi_{j}\\circ \\pi_k (E) |^{\\binom{d-1}{2}^{-1}}" }, { "math_id": 49, "text": "| E | \\leq \\prod_j | 
\\pi_{j} (E) |^{\\binom{d-1}{k}^{-1}}" }, { "math_id": 50, "text": "\\pi_j" }, { "math_id": 51, "text": "\\R^d" }, { "math_id": 52, "text": "d-k" } ]
https://en.wikipedia.org/wiki?curid=8233045
82341
Factorization
(Mathematical) decomposition into a product In mathematics, factorization (or factorisation, see English spelling differences) or factoring consists of writing a number or another mathematical object as a product of several "factors", usually smaller or simpler objects of the same kind. For example, 3 × 5 is an "integer factorization" of 15, and ("x" – 2)("x" + 2) is a "polynomial factorization" of "x"² – 4. Factorization is not usually considered meaningful within number systems possessing division, such as the real or complex numbers, since any formula_0 can be trivially written as formula_1 whenever formula_2 is not zero. However, a meaningful factorization for a rational number or a rational function can be obtained by writing it in lowest terms and separately factoring its numerator and denominator. Factorization was first considered by ancient Greek mathematicians in the case of integers. They proved the fundamental theorem of arithmetic, which asserts that every positive integer may be factored into a product of prime numbers, which cannot be further factored into integers greater than 1. Moreover, this factorization is unique up to the order of the factors. Although integer factorization is a sort of inverse to multiplication, it is much more difficult algorithmically, a fact which is exploited in the RSA cryptosystem to implement public-key cryptography. Polynomial factorization has also been studied for centuries. In elementary algebra, factoring a polynomial reduces the problem of finding its roots to finding the roots of the factors. Polynomials with coefficients in the integers or in a field possess the unique factorization property, a version of the fundamental theorem of arithmetic with prime numbers replaced by irreducible polynomials. In particular, a univariate polynomial with complex coefficients admits a unique (up to ordering) factorization into linear polynomials: this is a version of the fundamental theorem of algebra. In this case, the factorization can be done with root-finding algorithms. The case of polynomials with integer coefficients is fundamental for computer algebra. There are efficient computer algorithms for computing (complete) factorizations within the ring of polynomials with rational number coefficients (see factorization of polynomials). A commutative ring possessing the unique factorization property is called a unique factorization domain. There are number systems, such as certain rings of algebraic integers, which are not unique factorization domains. However, rings of algebraic integers satisfy the weaker property of Dedekind domains: ideals factor uniquely into prime ideals. "Factorization" may also refer to more general decompositions of a mathematical object into the product of smaller or simpler objects. For example, every function may be factored into the composition of a surjective function with an injective function. Matrices possess many kinds of matrix factorizations. For example, every matrix has a unique LUP factorization as a product of a lower triangular matrix L with all diagonal entries equal to one, an upper triangular matrix U, and a permutation matrix P; this is a matrix formulation of Gaussian elimination. Integers. By the fundamental theorem of arithmetic, every integer greater than 1 has a unique (up to the order of the factors) factorization into prime numbers, which are those integers which cannot be further factorized into the product of integers greater than one. 
For computing the factorization of an integer n, one needs an algorithm for finding a divisor q of n or deciding that n is prime. When such a divisor is found, the repeated application of this algorithm to the factors q and "n" / "q" eventually gives the complete factorization of n. For finding a divisor q of n, if any, it suffices to test all values of q such that 1 < "q" and "q"² ≤ "n". In fact, if "r" is a divisor of n such that "r"² > "n", then "q" = "n" / "r" is a divisor of n such that "q"² ≤ "n". If one tests the values of q in increasing order, the first divisor that is found is necessarily a prime number, and the "cofactor" "r" = "n" / "q" cannot have any divisor smaller than q. For getting the complete factorization, it thus suffices to continue the algorithm by searching for a divisor of r that is not smaller than q and not greater than the square root of "r". There is no need to test all values of q for applying the method. In principle, it suffices to test only prime divisors. This requires a table of prime numbers, which may be generated for example with the sieve of Eratosthenes. As the method of factorization does essentially the same work as the sieve of Eratosthenes, it is generally more efficient to test for a divisor only those numbers for which it is not immediately clear whether they are prime or not. Typically, one may proceed by testing 2, 3, 5, and the numbers greater than 5 whose last digit is 1, 3, 7, or 9 and whose sum of digits is not a multiple of 3. This method works well for factoring small integers, but is inefficient for larger integers. For example, Pierre de Fermat was unable to discover that the 6th Fermat number formula_3 is not a prime number. In fact, applying the above method would require more than , for a number that has 10 decimal digits. There are more efficient factoring algorithms. However, they remain relatively inefficient, as, with the present state of the art, one cannot factorize, even with the most powerful computers, a number of 500 decimal digits that is the product of two randomly chosen prime numbers. This ensures the security of the RSA cryptosystem, which is widely used for secure internet communication. Example. For factoring "n" = 1386 into primes: 1386 = 2 · 3² · 7 · 11. Expressions. Manipulating expressions is the basis of algebra. Factorization is one of the most important methods for expression manipulation for several reasons. If one can put an equation in a factored form "E"⋅"F" = 0, then the problem of solving the equation splits into two independent (and generally easier) problems "E" = 0 and "F" = 0. When an expression can be factored, the factors are often much simpler, and may thus offer some insight into the problem. For example, formula_4 having 16 multiplications, 4 subtractions and 3 additions, may be factored into the much simpler expression formula_5 with only two multiplications and three subtractions. Moreover, the factored form immediately gives the roots "x" = "a","b","c" of the polynomial. On the other hand, factorization is not always possible, and when it is possible, the factors are not always simpler. For example, formula_6 can be factored into two irreducible factors formula_7 and formula_8. Various methods have been developed for finding factorizations; some are described below. Solving algebraic equations may be viewed as a problem of polynomial factorization. 
In fact, the fundamental theorem of algebra can be stated as follows: every polynomial in x of degree "n" with complex coefficients may be factorized into "n" linear factors formula_9 for "i" = 1, ..., "n", where the "a""i"s are the roots of the polynomial. Even though the structure of the factorization is known in these cases, the "a""i"s generally cannot be computed in terms of radicals ("n"th roots), by the Abel–Ruffini theorem. In most cases, the best that can be done is computing approximate values of the roots with a root-finding algorithm. History of factorization of expressions. The systematic use of algebraic manipulations for simplifying expressions (more specifically equations) may be dated to 9th century, with al-Khwarizmi's book "The Compendious Book on Calculation by Completion and Balancing", which is titled with two such types of manipulation. However, even for solving quadratic equations, the factoring method was not used before Harriot's work published in 1631, ten years after his death. In his book "Artis Analyticae Praxis ad Aequationes Algebraicas Resolvendas", Harriot drew tables for addition, subtraction, multiplication and division of monomials, binomials, and trinomials. Then, in a second section, he set up the equation "aa" − "ba" + "ca" = + "bc", and showed that this matches the form of multiplication he had previously provided, giving the factorization ("a" − "b")("a" + "c"). General methods. The following methods apply to any expression that is a sum, or that may be transformed into a sum. Therefore, they are most often applied to polynomials, though they also may be applied when the terms of the sum are not monomials, that is, the terms of the sum are a product of variables and constants. Common factor. It may occur that all terms of a sum are products and that some factors are common to all terms. In this case, the distributive law allows factoring out this common factor. If there are several such common factors, it is preferable to divide out the greatest such common factor. Also, if there are integer coefficients, one may factor out the greatest common divisor of these coefficients. For example, formula_10 since 2 is the greatest common divisor of 6, 8, and 10, and formula_11 divides all terms. Grouping. Grouping terms may allow using other methods for getting a factorization. For example, to factor formula_12 one may remark that the first two terms have a common factor x, and the last two terms have the common factor y. Thus formula_13 Then a simple inspection shows the common factor "x" + 5, leading to the factorization formula_14 In general, this works for sums of 4 terms that have been obtained as the product of two binomials. Although not frequently, this may work also for more complicated examples. Adding and subtracting terms. Sometimes, some term grouping reveals part of a recognizable pattern. It is then useful to add and subtract terms to complete the pattern. A typical use of this is the completing the square method for getting the quadratic formula. Another example is the factorization of formula_15 If one introduces the non-real square root of –1, commonly denoted i, then one has a difference of squares formula_16 However, one may also want a factorization with real number coefficients. 
By adding and subtracting formula_17 and grouping three terms together, one may recognize the square of a binomial: formula_18 Subtracting and adding formula_19 also yields the factorization: formula_20 These factorizations work not only over the complex numbers, but also over any field, where either –1, 2 or –2 is a square. In a finite field, the product of two non-squares is a square; this implies that the polynomial formula_21 which is irreducible over the integers, is reducible modulo every prime number. For example, formula_22 formula_23since formula_24 formula_25since formula_26 formula_27since formula_28 Recognizable patterns. Many identities provide an equality between a sum and a product. The above methods may be used for letting the sum side of some identity appear in an expression, which may therefore be replaced by a product. Below are identities whose left-hand sides are commonly used as patterns (this means that the variables E and F that appear in these identities may represent any subexpression of the expression that has to be factorized). formula_29 For example, formula_30 formula_31 formula_32 formula_33 In the following identities, the factors may often be further factorized: *;Difference, even exponent formula_34 *;Difference, even or odd exponent formula_35 This is an example showing that the factors may be much larger than the sum that is factorized. *;Sum, odd exponent formula_36 (obtained by changing F by –"F" in the preceding formula) *;Sum, even exponent If the exponent is a power of two then the expression cannot, in general, be factorized without introducing complex numbers (if E and F contain complex numbers, this may be not the case). If "n" has an odd divisor, that is if "n" = "pq" with p odd, one may use the preceding formula (in "Sum, odd exponent") applied to formula_37 formula_38 The binomial theorem supplies patterns that can easily be recognized from the integers that appear in them In low degree: formula_39 formula_40 formula_41 formula_42 More generally, the coefficients of the expanded forms of formula_43 and formula_44 are the binomial coefficients, that appear in the "n"th row of Pascal's triangle. Roots of unity. The nth roots of unity are the complex numbers each of which is a root of the polynomial formula_45 They are thus the numbers formula_46 for formula_47 It follows that for any two expressions E and F, one has: formula_48 formula_49 formula_50 If E and F are real expressions, and one wants real factors, one has to replace every pair of complex conjugate factors by its product. As the complex conjugate of formula_51 is formula_52 and formula_53 one has the following real factorizations (one passes from one to the other by changing k into "n" – "k" or "n" + 1 – "k", and applying the usual trigonometric formulas: formula_54 formula_55 The cosines that appear in these factorizations are algebraic numbers, and may be expressed in terms of radicals (this is possible because their Galois group is cyclic); however, these radical expressions are too complicated to be used, except for low values of n. For example, formula_56 formula_57 formula_58 Often one wants a factorization with rational coefficients. Such a factorization involves cyclotomic polynomials. 
To express rational factorizations of sums and differences or powers, we need a notation for the homogenization of a polynomial: if formula_59 its "homogenization" is the bivariate polynomial formula_60 Then, one has formula_61 formula_62 where the products are taken over all divisors of n, or all divisors of 2"n" that do not divide n, and formula_63 is the nth cyclotomic polynomial. For example, formula_64 formula_65 since the divisors of 6 are 1, 2, 3, 6, and the divisors of 12 that do not divide 6 are 4 and 12. Polynomials. For polynomials, factorization is strongly related with the problem of solving algebraic equations. An algebraic equation has the form formula_66 where "P"("x") is a polynomial in x with formula_67 A solution of this equation (also called a root of the polynomial) is a value r of x such that formula_68 If formula_69 is a factorization of "P"("x") = 0 as a product of two polynomials, then the roots of "P"("x") are the union of the roots of "Q"("x") and the roots of "R"("x"). Thus solving "P"("x") = 0 is reduced to the simpler problems of solving "Q"("x") = 0 and "R"("x") = 0. Conversely, the factor theorem asserts that, if r is a root of "P"("x") = 0, then "P"("x") may be factored as formula_70 where "Q"("x") is the quotient of Euclidean division of "P"("x") = 0 by the linear (degree one) factor "x" – "r". If the coefficients of "P"("x") are real or complex numbers, the fundamental theorem of algebra asserts that "P"("x") has a real or complex root. Using the factor theorem recursively, it results that formula_71 where formula_72 are the real or complex roots of P, with some of them possibly repeated. This complete factorization is unique up to the order of the factors. If the coefficients of "P"("x") are real, one generally wants a factorization where factors have real coefficients. In this case, the complete factorization may have some quadratic (degree two) factors. This factorization may easily be deduced from the above complete factorization. In fact, if "r" = "a" + "ib" is a non-real root of "P"("x"), then its complex conjugate "s" = "a" - "ib" is also a root of "P"("x"). So, the product formula_73 is a factor of "P"("x") with real coefficients. Repeating this for all non-real factors gives a factorization with linear or quadratic real factors. For computing these real or complex factorizations, one needs the roots of the polynomial, which may not be computed exactly, and only approximated using root-finding algorithms. In practice, most algebraic equations of interest have integer or rational coefficients, and one may want a factorization with factors of the same kind. The fundamental theorem of arithmetic may be generalized to this case, stating that polynomials with integer or rational coefficients have the unique factorization property. More precisely, every polynomial with rational coefficients may be factorized in a product formula_74 where q is a rational number and formula_75 are non-constant polynomials with integer coefficients that are irreducible and primitive; this means that none of the formula_76 may be written as the product two polynomials (with integer coefficients) that are neither 1 nor –1 (integers are considered as polynomials of degree zero). Moreover, this factorization is unique up to the order of the factors and the signs of the factors. There are efficient algorithms for computing this factorization, which are implemented in most computer algebra systems. See Factorization of polynomials. 
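As an illustration of the computer algebra systems just mentioned, the following minimal sketch uses the SymPy library (assuming it is available). It checks that x⁶ − 1 is the product of the cyclotomic polynomials of the divisors of 6, and computes a factorization over the rationals as a rational content times primitive irreducible polynomials with integer coefficients. The expected outputs are shown as comments and may be formatted slightly differently by other SymPy versions.

```python
from sympy import symbols, divisors, cyclotomic_poly, prod, expand, factor_list

x = symbols('x')

# x**6 - 1 is the product of the cyclotomic polynomials Phi_d(x) for d dividing 6
phis = [cyclotomic_poly(d, x) for d in divisors(6)]
print(phis)                    # expected: [x - 1, x + 1, x**2 + x + 1, x**2 - x + 1]
print(expand(prod(phis)))      # expected: x**6 - 1

# Factorization over the rationals: a constant times
# primitive irreducible polynomials with integer coefficients
print(factor_list(6*x**4 - 6*x**2 - 12))
# expected: (6, [(x**2 - 2, 1), (x**2 + 1, 1)])
```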
Unfortunately, these algorithms are too complicated to use for paper-and-pencil computations. Besides the heuristics above, only a few methods are suitable for hand computations; these generally work only for polynomials of low degree, with few nonzero coefficients. The main such methods are described in the next subsections. Primitive-part & content factorization. Every polynomial with rational coefficients may be factorized, in a unique way, as the product of a rational number and a polynomial with integer coefficients, which is primitive (that is, the greatest common divisor of the coefficients is 1), and has a positive leading coefficient (coefficient of the term of the highest degree). For example: formula_77 formula_78 In this factorization, the rational number is called the content, and the primitive polynomial is the primitive part. The computation of this factorization may be done as follows: firstly, reduce all coefficients to a common denominator, for getting the quotient by an integer q of a polynomial with integer coefficients. Then one divides out the greatest common divisor p of the coefficients of this polynomial for getting the primitive part, the content being formula_79 Finally, if needed, one changes the signs of p and all coefficients of the primitive part. This factorization may produce a result that is larger than the original polynomial (typically when there are many coprime denominators), but, even when this is the case, the primitive part is generally easier to manipulate for further factorization. Using the factor theorem. The factor theorem states that, if r is a root of a polynomial formula_80 meaning "P"("r") = 0, then there is a factorization formula_70 where formula_81 with formula_82. Then polynomial long division or synthetic division gives: formula_83 This may be useful when one knows or can guess a root of the polynomial. For example, for formula_84 one may easily see that the sum of its coefficients is 0, so "r" = 1 is a root. As "r" + 0 = 1, and formula_85 one has formula_86 Rational roots. For polynomials with rational number coefficients, one may search for roots which are rational numbers. Primitive part-content factorization (see above) reduces the problem of searching for rational roots to the case of polynomials with integer coefficients having no non-trivial common divisor. If formula_87 is a rational root of such a polynomial formula_80 the factor theorem shows that one has a factorization formula_88 where both factors have integer coefficients (the fact that Q has integer coefficients results from the above formula for the quotient of "P"("x") by formula_89). Comparing the coefficients of degree n and the constant coefficients in the above equality shows that, if formula_90 is a rational root in reduced form, then q is a divisor of formula_91 and p is a divisor of formula_92 Therefore, there is a finite number of possibilities for p and q, which can be systematically examined. For example, if the polynomial formula_93 has a rational root formula_90 with "q" > 0, then p must divide 6; that is formula_94 and q must divide 2, that is formula_95 Moreover, if "x" < 0, all terms of the polynomial are negative, and, therefore, a root cannot be negative. That is, one must have formula_96 A direct computation shows that only formula_97 is a root, so there can be no other rational root. Applying the factor theorem leads finally to the factorization formula_98 
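A minimal Python sketch of the rational-root test just described (the helper names are illustrative, and the polynomial is assumed to have integer coefficients with nonzero leading and constant terms): it enumerates the finitely many candidates p/q exactly with fractions, then uses synthetic division to obtain the cofactor.

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def evaluate(coeffs, v):
    """Evaluate a_0*x**n + ... + a_n at x = v (Horner's scheme)."""
    result = 0
    for c in coeffs:
        result = result * v + c
    return result

def rational_roots(coeffs):
    """Rational roots of a polynomial with integer coefficients
    [a_0, ..., a_n]: every root p/q in lowest terms has
    p dividing a_n and q dividing a_0."""
    roots = set()
    for p in divisors(coeffs[-1]):
        for q in divisors(coeffs[0]):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if evaluate(coeffs, cand) == 0:
                    roots.add(cand)
    return roots

def synthetic_division(coeffs, r):
    """Divide the polynomial by (x - r); return quotient coefficients and remainder."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + out[-1] * r)
    return out[:-1], out[-1]

P = [2, -7, 10, -6]                 # 2x^3 - 7x^2 + 10x - 6
print(rational_roots(P))            # {Fraction(3, 2)}
quotient, remainder = synthetic_division(P, Fraction(3, 2))
print(quotient, remainder)          # [2, -4, 4] and 0, i.e. (x - 3/2)(2x^2 - 4x + 4)
```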
Quadratic ac method. The above method may be adapted for quadratic polynomials, leading to the "ac method" of factorization. Consider the quadratic polynomial formula_99 with integer coefficients. If it has a rational root, its denominator must divide "a" evenly and it may be written as a possibly reducible fraction formula_100 By Vieta's formulas, the other root formula_101 is formula_102 with formula_103 Thus the second root is also rational, and Vieta's second formula formula_104 gives formula_105 that is formula_106 Checking all pairs of integers whose product is "ac" gives the rational roots, if any. In summary, if formula_107 has rational roots, there are integers r and s such that formula_108 and formula_109 (a finite number of cases to test), and the roots are formula_110 and formula_111 In other words, one has the factorization formula_112 For example, let us consider the quadratic polynomial formula_113 Inspection of the factors of "ac" = 36 leads to 4 + 9 = 13 = "b", giving the two roots formula_114 and the factorization formula_115 Using formulas for polynomial roots. Any univariate quadratic polynomial formula_116 can be factored using the quadratic formula: formula_117 where formula_118 and formula_119 are the two roots of the polynomial. If "a, b, c" are all real, the factors are real if and only if the discriminant formula_120 is non-negative. Otherwise, the quadratic polynomial cannot be factorized into non-constant real factors. The quadratic formula is valid when the coefficients belong to any field of characteristic different from two, and, in particular, for coefficients in a finite field with an odd number of elements. There are also formulas for roots of cubic and quartic polynomials, which are, in general, too complicated for practical use. The Abel–Ruffini theorem shows that there are no general root formulas in terms of radicals for polynomials of degree five or higher. Using relations between roots. It may occur that one knows some relationship between the roots of a polynomial and its coefficients. Using this knowledge may help in factoring the polynomial and finding its roots. Galois theory is based on a systematic study of the relations between roots and coefficients, which include Vieta's formulas. Here, we consider the simpler case where two roots formula_121 and formula_122 of a polynomial formula_123 satisfy the relation formula_124 where Q is a polynomial. This implies that formula_121 is a common root of formula_125 and formula_126 It is therefore a root of the greatest common divisor of these two polynomials. It follows that this greatest common divisor is a non-constant factor of formula_126 The Euclidean algorithm for polynomials allows computing this greatest common divisor. For example, if one knows or guesses that formula_127 has two roots that sum to zero, one may apply the Euclidean algorithm to formula_123 and formula_128 The first division step consists in adding formula_123 to formula_129 giving the remainder formula_130 Then, dividing formula_123 by formula_131 gives zero as a new remainder, and "x" – 5 as a quotient, leading to the complete factorization formula_132 
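The computation in the last example can be reproduced with a computer algebra system. The following sketch uses SymPy (assuming it is available) to form the greatest common divisor of P(x) and P(−x) and to complete the factorization; the expected outputs are shown as comments and may be printed in a slightly different order by other versions.

```python
from sympy import symbols, gcd, div, factor

x = symbols('x')
P = x**3 - 5*x**2 - 16*x + 80

# Two roots summing to zero are common roots of P(x) and P(-x),
# hence roots of their greatest common divisor.
g = gcd(P, P.subs(x, -x))
print(g)              # expected: x**2 - 16
print(div(P, g))      # expected: (x - 5, 0), the quotient and the remainder
print(factor(P))      # expected: (x - 5)*(x - 4)*(x + 4)
```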
Unique factorization domains. The integers and the polynomials over a field share the property of unique factorization, that is, every nonzero element may be factored into a product of an invertible element (a unit, ±1 in the case of integers) and a product of irreducible elements (prime numbers, in the case of integers), and this factorization is unique up to rearranging the factors and shifting units among the factors. Integral domains which share this property are called unique factorization domains (UFD). Greatest common divisors exist in UFDs, but not every integral domain in which greatest common divisors exist (known as a GCD domain) is a UFD. Every principal ideal domain is a UFD. A Euclidean domain is an integral domain on which is defined a Euclidean division similar to that of integers. Every Euclidean domain is a principal ideal domain, and thus a UFD. In a Euclidean domain, Euclidean division allows defining a Euclidean algorithm for computing greatest common divisors. However, this does not imply the existence of a factorization algorithm. There is an explicit example of a field F such that there cannot exist any factorization algorithm in the Euclidean domain "F"["x"] of the univariate polynomials over F. Ideals. In algebraic number theory, the study of Diophantine equations led mathematicians, during the 19th century, to introduce generalizations of the integers called algebraic integers. The first rings of algebraic integers that were considered were the Gaussian integers and the Eisenstein integers, which share with the usual integers the property of being principal ideal domains, and thus have the unique factorization property. Unfortunately, it soon appeared that most rings of algebraic integers are not principal and do not have unique factorization. The simplest example is formula_133 in which formula_134 and all these factors are irreducible. This lack of unique factorization is a major difficulty for solving Diophantine equations. For example, many wrong proofs of Fermat's Last Theorem (probably including Fermat's "truly marvelous proof of this, which this margin is too narrow to contain") were based on the implicit supposition of unique factorization. This difficulty was resolved by Dedekind, who proved that the rings of algebraic integers have unique factorization of ideals: in these rings, every ideal is a product of prime ideals, and this factorization is unique up to the order of the factors. The integral domains that have this unique factorization property are now called Dedekind domains. They have many nice properties that make them fundamental in algebraic number theory. Matrices. Matrix rings are non-commutative and have no unique factorization: there are, in general, many ways of writing a matrix as a product of matrices. Thus, the factorization problem consists of finding factors of specified types. For example, the LU decomposition gives a matrix as the product of a lower triangular matrix by an upper triangular matrix. As this is not always possible, one generally considers the "LUP decomposition" having a permutation matrix as its third factor. See Matrix decomposition for the most common types of matrix factorizations. A logical matrix represents a binary relation, and matrix multiplication corresponds to composition of relations. Decomposition of a relation through factorization serves to profile the nature of the relation, such as a difunctional relation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x" }, { "math_id": 1, "text": "(xy)\\times(1/y)" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "1 + 2^{2^5} = 1 + 2^{32} = 4\\,294\\,967\\,297" }, { "math_id": 4, "text": "x^3-ax^2-bx^2-cx^2+ abx+acx+bcx-abc" }, { "math_id": 5, "text": "(x-a)(x-b)(x-c)," }, { "math_id": 6, "text": "x^{10}-1" }, { "math_id": 7, "text": "x-1" }, { "math_id": 8, "text": "x^{9}+x^{8}+\\cdots+x^2+x+1" }, { "math_id": 9, "text": "x-a_i," }, { "math_id": 10, "text": "6 x^3 y^2 + 8 x^4 y^3 - 10 x^5 y^3 = 2 x^3 y^2(3 + 4xy - 5 x^2 y)," }, { "math_id": 11, "text": "x^3y^2" }, { "math_id": 12, "text": "4x^2 + 20x + 3xy + 15y, " }, { "math_id": 13, "text": "4x^2+20x+3xy+15y = (4x^2+20x) + (3xy+15y) = 4x(x+5) + 3y(x+5). " }, { "math_id": 14, "text": "4x^2+20x+3xy+15y = (4x+3y) (x+5)." }, { "math_id": 15, "text": "x^4 + 1." }, { "math_id": 16, "text": "x^4+1=(x^2+i)(x^2-i)." }, { "math_id": 17, "text": "2x^2," }, { "math_id": 18, "text": "x^4+1 = (x^4+2x^2+1) - 2x^2 = (x^2+1)^2 - \\left(x\\sqrt2\\right)^2 = \\left(x^2+x\\sqrt2+1\\right) \\left(x^2-x\\sqrt2+1\\right)." }, { "math_id": 19, "text": "2x^2" }, { "math_id": 20, "text": "x^4+1 = (x^4-2x^2+1)+2x^2 = (x^2-1)^2 + \\left(x\\sqrt2\\right)^2 = \\left(x^2+x\\sqrt{-2}-1\\right) \\left(x^2-x\\sqrt{-2}-1\\right)." }, { "math_id": 21, "text": "x^4 + 1," }, { "math_id": 22, "text": "x^4 + 1 \\equiv (x+1)^4 \\pmod 2;" }, { "math_id": 23, "text": "x^4 + 1 \\equiv (x^2+x-1)(x^2-x-1) \\pmod 3," }, { "math_id": 24, "text": "1^2 \\equiv -2 \\pmod 3;" }, { "math_id": 25, "text": "x^4 + 1 \\equiv (x^2+2)(x^2-2) \\pmod 5," }, { "math_id": 26, "text": "2^2 \\equiv -1 \\pmod 5;" }, { "math_id": 27, "text": "x^4 + 1 \\equiv (x^2+3x+1)(x^2-3x+1) \\pmod 7," }, { "math_id": 28, "text": "3^2 \\equiv 2 \\pmod 7." }, { "math_id": 29, "text": " E^2 - F^2 = (E+F)(E-F)" }, { "math_id": 30, "text": "\\begin{align}\na^2 + &2ab + b^2 - x^2 +2xy - y^2 \\\\\n&= (a^2 + 2ab + b^2) - (x^2 -2xy + y^2) \\\\\n&= (a+b)^2 - (x -y)^2 \\\\\n&= (a+b + x -y)(a+b -x + y).\n\\end{align} " }, { "math_id": 31, "text": " E^3 + F^3 = (E + F)(E^2 - EF + F^2)" }, { "math_id": 32, "text": " E^3 - F^3 = (E - F)(E^2 + EF + F^2)" }, { "math_id": 33, "text": "\\begin{align}\nE^4 - F^4 &= (E^2 + F^2)(E^2 - F^2) \\\\\n&= (E^2 + F^2)(E + F)(E - F)\n\\end{align}" }, { "math_id": 34, "text": "E^{2n}-F^{2n}= (E^n+F^n)(E^n-F^n)" }, { "math_id": 35, "text": " E^n - F^n = (E-F)(E^{n-1} + E^{n-2}F + E^{n-3}F^2 + \\cdots + EF^{n-2} + F^{n-1} )" }, { "math_id": 36, "text": " E^n + F^n = (E+F)(E^{n-1} - E^{n-2}F + E^{n-3}F^2 - \\cdots - EF^{n-2} + F^{n-1} )" }, { "math_id": 37, "text": "(E^q)^p+(F^q)^p." }, { "math_id": 38, "text": "\n\\begin{align}\n &x^2 + y^2 + z^2 + 2(xy +yz+xz)= (x + y+ z)^2 \\\\\n &x^3 + y^3 + z^3 - 3xyz = (x + y + z)(x^2 + y^2 + z^2 - xy - xz - yz)\\\\\n &x^3 + y^3 + z^3 + 3x^2(y + z) +3y^2(x+z) + 3z^2(x+y) + 6xyz = (x + y+z)^3 \\\\\n &x^4 + x^2y^2 + y^4 = (x^2 + xy+y^2)(x^2 - xy + y^2).\n\\end{align}\n" }, { "math_id": 39, "text": " a^2 + 2ab + b^2 = (a + b)^2" }, { "math_id": 40, "text": " a^2 - 2ab + b^2 = (a - b)^2" }, { "math_id": 41, "text": " a^3 + 3a^2b + 3ab^2 + b^3 = (a+b)^3 " }, { "math_id": 42, "text": " a^3 - 3a^2b + 3ab^2 - b^3 = (a-b)^3 " }, { "math_id": 43, "text": "(a+b)^n" }, { "math_id": 44, "text": "(a-b)^n" }, { "math_id": 45, "text": "x^n-1." }, { "math_id": 46, "text": "e^{2ik\\pi/n} = \\cos \\tfrac{2\\pi k}n + i\\sin \\tfrac{2\\pi k} n " }, { "math_id": 47, "text": "k=0, \\ldots, n-1." 
}, { "math_id": 48, "text": "E^n-F^n = (E-F) \\prod_{k=1}^{n-1} \\left(E-F e^{2ik\\pi/n}\\right)" }, { "math_id": 49, "text": "E^n + F^n = \\prod_{k=0}^{n-1} \\left(E-F e^{(2k+1)i\\pi/n}\\right) \\qquad \\text{if } n \\text{ is even}" }, { "math_id": 50, "text": "E^{n}+F^{n}=(E+F) \\prod_{k=1}^{n-1}\\left(E+F e^{2ik\\pi/n}\\right) \\qquad \\text{if } n \\text{ is odd}" }, { "math_id": 51, "text": "e^{i\\alpha}" }, { "math_id": 52, "text": "e^{-i\\alpha}," }, { "math_id": 53, "text": "\\left(a-be^{i\\alpha}\\right) \\left(a-be^{-i\\alpha}\\right)=\na^2 - ab\\left(e^{i\\alpha}+e^{-i\\alpha}\\right) + b^2e^{i\\alpha}e^{-i\\alpha} =\na^2 - 2ab\\cos\\,\\alpha + b^2, " }, { "math_id": 54, "text": "\\begin{align}\nE^{2n}-F^{2n}&= \n(E-F)(E+F)\\prod_{k=1}^{n-1} \\left(E^2-2EF \\cos\\,\\tfrac{k\\pi}n +F^2\\right)\\\\\n&=(E-F)(E+F)\\prod_{k=1}^{n-1} \\left(E^2+2EF \\cos\\,\\tfrac{k\\pi}n +F^2\\right)\n\\end{align}" }, { "math_id": 55, "text": " \\begin{align}\nE^{2n} + F^{2n} &= \n\\prod_{k=1}^n \\left(E^2 + 2EF\\cos\\,\\tfrac{(2k-1)\\pi}{2n}+F^2\\right)\\\\\n&=\\prod_{k=1}^n \\left(E^2 - 2EF\\cos\\,\\tfrac{(2k-1)\\pi}{2n}+F^2\\right)\n\\end{align}" }, { "math_id": 56, "text": " a^4 + b^4 = (a^2 - \\sqrt 2 ab + b^2)(a^2 + \\sqrt 2 ab + b^2)." }, { "math_id": 57, "text": " a^5 - b^5 = (a - b) \\left(a^2 + \\frac{1-\\sqrt 5}2 ab + b^2\\right) \\left(a^2 +\\frac{1+\\sqrt 5}2 ab + b^2\\right)," }, { "math_id": 58, "text": " a^5 + b^5 = (a + b) \\left(a^2 - \\frac{1-\\sqrt 5}2 ab + b^2\\right) \\left(a^2 -\\frac{1+\\sqrt 5}2 ab + b^2\\right)," }, { "math_id": 59, "text": "P(x)=a_0x^n+a_ix^{n-1} +\\cdots +a_n," }, { "math_id": 60, "text": "\\overline P(x,y)=a_0x^n+a_ix^{n-1}y +\\cdots +a_ny^n." }, { "math_id": 61, "text": "E^n-F^n=\\prod_{k\\mid n}\\overline Q_n(E,F)," }, { "math_id": 62, "text": "E^n+F^n=\\prod_{k\\mid 2n,k\\not\\mid n}\\overline Q_n(E,F)," }, { "math_id": 63, "text": "Q_n(x)" }, { "math_id": 64, "text": "a^6-b^6= \\overline Q_1(a,b)\\overline Q_2(a,b)\\overline Q_3(a,b)\\overline Q_6(a,b)=(a-b)(a+b)(a^2-ab+b^2)(a^2+ab+b^2)," }, { "math_id": 65, "text": "a^6+b^6=\\overline Q_4(a,b)\\overline Q_{12}(a,b) = (a^2+b^2)(a^4-a^2b^2+b^4)," }, { "math_id": 66, "text": "P(x)\\ \\,\\stackrel{\\text{def}}{=}\\ \\,a_0x^n+a_1x^{n-1}+\\cdots+a_n=0," }, { "math_id": 67, "text": "a_0\\ne 0." }, { "math_id": 68, "text": "P(r)=0." }, { "math_id": 69, "text": "P(x)=Q(x)R(x)" }, { "math_id": 70, "text": "P(x)=(x-r)Q(x)," }, { "math_id": 71, "text": "P(x)=a_0(x-r_1)\\cdots (x-r_n)," }, { "math_id": 72, "text": "r_1, \\ldots, r_n" }, { "math_id": 73, "text": "(x-r)(x-s) = x^2-(r+s)x+rs = x^2-2ax+a^2+b^2" }, { "math_id": 74, "text": "P(x)=q\\,P_1(x)\\cdots P_k(x)," }, { "math_id": 75, "text": "P_1(x), \\ldots, P_k(x)" }, { "math_id": 76, "text": "P_i(x)" }, { "math_id": 77, "text": "-10x^2 + 5x + 5 = (-5)\\cdot (2x^2 - x - 1)" }, { "math_id": 78, "text": "\\frac{1}{3}x^5 + \\frac{7}{2} x^2 + 2x + 1 = \\frac{1}{6} ( 2x^5 + 21x^2 + 12x + 6)" }, { "math_id": 79, "text": "p/q." }, { "math_id": 80, "text": "P(x)=a_0x^n+a_1x^{n-1}+\\cdots+a_{n-1}x+a_n," }, { "math_id": 81, "text": "Q(x)=b_0x^{n-1}+\\cdots+b_{n-2}x+b_{n-1}," }, { "math_id": 82, "text": "a_0=b_0" }, { "math_id": 83, "text": "b_i=a_0r^i +\\cdots+a_{i-1}r+a_i \\ \\text{ for }\\ i = 1,\\ldots,n{-}1." }, { "math_id": 84, "text": "P(x) = x^3 - 3x + 2," }, { "math_id": 85, "text": "r^2 +0r-3=-2," }, { "math_id": 86, "text": "x^3 - 3x + 2 = (x - 1)(x^2 + x - 2)." 
}, { "math_id": 87, "text": "x=\\tfrac pq" }, { "math_id": 88, "text": "P(x)=(qx-p)Q(x)," }, { "math_id": 89, "text": "x-p/q" }, { "math_id": 90, "text": "\\tfrac pq" }, { "math_id": 91, "text": "a_0," }, { "math_id": 92, "text": "a_n." }, { "math_id": 93, "text": "P(x)=2x^3 - 7x^2 + 10x - 6" }, { "math_id": 94, "text": "p\\in\\{\\pm 1,\\pm 2,\\pm3, \\pm 6\\}, " }, { "math_id": 95, "text": "q\\in\\{1, 2\\}. " }, { "math_id": 96, "text": "\\tfrac pq \\in \\{1, 2, 3, 6, \\tfrac 12, \\tfrac 32\\}." }, { "math_id": 97, "text": "\\tfrac 32" }, { "math_id": 98, "text": "2x^3 - 7x^2 + 10x - 6 = (2x -3)(x^2 -2x + 2)." }, { "math_id": 99, "text": "P(x)=ax^2 + bx + c" }, { "math_id": 100, "text": "r_1 = \\tfrac ra." }, { "math_id": 101, "text": "r_2" }, { "math_id": 102, "text": "r_2 = -\\frac ba - r_1 = -\\frac ba-\\frac ra =-\\frac{b+r}a = \\frac sa," }, { "math_id": 103, "text": "s=-(b+r)." }, { "math_id": 104, "text": "r_1 r_2=\\frac ca" }, { "math_id": 105, "text": "\\frac sa\\frac ra =\\frac ca," }, { "math_id": 106, "text": "rs=ac\\quad \\text{and}\\quad r+s=-b." }, { "math_id": 107, "text": "ax^2 +bx+c" }, { "math_id": 108, "text": "rs=ac" }, { "math_id": 109, "text": "r+s=-b" }, { "math_id": 110, "text": "\\tfrac ra" }, { "math_id": 111, "text": "\\tfrac sa." }, { "math_id": 112, "text": "a(ax^2+bx+c) = (ax-r)(ax-s)." }, { "math_id": 113, "text": "6x^2 + 13x + 6." }, { "math_id": 114, "text": "r_1 = -\\frac 46 =-\\frac 23 \\quad \\text{and} \\quad r_2 = -\\frac96 = -\\frac 32," }, { "math_id": 115, "text": "\n6x^2 + 13x + 6 = 6(x+\\tfrac 23)(x+\\tfrac 32)= (3x+2)(2x+3).\n" }, { "math_id": 116, "text": "ax^2+bx+c" }, { "math_id": 117, "text": "\nax^2 + bx + c = a(x - \\alpha)(x - \\beta) = a\\left(x - \\frac{-b + \\sqrt{b^2-4ac}}{2a}\\right) \\left(x - \\frac{-b - \\sqrt{b^2-4ac}}{2a}\\right),\n" }, { "math_id": 118, "text": "\\alpha" }, { "math_id": 119, "text": "\\beta" }, { "math_id": 120, "text": "b^2-4ac" }, { "math_id": 121, "text": "x_1" }, { "math_id": 122, "text": "x_2" }, { "math_id": 123, "text": "P(x)" }, { "math_id": 124, "text": "x_2=Q(x_1)," }, { "math_id": 125, "text": "P(Q(x))" }, { "math_id": 126, "text": "P(x)." }, { "math_id": 127, "text": "P(x)=x^3 -5x^2 -16x +80" }, { "math_id": 128, "text": "P(-x)." }, { "math_id": 129, "text": "P(-x)," }, { "math_id": 130, "text": "-10(x^2-16)." }, { "math_id": 131, "text": "x^2-16" }, { "math_id": 132, "text": "x^3 - 5x^2 - 16x + 80 = (x -5)(x-4)(x+4)." }, { "math_id": 133, "text": "\\mathbb Z[\\sqrt{-5}]," }, { "math_id": 134, "text": "9=3\\cdot 3 = (2+\\sqrt{-5})(2-\\sqrt{-5})," } ]
https://en.wikipedia.org/wiki?curid=82341
8234367
Tree-graded space
A geodesic metric space formula_0 is called a tree-graded space with respect to a collection of connected proper subsets called "pieces", if any two distinct pieces intersect in at most one point, and every non-trivial simple geodesic triangle of formula_0 is contained in one of the pieces. If the pieces have bounded diameter, tree-graded spaces behave like real trees in their coarse geometry (in the sense of Gromov), while allowing non-tree-like behavior within the pieces. Tree-graded spaces were introduced by Cornelia Druţu and Mark Sapir (2005) in their study of the asymptotic cones of hyperbolic groups.
[ { "math_id": 0, "text": "X" } ]
https://en.wikipedia.org/wiki?curid=8234367
82354
Henry (unit)
SI unit of inductance &lt;templatestyles src="Template:Infobox/styles-images.css" /&gt; The henry (symbol: H) is the unit of electrical inductance in the International System of Units (SI). If a current of 1 ampere flowing through a coil produces flux linkage of 1 weber turn, that coil has a self-inductance of 1 henry.‌ The unit is named after Joseph Henry (1797–1878), the American scientist who discovered electromagnetic induction independently of and at about the same time as Michael Faraday (1791–1867) in England. Definition. The inductance of an electric circuit is one henry when an electric current that is changing at one ampere per second results in an electromotive force of one volt across the inductor: formula_0, where "V"("t") is the resulting voltage across the circuit, "I"("t") is the current through the circuit, and "L" is the inductance of the circuit. The henry is a derived unit based on four of the seven base units of the International System of Units: kilogram (kg), metre (m), second (s), and ampere (A). Expressed in combinations of SI units, the henry is: formula_1 where: H = henry, kg = kilogram, m = metre, s = second, A = ampere, N = newton, C = coulomb, J = joule, T = tesla, Wb = weber, V = volt, F = farad, Ω = ohm, Hz = hertz, rad = radian Use. The International System of Units (SI) specifies that the symbol of a unit named for a person is written with an initial capital letter, while the name is not capitalized in sentence text, except when any word in that position would be capitalized, such as at the beginning of a sentence or in material using title case. The United States National Institute of Standards and Technology recommends users writing in English to use the plural as "henries". Applications. The inductance of a coil depends on its size, the number of turns, and the permeability of the material within and surrounding the coil. Formulae can be used to calculate the inductance of many common arrangements of conductors, such as parallel wires, or a solenoid. A small air-core coil used for broadcast AM radio tuning might have an inductance of a few tens of microhenries. A large motor winding with many turns around an iron core may have an inductance of hundreds of henries. The physical size of an inductance is also related to its current carrying and voltage withstand ratings. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\displaystyle V(t)= L \\frac{\\mathrm{d}I}{\\mathrm{d}t}" }, { "math_id": 1, "text": "\\text{H}\n= \\dfrac{\\text{kg} {\\cdot} \\text{m}^2}{\\text{s}^{2} {\\cdot} \\text{A}^2}\n= \\dfrac{\\text{N} {\\cdot} \\text{m}}{\\text{A}^2}\n= \\dfrac{\\text{kg} {\\cdot} \\text{m}^2}{\\text{C}^2}\n= \\dfrac{\\text{J}}{\\text{A}^2}\n= \\dfrac{\\text{T} {\\cdot} \\text{m}^2}{\\text{A}}\n= \\dfrac{\\text{Wb}}{\\text{A}}\n= \\dfrac{\\text{V} {\\cdot} \\text{s}}{\\text{A}}\n= \\dfrac{\\text{s}^2}{\\text{F}}\n= \\dfrac{\\Omega}{\\text{rad}{\\cdot} \\text{Hz}}\n= \\dfrac{\\Omega{\\cdot}\\text{s}} { \\text{rad}} " } ]
https://en.wikipedia.org/wiki?curid=82354
82355
Farad
SI unit of electric capacitance &lt;templatestyles src="Template:Infobox/styles-images.css" /&gt; The farad (symbol: F) is the unit of electrical capacitance, the ability of a body to store an electrical charge, in the International System of Units (SI), equivalent to 1 coulomb per volt (C/V). It is named after the English physicist Michael Faraday (1791–1867). In SI base units 1 F = 1 kg−1⋅m−2⋅s4⋅A2. Definition. The capacitance of a capacitor is one farad when one coulomb of charge changes the potential between the plates by one volt. Equally, one farad can be described as the capacitance which stores a one-coulomb charge across a potential difference of one volt. The relationship between capacitance, charge, and potential difference is linear. For example, if the potential difference across a capacitor is halved, the quantity of charge stored by that capacitor will also be halved. For most applications, the farad is an impractically large unit of capacitance. Most electrical and electronic applications are covered by the following SI prefixes: Equalities. A farad is a derived unit based on four of the seven base units of the International System of Units: kilogram (kg), metre (m), second (s), and ampere (A). Expressed in combinations of SI units, the farad is: formula_0 where F = farad, s = second, C = coulomb, V = volt, W = watt, J = joule, N = newton, Ω = ohm, Hz = Hertz, S = siemens, H = henry, A = ampere. History. The term "farad" was originally coined by Latimer Clark and Charles Bright in 1861, in honor of Michael Faraday, for a unit of quantity of charge, and by 1873, the farad had become a unit of capacitance. In 1881, at the International Congress of Electricians in Paris, the name farad was officially used for the unit of electrical capacitance. Explanation. A capacitor generally consists of two conducting surfaces, frequently referred to as plates, separated by an insulating layer usually referred to as a dielectric. The original capacitor was the Leyden jar developed in the 18th century. It is the accumulation of electric charge on the plates that results in capacitance. Modern capacitors are constructed using a range of manufacturing techniques and materials to provide the extraordinarily wide range of capacitance values used in electronics applications from femtofarads to farads, with maximum-voltage ratings ranging from a few volts to several kilovolts. Values of capacitors are usually specified in terms of SI prefixes of farads (F), microfarads (μF), nanofarads (nF) and picofarads (pF). The millifarad (mF) is rarely used in practice; a capacitance of 4.7 mF (0.0047 F), for example, is instead written as . The nanofarad (nF) is uncommon in North America. The size of commercially available capacitors ranges from around 0.1 pF to (5 kF) supercapacitors. Parasitic capacitance in high-performance integrated circuits can be measured in femtofarads (1 fF = 0.001 pF = 10-15 F), while high-performance test equipment can detect changes in capacitance on the order of tens of attofarads (1 aF = 10−18 F). A value of 0.1 pF is about the smallest available in capacitors for general use in electronic design, since smaller ones would be dominated by the parasitic capacitances of other components, wiring or printed circuit boards. Capacitance values of 1 pF or lower can be achieved by twisting two short lengths of insulated wire together. The capacitance of the Earth's ionosphere with respect to the ground is calculated to be about 1 F. Informal and deprecated terminology. 
The picofarad (pF) is sometimes colloquially pronounced as "puff" or "pic", as in "a ten-puff capacitor". Similarly, "mic" (pronounced "mike") is sometimes used informally to signify microfarads. Nonstandard abbreviations were and are often used. Farad has been abbreviated "f", "fd", and "Fd". For the prefix "micro-", when the Greek small letter "μ" or the legacy micro sign "μ" is not available (as on typewriters) or inconvenient to enter, it is often substituted with the similar-appearing "u" or "U", with little risk of confusion. It was also substituted with the similar-sounding "M" or "m", which can be confusing because M officially stands for 1,000,000, and m preferably stands for 1/1000. In texts prior to 1960, and on capacitor packages until more recently, "microfarad(s)" was abbreviated "mf" or "MFD" rather than the modern "μF". A 1940 Radio Shack catalog listed every capacitor's rating in "Mfd.", from 0.000005 Mfd. (5 pF) to 50 Mfd. (50 μF). "Micromicrofarad" or "micro-microfarad" is an obsolete unit found in some older texts and labels, contains a nonstandard metric double prefix. It is exactly equivalent to a picofarad (pF). It is abbreviated μμF, uuF, or (confusingly) "mmf", "MMF", or "MMFD". Summary of obsolete or deprecated capacitance units or abbreviations: (upper/lower case variations are not shown) is a square version of (, the Japanese word for "farad") intended for Japanese vertical text. It is included in Unicode for compatibility with earlier character sets. Related concepts. The reciprocal of capacitance is called electrical elastance, the (non-standard, non-SI) unit of which is the daraf. CGS units. The abfarad (abbreviated abF) is an obsolete CGS unit of capacitance, which corresponds to 109 farads (1 gigafarad, GF). The statfarad (abbreviated statF) is a rarely used CGS unit equivalent to the capacitance of a capacitor with a charge of 1 statcoulomb across a potential difference of 1 statvolt. It is 1/(10−5 "c"2) farad, approximately 1.1126 picofarads. More commonly, the centimeter (cm) is used, which is equal to the statfarad. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{F}\n= \\dfrac{\\text{s}^4 {\\cdot} \\text{A}^2}{\\text{m}^{2} {\\cdot} \\text{kg}}\n= \\dfrac{\\text{s}^2 {\\cdot} \\text{C}^2}{\\text{m}^{2} {\\cdot} \\text{kg}}\n= \\dfrac{\\text{C}}{\\text{V}}\n= \\dfrac{\\text{A} {\\cdot} \\text{s}}{\\text{V}}\n= \\dfrac{\\text{W} {\\cdot} \\text{s}} {{\\text{V}^2} }\n= \\dfrac{\\text{J}}{{\\text{V}^2} }\n= \\dfrac{\\text{N} {\\cdot} \\text{m}} {{\\text{V}^2} }\n= \\dfrac{\\text{C}^2}{\\text{J}}\n= \\dfrac{\\text{C}^2}{\\text{N} {\\cdot} \\text{m}}\n= \\dfrac{\\text{s}}{\\Omega}\n= \\dfrac{1}{\\Omega {\\cdot} \\text{Hz}}\n= \\dfrac{\\text{S}}{\\text{Hz}}\n= \\dfrac{\\text{s}^{2}}{\\text{H}},\n" } ]
https://en.wikipedia.org/wiki?curid=82355
8235776
Scherk surface
In mathematics, a Scherk surface (named after Heinrich Scherk) is an example of a minimal surface. Scherk described two complete embedded minimal surfaces in 1834; his first surface is a doubly periodic surface, his second surface is singly periodic. They were the third non-trivial examples of minimal surfaces (the first two were the catenoid and helicoid). The two surfaces are conjugates of each other. Scherk surfaces arise in the study of certain limiting minimal surface problems and in the study of harmonic diffeomorphisms of hyperbolic space. Scherk's first surface. Scherk's first surface is asymptotic to two infinite families of parallel planes, orthogonal to each other, that meet near "z" = 0 in a checkerboard pattern of bridging arches. It contains an infinite number of straight vertical lines. Construction of a simple Scherk surface. Consider the following minimal surface problem on a square in the Euclidean plane: for a natural number "n", find a minimal surface Σ"n" as the graph of some function formula_0 such that formula_1 formula_2 That is, "u""n" satisfies the minimal surface equation formula_3 and formula_4 What, if anything, is the limiting surface as "n" tends to infinity? The answer was given by H. Scherk in 1834: the limiting surface Σ is the graph of formula_5 formula_6 That is, the Scherk surface over the square is formula_7 More general Scherk surfaces. One can consider similar minimal surface problems on other quadrilaterals in the Euclidean plane. One can also consider the same problem on quadrilaterals in the hyperbolic plane. In 2006, Harold Rosenberg and Pascal Collin used hyperbolic Scherk surfaces to construct a harmonic diffeomorphism from the complex plane onto the hyperbolic plane (the unit disc with the hyperbolic metric), thereby disproving the Schoen–Yau conjecture. Scherk's second surface. Scherk's second surface looks globally like two orthogonal planes whose intersection consists of a sequence of tunnels in alternating directions. Its intersections with horizontal planes consists of alternating hyperbolas. It has implicit equation: formula_8 It has the Weierstrass–Enneper parameterization formula_9, formula_10 and can be parametrized as: formula_11 formula_12 formula_13 for formula_14 and formula_15. This gives one period of the surface, which can then be extended in the z-direction by symmetry. The surface has been generalised by H. Karcher into the saddle tower family of periodic minimal surfaces. Somewhat confusingly, this surface is occasionally called Scherk's fifth surface in the literature. To minimize confusion it is useful to refer to it as Scherk's singly periodic surface or the Scherk-tower. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "u_{n} : \\left( - \\frac{\\pi}{2}, + \\frac{\\pi}{2} \\right) \\times \\left( - \\frac{\\pi}{2}, + \\frac{\\pi}{2} \\right) \\to \\mathbb{R}" }, { "math_id": 1, "text": "\\lim_{y \\to \\pm \\pi / 2} u_{n} \\left( x, y \\right) = + n \\text{ for } - \\frac{\\pi}{2} < x < + \\frac{\\pi}{2}," }, { "math_id": 2, "text": "\\lim_{x \\to \\pm \\pi / 2} u_{n} \\left( x, y \\right) = - n \\text{ for } - \\frac{\\pi}{2} < y < + \\frac{\\pi}{2}." }, { "math_id": 3, "text": "\\mathrm{div} \\left( \\frac{\\nabla u_{n} (x, y)}{\\sqrt{1 + | \\nabla u_{n} (x, y) |^{2}}} \\right) \\equiv 0" }, { "math_id": 4, "text": "\\Sigma_{n} = \\left\\{ (x, y, u_{n}(x, y)) \\in \\mathbb{R}^{3} \\left| - \\frac{\\pi}{2} < x, y < + \\frac{\\pi}{2} \\right. \\right\\}." }, { "math_id": 5, "text": "u : \\left( - \\frac{\\pi}{2}, + \\frac{\\pi}{2} \\right) \\times \\left( - \\frac{\\pi}{2}, + \\frac{\\pi}{2} \\right) \\to \\mathbb{R}," }, { "math_id": 6, "text": "u(x, y) = \\log \\left( \\frac{\\cos (x)}{\\cos (y)} \\right)." }, { "math_id": 7, "text": "\\Sigma = \\left\\{ \\left. \\left(x, y, \\log \\left( \\frac{\\cos (x)}{\\cos (y)} \\right) \\right) \\in \\mathbb{R}^{3} \\right| - \\frac{\\pi}{2} < x, y < + \\frac{\\pi}{2} \\right\\}." }, { "math_id": 8, "text": "\\sin(z) - \\sinh(x)\\sinh(y)=0" }, { "math_id": 9, "text": "f(z) = \\frac{4}{1-z^4}" }, { "math_id": 10, "text": "g(z) = iz" }, { "math_id": 11, "text": "x(r,\\theta) = 2 \\Re ( \\ln(1+re^{i \\theta}) - \\ln(1-re^{i \\theta}) ) = \\ln \\left( \\frac{1+r^2+2r \\cos \\theta}{1+r^2-2r \\cos \\theta} \\right)" }, { "math_id": 12, "text": "y(r,\\theta) = \\Re ( 4i \\tan^{-1}(re^{i \\theta})) = \\ln \\left( \\frac{1+r^2-2r \\sin\\theta}{1+r^2+2r \\sin \\theta} \\right)" }, { "math_id": 13, "text": "z(r,\\theta) = \\Re ( 2i(-\\ln(1-r^2e^{2i \\theta}) + \\ln(1+r^2e^{2i \\theta}) ) = 2 \\tan^{-1}\\left( \\frac{2 r^2 \\sin 2\\theta}{r^4-1} \\right)" }, { "math_id": 14, "text": "\\theta \\in [0, 2\\pi)" }, { "math_id": 15, "text": "r \\in (0,1)" } ]
https://en.wikipedia.org/wiki?curid=8235776
82359
Least squares
Approximation method in statistics The method of least squares is a parameter estimation method in regression analysis based on minimizing the sum of the squares of the residuals (a residual being the difference between an observed value and the fitted value provided by a model) made in the results of each individual equation. (More simply, least squares is a mathematical procedure for finding the best-fitting curve to a given set of points by minimizing the sum of the squares of the offsets ("the residuals") of the points from the curve.) The most important application is in data fitting. When the problem has substantial uncertainties in the independent variable (the "x" variable), then simple regression and least-squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares. Least squares problems fall into two categories: linear or ordinary least squares and nonlinear least squares, depending on whether or not the model functions are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis; it has a closed-form solution. The nonlinear problem is usually solved by iterative refinement; at each iteration the system is approximated by a linear one, and thus the core calculation is similar in both cases. Polynomial least squares describes the variance in a prediction of the dependent variable as a function of the independent variable and the deviations from the fitted curve. When the observations come from an exponential family with identity as its natural sufficient statistics and mild-conditions are satisfied (e.g. for normal, exponential, Poisson and binomial distributions), standardized least-squares estimates and maximum-likelihood estimates are identical. The method of least squares can also be derived as a method of moments estimator. The following discussion is mostly presented in terms of linear functions but the use of least squares is valid and practical for more general families of functions. Also, by iteratively applying local quadratic approximation to the likelihood (through the Fisher information), the least-squares method may be used to fit a generalized linear model. The least-squares method was officially discovered and published by Adrien-Marie Legendre (1805), though it is usually also co-credited to Carl Friedrich Gauss (1809), who contributed significant theoretical advances to the method, and may have also used it in his earlier work in 1794 and 1795. History. Founding. The method of least squares grew out of the fields of astronomy and geodesy, as scientists and mathematicians sought to provide solutions to the challenges of navigating the Earth's oceans during the Age of Discovery. The accurate description of the behavior of celestial bodies was the key to enabling ships to sail in open seas, where sailors could no longer rely on land sightings for navigation. The method was the culmination of several advances that took place during the course of the eighteenth century: The method. The first clear and concise exposition of the method of least squares was published by Legendre in 1805. The technique is described as an algebraic procedure for fitting linear equations to data and Legendre demonstrates the new method by analyzing the same data as Laplace for the shape of the Earth. 
Within ten years after Legendre's publication, the method of least squares had been adopted as a standard tool in astronomy and geodesy in France, Italy, and Prussia, which constitutes an extraordinarily rapid acceptance of a scientific technique. In 1809 Carl Friedrich Gauss published his method of calculating the orbits of celestial bodies. In that work he claimed to have been in possession of the method of least squares since 1795. This naturally led to a priority dispute with Legendre. However, to Gauss's credit, he went beyond Legendre and succeeded in connecting the method of least squares with the principles of probability and to the normal distribution. He had managed to complete Laplace's program of specifying a mathematical form of the probability density for the observations, depending on a finite number of unknown parameters, and define a method of estimation that minimizes the error of estimation. Gauss showed that the arithmetic mean is indeed the best estimate of the location parameter by changing both the probability density and the method of estimation. He then turned the problem around by asking what form the density should have and what method of estimation should be used to get the arithmetic mean as estimate of the location parameter. In this attempt, he invented the normal distribution. An early demonstration of the strength of Gauss's method came when it was used to predict the future location of the newly discovered asteroid Ceres. On 1 January 1801, the Italian astronomer Giuseppe Piazzi discovered Ceres and was able to track its path for 40 days before it was lost in the glare of the Sun. Based on these data, astronomers desired to determine the location of Ceres after it emerged from behind the Sun without solving Kepler's complicated nonlinear equations of planetary motion. The only predictions that successfully allowed Hungarian astronomer Franz Xaver von Zach to relocate Ceres were those performed by the 24-year-old Gauss using least-squares analysis. In 1810, after reading Gauss's work, Laplace, after proving the central limit theorem, used it to give a large sample justification for the method of least squares and the normal distribution. In 1822, Gauss was able to state that the least-squares approach to regression analysis is optimal in the sense that in a linear model where the errors have a mean of zero, are uncorrelated, normally distributed, and have equal variances, the best linear unbiased estimator of the coefficients is the least-squares estimator. An extended version of this result is known as the Gauss–Markov theorem. The idea of least-squares analysis was also independently formulated by the American Robert Adrain in 1808. In the next two centuries workers in the theory of errors and in statistics found many different ways of implementing least squares. Problem statement. The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of "n" points (data pairs) formula_0, "i" = 1, …, "n", where formula_1 is an independent variable and formula_2 is a dependent variable whose value is found by observation. The model function has the form formula_3, where "m" adjustable parameters are held in the vector formula_4. The goal is to find the parameter values for the model that "best" fits the data. 
The fit of a model to a data point is measured by its residual, defined as the difference between the observed value of the dependent variable and the value predicted by the model: formula_5 The least-squares method finds the optimal parameter values by minimizing the sum of squared residuals, formula_7: formula_8 In the simplest case formula_9 and the result of the least-squares method is the arithmetic mean of the input data. An example of a model in two dimensions is that of the straight line. Denoting the y-intercept as formula_10 and the slope as formula_11, the model function is given by formula_12. See linear least squares for a fully worked out example of this model. A data point may consist of more than one independent variable. For example, when fitting a plane to a set of height measurements, the plane is a function of two independent variables, "x" and "z", say. In the most general case there may be one or more independent variables and one or more dependent variables at each data point. To the right is a residual plot illustrating random fluctuations about formula_6, indicating that a linear modelformula_13 is appropriate. formula_14 is an independent, random variable.   If the residual points had some sort of a shape and were not randomly fluctuating, a linear model would not be appropriate. For example, if the residual plot had a parabolic shape as seen to the right, a parabolic model formula_15 would be appropriate for the data. The residuals for a parabolic model can be calculated via formula_16. Limitations. This regression formulation considers only observational errors in the dependent variable (but the alternative total least squares regression can account for errors in both variables). There are two rather different contexts with different implications: Solving the least squares problem. The minimum of the sum of squares is found by setting the gradient to zero. Since the model contains "m" parameters, there are "m" gradient equations: formula_17 and since formula_18, the gradient equations become formula_19 The gradient equations apply to all least squares problems. Each particular problem requires particular expressions for the model and its partial derivatives. Linear least squares. A regression model is a linear one when the model comprises a linear combination of the parameters, i.e., formula_20 where the function formula_21 is a function of formula_22. Letting formula_23 and putting the independent and dependent variables in matrices formula_24 and formula_25, respectively, we can compute the least squares in the following way. Note that formula_26 is the set of all data. formula_27 The gradient of the loss is: formula_28 Setting the gradient of the loss to zero and solving for formula_29, we get: formula_30 formula_31 Non-linear least squares. There is, in some cases, a closed-form solution to a non-linear least squares problem – but in general there is not. In the case of no closed-form solution, numerical algorithms are used to find the value of the parameters formula_32 that minimizes the objective. Most algorithms involve choosing initial values for the parameters. Then, the parameters are refined iteratively, that is, the values are obtained by successive approximation: formula_33 where a superscript "k" is an iteration number, and the vector of increments formula_34 is called the shift vector. 
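For the linear case derived above, the closed-form solution is easy to compute numerically. The following is a small NumPy sketch on made-up data (the values are illustrative only); it solves the normal equations directly and also calls numpy.linalg.lstsq, which solves the same problem by a more numerically stable SVD-based method.

```python
import numpy as np

# Fit y ≈ β0 + β1·x to a small made-up data set
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

X = np.column_stack([np.ones_like(x), x])   # design matrix with columns 1 and x

# Normal equations: solve (XᵀX) β = Xᵀ y
beta = np.linalg.solve(X.T @ X, X.T @ y)
print(beta)                                  # approximately [1.04, 0.99]

# Same solution from numpy.linalg.lstsq (SVD-based, more numerically stable)
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_lstsq)

residuals = y - X @ beta
print(np.sum(residuals**2))                  # the minimized sum of squared residuals
```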
In some commonly used algorithms, at each iteration the model may be linearized by approximation to a first-order Taylor series expansion about formula_35: formula_36 The Jacobian J is a function of constants, the independent variable "and" the parameters, so it changes from one iteration to the next. The residuals are given by formula_37 To minimize the sum of squares of formula_38, the gradient equation is set to zero and solved for formula_39: formula_40 which, on rearrangement, become "m" simultaneous linear equations, the normal equations: formula_41 The normal equations are written in matrix notation as formula_42 These are the defining equations of the Gauss–Newton algorithm. Differences between linear and nonlinear least squares. These differences must be considered whenever the solution to a nonlinear least squares problem is being sought. Example. Consider a simple example drawn from physics. A spring should obey Hooke's law which states that the extension of a spring y is proportional to the force, "F", applied to it. formula_46 constitutes the model, where "F" is the independent variable. In order to estimate the force constant, "k", we conduct a series of "n" measurements with different forces to produce a set of data, formula_47, where "yi" is a measured spring extension. Each experimental observation will contain some error, formula_48, and so we may specify an empirical model for our observations, formula_49 There are many methods we might use to estimate the unknown parameter "k". Since the "n" equations in the "m" variables in our data comprise an overdetermined system with one unknown and "n" equations, we estimate "k" using least squares. The sum of squares to be minimized is formula_50 The least squares estimate of the force constant, "k", is given by formula_51 We assume that applying force causes the spring to expand. After having derived the force constant by least squares fitting, we predict the extension from Hooke's law. Uncertainty quantification. In a least squares calculation with unit weights, or in linear regression, the variance on the "j"th parameter, denoted formula_52, is usually estimated with formula_53 formula_54 formula_55 where the true error variance "σ"2 is replaced by an estimate, the reduced chi-squared statistic, based on the minimized value of the residual sum of squares (objective function), "S". The denominator, "n" − "m", is the statistical degrees of freedom; see effective degrees of freedom for generalizations. "C" is the covariance matrix. Statistical testing. If the probability distribution of the parameters is known or an asymptotic approximation is made, confidence limits can be found. Similarly, statistical tests on the residuals can be conducted if the probability distribution of the residuals is known or assumed. We can derive the probability distribution of any linear combination of the dependent variables if the probability distribution of experimental errors is known or assumed. Inferring is easy when assuming that the errors follow a normal distribution, consequently implying that the parameter estimates and residuals will also be normally distributed conditional on the values of the independent variables. It is necessary to make assumptions about the nature of the experimental errors to test the results statistically. A common assumption is that the errors belong to a normal distribution. The central limit theorem supports the idea that this is a good approximation in many cases. 
However, suppose the errors are not normally distributed. In that case, a central limit theorem often nonetheless implies that the parameter estimates will be approximately normally distributed so long as the sample is reasonably large. For this reason, given the important property that the error mean is independent of the independent variables, the distribution of the error term is not an important issue in regression analysis. Specifically, it is not typically important whether the error term follows a normal distribution. Weighted least squares. A special case of generalized least squares called weighted least squares occurs when all the off-diagonal entries of Ω (the correlation matrix of the residuals) are null; the variances of the observations (along the covariance matrix diagonal) may still be unequal (heteroscedasticity). In simpler terms, heteroscedasticity is when the variance of formula_56 depends on the value of formula_57 which causes the residual plot to create a "fanning out" effect towards larger formula_56 values as seen in the residual plot to the right. On the other hand, homoscedasticity is assuming that the variance of formula_56 and variance of formula_14 are equal.   Relationship to principal components. The first principal component about the mean of a set of points can be represented by that line which most closely approaches the data points (as measured by squared distance of closest approach, i.e. perpendicular to the line). In contrast, linear least squares tries to minimize the distance in the formula_58 direction only. Thus, although the two use a similar error metric, linear least squares is a method that treats one dimension of the data preferentially, while PCA treats all dimensions equally. Relationship to measure theory. Notable statistician Sara van de Geer used empirical process theory and the Vapnik–Chervonenkis dimension to prove a least-squares estimator can be interpreted as a measure on the space of square-integrable functions. Regularization. Tikhonov regularization. In some contexts, a regularized version of the least squares solution may be preferable. Tikhonov regularization (or ridge regression) adds a constraint that formula_59, the squared formula_60-norm of the parameter vector, is not greater than a given value to the least squares formulation, leading to a constrained minimization problem. This is equivalent to the unconstrained minimization problem where the objective function is the residual sum of squares plus a penalty term formula_61 and formula_62 is a tuning parameter (this is the Lagrangian form of the constrained minimization problem). In a Bayesian context, this is equivalent to placing a zero-mean normally distributed prior on the parameter vector. Lasso method. An alternative regularized version of least squares is Lasso (least absolute shrinkage and selection operator), which uses the constraint that formula_63, the L1-norm of the parameter vector, is no greater than a given value. (One can show like above using Lagrange multipliers that this is equivalent to an unconstrained minimization of the least-squares penalty with formula_64 added.) In a Bayesian context, this is equivalent to placing a zero-mean Laplace prior distribution on the parameter vector. The optimization problem may be solved using quadratic programming or more general convex optimization methods, as well as by specific algorithms such as the least angle regression algorithm. 
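As a rough numerical illustration of the two regularized formulations above, the following sketch uses synthetic data and an arbitrary value of the tuning parameter. Ridge regression is computed from its closed form, while the Lasso estimate is approximated by a simple coordinate-descent loop with soft-thresholding (one common approach, distinct from the least angle regression algorithm mentioned above).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.normal(size=(n, p))
true_beta = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ true_beta + 0.1 * rng.normal(size=n)

lam = 1.0   # tuning parameter, chosen arbitrarily for the example

# Ridge / Tikhonov regularization: closed form (XᵀX + λI)⁻¹ Xᵀ y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(beta_ridge)       # coefficients are shrunk, but generally none is exactly zero

# Lasso: no closed form; a simple coordinate-descent sketch with soft-thresholding
def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

beta_lasso = np.zeros(p)
for _ in range(200):                 # fixed number of sweeps, convergence not checked
    for j in range(p):
        r = y - X @ beta_lasso + X[:, j] * beta_lasso[j]   # partial residual
        beta_lasso[j] = soft_threshold(X[:, j] @ r, lam) / (X[:, j] @ X[:, j])
print(beta_lasso)       # coefficients of weak predictors are typically exactly zero
```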
One of the prime differences between Lasso and ridge regression is that in ridge regression, as the penalty is increased, all parameters are reduced while still remaining non-zero, while in Lasso, increasing the penalty will cause more and more of the parameters to be driven to zero. This is an advantage of Lasso over ridge regression, as driving parameters to zero deselects the features from the regression. Thus, Lasso automatically selects more relevant features and discards the others, whereas Ridge regression never fully discards any features. Some feature selection techniques are developed based on the LASSO including Bolasso which bootstraps samples, and FeaLect which analyzes the regression coefficients corresponding to different values of formula_62 to score all the features. The "L"1-regularized formulation is useful in some contexts due to its tendency to prefer solutions where more parameters are zero, which gives solutions that depend on fewer variables. For this reason, the Lasso and its variants are fundamental to the field of compressed sensing. An extension of this approach is elastic net regularization. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
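As a purely illustrative complement to the spring example worked above, the following MATLAB sketch evaluates the closed-form least-squares estimate of the force constant together with the variance estimate described under "Uncertainty quantification". The force and extension values are made-up numbers chosen only for the illustration, and the script is a minimal sketch under those assumptions rather than a definitive implementation.

% Illustrative data (not from any real experiment): applied forces and measured extensions
F = [1; 2; 3; 4; 5];                    % forces F_i
y = [0.52; 0.98; 1.55; 2.02; 2.49];     % measured extensions y_i
k_hat = sum(F .* y) / sum(F .^ 2);      % closed-form least-squares estimate of k
S = sum((y - k_hat * F) .^ 2);          % minimized residual sum of squares
n = numel(y);  m = 1;                   % n observations, one parameter
sigma2_hat = S / (n - m);               % estimate of the error variance (reduced chi-squared)
var_k = sigma2_hat / sum(F .^ 2);       % var(k_hat) = sigma2_hat * C, with C = 1/sum(F_i^2)
fprintf('k = %.4f +/- %.4f\n', k_hat, sqrt(var_k));

The same estimate can also be obtained with MATLAB's backslash operator, k_hat = F \ y, which solves the overdetermined system in the least-squares sense.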
[ { "math_id": 0, "text": "(x_i,y_i)\\!" }, { "math_id": 1, "text": "x_i\\!" }, { "math_id": 2, "text": "y_i\\!" }, { "math_id": 3, "text": "f(x, \\boldsymbol \\beta)" }, { "math_id": 4, "text": "\\boldsymbol \\beta" }, { "math_id": 5, "text": "r_i = y_i - f(x_i, \\boldsymbol \\beta)." }, { "math_id": 6, "text": "r_i=0" }, { "math_id": 7, "text": "S" }, { "math_id": 8, "text": "S = \\sum_{i=1}^n r_i^2." }, { "math_id": 9, "text": "f(x_i, \\boldsymbol \\beta)= \\beta" }, { "math_id": 10, "text": "\\beta_0" }, { "math_id": 11, "text": "\\beta_1" }, { "math_id": 12, "text": "f(x,\\boldsymbol \\beta)=\\beta_0+\\beta_1 x" }, { "math_id": 13, "text": "(Y_i = \\beta_0 + \\beta_1 x_i + U_i)" }, { "math_id": 14, "text": "U_i" }, { "math_id": 15, "text": "(Y_i = \\beta_0 + \\beta_1 x_i + \\beta_2 x_i^2 + U_i)" }, { "math_id": 16, "text": "r_i=y_i-\\hat{\\beta}_0-\\hat{\\beta}_1 x_i-\\hat{\\beta}_2 x_i^2" }, { "math_id": 17, "text": "\\frac{\\partial S}{\\partial \\beta_j}=2\\sum_i r_i\\frac{\\partial r_i}{\\partial \\beta_j} = 0,\\ j=1,\\ldots,m," }, { "math_id": 18, "text": "r_i=y_i-f(x_i,\\boldsymbol \\beta)" }, { "math_id": 19, "text": "-2\\sum_i r_i\\frac{\\partial f(x_i,\\boldsymbol \\beta)}{\\partial \\beta_j}=0,\\ j=1,\\ldots,m." }, { "math_id": 20, "text": " f(x, \\boldsymbol \\beta) = \\sum_{j = 1}^m \\beta_j \\phi_j(x)," }, { "math_id": 21, "text": "\\phi_j" }, { "math_id": 22, "text": " x " }, { "math_id": 23, "text": " X_{ij}= \\phi_j(x_{i})" }, { "math_id": 24, "text": " X" }, { "math_id": 25, "text": " Y" }, { "math_id": 26, "text": " D" }, { "math_id": 27, "text": " L(D, \\boldsymbol{\\beta}) = \\left\\|Y - X\\boldsymbol{\\beta} \\right\\|^2\n= (Y - X\\boldsymbol{\\beta})^\\mathsf{T} (Y - X\\boldsymbol{\\beta})\n= Y^\\mathsf{T}Y- Y^\\mathsf{T}X\\boldsymbol{\\beta}- \\boldsymbol{\\beta}^\\mathsf{T}X^\\mathsf{T}Y + \\boldsymbol{\\beta}^\\mathsf{T}X^\\mathsf{T}X\\boldsymbol{\\beta}" }, { "math_id": 28, "text": "\\frac{\\partial L(D, \\boldsymbol{\\beta})}{\\partial \\boldsymbol{\\beta}}\n= \\frac{\\partial \\left(Y^\\mathsf{T}Y- Y^\\mathsf{T}X\\boldsymbol{\\beta}- \\boldsymbol{\\beta}^\\mathsf{T}X^\\mathsf{T}Y+\\boldsymbol{\\beta}^\\mathsf{T}X^\\mathsf{T}X\\boldsymbol{\\beta}\\right)}{\\partial \\boldsymbol{\\beta}}\n= -2X^\\mathsf{T}Y + 2X^\\mathsf{T}X\\boldsymbol{\\beta}" }, { "math_id": 29, "text": "\\boldsymbol{\\beta}" }, { "math_id": 30, "text": "-2X^\\mathsf{T}Y + 2X^\\mathsf{T}X\\boldsymbol{\\beta} = 0\n\\Rightarrow X^\\mathsf{T}Y = X^\\mathsf{T}X\\boldsymbol{\\beta}" }, { "math_id": 31, "text": "\\boldsymbol{\\hat{\\beta}} = \\left(X^\\mathsf{T}X\\right)^{-1} X^\\mathsf{T}Y" }, { "math_id": 32, "text": "\\beta" }, { "math_id": 33, "text": "{\\beta_j}^{k+1} = {\\beta_j}^k+\\Delta \\beta_j," }, { "math_id": 34, "text": "\\Delta \\beta_j" }, { "math_id": 35, "text": " \\boldsymbol \\beta^k" }, { "math_id": 36, "text": "\\begin{align}\nf(x_i,\\boldsymbol \\beta)\n&= f^k(x_i,\\boldsymbol \\beta) +\\sum_j \\frac{\\partial f(x_i,\\boldsymbol \\beta)}{\\partial \\beta_j} \\left(\\beta_j-{\\beta_j}^k \\right) \\\\[1ex]\n&= f^k(x_i,\\boldsymbol \\beta) +\\sum_j J_{ij} \\,\\Delta\\beta_j.\n\\end{align}" }, { "math_id": 37, "text": "r_i = y_i - f^k(x_i, \\boldsymbol \\beta)- \\sum_{k=1}^{m} J_{ik}\\,\\Delta\\beta_k = \\Delta y_i- \\sum_{j=1}^m J_{ij}\\,\\Delta\\beta_j." 
}, { "math_id": 38, "text": "r_i" }, { "math_id": 39, "text": " \\Delta \\beta_j" }, { "math_id": 40, "text": "-2\\sum_{i=1}^n J_{ij} \\left( \\Delta y_i-\\sum_{k=1}^m J_{ik} \\, \\Delta \\beta_k \\right) = 0," }, { "math_id": 41, "text": "\\sum_{i=1}^{n}\\sum_{k=1}^m J_{ij} J_{ik} \\, \\Delta \\beta_k=\\sum_{i=1}^n J_{ij} \\, \\Delta y_i \\qquad (j=1,\\ldots,m)." }, { "math_id": 42, "text": "\\left(\\mathbf{J}^\\mathsf{T} \\mathbf{J}\\right) \\Delta \\boldsymbol \\beta = \\mathbf{J}^\\mathsf{T}\\Delta \\mathbf{y}." }, { "math_id": 43, "text": "f = X_{i1}\\beta_1 + X_{i2}\\beta_2 +\\cdots" }, { "math_id": 44, "text": "\\beta^2, e^{\\beta x}" }, { "math_id": 45, "text": "\\partial f / \\partial \\beta_j" }, { "math_id": 46, "text": "y = f(F,k) = k F" }, { "math_id": 47, "text": "(F_i, y_i),\\ i=1,\\dots,n\\!" }, { "math_id": 48, "text": "\\varepsilon" }, { "math_id": 49, "text": " y_i = kF_i + \\varepsilon_i. " }, { "math_id": 50, "text": " S = \\sum_{i=1}^n \\left(y_i - k F_i\\right)^2. " }, { "math_id": 51, "text": "\\hat k = \\frac{\\sum_i F_i y_i}{\\sum_i F_i^2}." }, { "math_id": 52, "text": "\\operatorname{var}(\\hat{\\beta}_j)" }, { "math_id": 53, "text": "\\operatorname{var}(\\hat{\\beta}_j)= \\sigma^2\\left(\\left[X^\\mathsf{T}X\\right]^{-1}\\right)_{jj} \\approx \\hat{\\sigma}^2 C_{jj}," }, { "math_id": 54, "text": "\\hat{\\sigma}^2 \\approx \\frac S {n-m} " }, { "math_id": 55, "text": "C = \\left(X^\\mathsf{T}X\\right)^{-1}," }, { "math_id": 56, "text": "Y_i" }, { "math_id": 57, "text": "x_i" }, { "math_id": 58, "text": "y" }, { "math_id": 59, "text": "\\left\\|\\beta\\right\\|_2^2" }, { "math_id": 60, "text": "\\ell_2" }, { "math_id": 61, "text": "\\alpha \\left\\|\\beta\\right\\|_2^2" }, { "math_id": 62, "text": "\\alpha" }, { "math_id": 63, "text": "\\|\\beta\\|_1" }, { "math_id": 64, "text": "\\alpha\\|\\beta\\|_1" } ]
https://en.wikipedia.org/wiki?curid=82359
82361
Gram–Schmidt process
Orthonormalization of a set of vectors In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process or Gram-Schmidt algorithm is a way of finding a set of two or more vectors that are perpendicular to each other. By technical definition, it is a method of constructing an orthonormal basis from a set of vectors in an inner product space, most commonly the Euclidean space formula_0 equipped with the standard inner product. The Gram–Schmidt process takes a finite, linearly independent set of vectors formula_1 for "k" ≤ "n" and generates an orthogonal set formula_2 that spans the same formula_3-dimensional subspace of formula_0 as formula_4. The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but Pierre-Simon Laplace had been familiar with it before Gram and Schmidt. In the theory of Lie group decompositions, it is generalized by the Iwasawa decomposition. The application of the Gram–Schmidt process to the column vectors of a full column rank matrix yields the QR decomposition (it is decomposed into an orthogonal and a triangular matrix). The Gram–Schmidt process. The vector projection of a vector formula_5 on a nonzero vector formula_6 is defined as formula_7 where formula_8 denotes the inner product of the vectors formula_6 and formula_5. This means that formula_9 is the orthogonal projection of formula_5 onto the line spanned by formula_6. If formula_6 is the zero vector, then formula_9 is defined as the zero vector. Given formula_3 vectors formula_10 the Gram–Schmidt process defines the vectors formula_11 as follows: formula_12 The sequence formula_11 is the required system of orthogonal vectors, and the normalized vectors formula_13 form an orthonormal set. The calculation of the sequence formula_11 is known as "Gram–Schmidt orthogonalization", and the calculation of the sequence formula_13 is known as "Gram–Schmidt orthonormalization". To check that these formulas yield an orthogonal sequence, first compute formula_14 by substituting the above formula for formula_15: we get zero. Then use this to compute formula_16 again by substituting the formula for formula_17: we get zero. For arbitrary formula_3 the proof is accomplished by mathematical induction. Geometrically, this method proceeds as follows: to compute formula_18, it projects formula_19 orthogonally onto the subspace formula_20 generated by formula_21, which is the same as the subspace generated by formula_22. The vector formula_18 is then defined to be the difference between formula_19 and this projection, guaranteed to be orthogonal to all of the vectors in the subspace formula_20. The Gram–Schmidt process also applies to a linearly independent countably infinite sequence {v"i"}"i". The result is an orthogonal (or orthonormal) sequence {u"i"}"i" such that for natural number n: the algebraic span of formula_23 is the same as that of formula_24. If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs the 0 vector on the formula_25th step, assuming that formula_19 is a linear combination of formula_22. If an orthonormal basis is to be produced, then the algorithm should test for zero vectors in the output and discard them because no multiple of a zero vector can have a length of 1. The number of vectors output by the algorithm will then be the dimension of the space spanned by the original inputs. 
A variant of the Gram–Schmidt process using transfinite recursion applied to a (possibly uncountably) infinite sequence of vectors formula_26 yields a set of orthonormal vectors formula_27 with formula_28 such that for any formula_29, the completion of the span of formula_30 is the same as that of formula_31. In particular, when applied to a (algebraic) basis of a Hilbert space (or, more generally, a basis of any dense subspace), it yields a (functional-analytic) orthonormal basis. Note that in the general case often the strict inequality formula_32 holds, even if the starting set was linearly independent, and the span of formula_27 need not be a subspace of the span of formula_26 (rather, it's a subspace of its completion). Example. Euclidean space. Consider the following set of vectors in formula_33 (with the conventional inner product) formula_34 Now, perform Gram–Schmidt, to obtain an orthogonal set of vectors: formula_35 formula_36 We check that the vectors formula_37 and formula_15 are indeed orthogonal: formula_38 noting that if the dot product of two vectors is 0 then they are orthogonal. For non-zero vectors, we can then normalize the vectors by dividing out their sizes as shown above: formula_39 formula_40 Properties. Denote by formula_41 the result of applying the Gram–Schmidt process to a collection of vectors formula_42. This yields a map formula_43. It has the following properties: Let formula_45 be orthogonal (with respect to the given inner product). Then we have formula_46 Further, a parametrized version of the Gram–Schmidt process yields a (strong) deformation retraction of the general linear group formula_47 onto the orthogonal group formula_48. Numerical stability. When this process is implemented on a computer, the vectors formula_49 are often not quite orthogonal, due to rounding errors. For the Gram–Schmidt process as described above (sometimes referred to as "classical Gram–Schmidt") this loss of orthogonality is particularly bad; therefore, it is said that the (classical) Gram–Schmidt process is numerically unstable. The Gram–Schmidt process can be stabilized by a small modification; this version is sometimes referred to as modified Gram-Schmidt or MGS. This approach gives the same result as the original formula in exact arithmetic and introduces smaller errors in finite-precision arithmetic. Instead of computing the vector u"k" as formula_50 it is computed as formula_51 This method is used in the previous animation, when the intermediate formula_52 vector is used when orthogonalizing the blue vector formula_53. Here is another description of the modified algorithm. Given the vectors formula_54, in our first step we produce vectors formula_55by removing components along the direction of formula_56. In formulas, formula_57. After this step we already have two of our desired orthogonal vectors formula_58, namely formula_59, but we also made formula_60 already orthogonal to formula_37. Next, we orthogonalize those remaining vectors against formula_61. This means we compute formula_62 by subtraction formula_63. Now we have stored the vectors formula_64 where the first three vectors are already formula_65 and the remaining vectors are already orthogonal to formula_66. As should be clear now, the next step orthogonalizes formula_67 against formula_68. Proceeding in this manner we find the full set of orthogonal vectors formula_58. If orthonormal vectors are desired, then we normalize as we go, so that the denominators in the subtraction formulas turn into ones. 
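To make the description above concrete, here is a minimal MATLAB sketch of the modified Gram–Schmidt (MGS) variant; the function name modified_gramschmidt is chosen only for this illustration and is not part of any standard library. It assumes the columns of V are linearly independent and mirrors the conventions of the classical listing given in the Algorithm section below.

function U = modified_gramschmidt(V)
    % Modified Gram-Schmidt: columns of V are replaced by orthonormal columns of U.
    [n, k] = size(V);
    U = V;
    for i = 1:k
        U(:,i) = U(:,i) / norm(U(:,i));              % normalize the current vector
        for j = i+1:k
            % immediately remove the component of each remaining vector along U(:,i)
            U(:,j) = U(:,j) - (U(:,i)' * U(:,j)) * U(:,i);
        end
    end
end

In exact arithmetic this gives the same result as the classical process, but in floating-point arithmetic the immediate re-orthogonalization of the remaining vectors loses much less orthogonality.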
Algorithm. The following MATLAB algorithm implements classical Gram–Schmidt orthonormalization. The vectors v1, ..., v"k" (columns of matrix codice_0, so that codice_1 is the formula_69th vector) are replaced by orthonormal vectors (columns of codice_2) which span the same subspace. function U = gramschmidt(V) [n, k] = size(V); U = zeros(n,k); U(:,1) = V(:,1) / norm(V(:,1)); for i = 2:k U(:,i) = V(:,i); for j = 1:i-1 U(:,i) = U(:,i) - (U(:,j)'*U(:,i)) * U(:,j); end U(:,i) = U(:,i) / norm(U(:,i)); end end The cost of this algorithm is asymptotically O("nk"2) floating point operations, where n is the dimensionality of the vectors. Via Gaussian elimination. If the rows {v1, ..., v"k"} are written as a matrix formula_70, then applying Gaussian elimination to the augmented matrix formula_71 will produce the orthogonalized vectors in place of formula_70. However the matrix formula_72 must be brought to row echelon form, using only the row operation of adding a scalar multiple of one row to another. For example, taking formula_73 as above, we have formula_74 And reducing this to row echelon form produces formula_75 The normalized vectors are then formula_76 formula_77 as in the example above. Determinant formula. The result of the Gram–Schmidt process may be expressed in a non-recursive formula using determinants. formula_78 formula_79 where formula_80 and, for formula_81, formula_82 is the Gram determinant formula_83 Note that the expression for formula_49 is a "formal" determinant, i.e. the matrix contains both scalars and vectors; the meaning of this expression is defined to be the result of a cofactor expansion along the row of vectors. The determinant formula for the Gram-Schmidt is computationally (exponentially) slower than the recursive algorithms described above; it is mainly of theoretical interest. Expressed using geometric algebra. Expressed using notation used in geometric algebra, the unnormalized results of the Gram–Schmidt process can be expressed as formula_84 which is equivalent to the expression using the formula_85 operator defined above. The results can equivalently be expressed as formula_86 which is closely related to the expression using determinants above. Alternatives. Other orthogonalization algorithms use Householder transformations or Givens rotations. The algorithms using Householder transformations are more stable than the stabilized Gram–Schmidt process. On the other hand, the Gram–Schmidt process produces the formula_69th orthogonalized vector after the formula_69th iteration, while orthogonalization using Householder reflections produces all the vectors only at the end. This makes only the Gram–Schmidt process applicable for iterative methods like the Arnoldi iteration. Yet another alternative is motivated by the use of Cholesky decomposition for inverting the matrix of the normal equations in linear least squares. Let formula_87 be a full column rank matrix, whose columns need to be orthogonalized. The matrix formula_88 is Hermitian and positive definite, so it can be written as formula_89 using the Cholesky decomposition. The lower triangular matrix formula_90 with strictly positive diagonal entries is invertible. Then columns of the matrix formula_91 are orthonormal and span the same subspace as the columns of the original matrix formula_87. The explicit use of the product formula_88 makes the algorithm unstable, especially if the product's condition number is large. 
Nevertheless, this algorithm is used in practice and implemented in some software packages because of its high efficiency and simplicity. In quantum mechanics there are several orthogonalization schemes with characteristics better suited for certain applications than original Gram–Schmidt. Even so, it remains a popular and effective algorithm for even the largest electronic structure calculations. Run-time complexity. Gram–Schmidt orthogonalization can be done in strongly-polynomial time. The run-time analysis is similar to that of Gaussian elimination.
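As a usage sketch, the classical gramschmidt listing from the Algorithm section can be applied to the two vectors of the Euclidean-space example above; the expected output follows from the hand computation already shown, up to rounding.

V = [3 2;
     1 2];              % columns are the example vectors (3,1) and (2,2)
U = gramschmidt(V);     % classical listing from the Algorithm section above
disp(U);                % expect columns [3;1]/sqrt(10) and [-1;3]/sqrt(10)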
[ { "math_id": 0, "text": "\\mathbb{R}^n" }, { "math_id": 1, "text": "S = \\{ \\mathbf{v}_1, \\ldots , \\mathbf{v}_k \\}" }, { "math_id": 2, "text": "S' = \\{ \\mathbf{u}_1 , \\ldots , \\mathbf{u}_k \\}" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "S" }, { "math_id": 5, "text": "\\mathbf v" }, { "math_id": 6, "text": "\\mathbf u" }, { "math_id": 7, "text": "\\operatorname{proj}_{\\mathbf u} (\\mathbf{v}) = \\frac{\\langle \\mathbf{v}, \\mathbf{u}\\rangle}{\\langle \\mathbf{u}, \\mathbf{u}\\rangle} \\,\\mathbf{u} , " }, { "math_id": 8, "text": "\\langle \\mathbf{v}, \\mathbf{u}\\rangle" }, { "math_id": 9, "text": "\\operatorname{proj}_{\\mathbf u} (\\mathbf{v})" }, { "math_id": 10, "text": "\\mathbf{v}_1, \\ldots, \\mathbf{v}_k" }, { "math_id": 11, "text": "\\mathbf{u}_1, \\ldots, \\mathbf{u}_k" }, { "math_id": 12, "text": "\\begin{align}\n\\mathbf{u}_1 & = \\mathbf{v}_1, & \\!\\mathbf{e}_1 & = \\frac{\\mathbf{u}_1}{\\|\\mathbf{u}_1\\|} \\\\\n\\mathbf{u}_2 & = \\mathbf{v}_2-\\operatorname{proj}_{\\mathbf{u}_1} (\\mathbf{v}_2),\n& \\!\\mathbf{e}_2 & = \\frac{\\mathbf{u}_2}{\\|\\mathbf{u}_2\\|} \\\\\n\\mathbf{u}_3 & = \\mathbf{v}_3-\\operatorname{proj}_{\\mathbf{u}_1} (\\mathbf{v}_3) - \\operatorname{proj}_{\\mathbf{u}_2} (\\mathbf{v}_3),\n& \\!\\mathbf{e}_3 & = \\frac{\\mathbf{u}_3 }{\\|\\mathbf{u}_3\\|} \\\\\n\\mathbf{u}_4 & = \\mathbf{v}_4-\\operatorname{proj}_{\\mathbf{u}_1} (\\mathbf{v}_4)-\\operatorname{proj}_{\\mathbf{u}_2} (\\mathbf{v}_4)-\\operatorname{proj}_{\\mathbf{u}_3} (\\mathbf{v}_4),\n& \\!\\mathbf{e}_4 & = {\\mathbf{u}_4 \\over \\|\\mathbf{u}_4\\|} \\\\\n& {}\\ \\ \\vdots & & {}\\ \\ \\vdots \\\\\n\\mathbf{u}_k & = \\mathbf{v}_k - \\sum_{j=1}^{k-1}\\operatorname{proj}_{\\mathbf{u}_j} (\\mathbf{v}_k),\n& \\!\\mathbf{e}_k & = \\frac{\\mathbf{u}_k}{\\|\\mathbf{u}_k\\|}.\n\\end{align}" }, { "math_id": 13, "text": "\\mathbf{e}_1, \\ldots, \\mathbf{e}_k" }, { "math_id": 14, "text": "\\langle \\mathbf{u}_1, \\mathbf{u}_2 \\rangle" }, { "math_id": 15, "text": "\\mathbf{u}_2" }, { "math_id": 16, "text": "\\langle \\mathbf{u}_1, \\mathbf{u}_3 \\rangle" }, { "math_id": 17, "text": "\\mathbf{u}_3" }, { "math_id": 18, "text": "\\mathbf{u}_i" }, { "math_id": 19, "text": "\\mathbf{v}_i" }, { "math_id": 20, "text": "U" }, { "math_id": 21, "text": "\\mathbf{u}_1, \\ldots, \\mathbf{u}_{i-1}" }, { "math_id": 22, "text": "\\mathbf{v}_1, \\ldots, \\mathbf{v}_{i-1}" }, { "math_id": 23, "text": "\\mathbf{v}_1, \\ldots, \\mathbf{v}_{n}" }, { "math_id": 24, "text": "\\mathbf{u}_1, \\ldots, \\mathbf{u}_{n}" }, { "math_id": 25, "text": "i" }, { "math_id": 26, "text": "(v_\\alpha)_{\\alpha<\\lambda}" }, { "math_id": 27, "text": "(u_\\alpha)_{\\alpha<\\kappa}" }, { "math_id": 28, "text": "\\kappa\\leq\\lambda" }, { "math_id": 29, "text": "\\alpha\\leq\\lambda" }, { "math_id": 30, "text": "\\{ u_\\beta : \\beta<\\min(\\alpha,\\kappa) \\}" }, { "math_id": 31, "text": "\\{ v_\\beta : \\beta < \\alpha\\}" }, { "math_id": 32, "text": "\\kappa < \\lambda" }, { "math_id": 33, "text": "\\mathbb{R}^2" }, { "math_id": 34, "text": "S = \\left\\{\\mathbf{v}_1=\\begin{bmatrix} 3 \\\\ 1\\end{bmatrix}, \\mathbf{v}_2=\\begin{bmatrix}2 \\\\2\\end{bmatrix}\\right\\}." 
}, { "math_id": 35, "text": "\\mathbf{u}_1=\\mathbf{v}_1=\\begin{bmatrix}3\\\\1\\end{bmatrix}" }, { "math_id": 36, "text": " \\mathbf{u}_2 = \\mathbf{v}_2 - \\operatorname{proj}_{\\mathbf{u}_1} (\\mathbf{v}_2)\n= \\begin{bmatrix}2\\\\2\\end{bmatrix} - \\operatorname{proj}_{\\left[\\begin{smallmatrix}3 \\\\ 1\\end{smallmatrix}\\right]} {\\begin{bmatrix}2\\\\2\\end{bmatrix}}\n= \\begin{bmatrix}2\\\\2\\end{bmatrix} - \\frac{8}{10} \\begin{bmatrix} 3 \\\\1 \\end{bmatrix}\n= \\begin{bmatrix} -2/5 \\\\6/5 \\end{bmatrix}. " }, { "math_id": 37, "text": "\\mathbf{u}_1" }, { "math_id": 38, "text": "\\langle\\mathbf{u}_1,\\mathbf{u}_2\\rangle = \\left\\langle \\begin{bmatrix}3\\\\1\\end{bmatrix}, \\begin{bmatrix} -2/5 \\\\ 6/5 \\end{bmatrix} \\right\\rangle = -\\frac{6}{5} + \\frac{6}{5} = 0," }, { "math_id": 39, "text": "\\mathbf{e}_1 = \\frac{1}{\\sqrt {10}}\\begin{bmatrix}3\\\\1\\end{bmatrix}" }, { "math_id": 40, "text": "\\mathbf{e}_2 = \\frac{1}{\\sqrt{40 \\over 25}} \\begin{bmatrix}-2/5\\\\6/5\\end{bmatrix}\n= \\frac{1}{\\sqrt{10}} \\begin{bmatrix}-1\\\\3\\end{bmatrix}. " }, { "math_id": 41, "text": " \\operatorname{GS}(\\mathbf{v}_1, \\dots, \\mathbf{v}_k) " }, { "math_id": 42, "text": " \\mathbf{v}_1, \\dots, \\mathbf{v}_k " }, { "math_id": 43, "text": " \\operatorname{GS} \\colon (\\R^n)^{k} \\to (\\R^n)^{k} " }, { "math_id": 44, "text": " \\operatorname{or}(\\mathbf{v}_1,\\dots,\\mathbf{v}_k) = \\operatorname{or}(\\operatorname{GS}(\\mathbf{v}_1,\\dots,\\mathbf{v}_k)) " }, { "math_id": 45, "text": " g \\colon \\R^n \\to \\R^n " }, { "math_id": 46, "text": " \\operatorname{GS}(g(\\mathbf{v}_1),\\dots,g(\\mathbf{v}_k)) = \\left( g(\\operatorname{GS}(\\mathbf{v}_1,\\dots,\\mathbf{v}_k)_1),\\dots,g(\\operatorname{GS}(\\mathbf{v}_1,\\dots,\\mathbf{v}_k)_k) \\right) " }, { "math_id": 47, "text": " \\mathrm{GL}(\\R^n)" }, { "math_id": 48, "text": " O(\\R^n)" }, { "math_id": 49, "text": "\\mathbf{u}_k" }, { "math_id": 50, "text": " \\mathbf{u}_k = \\mathbf{v}_k - \\operatorname{proj}_{\\mathbf{u}_1} (\\mathbf{v}_k) - \\operatorname{proj}_{\\mathbf{u}_2} (\\mathbf{v}_k) - \\cdots - \\operatorname{proj}_{\\mathbf{u}_{k-1}} (\\mathbf{v}_k), " }, { "math_id": 51, "text": " \\begin{align}\n\\mathbf{u}_k^{(1)} &= \\mathbf{v}_k - \\operatorname{proj}_{\\mathbf{u}_1} (\\mathbf{v}_k), \\\\\n\\mathbf{u}_k^{(2)} &= \\mathbf{u}_k^{(1)} - \\operatorname{proj}_{\\mathbf{u}_2} \\left(\\mathbf{u}_k^{(1)}\\right), \\\\\n& \\;\\; \\vdots \\\\\n\\mathbf{u}_k^{(k-2)} &= \\mathbf{u}_k^{(k-3)} - \\operatorname{proj}_{\\mathbf{u}_{k-2}} \\left(\\mathbf{u}_k^{(k-3)}\\right), \\\\\n\\mathbf{u}_k^{(k-1)} &= \\mathbf{u}_k^{(k-2)} - \\operatorname{proj}_{\\mathbf{u}_{k-1}} \\left(\\mathbf{u}_k^{(k-2)}\\right), \\\\\n\\mathbf{e}_k &= \\frac{\\mathbf{u}_k^{(k-1)}}{\\left\\|\\mathbf{u}_k^{(k-1)}\\right\\|}\n\\end{align} " }, { "math_id": 52, "text": "\\mathbf{v}'_3" }, { "math_id": 53, "text": "\\mathbf{v}_3" }, { "math_id": 54, "text": "\\mathbf{v}_1, \\mathbf{v}_2, \\dots, \\mathbf{v}_n" }, { "math_id": 55, "text": "\\mathbf{v}_1, \\mathbf{v}_2^{(1)}, \\dots, \\mathbf{v}_n^{(1)}" }, { "math_id": 56, "text": "\\mathbf{v}_1" }, { "math_id": 57, "text": "\\mathbf{v}_k^{(1)} := \\mathbf{v}_k - \\frac{\\langle \\mathbf{v}_k, \\mathbf{v}_1 \\rangle}{\\langle \\mathbf{v}_1, \\mathbf{v}_1 \\rangle} \\mathbf{v}_1" }, { "math_id": 58, "text": "\\mathbf{u}_1, \\dots, \\mathbf{u}_n" }, { "math_id": 59, "text": "\\mathbf{u}_1 = \\mathbf{v}_1, \\mathbf{u}_2 = \\mathbf{v}_2^{(1)}" }, { "math_id": 60, "text": "\\mathbf{v}_3^{(1)}, \\dots, 
\\mathbf{v}_n^{(1)}" }, { "math_id": 61, "text": "\\mathbf{u}_2 = \\mathbf{v}_2^{(1)}" }, { "math_id": 62, "text": "\\mathbf{v}_3^{(2)}, \\mathbf{v}_4^{(2)}, \\dots, \\mathbf{v}_n^{(2)}" }, { "math_id": 63, "text": "\\mathbf{v}_k^{(2)} := \\mathbf{v}_k^{(1)} - \\frac{\\langle \\mathbf{v}_k^{(1)}, \\mathbf{u}_2 \\rangle}{\\langle \\mathbf{u}_2, \\mathbf{u}_2 \\rangle} \\mathbf{u}_2" }, { "math_id": 64, "text": "\\mathbf{v}_1, \\mathbf{v}_2^{(1)}, \\mathbf{v}_3^{(2)}, \\mathbf{v}_4^{(2)}, \\dots, \\mathbf{v}_n^{(2)}" }, { "math_id": 65, "text": "\\mathbf{u}_1, \\mathbf{u}_2, \\mathbf{u}_3" }, { "math_id": 66, "text": "\\mathbf{u}_1, \\mathbf{u}_2" }, { "math_id": 67, "text": "\\mathbf{v}_4^{(2)}, \\dots, \\mathbf{v}_n^{(2)}" }, { "math_id": 68, "text": "\\mathbf{u}_3 = \\mathbf{v}_3^{(2)}" }, { "math_id": 69, "text": "j" }, { "math_id": 70, "text": "A" }, { "math_id": 71, "text": "\\left[A A^\\mathsf{T} | A \\right]" }, { "math_id": 72, "text": "A A^\\mathsf{T}" }, { "math_id": 73, "text": "\\mathbf{v}_1 = \\begin{bmatrix} 3 & 1\\end{bmatrix}, \\mathbf{v}_2=\\begin{bmatrix}2 & 2\\end{bmatrix}" }, { "math_id": 74, "text": "\\left[A A^\\mathsf{T} | A \\right] = \\left[\\begin{array}{rr|rr} 10 & 8 & 3 & 1 \\\\ 8 & 8 & 2 & 2\\end{array}\\right]" }, { "math_id": 75, "text": "\\left[\\begin{array}{rr|rr} 1 & .8 & .3 & .1 \\\\ 0 & 1 & -.25 & .75\\end{array}\\right]" }, { "math_id": 76, "text": "\\mathbf{e}_1 = \\frac{1}{\\sqrt {.3^2+.1^2}}\\begin{bmatrix}.3 & .1\\end{bmatrix} = \\frac{1}{\\sqrt{10}} \\begin{bmatrix}3 & 1\\end{bmatrix}" }, { "math_id": 77, "text": "\\mathbf{e}_2 = \\frac{1}{\\sqrt{.25^2+.75^2}} \\begin{bmatrix}-.25 & .75\\end{bmatrix} = \\frac{1}{\\sqrt{10}} \\begin{bmatrix}-1 & 3\\end{bmatrix}, " }, { "math_id": 78, "text": " \\mathbf{e}_j = \\frac{1}{\\sqrt{D_{j-1} D_j}} \\begin{vmatrix}\n\\langle \\mathbf{v}_1, \\mathbf{v}_1 \\rangle & \\langle \\mathbf{v}_2, \\mathbf{v}_1 \\rangle & \\cdots & \\langle \\mathbf{v}_j, \\mathbf{v}_1 \\rangle \\\\\n\\langle \\mathbf{v}_1, \\mathbf{v}_2 \\rangle & \\langle \\mathbf{v}_2, \\mathbf{v}_2 \\rangle & \\cdots & \\langle \\mathbf{v}_j, \\mathbf{v}_2 \\rangle \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\langle \\mathbf{v}_1, \\mathbf{v}_{j-1} \\rangle & \\langle \\mathbf{v}_2, \\mathbf{v}_{j-1} \\rangle & \\cdots & \\langle \\mathbf{v}_j, \\mathbf{v}_{j-1} \\rangle \\\\\n\\mathbf{v}_1 & \\mathbf{v}_2 & \\cdots & \\mathbf{v}_j\n\\end{vmatrix} " }, { "math_id": 79, "text": " \\mathbf{u}_j = \\frac{1}{D_{j-1} } \\begin{vmatrix}\n\\langle \\mathbf{v}_1, \\mathbf{v}_1 \\rangle & \\langle \\mathbf{v}_2, \\mathbf{v}_1 \\rangle & \\cdots & \\langle \\mathbf{v}_j, \\mathbf{v}_1 \\rangle \\\\\n\\langle \\mathbf{v}_1, \\mathbf{v}_2 \\rangle & \\langle \\mathbf{v}_2, \\mathbf{v}_2 \\rangle & \\cdots & \\langle \\mathbf{v}_j, \\mathbf{v}_2 \\rangle \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\langle \\mathbf{v}_1, \\mathbf{v}_{j-1} \\rangle & \\langle \\mathbf{v}_2, \\mathbf{v}_{j-1} \\rangle & \\cdots & \\langle \\mathbf{v}_j, \\mathbf{v}_{j-1} \\rangle \\\\\n\\mathbf{v}_1 & \\mathbf{v}_2 & \\cdots & \\mathbf{v}_j\n\\end{vmatrix} " }, { "math_id": 80, "text": "D_0 = 1" }, { "math_id": 81, "text": "j \\ge 1" }, { "math_id": 82, "text": "D_j" }, { "math_id": 83, "text": " D_j = \\begin{vmatrix}\n\\langle \\mathbf{v}_1, \\mathbf{v}_1 \\rangle & \\langle \\mathbf{v}_2, \\mathbf{v}_1 \\rangle & \\cdots & \\langle \\mathbf{v}_j, \\mathbf{v}_1 \\rangle \\\\\n\\langle \\mathbf{v}_1, \\mathbf{v}_2 \\rangle & \\langle \\mathbf{v}_2, \\mathbf{v}_2 
\\rangle & \\cdots & \\langle \\mathbf{v}_j, \\mathbf{v}_2 \\rangle \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\langle \\mathbf{v}_1, \\mathbf{v}_j \\rangle & \\langle \\mathbf{v}_2, \\mathbf{v}_j \\rangle & \\cdots & \\langle \\mathbf{v}_j, \\mathbf{v}_j \\rangle\n\\end{vmatrix}. " }, { "math_id": 84, "text": "\\mathbf{u}_k = \\mathbf{v}_k - \\sum_{j=1}^{k-1} (\\mathbf{v}_k \\cdot \\mathbf{u}_j)\\mathbf{u}_j^{-1}\\ ," }, { "math_id": 85, "text": "\\operatorname{proj}" }, { "math_id": 86, "text": "\\mathbf{u}_k = \\mathbf{v}_{k}\\wedge\\mathbf{v}_{k-1}\\wedge\\cdot\\cdot\\cdot\\wedge\\mathbf{v}_{1}(\\mathbf{v}_{k-1}\\wedge\\cdot\\cdot\\cdot\\wedge\\mathbf{v}_{1})^{-1}," }, { "math_id": 87, "text": "V" }, { "math_id": 88, "text": "V^* V " }, { "math_id": 89, "text": " V^* V = L L^*, " }, { "math_id": 90, "text": "L " }, { "math_id": 91, "text": "U = V\\left(L^{-1}\\right)^*" } ]
https://en.wikipedia.org/wiki?curid=82361
8236444
Sommerfeld–Kossel displacement law
The Sommerfeld–Kossel displacement law states that the first spark (singly ionized) spectrum of an element is similar in all details to the arc (neutral) spectrum of the element preceding it in the periodic table. Likewise, the second (doubly ionized) spark spectrum of an element is similar in all details to the first (singly ionized) spark spectrum of the element preceding it, or to the arc (neutral) spectrum of the element with atomic number two less, and so forth. Hence, the spectra of C I (neutral carbon), N II (singly ionized nitrogen), and O III (doubly ionized oxygen) atoms are similar, apart from shifts of the spectra to shorter wavelengths. C I, N II, and O III all have the same number of electrons, six, and the same ground-state electron configuration: formula_0 formula_1 formula_2 formula_3formula_4. The law was discovered by and named after Arnold Sommerfeld and Walther Kossel, who set it forth in a paper submitted to "Verhandlungen der Deutschen Physikalischen Gesellschaft" in early 1919. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "1s^2\\," }, { "math_id": 1, "text": "2s^2\\," }, { "math_id": 2, "text": "2p^2\\," }, { "math_id": 3, "text": "^3" }, { "math_id": 4, "text": "P_0\\," } ]
https://en.wikipedia.org/wiki?curid=8236444
82365
Spheroid
Surface formed by rotating an ellipse A spheroid, also known as an ellipsoid of revolution or rotational ellipsoid, is a quadric surface obtained by rotating an ellipse about one of its principal axes; in other words, an ellipsoid with two equal semi-diameters. A spheroid has circular symmetry. If the ellipse is rotated about its major axis, the result is a prolate spheroid, elongated like a rugby ball. The American football is similar but has a pointier end than a spheroid could. If the ellipse is rotated about its minor axis, the result is an oblate spheroid, flattened like a lentil or a plain M&amp;M. If the generating ellipse is a circle, the result is a sphere. Due to the combined effects of gravity and rotation, the figure of the Earth (and of all planets) is not quite a sphere, but instead is slightly flattened in the direction of its axis of rotation. For that reason, in cartography and geodesy the Earth is often approximated by an oblate spheroid, known as the reference ellipsoid, instead of a sphere. The current World Geodetic System model uses a spheroid whose radius is at the Equator and at the poles. The word "spheroid" originally meant "an approximately spherical body", admitting irregularities even beyond the bi- or tri-axial ellipsoidal shape; that is how the term is used in some older papers on geodesy (for example, referring to truncated spherical harmonic expansions of the Earth's gravity geopotential model). Equation. The equation of a tri-axial ellipsoid centred at the origin with semi-axes a, b and c aligned along the coordinate axes is formula_0 The equation of a spheroid with z as the symmetry axis is given by setting "a" "b": formula_1 The semi-axis a is the equatorial radius of the spheroid, and c is the distance from centre to pole along the symmetry axis. There are two possible cases: The case of "a" "c" reduces to a sphere. Properties. Area. An oblate spheroid with "c" &lt; "a" has surface area formula_2 The oblate spheroid is generated by rotation about the z-axis of an ellipse with semi-major axis a and semi-minor axis c, therefore e may be identified as the eccentricity. (See ellipse.) A prolate spheroid with "c" &gt; "a" has surface area formula_3 The prolate spheroid is generated by rotation about the z-axis of an ellipse with semi-major axis c and semi-minor axis a; therefore, e may again be identified as the eccentricity. (See ellipse.) These formulas are identical in the sense that the formula for "S"oblate can be used to calculate the surface area of a prolate spheroid and vice versa. However, e then becomes imaginary and can no longer directly be identified with the eccentricity. Both of these results may be cast into many other forms using standard mathematical identities and relations between parameters of the ellipse. Volume. The volume inside a spheroid (of any kind) is formula_4 If "A" 2"a" is the equatorial diameter, and "C" 2"c" is the polar diameter, the volume is formula_5 Curvature. Let a spheroid be parameterized as formula_6 where β is the "reduced latitude" or "parametric latitude", λ is the longitude, and − &lt; "β" &lt; + and −π &lt; "λ" &lt; +π. Then, the spheroid's Gaussian curvature is formula_7 and its mean curvature is formula_8 Both of these curvatures are always positive, so that every point on a spheroid is elliptic. Aspect ratio. 
The "aspect ratio" of an oblate spheroid/ellipse, "c" : "a", is the ratio of the polar to equatorial lengths, while the "flattening" (also called "oblateness") f, is the ratio of the equatorial-polar length difference to the equatorial length: formula_9 The first "eccentricity" (usually simply eccentricity, as above) is often used instead of flattening. It is defined by: formula_10 The relations between eccentricity and flattening are: formula_11 All modern geodetic ellipsoids are defined by the semi-major axis plus either the semi-minor axis (giving the aspect ratio), the flattening, or the first eccentricity. While these definitions are mathematically interchangeable, real-world calculations must lose some precision. To avoid confusion, an ellipsoidal definition considers its own values to be exact in the form it gives. Occurrence and applications. The most common shapes for the density distribution of protons and neutrons in an atomic nucleus are spherical, prolate, and oblate spheroidal, where the polar axis is assumed to be the spin axis (or direction of the spin angular momentum vector). Deformed nuclear shapes occur as a result of the competition between electromagnetic repulsion between protons, surface tension and quantum shell effects. Spheroids are common in 3D cell cultures. Rotating equilibrium spheroids include the Maclaurin spheroid and the Jacobi ellipsoid. Spheroid is also a shape of archaeological artifacts. Oblate spheroids. The oblate spheroid is the approximate shape of rotating planets and other celestial bodies, including Earth, Saturn, Jupiter, and the quickly spinning star Altair. Saturn is the most oblate planet in the Solar System, with a flattening of 0.09796. See planetary flattening and equatorial bulge for details. Enlightenment scientist Isaac Newton, working from Jean Richer's pendulum experiments and Christiaan Huygens's theories for their interpretation, reasoned that Jupiter and Earth are oblate spheroids owing to their centrifugal force. Earth's diverse cartographic and geodetic systems are based on reference ellipsoids, all of which are oblate. Prolate spheroids. The prolate spheroid is the approximate shape of the ball in several sports, such as in the rugby ball. Several moons of the Solar System approximate prolate spheroids in shape, though they are actually triaxial ellipsoids. Examples are Saturn's satellites Mimas, Enceladus, and Tethys and Uranus' satellite Miranda. In contrast to being distorted into oblate spheroids via rapid rotation, celestial objects distort slightly into prolate spheroids via tidal forces when they orbit a massive body in a close orbit. The most extreme example is Jupiter's moon Io, which becomes slightly more or less prolate in its orbit due to a slight eccentricity, causing intense volcanism. The major axis of the prolate spheroid does not run through the satellite's poles in this case, but through the two points on its equator directly facing toward and away from the primary. This combines with the smaller oblate distortion from the synchronous rotation to cause the body to become triaxial. The term is also used to describe the shape of some nebulae such as the Crab Nebula. Fresnel zones, used to analyze wave propagation and interference in space, are a series of concentric prolate spheroids with principal axes aligned along the direct line-of-sight between a transmitter and a receiver. The atomic nuclei of the actinide and lanthanide elements are shaped like prolate spheroids. 
In anatomy, near-spheroid organs such as testis may be measured by their long and short axes. Many submarines have a shape which can be described as a prolate spheroid. Dynamical properties. For a spheroid having uniform density, the moment of inertia is that of an ellipsoid with an additional axis of symmetry. Given a description of a spheroid as having a major axis c and minor axes a = b, the moments of inertia along these principal axes are C, A, and B. However, in a spheroid the minor axes are symmetrical, so A = B. Therefore, the moments of inertia along the principal axes are: formula_12 where M is the mass of the body defined as formula_13 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
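To illustrate the area, volume, and shape-parameter formulas above numerically, the following MATLAB sketch evaluates them for an oblate spheroid using approximate Earth-like semi-axes (values rounded here for illustration); it is a sketch of the formulas in this article, not a geodetic library routine.

a = 6378.137;                                 % equatorial semi-axis in km (approximate)
c = 6356.752;                                 % polar semi-axis in km (approximate)
e = sqrt(1 - c^2 / a^2);                      % eccentricity of the generating ellipse
f = 1 - c / a;                                % flattening
S = 2*pi*a^2 * (1 + (1 - e^2)/e * atanh(e));  % surface area of the oblate spheroid
V = 4/3 * pi * a^2 * c;                       % enclosed volume
fprintf('f = %.6f, e = %.6f\n', f, e);
fprintf('S = %.4e km^2, V = %.4e km^3\n', S, V);

For a prolate spheroid (c > a), the corresponding area formula from the text would be S = 2*pi*a^2 * (1 + c/(a*e) * asin(e)) with e = sqrt(1 - a^2/c^2).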
[ { "math_id": 0, "text": "\\frac{x^2}{a^2}+\\frac{y^2}{b^2}+\\frac{z^2}{c^2} = 1." }, { "math_id": 1, "text": "\\frac{x^2+y^2}{a^2}+\\frac{z^2}{c^2}=1." }, { "math_id": 2, "text": "S_\\text{oblate} = 2\\pi a^2\\left(1+\\frac{1-e^2}{e}\\operatorname{arctanh}e\\right)=2\\pi a^2+\\pi \\frac{c^2}{e}\\ln \\left( \\frac{1+e}{1-e}\\right) \\qquad \\mbox{where} \\quad e^2=1-\\frac{c^2}{a^2}. " }, { "math_id": 3, "text": "S_\\text{prolate} = 2\\pi a^2\\left(1+\\frac{c}{ae}\\arcsin \\, e\\right) \\qquad \\mbox{where} \\quad e^2=1-\\frac{a^2}{c^2}. " }, { "math_id": 4, "text": "\\tfrac{4}{3}\\pi a^2c\\approx4.19a^2c." }, { "math_id": 5, "text": "\\tfrac{\\pi}{6}A^2C\\approx0.523A^2C." }, { "math_id": 6, "text": " \\boldsymbol\\sigma (\\beta,\\lambda) = (a \\cos \\beta \\cos \\lambda, a \\cos \\beta \\sin \\lambda, c \\sin \\beta)," }, { "math_id": 7, "text": " K(\\beta,\\lambda) = \\frac{c^2}{\\left(a^2 + \\left(c^2 - a^2\\right) \\cos^2 \\beta\\right)^2}," }, { "math_id": 8, "text": " H(\\beta,\\lambda) = \\frac{c \\left(2a^2 + \\left(c^2 - a^2\\right) \\cos^2 \\beta\\right)}{2a \\left(a^2 + \\left(c^2 - a^2\\right) \\cos^2\\beta\\right)^\\frac32}." }, { "math_id": 9, "text": "f = \\frac{a - c}{a} = 1 - \\frac{c}{a} ." }, { "math_id": 10, "text": "e = \\sqrt{1 - \\frac{c^2}{a^2}}" }, { "math_id": 11, "text": "\\begin{align} e &= \\sqrt{2f - f^2} \\\\ f &= 1 - \\sqrt{1 - e^2} \\end{align}" }, { "math_id": 12, "text": "\\begin{align}\nA = B &= \\tfrac15 M\\left(a^2+c^2\\right), \\\\\nC &= \\tfrac15 M\\left(a^2+b^2\\right) =\\tfrac25 M\\left(a^2\\right),\n\\end{align}" }, { "math_id": 13, "text": " M = \\tfrac43 \\pi a^2 c\\rho." } ]
https://en.wikipedia.org/wiki?curid=82365
82381
Electron capture
Process in which a proton-rich nuclide absorbs an inner atomic electron Electron capture (K-electron capture, also K-capture, or L-electron capture, L-capture) is a process in which the proton-rich nucleus of an electrically neutral atom absorbs an inner atomic electron, usually from the K or L electron shells. This process thereby changes a nuclear proton to a neutron and simultaneously causes the emission of an electron neutrino, or, when written as a nuclear reaction equation, e− + p → n + νformula_0 Since this single emitted neutrino carries the entire decay energy, it has this single characteristic energy. Similarly, the momentum of the neutrino emission causes the daughter atom to recoil with a single characteristic momentum. The resulting daughter nuclide, if it is in an excited state, then transitions to its ground state. Usually, a gamma ray is emitted during this transition, but nuclear de-excitation may also take place by internal conversion. Following capture of an inner electron from the atom, an outer electron replaces the electron that was captured and one or more characteristic X-ray photons is emitted in this process. Electron capture sometimes also results in the Auger effect, where an electron is ejected from the atom's electron shell due to interactions between the atom's electrons in the process of seeking a lower energy electron state. Following electron capture, the atomic number is reduced by one, the neutron number is increased by one, and there is no change in mass number. Simple electron capture by itself results in a neutral atom, since the loss of the electron in the electron shell is balanced by a loss of positive nuclear charge. However, a positive atomic ion may result from further Auger electron emission. Electron capture is an example of weak interaction, one of the four fundamental forces. Electron capture is the primary decay mode for isotopes with a relative superabundance of protons in the nucleus, but with insufficient energy difference between the isotope and its prospective daughter (the isobar with one less positive charge) for the nuclide to decay by emitting a positron. Electron capture is always an alternative decay mode for radioactive isotopes that "do" have sufficient energy to decay by positron emission. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In nuclear physics, beta decay is a type of radioactive decay in which a beta ray (fast energetic electron or positron) and a neutrino are emitted from an atomic nucleus. Electron capture is sometimes called inverse beta decay, though this term usually refers to the interaction of an electron antineutrino with a proton. If the energy difference between the parent atom and the daughter atom is less than 1.022 MeV, positron emission is forbidden as not enough decay energy is available to allow it, and thus electron capture is the sole decay mode. For example, rubidium-83 (37 protons, 46 neutrons) will decay to krypton-83 (36 protons, 47 neutrons) solely by electron capture (the energy difference, or decay energy, is about 0.9 MeV). History. The theory of electron capture was first discussed by Gian-Carlo Wick in a 1934 paper, and then developed by Hideki Yukawa and others. K-electron capture was first observed by Luis Alvarez, in vanadium, 48V, which he reported in 1937.
Alvarez went on to study electron capture in gallium (67Ga) and other nuclides. Reaction details. The electron that is captured is one of the atom's own electrons, and not a new, incoming electron, as might be suggested by the way the reactions are written below. A few examples of electron capture are: Radioactive isotopes that decay by pure electron capture can be inhibited from radioactive decay if they are fully ionized ("stripped" is sometimes used to describe such ions). It is hypothesized that such elements, if formed by the r-process in exploding supernovae, are ejected fully ionized and so do not undergo radioactive decay as long as they do not encounter electrons in outer space. Anomalies in elemental distributions are thought to be partly a result of this effect on electron capture. Inverse decays can also be induced by full ionisation; for instance, 163Ho decays into 163Dy by electron capture; however, a fully ionised 163Dy decays into a bound state of 163Ho by the process of bound-state β− decay. Chemical bonds can also affect the rate of electron capture to a small degree (in general, less than 1%) depending on the proximity of electrons to the nucleus. For example, in 7Be, a difference of 0.9% has been observed between half-lives in metallic and insulating environments. This relatively large effect is due to the fact that beryllium is a small atom that employs valence electrons that are close to the nucleus, and also in orbitals with no orbital angular momentum. Electrons in s orbitals (regardless of shell or principal quantum number) have a probability antinode at the nucleus, and are thus far more subject to electron capture than p or d electrons, which have a probability node at the nucleus. Around the elements in the middle of the periodic table, isotopes that are lighter than stable isotopes of the same element tend to decay through electron capture, while isotopes heavier than the stable ones decay by electron emission. Electron capture happens most often in the heavier neutron-deficient elements where the mass change is smallest and positron emission is not always possible. When the loss of mass in a nuclear reaction is greater than zero but less than 2"m"e"c"2, the process cannot occur by positron emission, but occurs spontaneously for electron capture. Some common radionuclides that decay solely by electron capture include: &lt;templatestyles src="Col-begin/styles.css"/&gt; For a full list, see the table of nuclides. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
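As a small illustration of the 1.022 MeV threshold discussed above, the following MATLAB sketch applies the rough energy criterion to the rubidium-83 example (decay energy of about 0.9 MeV, as quoted earlier); it ignores electron binding energies and is only a sketch of the rule of thumb, not a nuclear-data calculation.

Q = 0.9;             % decay energy in MeV between parent and daughter atoms (Rb-83 example)
if Q > 1.022
    disp('both electron capture and positron emission are energetically allowed');
elseif Q > 0
    disp('electron capture is the sole decay mode');
else
    disp('neither decay mode is energetically allowed');
end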
[ { "math_id": 0, "text": "_e" } ]
https://en.wikipedia.org/wiki?curid=82381
8238857
Schoen–Yau conjecture
In mathematics, the Schoen–Yau conjecture is a disproved conjecture in hyperbolic geometry, named after the mathematicians Richard Schoen and Shing-Tung Yau. It was inspired by a theorem of Erhard Heinz (1952). One method of disproof is the use of Scherk surfaces, as used by Harold Rosenberg and Pascal Collin (2006). Setting and statement of the conjecture. Let formula_0 be the complex plane considered as a Riemannian manifold with its usual (flat) Riemannian metric. Let formula_1 denote the hyperbolic plane, i.e. the unit disc formula_2 endowed with the hyperbolic metric formula_3 E. Heinz proved in 1952 that there can exist no harmonic diffeomorphism formula_4 In light of this theorem, Schoen conjectured that there exists no harmonic diffeomorphism formula_5 (It is not clear how Yau's name became associated with the conjecture: in unpublished correspondence with Harold Rosenberg, both Schoen and Yau identify Schoen as having postulated the conjecture). The Schoen(-Yau) conjecture has since been disproved. Comments. The emphasis is on the existence or non-existence of an "harmonic" diffeomorphism, and that this property is a "one-way" property. In more detail: suppose that we consider two Riemannian manifolds "M" and "N" (with their respective metrics), and write formula_6 if there exists a diffeomorphism from "M" onto "N" (in the usual terminology, "M" and "N" are diffeomorphic). Write formula_7 if there exists an harmonic diffeomorphism from "M" onto "N". It is not difficult to show that formula_8 (being diffeomorphic) is an equivalence relation on the objects of the category of Riemannian manifolds. In particular, formula_8 is a symmetric relation: formula_9 It can be shown that the hyperbolic plane and (flat) complex plane are indeed diffeomorphic: formula_10 so the question is whether or not they are "harmonically diffeomorphic". However, as the truth of Heinz's theorem and the falsity of the Schoen–Yau conjecture demonstrate, formula_11 is not a symmetric relation: formula_12 Thus, being "harmonically diffeomorphic" is a much stronger property than simply being diffeomorphic, and can be a "one-way" relation.
[ { "math_id": 0, "text": "\\mathbb{C}" }, { "math_id": 1, "text": "\\mathbb{H}" }, { "math_id": 2, "text": "\\mathbb{H} := \\{ (x, y) \\in \\mathbb{R}^2 | x^2 + y^2 < 1 \\}" }, { "math_id": 3, "text": "\\mathrm{d}s^2 = 4 \\frac{\\mathrm{d} x^2 + \\mathrm{d} y^2}{(1 - (x^2 + y^2))^2}." }, { "math_id": 4, "text": "f : \\mathbb{H} \\to \\mathbb{C}. \\, " }, { "math_id": 5, "text": "g : \\mathbb{C} \\to \\mathbb{H}. \\, " }, { "math_id": 6, "text": "M \\sim N\\," }, { "math_id": 7, "text": "M \\propto N" }, { "math_id": 8, "text": "\\sim" }, { "math_id": 9, "text": "M \\sim N \\iff N \\sim M." }, { "math_id": 10, "text": "\\mathbb{H} \\sim \\mathbb{C}," }, { "math_id": 11, "text": "\\propto" }, { "math_id": 12, "text": "\\mathbb{C} \\propto \\mathbb{H} \\text{ but } \\mathbb{H} \\not \\propto \\mathbb{C}." } ]
https://en.wikipedia.org/wiki?curid=8238857
8240558
Line–line intersection
Common point(s) shared by two lines in Euclidean geometry In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection. In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection. The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line. Formulas. A necessary condition for two lines to intersect is that they are in the same plane—that is, are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see . Given two points on each line. First we consider the intersection of two lines "L"1 and "L"2 in two-dimensional space, with line "L"1 being defined by two distinct points ("x"1, "y"1) and ("x"2, "y"2), and line "L"2 being defined by two distinct points ("x"3, "y"3) and ("x"4, "y"4). The intersection P of line "L"1 and "L"2 can be defined using determinants. formula_0 The determinants can be written out as: formula_1 When the two lines are parallel or coincident, the denominator is zero. Given two points on each line segment. The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments. In order to find the position of the intersection in respect to the line segments, we can define lines "L"1 and "L"2 in terms of first degree Bézier parameters: formula_2 (where t and u are real numbers). The intersection point of the lines is found with one of the following values of t or u, where formula_3 and formula_4 with formula_5 There will be an intersection if 0 ≤ "t" ≤ 1 and 0 ≤ "u" ≤ 1. The intersection point falls within the first line segment if 0 ≤ "t" ≤ 1, and it falls within the second line segment if 0 ≤ "u" ≤ 1. These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point. Given two line equations. The x and y coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements. Suppose that two lines have the equations "y" "ax" + "c" and "y" "bx" + "d" where a and b are the slopes (gradients) of the lines and where c and d are the y-intercepts of the lines. 
At the point where the two lines intersect (if they do), both y coordinates will be the same, hence the following equality: formula_6 We can rearrange this expression in order to extract the value of x, formula_7 and so, formula_8 To find the y coordinate, all we need to do is substitute the value of x into either one of the two line equations, for example, into the first: formula_9 Hence, the point of intersection is formula_10 Note that if "a" "b" then the two lines are parallel and they do not intersect, unless "c" "d" as well, in which case the lines are coincident and they intersect at every point. Using homogeneous coordinates. By using homogeneous coordinates, the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple ("x", "y", "w"). The mapping from 3D to 2D coordinates is ("x"′, "y"′) (, ). We can convert 2D points to homogeneous coordinates by defining them as ("x", "y", 1). Assume that we want to find intersection of two infinite lines in 2-dimensional space, defined as "a"1"x" + "b"1"y" + "c"1 0 and "a"2"x" + "b"2"y" + "c"2 0. We can represent these two lines in line coordinates as "U"1 ("a"1, "b"1, "c"1) and "U"2 ("a"2, "b"2, "c"2). The intersection "P"′ of two lines is then simply given by formula_11 If "cp" 0, the lines do not intersect. More than two lines. The intersection of two lines can be generalized to involve additional lines. The existence of and expression for the n-line intersection problem are as follows. In two dimensions. In two dimensions, more than two lines almost certainly do not intersect at a single point. To determine if they do and, if so, to find the intersection point, write the ith equation ("i" 1, …, "n") as formula_12 and stack these equations into matrix form as formula_13 where the ith row of the "n" × 2 matrix A is ["a""i"1, "a""i"2], w is the 2 × 1 vector [], and the ith element of the column vector b is "b""i". If A has independent columns, its rank is 2. Then if and only if the rank of the augmented matrix [A | b] is also 2, there exists a solution of the matrix equation and thus an intersection point of the n lines. The intersection point, if it exists, is given by formula_14 where Ag is the Moore–Penrose generalized inverse of A (which has the form shown because A has full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank of A is only 1, then if the rank of the augmented matrix is 2 there is no solution but if its rank is 1 then all of the lines coincide with each other. In three dimensions. The above approach can be readily extended to three dimensions. In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are called skew lines. But if an intersection does exist it can be found, as follows. In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form formula_15 Thus a set of n lines can be represented by 2"n" equations in the 3-dimensional coordinate vector w: formula_16 where now A is 2"n" × 3 and b is 2"n" × 1. As before there is a unique intersection point if and only if A has full column rank and the augmented matrix [A | b] does not, and the unique intersection if it exists is given by formula_17 Nearest points to skew lines. 
In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in a least-squares sense. In two dimensions. In the two-dimensional case, first, represent line i as a point p"i" on the line and a unit normal vector n̂"i", perpendicular to that line. That is, if x1 and x2 are points on line 1, then let p1 x1 and let formula_18 which is the unit vector along the line, rotated by a right angle. The distance from a point x to the line (p, n̂) is given by formula_19 And so the squared distance from a point x to a line is formula_20 The sum of squared distances to many lines is the cost function: formula_21 This can be rearranged: formula_22 To find the minimum, we differentiate with respect to x and set the result equal to the zero vector: formula_23 so formula_24 and so formula_25 In more than two dimensions. While n̂"i" is not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting that n̂"i" n̂"i"T is simply the symmetric matrix with all eigenvalues unity except for a zero eigenvalue in the direction along the line providing a seminorm on the distance between p"i" and another point giving the distance to the line. In any number of dimensions, if v̂"i" is a unit vector "along" the ith line, then formula_26 becomes formula_27 where I is the identity matrix, and so formula_28 General derivation. In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an origin a"i" and a unit direction vector n̂"i". The square of the distance from a point p to one of the lines is given from Pythagoras: formula_29 where (p − a"i")T n̂"i" is the projection of p − a"i" on line i. The sum of distances to the square to all lines is formula_30 To minimize this expression, we differentiate it with respect to p. formula_31 formula_32 which results in formula_33 where I is the identity matrix. This is a matrix Sp C, with solution p S+C, where S+ is the pseudo-inverse of S. Non-Euclidean geometry. In spherical geometry, any two great circles intersect. In hyperbolic geometry, given any line and any point, there are infinitely many lines through that point that do not intersect the given line.
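To make the segment-intersection test described under "Given two points on each line segment" concrete, here is a minimal MATLAB sketch of the Bézier-parameter formulas for t and u; the function name segment_intersection is chosen only for this illustration.

function [P, hit] = segment_intersection(x1, y1, x2, y2, x3, y3, x4, y4)
    % Intersection of segment (x1,y1)-(x2,y2) with segment (x3,y3)-(x4,y4).
    den = (x1 - x2)*(y3 - y4) - (y1 - y2)*(x3 - x4);
    if den == 0
        P = [NaN NaN]; hit = false;                 % lines are parallel or coincident
        return
    end
    t =  ((x1 - x3)*(y3 - y4) - (y1 - y3)*(x3 - x4)) / den;
    u = -((x1 - x2)*(y1 - y3) - (y1 - y2)*(x1 - x3)) / den;
    P = [x1 + t*(x2 - x1), y1 + t*(y2 - y1)];       % intersection of the infinite lines
    hit = t >= 0 && t <= 1 && u >= 0 && u <= 1;     % true only if it lies on both segments
end

For example, segment_intersection(0,0, 1,1, 0,1, 1,0) returns P = [0.5 0.5] and hit = true, since the two diagonals of the unit square cross at their midpoints.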
[ { "math_id": 0, "text": "\nP_x = \\frac{\\begin{vmatrix} \\begin{vmatrix} x_1 & y_1\\\\x_2 & y_2\\end{vmatrix} & \\begin{vmatrix} x_1 & 1\\\\x_2 & 1\\end{vmatrix} \\\\\\\\ \\begin{vmatrix} x_3 & y_3\\\\x_4 & y_4\\end{vmatrix} & \\begin{vmatrix} x_3 & 1\\\\x_4 & 1\\end{vmatrix} \\end{vmatrix} }\n{\\begin{vmatrix} \\begin{vmatrix} x_1 & 1\\\\x_2 & 1\\end{vmatrix} & \\begin{vmatrix} y_1 & 1\\\\y_2 & 1\\end{vmatrix} \\\\\\\\ \\begin{vmatrix} x_3 & 1\\\\x_4 & 1\\end{vmatrix} & \\begin{vmatrix} y_3 & 1\\\\y_4 & 1\\end{vmatrix} \\end{vmatrix}}\\,\\!\n\\qquad\nP_y = \\frac{\\begin{vmatrix} \\begin{vmatrix} x_1 & y_1\\\\x_2 & y_2\\end{vmatrix} & \\begin{vmatrix} y_1 & 1\\\\y_2 & 1\\end{vmatrix} \\\\\\\\ \\begin{vmatrix} x_3 & y_3\\\\x_4 & y_4\\end{vmatrix} & \\begin{vmatrix} y_3 & 1\\\\y_4 & 1\\end{vmatrix} \\end{vmatrix} }\n{\\begin{vmatrix} \\begin{vmatrix} x_1 & 1\\\\x_2 & 1\\end{vmatrix} & \\begin{vmatrix} y_1 & 1\\\\y_2 & 1\\end{vmatrix} \\\\\\\\ \\begin{vmatrix} x_3 & 1\\\\x_4 & 1\\end{vmatrix} & \\begin{vmatrix} y_3 & 1\\\\y_4 & 1\\end{vmatrix} \\end{vmatrix}}\\,\\!\n" }, { "math_id": 1, "text": "\n\\begin{align}\nP_x&= \\frac{(x_1 y_2-y_1 x_2)(x_3-x_4)-(x_1-x_2)(x_3 y_4-y_3 x_4)}{(x_1-x_2) (y_3-y_4) - (y_1-y_2) (x_3-x_4)}\n\\\\[4px]\nP_y&= \\frac{(x_1 y_2-y_1 x_2)(y_3-y_4)-(y_1-y_2)(x_3 y_4-y_3 x_4)}{(x_1-x_2) (y_3-y_4) - (y_1-y_2) (x_3-x_4)}\n\\end{align}\n" }, { "math_id": 2, "text": "\nL_1 = \\begin{bmatrix}x_1 \\\\ y_1\\end{bmatrix}\n + t \\begin{bmatrix}x_2-x_1 \\\\ y_2-y_1\\end{bmatrix},\n\\qquad\nL_2 = \\begin{bmatrix}x_3 \\\\ y_3\\end{bmatrix}\n + u \\begin{bmatrix}x_4-x_3 \\\\ y_4-y_3\\end{bmatrix}\n" }, { "math_id": 3, "text": "\nt = \\frac{\\begin{vmatrix} x_1-x_3 & x_3-x_4\\\\y_1-y_3 & y_3-y_4\\end{vmatrix}}{\\begin{vmatrix} x_1-x_2 & x_3-x_4\\\\y_1-y_2 & y_3-y_4\\end{vmatrix}} = \\frac{(x_1 - x_3)(y_3-y_4)-(y_1-y_3)(x_3-x_4)}{(x_1-x_2)(y_3-y_4)-(y_1-y_2)(x_3-x_4)}\n" }, { "math_id": 4, "text": "\nu = - \\frac{\\begin{vmatrix} x_1-x_2 & x_1-x_3\\\\y_1-y_2 & y_1-y_3\\end{vmatrix}}{\\begin{vmatrix} x_1-x_2 & x_3-x_4\\\\y_1-y_2 & y_3-y_4\\end{vmatrix}} = -\\frac{(x_1 - x_2)(y_1-y_3)-(y_1-y_2)(x_1-x_3)}{(x_1-x_2)(y_3-y_4)-(y_1-y_2)(x_3-x_4)},\n" }, { "math_id": 5, "text": "\n(P_x, P_y)= \\bigl(x_1 + t (x_2-x_1),\\; y_1 + t (y_2-y_1)\\bigr) \\quad \\text{or} \\quad (P_x, P_y) = \\bigl(x_3 + u (x_4-x_3),\\; y_3 + u (y_4-y_3)\\bigr)\n" }, { "math_id": 6, "text": "ax+c = bx+d." }, { "math_id": 7, "text": "ax-bx = d-c," }, { "math_id": 8, "text": "x = \\frac{d-c}{a-b}." }, { "math_id": 9, "text": "y = a\\frac{d-c}{a-b}+c." }, { "math_id": 10, "text": "P = \\left( \\frac{d-c}{a-b}, a\\frac{d-c}{a-b}+c \\right) ." }, { "math_id": 11, "text": "P' = (a_p, b_p, c_p) = U_1 \\times U_2 = (b_1 c_2 - b_2 c_1, a_2 c_1-a_1 c_2, a_1 b_2 - a_2 b_1)" }, { "math_id": 12, "text": "\\begin{bmatrix} a_{i1} & a_{i2} \\end{bmatrix} \\begin{bmatrix} x \\\\ y \\end{bmatrix} = b_i," }, { "math_id": 13, "text": "\\mathbf{A}\\mathbf{w}=\\mathbf{b}," }, { "math_id": 14, "text": "\\mathbf{w} = \\mathbf{A}^\\mathrm{g} \\mathbf{b} = \\left(\\mathbf{A}^\\mathsf{T} \\mathbf{A}\\right)^{-1} \\mathbf{A}^\\mathsf{T} \\mathbf{b}," }, { "math_id": 15, "text": "\\begin{bmatrix} a_{i1} & a_{i2} & a_{i3} \\end{bmatrix} \\begin{bmatrix}x \\\\ y \\\\ z\\end{bmatrix} = b_i." }, { "math_id": 16, "text": "\\mathbf{A}\\mathbf{w}=\\mathbf{b}" }, { "math_id": 17, "text": "\\mathbf{w} = \\left(\\mathbf{A}^\\mathsf{T} \\mathbf{A} \\right)^{-1} \\mathbf{A}^\\mathsf{T} \\mathbf{b}." 
}, { "math_id": 18, "text": "\\mathbf{\\hat n}_1:= \\begin{bmatrix} 0 & -1 \\\\ 1 & 0 \\end{bmatrix} \\frac{\\mathbf{x}_2 - \\mathbf{x}_1} {\\|\\mathbf{x}_2-\\mathbf{x}_1\\|}" }, { "math_id": 19, "text": "d\\bigl(\\mathbf{x},(\\mathbf{p},\\mathbf{\\hat n})\\bigr) = \\bigl|(\\mathbf{x}-\\mathbf{p})\\cdot \\mathbf{\\hat n}\\bigr| = \\left|(\\mathbf{x}-\\mathbf{p})^\\mathsf{T} \\mathbf{\\hat n}\\right| = \\left|\\mathbf{\\hat n} ^\\mathsf{T} (\\mathbf{x}-\\mathbf{p})\\right| = \\sqrt{(\\mathbf{x}-\\mathbf{p})^\\mathsf{T} \\mathbf{\\hat n} \\mathbf{\\hat n}^\\mathsf{T} (\\mathbf{x}-\\mathbf{p})}." }, { "math_id": 20, "text": "d\\bigl(\\mathbf{x},(\\mathbf{p},\\mathbf{\\hat n})\\bigr)^2 = (\\mathbf{x}-\\mathbf{p})^\\mathsf{T} \\left(\\mathbf{\\hat n} \\mathbf{\\hat n}^\\mathsf{T} \\right) (\\mathbf{x}-\\mathbf{p})." }, { "math_id": 21, "text": "E(\\mathbf{x}) = \\sum_i (\\mathbf{x}-\\mathbf{p}_i)^\\mathsf{T} \\left(\\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T}\\right) (\\mathbf{x}-\\mathbf{p}_i)." }, { "math_id": 22, "text": "\n\\begin{align}\nE(\\mathbf{x}) & = \\sum_i \\mathbf{x}^\\mathsf{T} \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\mathbf{x} - \\mathbf{x}^\\mathsf{T} \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\mathbf{p}_i - \\mathbf{p}_i^\\mathsf{T} \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\mathbf{x} + \\mathbf{p}_i^\\mathsf{T} \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\mathbf{p}_i \\\\\n& = \\mathbf{x}^\\mathsf{T} \\left(\\sum_i \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T}\\right) \\mathbf{x} - 2 \\mathbf{x}^\\mathsf{T} \\left(\\sum_i \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\mathbf{p}_i\\right) + \\sum_i \\mathbf{p}_i^\\mathsf{T} \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\mathbf{p}_i.\n\\end{align}\n" }, { "math_id": 23, "text": "\\frac{\\partial E(\\mathbf{x})}{\\partial \\mathbf{x}} = \\boldsymbol{0} = 2 \\left(\\sum_i \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T}\\right) \\mathbf{x} - 2 \\left(\\sum_i \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\mathbf{p}_i\\right) " }, { "math_id": 24, "text": "\\left(\\sum_i \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T}\\right) \\mathbf{x} = \\sum_i \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\mathbf{p}_i" }, { "math_id": 25, "text": "\\mathbf{x} = \\left(\\sum_i \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T}\\right)^{-1} \\left(\\sum_i \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\mathbf{p}_i\\right)." }, { "math_id": 26, "text": "\\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T}" }, { "math_id": 27, "text": "\\mathbf{I} - \\mathbf{\\hat v}_i \\mathbf{\\hat v}_i^\\mathsf{T}" }, { "math_id": 28, "text": " x= \\left(\\sum_i \\mathbf{I}-\\mathbf{\\hat v}_i \\mathbf{\\hat v}_i^\\mathsf{T}\\right)^{-1} \\left(\\sum_i \\left(\\mathbf{I}-\\mathbf{\\hat v}_i \\mathbf{\\hat v}_i^\\mathsf{T} \\right) \\mathbf{p}_i\\right)." 
}, { "math_id": 29, "text": " d_i^2 = \\left\\| \\mathbf{p} - \\mathbf{a}_i \\right\\|^2 - \\left( \\left( \\mathbf{p} - \\mathbf{a}_i \\right)^\\mathsf{T} \\mathbf{\\hat n}_i \\right)^2\n= \\left( \\mathbf{p} - \\mathbf{a}_i \\right)^\\mathsf{T} \\left( \\mathbf{p} - \\mathbf{a}_i \\right) - \\left( \\left( \\mathbf{p} - \\mathbf{a}_i \\right)^\\mathsf{T} \\mathbf{\\hat n}_i \\right)^2" }, { "math_id": 30, "text": " \\sum_i d_i^2 = \\sum_i \\left( {\\left( \\mathbf{p}- \\mathbf{a}_i \\right)^\\mathsf{T}} \\left( \\mathbf{p}- \\mathbf{a}_i \\right)- {\\left( \\left( \\mathbf{p}- \\mathbf{a}_i \\right)^\\mathsf{T} \\mathbf{\\hat n}_i \\right)^2} \\right) " }, { "math_id": 31, "text": " \\sum_i \\left( 2\\left( \\mathbf{p} - \\mathbf{a}_i \\right)- 2 \\left(\\left( \\mathbf{p} - \\mathbf{a}_i \\right)^\\mathsf{T} \\mathbf{\\hat n}_i\\right) \\mathbf{\\hat n}_i\\right)=\\boldsymbol{0}\n" }, { "math_id": 32, "text": " \\sum_i \\left( \\mathbf{p} - \\mathbf{a}_i \\right) = \\sum_i \\left( \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\right) \\left( \\mathbf{p} - \\mathbf{a}_i \\right)\n" }, { "math_id": 33, "text": " \\left(\\sum_i\\left(\\mathbf{I} - \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\right)\\right) \\mathbf{p}\n= \\sum_i \\left(\\mathbf{I} - \\mathbf{\\hat n}_i \\mathbf{\\hat n}_i^\\mathsf{T} \\right) \\mathbf{a}_i" } ]
https://en.wikipedia.org/wiki?curid=8240558
824435
Molar refractivity
Molar refractivity, formula_0, is a measure of the total polarizability of a mole of a substance and is dependent on the temperature, the index of refraction, and the pressure. The molar refractivity is defined as formula_1 where formula_2 is the Avogadro constant and formula_3 is the mean polarizability of a molecule. Substituting the molar refractivity into the Lorentz-Lorenz formula gives, for gases, formula_4 where formula_5 is the refractive index, formula_6 is the pressure of the gas, formula_7 is the universal gas constant, and formula_8 is the (absolute) temperature. For a gas, formula_9, so the molar refractivity can be approximated by formula_10 In SI units, formula_7 has units of J mol−1 K−1, formula_8 has units of K, formula_5 has no units, and formula_6 has units of Pa, so the units of formula_0 are m3 mol−1. In terms of the density ρ and the molecular weight M, it can be shown that: formula_11 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
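A minimal numerical sketch of the gas-phase formulas above follows; the refractive index, temperature and pressure used here (roughly those of dry air near 0 °C and 1 atm) are illustrative inputs chosen for this example, not values taken from the article.

from math import isclose

R = 8.314462618      # universal gas constant, J mol^-1 K^-1
T = 273.15           # temperature, K
p = 101325.0         # pressure, Pa
n = 1.000293         # refractive index (assumed value for illustration)

A_exact  = (R * T / p) * (n**2 - 1) / (n**2 + 2)   # A = (RT/p)(n^2 - 1)/(n^2 + 2)
A_approx = (R * T / p) * (n**2 - 1) / 3            # using n^2 close to 1 for a gas

print(A_exact, A_approx)                           # both about 4.4e-6 m^3/mol
print(isclose(A_exact, A_approx, rel_tol=1e-3))    # the approximation is very close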
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": " A = \\frac{4 \\pi}{3} N_A \\alpha, " }, { "math_id": 2, "text": "N_A \\approx 6.022 \\times 10^{23}" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": " A = \\frac{R T}{p} \\frac{n^2 - 1}{n^2 + 2} " }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "p" }, { "math_id": 7, "text": "R" }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "n^2 \\approx 1" }, { "math_id": 10, "text": "A = \\frac{R T}{p} \\frac{n^2 - 1}{3}." }, { "math_id": 11, "text": "A = \\frac{M}{\\rho} \\frac{n^2 - 1}{n^2 + 2} \\approx \\frac{M}{\\rho} \\frac{n^2 - 1}{3}." } ]
https://en.wikipedia.org/wiki?curid=824435
8244667
Tarjan's strongly connected components algorithm
Graph algorithm Tarjan's strongly connected components algorithm is an algorithm in graph theory for finding the strongly connected components (SCCs) of a directed graph. It runs in linear time, matching the time bound for alternative methods including Kosaraju's algorithm and the path-based strong component algorithm. The algorithm is named for its inventor, Robert Tarjan. Overview. The algorithm takes a directed graph as input, and produces a partition of the graph's vertices into the graph's strongly connected components. Each vertex of the graph appears in exactly one of the strongly connected components. Any vertex that is not on a directed cycle forms a strongly connected component all by itself: for example, a vertex whose in-degree or out-degree is 0, or any vertex of an acyclic graph. The basic idea of the algorithm is this: a depth-first search (DFS) begins from an arbitrary start node (and subsequent depth-first searches are conducted on any nodes that have not yet been found). As usual with depth-first search, the search visits every node of the graph exactly once, refusing to revisit any node that has already been visited. Thus, the collection of search trees is a spanning forest of the graph. The strongly connected components will be recovered as certain subtrees of this forest. The roots of these subtrees are called the "roots" of the strongly connected components. Any node of a strongly connected component might serve as a root, if it happens to be the first node of a component that is discovered by search. Stack invariant. Nodes are placed on a stack in the order in which they are visited. When the depth-first search recursively visits a node codice_0 and its descendants, those nodes are not all necessarily popped from the stack when this recursive call returns. The crucial invariant property is that a node remains on the stack after it has been visited if and only if there exists a path in the input graph from it to some node earlier on the stack. In other words, it means that in the DFS a node would be only removed from the stack after all its connected paths have been traversed. When the DFS will backtrack it would remove the nodes on a single path and return to the root in order to start a new path. At the end of the call that visits codice_0 and its descendants, we know whether codice_0 itself has a path to any node earlier on the stack. If so, the call returns, leaving codice_0 on the stack to preserve the invariant. If not, then codice_0 must be the root of its strongly connected component, which consists of codice_0 together with any nodes later on the stack than codice_0 (such nodes all have paths back to codice_0 but not to any earlier node, because if they had paths to earlier nodes then codice_0 would also have paths to earlier nodes which is false). The connected component rooted at codice_0 is then popped from the stack and returned, again preserving the invariant. Bookkeeping. Each node codice_0 is assigned a unique integer codice_11, which numbers the nodes consecutively in the order in which they are discovered. It also maintains a value codice_12 that represents the smallest index of any node on the stack known to be reachable from codice_0 through codice_0's DFS subtree, including codice_0 itself. Therefore codice_0 must be left on the stack if codice_17, whereas v must be removed as the root of a strongly connected component if codice_18. 
The value codice_12 is computed during the depth-first search from codice_0, as this finds the nodes that are reachable from codice_0. The lowlink is different from the lowpoint, which is the smallest index reachable from codice_0 through any part of the graph. The algorithm in pseudocode.

algorithm tarjan is
    input: graph "G" = ("V", "E")
    output: set of strongly connected components (sets of vertices)

    "index" := 0
    "S" := empty stack
    for each "v" in "V" do
        if "v".index is undefined then
            strongconnect("v")

    function strongconnect("v")
        "// Set the depth index for v to the smallest unused index"
        "v".index := "index"
        "v".lowlink := "index"
        "index" := "index" + 1
        "S".push("v")
        "v".onStack := true

        "// Consider successors of v"
        for each ("v", "w") in "E" do
            if "w".index is undefined then
                "// Successor w has not yet been visited; recurse on it"
                strongconnect("w")
                "v".lowlink := min("v".lowlink, "w".lowlink)
            else if "w".onStack then
                "// Successor w is in stack S and hence in the current SCC"
                "// If "w" is not on stack, then ("v", "w") is an edge pointing to an SCC already found and must be ignored"
                "// See below regarding the next line"
                "v".lowlink := min("v".lowlink, "w".index)

        "// If v is a root node, pop the stack and generate an SCC"
        if "v".lowlink = "v".index then
            start a new strongly connected component
            repeat
                "w" := "S".pop()
                "w".onStack := false
                add "w" to current strongly connected component
            while "w" ≠ "v"
            output the current strongly connected component

The codice_23 variable is the depth-first search node number counter. codice_24 is the node stack, which starts out empty and stores the history of nodes explored but not yet committed to a strongly connected component. This is not the normal depth-first search stack, as nodes are not popped as the search returns up the tree; they are only popped when an entire strongly connected component has been found. The outermost loop searches each node that has not yet been visited, ensuring that nodes which are not reachable from the first node are still eventually traversed. The function codice_25 performs a single depth-first search of the graph, finding all successors from the node codice_0, and reporting all strongly connected components of that subgraph. When each node finishes recursing, if its lowlink is still set to its index, then it is the root node of a strongly connected component, formed by all of the nodes above it on the stack. The algorithm pops the stack up to and including the current node, and presents all of these nodes as a strongly connected component. In Tarjan's paper, when codice_27 is on the stack, codice_28 is updated with the assignment codice_29. A common variation is to instead use codice_30. This modified algorithm does not compute the lowlink numbers as Tarjan defined them, but the test codice_31 still identifies root nodes of strongly connected components, and therefore the overall algorithm remains valid.
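For readers who prefer runnable code, the following Python transcription of the pseudocode above is offered as a sketch rather than a library routine; the adjacency-dict input format is an assumption of this example, and the recursive form can hit Python's default recursion limit on very deep graphs.

def tarjan_scc(graph):
    # graph: {vertex: [successor, ...]} for a directed graph.
    counter = 0          # the depth-first search node number counter ("index")
    index = {}           # v.index
    lowlink = {}         # v.lowlink
    on_stack = {}        # v.onStack
    stack = []           # S
    result = []

    def strongconnect(v):
        nonlocal counter
        index[v] = lowlink[v] = counter
        counter += 1
        stack.append(v)
        on_stack[v] = True
        for w in graph.get(v, ()):          # consider successors of v
            if w not in index:              # w not yet visited: recurse on it
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif on_stack[w]:               # w is on the stack, hence in the current SCC
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:          # v is the root of a strongly connected component
            component = []
            while True:
                w = stack.pop()
                on_stack[w] = False
                component.append(w)
                if w == v:
                    break
            result.append(component)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return result

# Example: two 2-cycles joined by a one-way edge give two components, {3, 4} and {1, 2};
# the sink component {3, 4} is reported first, matching the reverse topological order noted below.
print(tarjan_scc({1: [2], 2: [1, 3], 3: [4], 4: [3]}))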
"Space Complexity": The Tarjan procedure requires two words of supplementary data per vertex for the codice_23 and codice_34 fields, along with one bit for codice_35 and another for determining when codice_23 is undefined. In addition, one word is required on each stack frame to hold codice_0 and another for the current position in the edge list. Finally, the worst-case size of the stack codice_24 must be formula_1 (i.e. when the graph is one giant component). This gives a final analysis of formula_2 where formula_3 is the machine word size. The variation of Nuutila and Soisalon-Soininen reduced this to formula_4 and, subsequently, that of Pearce requires only formula_5. Additional remarks. While there is nothing special about the order of the nodes within each strongly connected component, one useful property of the algorithm is that no strongly connected component will be identified before any of its successors. Therefore, the order in which the strongly connected components are identified constitutes a reverse topological sort of the DAG formed by the strongly connected components. Donald Knuth described Tarjan's SCC algorithm as one of his favorite implementations in the book "The Stanford GraphBase". He also wrote: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The data structures that he devised for this problem fit together in an amazingly beautiful way, so that the quantities you need to look at while exploring a directed graph are always magically at your fingertips. And his algorithm also does topological sorting as a byproduct.
[ { "math_id": 0, "text": "O(|V|+|E|)" }, { "math_id": 1, "text": "|V|" }, { "math_id": 2, "text": "O(|V|\\cdot(2+5w))" }, { "math_id": 3, "text": "w" }, { "math_id": 4, "text": "O(|V|\\cdot(1+4w))" }, { "math_id": 5, "text": "O(|V|\\cdot(1+3w))" } ]
https://en.wikipedia.org/wiki?curid=8244667
824552
Bayes factor
Statistical factor used to compare competing hypotheses The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other. The models in question can have a common set of parameters, such as a null hypothesis and an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to its linear approximation. The Bayes factor can be thought of as a Bayesian analog to the likelihood-ratio test, although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. As such, both quantities only coincide under simple hypotheses (e.g., two specific parameter values). Also, in contrast with null hypothesis significance testing, Bayes factors support evaluation of evidence "in favor" of a null hypothesis, rather than only allowing the null to be rejected or not rejected. Although conceptually simple, the computation of the Bayes factor can be challenging depending on the complexity of the model and the hypotheses. Since closed-form expressions of the marginal likelihood are generally not available, numerical approximations based on MCMC samples have been suggested. For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio in the case of a precise (equality constrained) hypothesis against an unrestricted alternative. Another approximation, derived by applying Laplace's approximation to the integrated likelihoods, is known as the Bayesian information criterion (BIC); in large data sets the Bayes factor will approach the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not be improper since the Bayes factor will be undefined if either of the two integrals in its ratio is not finite. Definition. The Bayes factor is the ratio of two marginal likelihoods; that is, the likelihoods of two statistical models integrated over the prior probabilities of their parameters. The posterior probability formula_0 of a model "M" given data "D" is given by Bayes' theorem: formula_1 The key data-dependent term formula_2 represents the probability that some data are produced under the assumption of the model "M"; evaluating it correctly is the key to Bayesian model comparison. Given a model selection problem in which one wishes to choose between two models on the basis of observed data "D", the plausibility of the two different models "M"1 and "M"2, parametrised by model parameter vectors formula_3 and formula_4, is assessed by the Bayes factor "K" given by formula_5 When the two models have equal prior probability, so that formula_6, the Bayes factor is equal to the ratio of the posterior probabilities of "M"1 and "M"2. If instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each statistical model is used, then the test becomes a classical likelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure. It thus guards against overfitting. 
For models where an explicit version of the likelihood is not available or too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework, with the caveat that approximate-Bayesian estimates of Bayes factors are often biased. Other approaches are: Interpretation. A value of "K" &gt; 1 means that "M"1 is more strongly supported by the data under consideration than "M"2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence "against" it. The fact that a Bayes factor can produce evidence "for" and not just against a null hypothesis is one of the key advantages of this analysis method. Harold Jeffreys gave a scale for interpretation of "K": &lt;templatestyles src="alternating rows table/styles.css" /&gt; The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. According to I. J. Good a change in a weight of evidence of 1 deciban or 1/3 of a bit (i.e. a change in an odds ratio from evens to about 5:4) is about as finely as humans can reasonably perceive their degree of belief in a hypothesis in everyday use. An alternative table, widely cited, is provided by Kass and Raftery (1995): &lt;templatestyles src="alternating rows table/styles.css" /&gt; Example. Suppose we have a random variable that produces either a success or a failure. We want to compare a model "M"1 where the probability of success is "q" = &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2, and another model "M"2 where "q" is unknown and we take a prior distribution for "q" that is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution: formula_7 Thus we have for "M"1 formula_8 whereas for "M"2 we have formula_9 The ratio is then 1.2, which is "barely worth mentioning" even if it points very slightly towards "M"1. A frequentist hypothesis test of "M"1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that "M"1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if "q" = &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 is 0.02, and as a two-tailed test of getting a figure as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield significant results at the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example one that reflects the fact that you expect the number of success and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test. A classical likelihood-ratio test would have found the maximum likelihood estimate for "q", namely formula_10, whence formula_11 (rather than averaging over all possible "q"). That gives a likelihood ratio of 0.1 and points towards "M"2. "M"2 is a more complex model than "M"1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors. 
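The binomial example above can be checked with a few lines of Python; this is a direct transcription of the quantities in the text (the Beta(1,1) marginal likelihood 1/(n+1) corresponds to the uniform prior on "q"), not a new analysis.

from math import comb

n, k = 200, 115

# P(X = 115 | M1): binomial likelihood at q = 1/2
p_m1 = comb(n, k) * 0.5**n                 # about 0.006

# P(X = 115 | M2): marginal likelihood under a uniform prior on q, exactly 1/(n+1)
p_m2 = 1 / (n + 1)                         # about 0.005

print(p_m1, p_m2, p_m1 / p_m2)             # Bayes factor K of about 1.2

# Classical likelihood ratio using the maximum likelihood estimate q_hat = 115/200
q_hat = k / n
p_mle = comb(n, k) * q_hat**k * (1 - q_hat)**(n - k)   # about 0.057
print(p_m1 / p_mle)                                    # about 0.1, pointing towards M2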
On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model "M"1 has 0 parameters, and so its Akaike information criterion (AIC) value is formula_12. Model "M"2 has 1 parameter, and so its AIC value is formula_13. Hence "M"1 is about formula_14 times as probable as "M"2 to minimize the information loss. Thus "M"2 is slightly preferred, but "M"1 cannot be excluded. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\Pr(M|D)" }, { "math_id": 1, "text": "\\Pr(M|D) = \\frac{\\Pr(D|M)\\Pr(M)}{\\Pr(D)}." }, { "math_id": 2, "text": "\\Pr(D|M)" }, { "math_id": 3, "text": " \\theta_1 " }, { "math_id": 4, "text": " \\theta_2 " }, { "math_id": 5, "text": " K = \\frac{\\Pr(D|M_1)}{\\Pr(D|M_2)}\n= \\frac{\\int \\Pr(\\theta_1|M_1)\\Pr(D|\\theta_1,M_1)\\,d\\theta_1}\n{\\int \\Pr(\\theta_2|M_2)\\Pr(D|\\theta_2,M_2)\\,d\\theta_2}\n= \\frac{\\frac{\\Pr(M_1|D)\\Pr(D)}{\\Pr(M_1)}}{\\frac{\\Pr(M_2|D)\\Pr(D)}{\\Pr(M_2)}}\n= \\frac{\\Pr(M_1|D)}{\\Pr(M_2|D)}\\frac{\\Pr(M_2)}{\\Pr(M_1)}.\n" }, { "math_id": 6, "text": "\\Pr(M_1) = \\Pr(M_2)" }, { "math_id": 7, "text": "{{200 \\choose 115}q^{115}(1-q)^{85}}." }, { "math_id": 8, "text": "P(X=115 \\mid M_1)={200 \\choose 115}\\left({1 \\over 2}\\right)^{200} \\approx 0.006" }, { "math_id": 9, "text": "P(X=115 \\mid M_2) = \\int_{0}^1{200 \\choose 115}q^{115}(1-q)^{85}dq = {1 \\over 201} \\approx 0.005 " }, { "math_id": 10, "text": "\\hat q =\\frac{115}{200} = 0.575" }, { "math_id": 11, "text": "\\textstyle P(X=115 \\mid M_2) = {{200 \\choose 115}\\hat q^{115}(1-\\hat q)^{85}} \\approx 0.06" }, { "math_id": 12, "text": "2\\cdot 0 - 2\\cdot \\ln(0.005956)\\approx 10.2467" }, { "math_id": 13, "text": "2\\cdot 1 - 2\\cdot\\ln(0.056991)\\approx 7.7297" }, { "math_id": 14, "text": "\\exp\\left(\\frac{7.7297- 10.2467}{2}\\right)\\approx 0.284" } ]
https://en.wikipedia.org/wiki?curid=824552
8245866
Alpha-particle spectroscopy
Quantitative measurement of the energy of alpha particles Alpha spectrometry (also known as alpha(-particle) spectroscopy) is the quantitative study of the energy of alpha particles emitted by a radioactive nuclide that is an alpha emitter. As emitted alpha particles are mono-energetic (i.e. not emitted with a spectrum of energies, such as beta decay) with energies often distinct to the decay they can be used to identify which radionuclide they originated from. Experimental methods. Counting with a source deposited onto a metal disk. It is common to place a drop of the test solution on a metal disk which is then dried out to give a uniform coating on the disk. This is then used as the test sample. If the thickness of the layer formed on the disk is too thick then the lines of the spectrum are broadened to lower energies. This is because some of the energy of the alpha particles is lost during their movement through the layer of active material. Liquid scintillation. An alternative method is to use liquid scintillation counting (LSC), where the sample is directly mixed with a scintillation cocktail. When the individual light emission events are counted, the LSC instrument records the amount of light energy per radioactive decay event. The alpha spectra obtained by liquid scintillation counting are broaden because of the two main intrinsic limitations of the LSC method: (1) because the random quenching reduces the number of photons emitted per radioactive decay, and (2) because the emitted photons can be absorbed by cloudy or coloured samples (Lambert-Beer law). The liquid scintillation spectra are subject to Gaussian broadening, rather than to the distortion caused by the absorption of alpha-particles by the sample when the layer of active material deposited onto a disk is too thick. Alpha spectra. From left to right the peaks are due to 209Po, 239Pu, 210Po and 241Am. The fact that isotopes such as 239Pu and 241Am have more than one alpha line indicates that the (daughter) nucleus can be in different discrete energy levels. Calibration: MCA does not work on energy, it works on voltage. To relate the energy to voltage one must calibrate the detection system. Here different alpha emitting sources of known energy were placed under the detector and the full energy peak is recorded. Measurement of thickness of thin foils: Energies of alpha particles from radioactive sources are measured before and after passing through the thin films. By measuring difference and using SRIM we can measure the thickness of thin foils. Kinematics of alpha decay. The decay energy, Q (also called the "Q-value of the reaction"), corresponds to a disappearance of mass. For the alpha decay nuclear reaction: &lt;chem&gt;^{A}_{Z}P -&gt; ^{(A-4)}_{(Z-2)}D + \alpha &lt;/chem&gt;, (where P is the parent nuclide and D the daughter). formula_0, or to put in the more commonly used units: "Q" (MeV) = -931.5 Δ"M" (Da), (where Δ"M = ΣMproducts - ΣMreactants"). When the daughter nuclide and alpha particle formed are in their ground states (common for alpha decay), the total decay energy is divided between the two in kinetic energy (T): formula_1 The size of T is dependent on the ratio of masses of the products and due to the conservation of momentum (the parent's momentum = 0 at the moment of decay) this can be calculated: formula_2 formula_3 and formula_4, formula_5 formula_6 formula_7 formula_8 The alpha particle, or 4He nucleus, is an especially strongly bound particle. 
This combined with the fact that the binding energy per nucleon has a maximum value near A=56 and systematically decreases for heavier nuclei, creates the situation that nuclei with A&gt;150 have positive Qα-values for the emission of alpha particles. For example, one of the heaviest naturally occurring isotopes, &lt;chem&gt;^238U -&gt; ^234Th + ^4He &lt;/chem&gt; (ignoring charges): Qα = -931.5 (234.043 601 + 4.002 603 254 13 - 238.050 788 2) = 4.2699 MeV Note that the decay energy will be divided between the alpha-particle and the heavy recoiling daughter so that the kinetic energy of the alpha particle (Tα) will be slightly less: Tα = (234.043 601 / 238.050 788 2) 4.2699 = 4.198 MeV, (note this is for the 238gU to 234gTh reaction, which in this case has the branching ratio of 79%). The kinetic energy of the recoiling 234Th daughter nucleus is TD = (mα / mP) Qα = (4.002 603 254 13 / 238.050 788 2) 4.2699 = 0.0718 MeV or 71.8 keV, which whilst much smaller is still substantially bigger than that of chemical bonds (&lt;10 eV) meaning the daughter nuclide will break away from whatever chemical environment the parent had been in. The recoil energy is also the reason that alpha spectrometers, whilst run under reduced pressure, are not operated at too low a pressure so that the air helps stop the recoiling daughter from moving completely out of the original alpha-source and cause serious contamination problems if the daughters are themselves radioactive. The Qα-values generally increase with increasing atomic number but the variation in the mass surface due to shell effects can overwhelm the systematic increase. The sharp peaks near A = 214 are due to the effects of the N = 126 shell. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
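The worked 238U example above can be reproduced with a short script; the atomic masses are simply the values quoted in the text and are treated here as illustrative inputs.

# Atomic masses in Da, as quoted above.
m_parent   = 238.050_788_2       # 238U
m_daughter = 234.043_601         # 234Th
m_alpha    = 4.002_603_254_13    # 4He

# Q (MeV) = -931.5 * (sum of product masses - sum of reactant masses)
Q_alpha = -931.5 * (m_daughter + m_alpha - m_parent)

# Momentum conservation splits the decay energy between the two products.
T_alpha = (m_daughter / m_parent) * Q_alpha   # kinetic energy of the alpha particle
T_D     = (m_alpha / m_parent) * Q_alpha      # recoil energy of the daughter nucleus

print(Q_alpha, T_alpha, T_D)   # about 4.27 MeV, 4.20 MeV and 0.072 MeV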
[ { "math_id": 0, "text": "Q{_\\alpha} = (m_P - m_D - m_\\alpha) \\ c^2" }, { "math_id": 1, "text": "Q_\\alpha = T_\\alpha + T_D" }, { "math_id": 2, "text": "p_\\alpha + p_D = 0" }, { "math_id": 3, "text": "T = 0.5mv^2" }, { "math_id": 4, "text": "p = mv" }, { "math_id": 5, "text": "\\therefore p = \\sqrt{2mT}" }, { "math_id": 6, "text": "\\begin{align}\n\\sqrt{2m_\\alpha T_\\alpha} &= -\\sqrt{2m_D T_D} \\\\[4pt]\n2m_\\alpha T_\\alpha &= 2m_D T_D \\\\[4pt]\n\\frac{m_\\alpha}{m_D}T_\\alpha &= T_D\n\\end{align}" }, { "math_id": 7, "text": "\\begin{align} \nQ_\\alpha &= T_\\alpha + \\frac{m_\\alpha}{m_D}T_\\alpha \\\\[4pt]\n&= T_\\alpha\\bigg(1 + \\frac{m_\\alpha}{m_D}\\bigg) \\\\[4pt]\n&= T_\\alpha\\bigg(\\frac{m_D}{m_D}+\\frac{m_\\alpha}{m_D}\\bigg) \\\\[4pt]\n&= T_\\alpha\\bigg(\\frac{m_D+m_\\alpha}{m_D}\\bigg) \\\\[4pt]\n\\end{align}" }, { "math_id": 8, "text": "\\therefore T_\\alpha = \\frac{m_D}{m_P}Q_\\alpha" } ]
https://en.wikipedia.org/wiki?curid=8245866
8248391
Dendrite (mathematics)
Locally connected dendroid In mathematics, a dendrite is a certain type of topological space that may be characterized either as a locally connected dendroid or equivalently as a locally connected continuum that contains no simple closed curves. Importance. Dendrites may be used to model certain types of Julia set. For example, if 0 is pre-periodic, but not periodic, under the function formula_0, then the Julia set of formula_1 is a dendrite: connected, without interior. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
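A small numerical check of the pre-periodicity condition above is easy to run; the choice "c" = "i", a parameter whose Julia set is a well-known dendrite, is this editor's illustration and is not taken from the article.

# Iterate f(z) = z^2 + c from the critical point 0 for c = i.
c = 1j
z = 0j
orbit = []
for _ in range(8):
    orbit.append(z)
    z = z * z + c
print(orbit)
# 0 -> i -> -1+i -> -i -> -1+i -> -i -> ...: the orbit settles into a 2-cycle that does
# not contain 0, so 0 is pre-periodic but not periodic, and the Julia set is a dendrite.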
[ { "math_id": 0, "text": "f(z) = z^2 + c" }, { "math_id": 1, "text": "f" } ]
https://en.wikipedia.org/wiki?curid=8248391
8251931
Hydrotrope
A hydrotrope is a compound that solubilizes hydrophobic compounds in aqueous solutions by means other than micellar solubilization. Typically, hydrotropes consist of a hydrophilic part and a hydrophobic part (similar to surfactants), but the hydrophobic part is generally too small to cause spontaneous self-aggregation. Hydrotropes do not have a critical concentration above which self-aggregation spontaneously starts to occur (as found for micelle- and vesicle-forming surfactants, which have a critical micelle concentration (cmc) and a critical vesicle concentration (cvc)). Instead, some hydrotropes aggregate in a step-wise self-aggregation process, gradually increasing aggregation size. However, many hydrotropes do not seem to self-aggregate at all, unless a solubilizate has been added. Examples of hydrotropes include urea, tosylate, cumenesulfonate and xylenesulfonate. The term "hydrotropy" was originally put forward by Carl Neuberg to describe the increase in the solubility of a solute by the addition of fairly high concentrations of alkali metal salts of various organic acids. However, the term has been used in the literature to designate non-micelle-forming substances, either liquids or solids, capable of solubilizing insoluble compounds. The chemical structure of the conventional Neuberg's hydrotropic salts (proto-type, sodium benzoate) consists generally of two essential parts, an anionic group and a hydrophobic aromatic ring or ring system. The anionic group is involved in bringing about high aqueous solubility, which is a prerequisite for a hydrotropic substance. The type of anion or metal ion appeared to have a minor effect on the phenomenon. On the other hand, planarity of the hydrophobic part has been emphasized as an important factor in the mechanism of hydrotropic solubilization To form a hydrotrope, an aromatic hydrocarbon solvent is sulfonated, creating an aromatic sulfonic acid. It is then neutralized with a base. Additives may either increase or decrease the solubility of a solute in a given solvent. These salts that increase solubility are said to "salt in" the solute and those salts that decrease the solubility "salt out" the solute. The effect of an additive depends very much on the influence it has on the structure of water or its ability to compete with the solvent water molecules. A convenient quantitation of the effect of a solute additive on the solubility of another solute may be obtained by the Setschetow equation: formula_0, where "S"0 is the solubility in the absence of the additive "S" is the solubility in the presence of the additive "Ca" is the concentration of the additive "K" is the salting coefficient, which is a measure of the sensitivity of the activity coefficient of the solute towards the salt. Applications. Hydrotropes are in use industrially and commercially in cleaning and personal care product formulations to allow more concentrated formulations of surfactants. About 29,000 metric tons are produced (i.e., manufactured and imported) annually in the US. Annual production (plus importation) in Europe and Australia is approximately 17,000 and 1,100 metric tons, respectively. Common products containing hydrotropes include laundry detergents, surface cleaners, dishwashing detergents, liquid soaps, shampoos and conditioners. They are coupling agents, used at concentrations from 0.1 to 15% to stabilize the formula, modify viscosity and cloud-point, reduce phase separation in low temperatures, and limit foaming. 
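The Setschenow-type relation quoted above is simple enough to evaluate directly; in the sketch below the base-10 logarithm is assumed, and the salting coefficient and additive concentration are made-up numbers used purely for illustration.

def solubility_with_additive(S0, K, Ca):
    # log10(S0 / S) = K * Ca, rearranged for S.
    return S0 / 10 ** (K * Ca)

# Hypothetical inputs: K = 0.2 L/mol and 0.5 mol/L of additive.
print(solubility_with_additive(S0=1.0, K=0.2, Ca=0.5))   # about 0.79, i.e. salting out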
Adenosine triphosphate (ATP) has been shown to prevent aggregation of proteins at normal physiologic concentrations and to be approximately an order of magnitude more effective than sodium xylene sulfonate in a classic hydrotrope assay. The hydrotrope activity of ATP was shown to be independent of its activity as an "energy currency" in cells. Additionally, ATP function as biological hydrotope has been shown proteome-wide under near native conditions. In a recent study, however, the hydrotropic capabilities of ATP have been questioned as it has severe salting-out characteristics due to its triphosphate moiety. Environmental considerations. Hydrotropes have a low bioaccumulation potential, as the octanol-water partition coefficient is &lt;1.0. Studies have found hydrotopes to be very slightly volatile, with vapor pressures &lt;2.0x10-5 Pa. They are aerobically biodegradable. Removal via the secondary wastewater treatment process of activated sludge is &gt;94%. Acute toxicity studies on fish show an LC50 &gt;400 mg active ingredient (a.i.)/L. For Daphnia, the EC50 is &gt;318 mg a.i./L. The most sensitive species is green algae with EC50 values in the range of 230–236 mg a.i./ L and No Observed Effect Concentrations (NOEC) in the range of 31–75 mg a.i./L. The aquatic Predicted No Effect Concentration (PNEC) was found to be 0.23 mg a.i./L. The Predicted Environmental Concentration (PEC)/PNEC ratio has been determined to be &lt; 1 and, therefore, hydrotropes in household laundry and cleaning products have been determined to not be an environmental concern. Human health. Aggregate exposures to consumers (direct and indirect dermal contact, ingestion, and inhalation) have been estimated to be 1.42 ug/Kg bw/day. Calcium xylene sulfonate and sodium cumene sulfonate have been shown to cause temporary, slight eye irritation in animals. Studies have not found hydrotropes to be mutagenic, carcinogenic or have reproductive toxicity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\log{\\frac {S_0}{S}} = K \\cdot C_a" } ]
https://en.wikipedia.org/wiki?curid=8251931
8253098
Protein precipitation
Protein precipitation is widely used in downstream processing of biological products in order to concentrate proteins and purify them from various contaminants. For example, in the biotechnology industry protein precipitation is used to eliminate contaminants commonly contained in blood. The underlying mechanism of precipitation is to alter the solvation potential of the solvent, more specifically, by lowering the solubility of the solute by addition of a reagent. Biochemical laboratory technique General principles. The solubility of proteins in aqueous buffers depends on the distribution of hydrophilic and hydrophobic amino acid residues on the protein's surface. Hydrophobic residues predominantly occur in the globular protein core, but some exist in patches on the surface. Proteins that have high hydrophobic amino acid content on the surface have low solubility in an aqueous solvent. Charged and polar surface residues interact with ionic groups in the solvent and increase the solubility of a protein. Knowledge of a protein's amino acid composition will aid in determining an ideal precipitation solvent and methods. Repulsive electrostatic force. Repulsive electrostatic forces form when proteins are dissolved in an electrolyte solution. These repulsive forces between proteins prevent aggregation and facilitate dissolution. Upon dissolution in an electrolyte solution, solvent counterions migrate towards charged surface residues on the protein, forming a rigid matrix of counterions on the protein's surface. Next to this layer is another solvation layer that is less rigid and, as one moves away from the protein surface, contains a decreasing concentration of counterions and an increasing concentration of co-ions. The presence of these solvation layers cause the protein to have fewer ionic interactions with other proteins and decreases the likelihood of aggregation. Repulsive electrostatic forces also form when proteins are dissolved in water. Water forms a solvation layer around the hydrophilic surface residues of a protein. Water establishes a concentration gradient around the protein, with the highest concentration at the protein surface. This water network has a damping effect on the attractive forces between proteins. Attractive electrostatic force. Dispersive or attractive forces exist between proteins through permanent and induced dipoles. For example, basic residues on a protein can have electrostatic interactions with acidic residues on another protein. However, solvation by ions in an electrolytic solution or water will decrease protein–protein attractive forces. Therefore, to precipitate or induce accumulation of proteins, the hydration layer around the protein should be reduced. The purpose of the added reagents in protein precipitation is to reduce the hydration layer. Precipitate formation. Protein precipitate formation occurs in a stepwise process. First, a precipitating agent is added and the solution is steadily mixed. Mixing causes the precipitant and protein to collide. Enough mixing time is required for molecules to diffuse across the fluid eddies. Next, proteins undergo a nucleation phase, where submicroscopic sized protein aggregates, or particles, are generated. Growth of these particles is under Brownian diffusion control. Once the particles reach a critical size (0.1 μm to 10 μm for high and low shear fields, respectively), by diffusive addition of individual protein molecules to it, they continue to grow by colliding into each other and sticking or flocculating. 
This phase occurs at a slower rate. During the final step, called aging in a shear field, the precipitate particles repeatedly collide and stick, then break apart, until a stable mean particle size is reached, which is dependent upon individual proteins. The mechanical strength of the protein particles correlates with the product of the mean shear rate and the aging time, which is known as the Camp number. Aging helps particles withstand the fluid shear forces encountered in pumps and centrifuge feed zones without reducing in size. Methods. Salting out. Salting out is the most common method used to precipitate a protein. Addition of a neutral salt, such as ammonium sulfate, compresses the solvation layer and increases protein–protein interactions. As the salt concentration of a solution is increased, the charges on the surface of the protein interact with the salt, not the water, thereby exposing hydrophobic patches on the protein surface and causing the protein to fall out of solution (aggregate and precipitate). Energetics involved in salting out. Salting out is a spontaneous process when the right concentration of the salt is reached in solution. The hydrophobic patches on the protein surface generate highly ordered water shells. This results in a small decrease in enthalpy, Δ"H", and a larger decrease in entropy, Δ"S," of the ordered water molecules relative to the molecules in the bulk solution. The overall free energy change, Δ"G", of the process is given by the Gibbs free energy equation: formula_0 Δ"G" = Free energy change, Δ"H" = Enthalpy change upon precipitation, Δ"S" = Entropy change upon precipitation, "T" = Absolute temperature. When water molecules in the rigid solvation layer are brought back into the bulk phase through interactions with the added salt, their greater freedom of movement causes a significant increase in their entropy. Thus, Δ"G" becomes negative and precipitation occurs spontaneously. Hofmeister series. Kosmotropes or "water structure stabilizers" are salts which promote the dissipation / dispersion of water from the solvation layer around a protein. Hydrophobic patches are then exposed on the protein's surface, and they interact with hydrophobic patches on other proteins. These salts enhance protein aggregation and precipitation. Chaotropes or "water structure breakers," have the opposite effect of Kosmotropes. These salts promote an increase in the solvation layer around a protein. The effectiveness of the kosmotropic salts in precipitating proteins follows the order of the Hofmeister series: Most precipitation formula_1 least precipitation Most precipitation formula_2 least precipitation Salting out in practice. The decrease in protein solubility follows a normalized solubility curve of the type shown. The relationship between the solubility of a protein and increasing ionic strength of the solution can be represented by the Cohn equation: formula_3 "S" = solubility of the protein, "B" is idealized solubility, "K" is a salt-specific constant and "I" is the ionic strength of the solution, which is attributed to the added salt. formula_4 "z""i" is the ion charge of the salt and "c""i" is the salt concentration. The ideal salt for protein precipitation is most effective for a particular amino acid composition, inexpensive, non-buffering, and non-polluting. The most commonly used salt is ammonium sulfate. There is a low variation in salting out over temperatures 0 °C to 30 °C. 
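The Cohn equation and the ionic-strength definition above can be combined in a short calculation; the sketch below assumes a base-10 logarithm, and the constants B and K are invented placeholders rather than data for any real protein.

def ionic_strength(salts):
    # I = 1/2 * sum(c_i * z_i^2) over (concentration, charge) pairs.
    return 0.5 * sum(c * z**2 for c, z in salts)

def cohn_solubility(B, K, I):
    # log S = B - K * I, rearranged for S.
    return 10 ** (B - K * I)

# 1 M ammonium sulfate gives 2 M NH4+ (z = 1) and 1 M SO4^2- (z = 2), so I = 3 M.
I = ionic_strength([(2.0, 1), (1.0, 2)])
print(I, cohn_solubility(B=1.0, K=0.8, I=I))   # solubility falls as ionic strength rises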
Protein precipitates left in the salt solution can remain stable for years-protected from proteolysis and bacterial contamination by the high salt concentrations. Isoelectric precipitation. The isoelectric point (pI) is the pH of a solution at which the net primary charge of a protein becomes zero. At a solution pH that is above the pI the surface of the protein is predominantly negatively charged and therefore like-charged molecules will exhibit repulsive forces. Likewise, at a solution pH that is below the pI, the surface of the protein is predominantly positively charged and repulsion between proteins occurs. However, at the pI the negative and positive charges cancel, repulsive electrostatic forces are reduced and the attraction forces predominate. The attraction forces will cause aggregation and precipitation. The pI of most proteins is in the pH range of 4–6. Mineral acids, such as hydrochloric and sulfuric acid are used as precipitants. The greatest disadvantage to isoelectric point precipitation is the irreversible denaturation caused by the mineral acids. For this reason isoelectric point precipitation is most often used to precipitate contaminant proteins, rather than the target protein. The precipitation of casein during cheesemaking, or during production of sodium caseinate, is an isoelectric precipitation. Precipitation with miscible solvents. Addition of miscible solvents such as ethanol or methanol to a solution may cause proteins in the solution to precipitate. The solvation layer around the protein will decrease as the organic solvent progressively displaces water from the protein surface and binds it in hydration layers around the organic solvent molecules. With smaller hydration layers, the proteins can aggregate by attractive electrostatic and dipole forces. Important parameters to consider are temperature, which should be less than 0 °C to avoid denaturation, pH and protein concentration in solution. Miscible organic solvents decrease the dielectric constant of water, which in effect allows two proteins to come close together. At the isoelectric point the relationship between the dielectric constant and protein solubility is given by: formula_5 "S"0 is an extrapolated value of "S", "e" is the dielectric constant of the mixture and "k" is a constant that relates to the dielectric constant of water. The Cohn process for plasma protein fractionation relies on solvent precipitation with ethanol to isolate individual plasma proteins. a clinical application for the use of methanol as a protein precipitating agent is in the estimation of bilirubin. Non-ionic hydrophilic polymers. Polymers, such as dextrans and polyethylene glycols, are frequently used to precipitate proteins because they have low flammability and are less likely to denature biomaterials than isoelectric precipitation. These polymers in solution attract water molecules away from the solvation layer around the protein. This increases the protein–protein interactions and enhances precipitation. For the specific case of polyethylene glycol, precipitation can be modeled by the equation: formula_6 "C" is the polymer concentration, "P" is a protein–protein interaction coefficient, "a" is a protein–polymer interaction coefficient and formula_7 "μ" is the chemical potential of component I, "R" is the universal gas constant and "T" is the absolute temperature. Flocculation by polyelectrolytes. 
Alginate, carboxymethylcellulose, polyacrylic acid, tannic acid and polyphosphates can form extended networks between protein molecules in solution. The effectiveness of these polyelectrolytes depend on the pH of the solution. Anionic polyelectrolytes are used at pH values less than the isoelectric point. Cationic polyelectrolytes are at pH values above the pI. It is important to note that an excess of polyelectrolytes will cause the precipitate to dissolve back into the solution. An example of polyelectrolyte flocculation is the removal of protein cloud from beer wort using Irish moss. Polyvalent metallic ions. Metal salts can be used at low concentrations to precipitate enzymes and nucleic acids from solutions. Polyvalent metal ions frequently used are Ca2+, Mg2+, Mn2+ or Fe2+. Precipitation reactors. There are numerous industrial scaled reactors than can be used to precipitate large amounts of proteins, such as recombinant DNA polymerases from a solution. Batch reactors. Batch reactors are the simplest type of precipitation reactor. The precipitating agent is slowly added to the protein solution under mixing. The aggregating protein particles tend to be compact and regular in shape. Since the particles are exposed to a wide range of shear stresses for a long period of time, they tend to be compact, dense and mechanically stable. Tubular reactors. In tubular reactors, feed protein solution and the precipitating reagent are contacted in a zone of efficient mixing then fed into long tubes where precipitation takes place. The fluid in volume elements approach plug flow as they move though the tubes of the reactor. Turbulent flow is promoted through wire mesh inserts in the tube. The tubular reactor does not require moving mechanical parts and is inexpensive to build. However, the reactor can become impractically long if the particles aggregate slowly. Continuous stirred tank reactors (CSTR). CSTR reactors run at steady state with a continuous flow of reactants and products in a well-mixed tank. Fresh protein feed contacts slurry that already contains precipitate particles and the precipitation reagents. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\Delta G = \\Delta H - T \\Delta S." }, { "math_id": 1, "text": " \\mathrm{ PO_{4}^{3-} > SO_{4}^{2-} > COO^{-} > Cl^{-}}" }, { "math_id": 2, "text": " \\mathrm{ NH_{4}^{+} > K^{+} > Na^{+}}" }, { "math_id": 3, "text": " \\log S = B - KI \\," }, { "math_id": 4, "text": " I = \\begin{matrix}\\frac{1}{2}\\end{matrix}\\sum_{i=1}^{n} c_{i}z_{i}^{2} " }, { "math_id": 5, "text": " \\log S = k/e^{2} + \\log S^{0} \\," }, { "math_id": 6, "text": " \\ln(S) + pS = X - aC \\," }, { "math_id": 7, "text": " x = (\\mu_i - \\mu_i^{0})RT " } ]
https://en.wikipedia.org/wiki?curid=8253098
8253417
Plasma modeling
Plasma modeling refers to solving equations of motion that describe the state of a plasma. It is generally coupled with Maxwell's equations for electromagnetic fields or Poisson's equation for electrostatic fields. There are several main types of plasma models: single particle, kinetic, fluid, hybrid kinetic/fluid, gyrokinetic and as system of many particles. Single particle description. The single particle model describes the plasma as individual electrons and ions moving in imposed (rather than self-consistent) electric and magnetic fields. The motion of each particle is thus described by the Lorentz Force Law. In many cases of practical interest, this motion can be treated as the superposition of a relatively fast circular motion around a point called the guiding center and a relatively slow drift of this point. Kinetic description. The kinetic model is the most fundamental way to describe a plasma, resultantly producing a distribution function formula_0 where the independent variables formula_1 and formula_2 are position and velocity, respectively. A kinetic description is achieved by solving the Boltzmann equation or, when the correct description of long-range Coulomb interaction is necessary, by the Vlasov equation which contains self-consistent collective electromagnetic field, or by the Fokker–Planck equation, in which approximations have been used to derive manageable collision terms. The charges and currents produced by the distribution functions self-consistently determine the electromagnetic fields via Maxwell's equations. Fluid description. To reduce the complexities in the kinetic description, the fluid model describes the plasma based on macroscopic quantities (velocity moments of the distribution such as density, mean velocity, and mean energy). The equations for macroscopic quantities, called fluid equations, are obtained by taking velocity moments of the Boltzmann equation or the Vlasov equation. The fluid equations are not closed without the determination of transport coefficients such as mobility, diffusion coefficient, averaged collision frequencies, and so on. To determine the transport coefficients, the velocity distribution function must be assumed/chosen. But this assumption can lead to a failure of capturing some physics. Hybrid kinetic/fluid description. Although the kinetic model describes the physics accurately, it is more complex (and in the case of numerical simulations, more computationally intensive) than the fluid model. The hybrid model is a combination of fluid and kinetic models, treating some components of the system as a fluid, and others kinetically. The hybrid model is sometimes applied in space physics, when the simulation domain exceeds thousands of ion gyroradius scales, making it impractical to solve kinetic equations for electrons. In this approach, magnetohydrodynamic fluid equations describe electrons, while the kinetic Vlasov equation describes ions. Gyrokinetic description. In the gyrokinetic model, which is appropriate to systems with a strong background magnetic field, the kinetic equations are averaged over the fast circular motion of the gyroradius. This model has been used extensively for simulation of tokamak plasma instabilities (for example, the GYRO and Gyrokinetic ElectroMagnetic codes), and more recently in astrophysical applications. Quantum mechanical methods. Quantum methods are not yet very common in plasma modeling. They can be used to solve unique modeling problems; like situations where other methods do not apply. 
They involve the application of quantum field theory to plasma. In these cases, the electric and magnetic fields made by particles are modeled like a field; A web of forces. Particles that move, or are removed from the population push and pull on this web of forces, this field. The mathematical treatment for this involves Lagrangian mathematics. Collisional-radiative modeling is used to calculate quantum state densities and the emission/absorption properties of a plasma. This plasma radiation physics is critical for the diagnosis and simulation of astrophysical and nuclear fusion plasma. It is one of the most general approaches and lies between the extrema of a local thermal equilibrium and a coronal picture. In a local thermal equilibrium the population of excited states is distributed according to a Boltzmann distribution. However, this holds only if densities are high enough for an excited hydrogen atom to undergo many collisions such that the energy is distributed before the radiative process sets in. In a coronal picture the timescale of the radiative process is small compared to the collisions since densities are very small. The use of the term coronal equilibrium is ambiguous and may also refer to the non-transport ionization balance of recombination and ionization. The only thing they have in common is that a coronal equilibrium is not sufficient for tokamak plasma. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f(\\vec{x},\\vec{v},t)" }, { "math_id": 1, "text": "\\vec{x}" }, { "math_id": 2, "text": "\\vec{v}" } ]
https://en.wikipedia.org/wiki?curid=8253417
825390
Luce's choice axiom
In probability theory, Luce's choice axiom, formulated by R. Duncan Luce (1959), states that the relative odds of selecting one item over another from a pool of many items are not affected by the presence or absence of other items in the pool. Selection of this kind is said to have "independence from irrelevant alternatives" (IIA). Overview. Consider a set formula_0 of possible outcomes, and consider a selection rule formula_1, such that for any formula_2 with formula_3 a finite set, the selector selects formula_4 from formula_3 with probability formula_5. Luce proposed two choice axioms. The second is what is usually meant by "Luce's choice axiom", since the first is usually called "independence from irrelevant alternatives" (IIA). Luce's choice axiom 1 (IIA): if formula_6, then for any formula_7, we still have formula_8. Luce's choice axiom 2 ("path independence"): formula_9 for any formula_10. Luce's choice axiom 1 is implied by choice axiom 2. Matching law formulation. Define the matching law selection rule formula_11, for some "value" function formula_12. This is sometimes called the softmax function, or the Boltzmann distribution. Theorem: Any matching law selection rule satisfies Luce's choice axiom. Conversely, if formula_13 for all formula_2, then Luce's choice axiom implies that it is a matching law selection rule. Applications. In economics, it can be used to model a consumer's tendency to choose one brand of product over another. In behavioral psychology, it is used to model response behavior in the form of the matching law. In cognitive science, it is used to model approximately rational decision processes. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
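The matching law formulation above can be checked numerically. The following Python sketch uses an arbitrary, illustrative value function and verifies both the path-independence identity and the invariance of relative odds (IIA) when the pool of alternatives shrinks.

```python
import math

def matching_law(u, pool):
    """Selection probabilities P(a | pool) proportional to a value u(a) > 0."""
    total = sum(u[a] for a in pool)
    return {a: u[a] / total for a in pool}

# Illustrative positive values for four alternatives (arbitrary choice).
u = {"a": 2.0, "b": 1.0, "c": 4.0, "d": 0.5}

A = ["a", "b", "c", "d"]
B = ["a", "b"]                      # a subset of A

P_A = matching_law(u, A)
P_B = matching_law(u, B)

# Path independence: P(a|A) = P(a|B) * sum over b in B of P(b|A)
lhs = P_A["a"]
rhs = P_B["a"] * sum(P_A[b] for b in B)
assert math.isclose(lhs, rhs)
print(lhs, rhs)                     # the two numbers agree

# IIA in odds form: the ratio P(a)/P(b) is the same in both pools.
print(P_A["a"] / P_A["b"], P_B["a"] / P_B["b"])   # both equal u["a"]/u["b"] = 2.0
```

The relative odds between any two alternatives equal the ratio of their values, so removing or adding other items in the pool leaves those odds unchanged.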
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "a\\in A \\subset X" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": "a" }, { "math_id": 5, "text": "P(a \\mid A)" }, { "math_id": 6, "text": "P(a\\mid A) = 0, P(b\\mid A) > 0" }, { "math_id": 7, "text": "a, b \\in B \\subset A" }, { "math_id": 8, "text": "P(a\\mid B) = 0" }, { "math_id": 9, "text": "P(a \\mid A) = P(a\\mid B)\\sum_{b\\in B}P(b\\mid A)" }, { "math_id": 10, "text": "a \\in B \\subset A" }, { "math_id": 11, "text": "P(a\\mid A) = \\frac{u(a)}{\\sum_{a'\\in A} u(a')}" }, { "math_id": 12, "text": "u: A \\to (0, \\infty)" }, { "math_id": 13, "text": "P(a\\mid A)> 0" } ]
https://en.wikipedia.org/wiki?curid=825390
8254
Diode
Two-terminal electronic component A diode is a two-terminal electronic component that conducts current primarily in one direction (asymmetric conductance). It has low (ideally zero) resistance in one direction and high (ideally infinite) resistance in the other. A semiconductor diode, the most commonly used type today, is a crystalline piece of semiconductor material with a p–n junction connected to two electrical terminals. It has an exponential current–voltage characteristic. Semiconductor diodes were the first semiconductor electronic devices. The discovery of asymmetric electrical conduction across the contact between a crystalline mineral and a metal was made by German physicist Ferdinand Braun in 1874. Today, most diodes are made of silicon, but other semiconducting materials such as gallium arsenide and germanium are also used. The obsolete thermionic diode is a vacuum tube with two electrodes, a heated cathode and a plate, in which electrons can flow in only one direction, from the cathode to the plate. Among many uses, diodes are found in rectifiers that convert alternating current (AC) power to direct current (DC) and in demodulators in radio receivers, and they can even be used for logic or as temperature sensors. A common variant of a diode is a light-emitting diode, which is used for electric lighting and for status indicators on electronic devices. Main functions. Unidirectional current flow. The most common function of a diode is to allow an electric current to pass in one direction (called the diode's "forward" direction), while blocking it in the opposite direction (the "reverse" direction). Its hydraulic analogy is a check valve. This unidirectional behavior can convert alternating current (AC) to direct current (DC), a process called rectification. As rectifiers, diodes can be used for such tasks as extracting modulation from radio signals in radio receivers. Threshold voltage. A diode's behavior is often simplified as having a "forward threshold voltage" or "turn-on voltage" or "cut-in voltage", above which there is significant current and below which there is almost no current; the value of this threshold depends on the diode's composition. This voltage may loosely be referred to simply as the diode's "forward voltage drop" or just "voltage drop", since a consequence of the steepness of the exponential is that a diode's voltage drop will not significantly exceed the threshold voltage under normal forward bias operating conditions. Datasheets typically quote a typical or maximum "forward voltage" (VF) for a specified current and temperature (e.g. 20 mA and 25 °C for LEDs), so the user has a guarantee about when a certain amount of current will flow. At higher currents, the forward voltage drop of the diode increases. For instance, a drop of 1 V to 1.5 V is typical at full rated current for silicon power diodes. However, a semiconductor diode's exponential current–voltage characteristic is really more gradual than this simple on–off action. Although an exponential function may appear to have a definite "knee" around this threshold when viewed on a linear scale, the knee is an illusion that depends on the scale of the y-axis representing current. In a semi-log plot (using a logarithmic scale for current and a linear scale for voltage), the diode's exponential curve instead appears more like a straight line. 
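The behaviour of the apparent "knee" described above can be made concrete with a short numerical sketch of the exponential current–voltage relation (the Shockley diode equation, discussed later in this article). The saturation current, ideality factor and thermal voltage below are assumed, illustrative values for a small silicon diode with no series resistance.

```python
import math

# Assumed, illustrative parameters for an ideal small silicon diode:
I_S = 1e-12     # saturation current in amperes
n   = 1.0       # ideality factor
V_T = 0.02585   # thermal voltage at roughly 300 K, in volts

def diode_current(v):
    """Shockley diode equation I = I_S * (exp(V / (n*V_T)) - 1)."""
    return I_S * (math.exp(v / (n * V_T)) - 1.0)

for v in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]:
    i = diode_current(v)
    # log10(I) grows linearly with V, so a semi-log plot is a near-straight line,
    # while on a linear current scale everything below about 0.5 V looks "off".
    print(f"V = {v:.1f} V   I = {i:.3e} A   log10(I) = {math.log10(i):.2f}")
```

The current grows by a roughly constant factor for each equal increment of voltage, which is why the linear-scale curve seems to switch on abruptly while the logarithmic plot shows no knee at all.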
Since a diode's forward-voltage drop varies only a little with the current, and is more so a function of temperature, this effect can be used as a temperature sensor or as a somewhat imprecise voltage reference. Reverse breakdown. A diode's high resistance to current flowing in the reverse direction suddenly drops to a low resistance when the reverse voltage across the diode reaches a value called the breakdown voltage. This effect is used to regulate voltage (Zener diodes) or to protect circuits from high voltage surges (avalanche diodes). Other functions. A semiconductor diode's current–voltage characteristic can be tailored by selecting the semiconductor materials and the doping impurities introduced into the materials during manufacture. These techniques are used to create special-purpose diodes that perform many different functions. For example, to electronically tune radio and TV receivers (varactor diodes), to generate radio-frequency oscillations (tunnel diodes, Gunn diodes, IMPATT diodes), and to produce light (light-emitting diodes). Tunnel, Gunn and IMPATT diodes exhibit negative resistance, which is useful in microwave and switching circuits. Diodes, both vacuum and semiconductor, can be used as shot-noise generators. History. Thermionic (vacuum-tube) diodes and solid-state (semiconductor) diodes were developed separately, at approximately the same time, in the early 1900s, as radio receiver detectors. Until the 1950s, vacuum diodes were used more frequently in radios because the early point-contact semiconductor diodes were less stable. In addition, most receiving sets had vacuum tubes for amplification that could easily have the thermionic diodes included in the tube (for example the 12SQ7 double diode triode), and vacuum-tube rectifiers and gas-filled rectifiers were capable of handling some high-voltage/high-current rectification tasks better than the semiconductor diodes (such as selenium rectifiers) that were available at that time. In 1873, Frederick Guthrie observed that a grounded, white-hot metal ball brought in close proximity to an electroscope would discharge a positively charged electroscope, but not a negatively charged electroscope. In 1880, Thomas Edison observed unidirectional current between heated and unheated elements in a bulb, later called Edison effect, and was granted a patent on application of the phenomenon for use in a DC voltmeter. About 20 years later, John Ambrose Fleming (scientific adviser to the Marconi Company and former Edison employee) realized that the Edison effect could be used as a radio detector. Fleming patented the first true thermionic diode, the Fleming valve, in Britain on 16 November 1904 (followed by U.S. patent 803684 in November 1905). Throughout the vacuum tube era, valve diodes were used in almost all electronics such as radios, televisions, sound systems, and instrumentation. They slowly lost market share beginning in the late 1940s due to selenium rectifier technology and then to semiconductor diodes during the 1960s. Today they are still used in a few high power applications where their ability to withstand transient voltages and their robustness gives them an advantage over semiconductor devices, and in musical instrument and audiophile applications. In 1874, German scientist Karl Ferdinand Braun discovered the "unilateral conduction" across a contact between a metal and a mineral. Indian scientist Jagadish Chandra Bose was the first to use a crystal for detecting radio waves in 1894. 
The crystal detector was developed into a practical device for wireless telegraphy by Greenleaf Whittier Pickard, who invented a silicon crystal detector in 1903 and received a patent for it on 20 November 1906. Other experimenters tried a variety of other minerals as detectors. Semiconductor principles were unknown to the developers of these early rectifiers. During the 1930s understanding of physics advanced and in the mid-1930s researchers at Bell Telephone Laboratories recognized the potential of the crystal detector for application in microwave technology. Researchers at Bell Labs, Western Electric, MIT, Purdue and in the UK intensively developed point-contact diodes ("crystal rectifiers" or "crystal diodes") during World War II for application in radar. After World War II, AT&amp;T used these in its microwave towers that criss-crossed the United States, and many radar sets use them even in the 21st century. In 1946, Sylvania began offering the 1N34 crystal diode. During the early 1950s, junction diodes were developed. In 2022, the first superconducting diode effect without an external magnetic field was realized. Etymology. At the time of their invention, asymmetrical conduction devices were known as rectifiers. In 1919, the year tetrodes were invented, William Henry Eccles coined the term "diode" from the Greek roots "di" (from "δί"), meaning 'two', and "ode" (from "οδός"), meaning 'path'. The word "diode", however, was already in use, as were "triode, tetrode, pentode, hexode", as terms of multiplex telegraphy. Although all diodes "rectify", "rectifier" usually applies to diodes used for power supply, to differentiate them from diodes intended for small signal circuits. Vacuum tube diodes. A thermionic diode is a thermionic-valve device consisting of a sealed, evacuated glass or metal envelope containing two electrodes: a cathode and a plate. The cathode is either "indirectly heated" or "directly heated". If indirect heating is employed, a heater is included in the envelope. In operation, the cathode is heated to red heat. A directly heated cathode is made of tungsten wire and is heated by a current passed through it from an external voltage source. An indirectly heated cathode is heated by infrared radiation from a nearby heater that is formed of Nichrome wire and supplied with current provided by an external voltage source. The operating temperature of the cathode causes it to release electrons into the vacuum, a process called thermionic emission. The cathode is coated with oxides of alkaline earth metals, such as barium and strontium oxides. These have a low work function, meaning that they more readily emit electrons than would the uncoated cathode. The plate, not being heated, does not emit electrons, but is able to absorb them. The alternating voltage to be rectified is applied between the cathode and the plate. When the plate voltage is positive with respect to the cathode, the plate electrostatically attracts the electrons from the cathode, so a current of electrons flows through the tube from cathode to plate. When the plate voltage is negative with respect to the cathode, no electrons are emitted by the plate, so no current can pass from the plate to the cathode. Semiconductor diodes. Point-contact diodes. Point-contact diodes were developed starting in the 1930s, out of the early crystal detector technology, and are now generally used in the 3 to 30 gigahertz range. 
Point-contact diodes use a small diameter metal wire in contact with a semiconductor crystal, and are of either "non-welded" contact type or "welded contact" type. Non-welded contact construction utilizes the Schottky barrier principle. The metal side is the pointed end of a small diameter wire that is in contact with the semiconductor crystal. In the welded contact type, a small P region is formed in the otherwise N-type crystal around the metal point during manufacture by momentarily passing a relatively large current through the device. Point contact diodes generally exhibit lower capacitance, higher forward resistance and greater reverse leakage than junction diodes. Junction diodes. p–n junction diode. A p–n junction diode is made of a crystal of semiconductor, usually silicon, but germanium and gallium arsenide are also used. Impurities are added to it to create a region on one side that contains negative charge carriers (electrons), called an n-type semiconductor, and a region on the other side that contains positive charge carriers (holes), called a p-type semiconductor. When the n-type and p-type materials are attached together, a momentary flow of electrons occurs from the n to the p side resulting in a third region between the two where no charge carriers are present. This region is called the depletion region because there are no charge carriers (neither electrons nor holes) in it. The diode's terminals are attached to the n-type and p-type regions. The boundary between these two regions, called a p–n junction, is where the action of the diode takes place. When a sufficiently higher electrical potential is applied to the P side (the anode) than to the N side (the cathode), it allows electrons to flow through the depletion region from the N-type side to the P-type side. The junction does not allow the flow of electrons in the opposite direction when the potential is applied in reverse, creating, in a sense, an electrical check valve. Schottky diode. Another type of junction diode, the Schottky diode, is formed from a metal–semiconductor junction rather than a p–n junction, which reduces capacitance and increases switching speed. Current–voltage characteristic. A semiconductor diode's behavior in a circuit is given by its current–voltage characteristic. The shape of the curve is determined by the transport of charge carriers through the so-called "depletion layer" or "depletion region" that exists at the p–n junction between differing semiconductors. When a p–n junction is first created, conduction-band (mobile) electrons from the N-doped region diffuse into the P-doped region where there is a large population of holes (vacant places for electrons) with which the electrons "recombine". When a mobile electron recombines with a hole, both hole and electron vanish, leaving behind an immobile positively charged donor (dopant) on the N side and negatively charged acceptor (dopant) on the P side. The region around the p–n junction becomes depleted of charge carriers and thus behaves as an insulator. However, the width of the depletion region (called the depletion width) cannot grow without limit. For each electron–hole pair recombination made, a positively charged dopant ion is left behind in the N-doped region, and a negatively charged dopant ion is created in the P-doped region. As recombination proceeds and more ions are created, an increasing electric field develops through the depletion zone that acts to slow and then finally stop recombination. 
At this point, there is a "built-in" potential across the depletion zone. Reverse bias. If an external voltage is placed across the diode with the same polarity as the built-in potential, the depletion zone continues to act as an insulator, preventing any significant electric current flow (unless electron–hole pairs are actively being created in the junction by, for instance, light; see photodiode). Forward bias. However, if the polarity of the external voltage opposes the built-in potential, recombination can once again proceed, resulting in a substantial electric current through the p–n junction (i.e. substantial numbers of electrons and holes recombine at the junction) that increases exponentially with voltage. Operating regions. A diode's current–voltage characteristic can be approximated by four operating regions. From lower to higher bias voltages, these are the reverse-breakdown region, the reverse-bias region, the forward-bias region below the threshold voltage, and the forward-conduction region above it. Shockley diode equation. The "Shockley ideal diode equation" or the "diode law" (named after the bipolar junction transistor co-inventor William Bradford Shockley) models the exponential current–voltage (I–V) relationship of diodes in moderate forward or reverse bias. The article Shockley diode equation provides details. Small-signal behavior. At forward voltages less than the saturation voltage, the voltage versus current characteristic curve of most diodes is not a straight line. The current can be approximated by formula_0 as explained in the Shockley diode equation article. In detector and mixer applications, the current can be estimated by a Taylor's series. The odd terms can be omitted because they produce frequency components that are outside the pass band of the mixer or detector. Even terms beyond the second derivative usually need not be included because they are small compared to the second order term. The desired current component is approximately proportional to the square of the input voltage, so the response is called "square law" in this region. Reverse-recovery effect. Following the end of forward conduction in a p–n type diode, a reverse current can flow for a short time. The device does not attain its blocking capability until the mobile charge in the junction is depleted. The effect can be significant when switching large currents very quickly. A certain amount of "reverse recovery time" tr (on the order of tens of nanoseconds to a few microseconds) may be required to remove the reverse recovery charge Qr from the diode. During this recovery time, the diode can actually conduct in the reverse direction. This might give rise to a large current in the reverse direction for a short time while the diode is reverse biased. The magnitude of such a reverse current is determined by the operating circuit (i.e., the series resistance) and the diode is said to be in the storage phase. In certain real-world cases it is important to consider the losses that are incurred by this non-ideal diode effect. However, when the slew rate of the current is not so severe (e.g. line frequency) the effect can be safely ignored. For most applications, the effect is also negligible for Schottky diodes. The reverse current ceases abruptly when the stored charge is depleted; this abrupt stop is exploited in step recovery diodes for the generation of extremely short pulses. Types of semiconductor diode. Normal (p–n) diodes, which operate as described above, are usually made of doped silicon or germanium. Before the development of silicon power rectifier diodes, cuprous oxide and later selenium were used. 
Their low efficiency required a much higher forward voltage to be applied (typically 1.4 to 1.7 V per "cell", with multiple cells stacked so as to increase the peak inverse voltage rating for application in high voltage rectifiers), and required a large heat sink (often an extension of the diode's metal substrate), much larger than the later silicon diode of the same current ratings would require. The vast majority of all diodes are the p–n diodes found in CMOS integrated circuits, which include two diodes per pin and many other internal diodes. Avalanche diodes. These are diodes that conduct in the reverse direction when the reverse bias voltage exceeds the breakdown voltage. These are electrically very similar to Zener diodes (and are often mistakenly called Zener diodes), but break down by a different mechanism: the "avalanche effect". This occurs when the reverse electric field applied across the p–n junction causes a wave of ionization, reminiscent of an avalanche, leading to a large current. Avalanche diodes are designed to break down at a well-defined reverse voltage without being destroyed. The difference between the avalanche diode (which has a reverse breakdown above about 6.2 V) and the Zener is that the channel length of the former exceeds the mean free path of the electrons, resulting in many collisions between them on the way through the channel. The only practical difference between the two types is they have temperature coefficients of opposite polarities. Constant-current diodes. These are actually JFETs with the gate shorted to the source, and function like a two-terminal current-limiting analog to the voltage-limiting Zener diode. They allow a current through them to rise to a certain value, and then level off at a specific value. Also called "CLDs", "constant-current diodes", "diode-connected transistors", or "current-regulating diodes". Crystal rectifiers or crystal diodes. These are point-contact diodes. The 1N21 series and others are used in mixer and detector applications in radar and microwave receivers. The 1N34A is another example of a crystal diode. Gunn diodes. These are similar to tunnel diodes in that they are made of materials such as GaAs or InP that exhibit a region of negative differential resistance. With appropriate biasing, dipole domains form and travel across the diode, allowing high frequency microwave oscillators to be built. Light-emitting diodes (LEDs). In a diode formed from a direct band-gap semiconductor, such as gallium arsenide, charge carriers that cross the junction emit photons when they recombine with the majority carrier on the other side. Depending on the material, wavelengths (or colors) from the infrared to the near ultraviolet may be produced. The first LEDs were red and yellow, and higher-frequency diodes have been developed over time. All LEDs produce incoherent, narrow-spectrum light; "white" LEDs are actually a blue LED with a yellow scintillator coating, or combinations of three LEDs of a different color. LEDs can also be used as low-efficiency photodiodes in signal applications. An LED may be paired with a photodiode or phototransistor in the same package, to form an opto-isolator. Laser diodes. When an LED-like structure is contained in a resonant cavity formed by polishing the parallel end faces, a laser can be formed. Laser diodes are commonly used in optical storage devices and for high speed optical communication. Thermal diodes. This term is used both for conventional p–n diodes used to monitor temperature because of their varying forward voltage with temperature, and for Peltier heat pumps for thermoelectric heating and cooling. 
Peltier heat pumps may be made from semiconductors; though they do not have any rectifying junction, they use the differing behavior of charge carriers in N- and P-type semiconductors to move heat. Photodiodes. All semiconductors are subject to optical charge carrier generation. This is typically an undesired effect, so most semiconductors are packaged in light-blocking material. Photodiodes are intended to sense light (photodetector), so they are packaged in materials that allow light to pass, and are usually PIN (the kind of diode most sensitive to light). A photodiode can be used in solar cells, in photometry, or in optical communications. Multiple photodiodes may be packaged in a single device, either as a linear array or as a two-dimensional array. These arrays should not be confused with charge-coupled devices. PIN diodes. A PIN diode has a central un-doped, or "intrinsic", layer, forming a p-type/intrinsic/n-type structure. They are used as radio frequency switches and attenuators. They are also used as large-volume, ionizing-radiation detectors and as photodetectors. PIN diodes are also used in power electronics, as their central layer can withstand high voltages. Furthermore, the PIN structure can be found in many power semiconductor devices, such as IGBTs, power MOSFETs, and thyristors. Schottky diodes. Schottky diodes are constructed from a metal–semiconductor contact. They have a lower forward voltage drop than p–n junction diodes. Their forward voltage drop at forward currents of about 1 mA is in the range 0.15 V to 0.45 V, which makes them useful in voltage clamping applications and prevention of transistor saturation. They can also be used as low loss rectifiers, although their reverse leakage current is in general higher than that of other diodes. Schottky diodes are majority carrier devices and so do not suffer from minority carrier storage problems that slow down many other diodes, so they have a faster reverse recovery than p–n junction diodes. They also tend to have much lower junction capacitance than p–n diodes, which provides for high switching speeds and their use in high-speed circuitry and RF devices such as switched-mode power supplies, mixers, and detectors. Super barrier diodes. Super barrier diodes are rectifier diodes that incorporate the low forward voltage drop of the Schottky diode with the surge-handling capability and low reverse leakage current of a normal p–n junction diode. Gold-doped diodes. As a dopant, gold (or platinum) acts as a recombination center, which helps the fast recombination of minority carriers. This allows the diode to operate at higher signal frequencies, at the expense of a higher forward voltage drop. Gold-doped diodes are faster than other p–n diodes (but not as fast as Schottky diodes). They also have less reverse-current leakage than Schottky diodes (but not as good as other p–n diodes). A typical example is the 1N914. Step recovery diodes. The term "step recovery" relates to the form of the reverse recovery characteristic of these devices. After a forward current has been passing in an SRD and the current is interrupted or reversed, the reverse conduction will cease very abruptly (as in a step waveform). SRDs can, therefore, provide very fast voltage transitions by the very sudden disappearance of the charge carriers. Stabistors. The term "stabistor" refers to a special type of diode featuring extremely stable forward voltage characteristics. These devices are specially designed for low-voltage stabilization applications requiring a guaranteed voltage over a wide current range that is highly stable over temperature. 
Transient voltage suppression diodes. These are avalanche diodes designed specifically to protect other semiconductor devices from high-voltage transients. Their p–n junctions have a much larger cross-sectional area than those of a normal diode, allowing them to conduct large currents to ground without sustaining damage. Tunnel diodes. These have a region of operation showing negative resistance caused by quantum tunneling, allowing amplification of signals and very simple bistable circuits. Because of the high carrier concentration, tunnel diodes are very fast, may be used at low (mK) temperatures, high magnetic fields, and in high radiation environments. Because of these properties, they are often used in spacecraft. Varicap or varactor diodes. These are used as voltage-controlled capacitors. These are important in PLL (phase-locked loop) and FLL (frequency-locked loop) circuits, allowing tuning circuits, such as those in television receivers, to lock quickly on to the frequency. They also enabled tunable oscillators in the early discrete tuning of radios, where a cheap and stable, but fixed-frequency, crystal oscillator provided the reference frequency for a voltage-controlled oscillator. Zener diodes. These can be made to conduct in reverse bias (backward), and are correctly termed reverse breakdown diodes. This effect, called Zener breakdown, occurs at a precisely defined voltage, allowing the diode to be used as a precision voltage reference. The term Zener diode is colloquially applied to several types of breakdown diodes, but strictly speaking, Zener diodes have a breakdown voltage of below 5 volts, whilst avalanche diodes are used for breakdown voltages above that value. In practical voltage reference circuits, Zener and switching diodes are connected in series and opposite directions to balance the temperature coefficient response of the diodes to near-zero. Some devices labeled as high-voltage Zener diodes are actually avalanche diodes (see above). Two (equivalent) Zeners in series and in reverse order, in the same package, constitute a transient absorber (or Transorb, a registered trademark). Graphic symbols. The symbol used to represent a particular type of diode in a circuit diagram conveys the general electrical function to the reader. There are alternative symbols for some types of diodes, though the differences are minor. The triangle in the symbols points to the forward direction, i.e. in the direction of conventional current flow. Numbering and coding schemes. There are a number of common, standard and manufacturer-driven numbering and coding schemes for diodes; the two most common being the EIA/JEDEC standard and the European Pro Electron standard: EIA/JEDEC. The standardized 1N-series numbering "EIA370" system was introduced in the US by EIA/JEDEC (Joint Electron Device Engineering Council) about 1960. Most diodes have a 1-prefix designation (e.g., 1N4003). Among the most popular in this series were: 1N34A/1N270 (germanium signal), 1N914/1N4148 (silicon signal), 1N400x (silicon 1A power rectifier), and 1N580x (silicon 3A power rectifier). JIS. The JIS semiconductor designation system has all semiconductor diode designations starting with "1S". Pro Electron. The European Pro Electron coding system for active components was introduced in 1966 and comprises two letters followed by the part code. 
The first letter represents the semiconductor material used for the component (A = germanium and B = silicon) and the second letter represents the general function of the part (for diodes, A = low-power/signal, B = variable capacitance, X = multiplier, Y = rectifier and Z = voltage reference). Other numbering and coding systems, generally manufacturer-driven, are also in common use. Related devices. In optics, an equivalent device to the diode but with laser light would be the optical isolator, also known as an optical diode, which allows light to pass in only one direction. It uses a Faraday rotator as the main component. Applications. Radio demodulation. The first use for the diode was the demodulation of amplitude modulated (AM) radio broadcasts. The history of this discovery is treated in depth in the crystal detector article. In summary, an AM signal consists of alternating positive and negative peaks of a radio carrier wave, whose amplitude or envelope is proportional to the original audio signal. The diode rectifies the AM radio frequency signal, leaving only the positive peaks of the carrier wave. The audio is then extracted from the rectified carrier wave using a simple filter and fed into an audio amplifier or transducer, which generates sound waves via an audio speaker. In microwave and millimeter wave technology, beginning in the 1930s, researchers improved and miniaturized the crystal detector. Point contact diodes ("crystal diodes") and Schottky diodes are used in radar, microwave and millimeter wave detectors. Power conversion. Rectifiers are constructed from diodes and are used to convert alternating current (AC) electricity into direct current (DC). Automotive alternators are a common example, where the diode, which rectifies the AC into DC, provides better performance than the commutator of earlier dynamos. Similarly, diodes are also used in "Cockcroft–Walton voltage multipliers" to convert AC into higher DC voltages. Reverse-voltage protection. Since most electronic circuits can be damaged when the polarity of their power supply inputs is reversed, a series diode is sometimes used to protect against such situations. This concept is known by multiple naming variations that mean the same thing: reverse voltage protection, reverse polarity protection, and reverse battery protection. Over-voltage protection. Diodes are frequently used to conduct damaging high voltages away from sensitive electronic devices. They are usually reverse-biased (non-conducting) under normal circumstances. When the voltage rises above the normal range, the diodes become forward-biased (conducting). For example, diodes are used in (stepper motor and H-bridge) motor controller and relay circuits to de-energize coils rapidly without the damaging voltage spikes that would otherwise occur. (A diode used in such an application is called a flyback diode). Many integrated circuits also incorporate diodes on the connection pins to prevent external voltages from damaging their sensitive transistors. Specialized diodes are used to protect from over-voltages at higher power (see Diode types above). Logic gates. Diode-resistor logic constructs AND and OR logic gates. Functional completeness can be achieved by adding an active device to provide inversion (as done with diode-transistor logic). Ionizing radiation detectors. In addition to light, mentioned above, semiconductor diodes are sensitive to more energetic radiation. 
In electronics, cosmic rays and other sources of ionizing radiation cause noise pulses and single and multiple bit errors. This effect is sometimes exploited by particle detectors to detect radiation. A single particle of radiation, with thousands or millions of electron volts of energy, generates many charge carrier pairs as its energy is deposited in the semiconductor material. If the depletion layer is large enough to catch the whole shower or to stop a heavy particle, a fairly accurate measurement of the particle's energy can be made, simply by measuring the charge conducted and without the complexity of a magnetic spectrometer, etc. These semiconductor radiation detectors need efficient and uniform charge collection and low leakage current. They are often cooled by liquid nitrogen. For longer-range (about a centimeter) particles, they need a very large depletion depth and large area. For short-range particles, they need any contact or un-depleted semiconductor on at least one surface to be very thin. The back-bias voltages are near breakdown (around a thousand volts per centimeter). Germanium and silicon are common materials. Some of these detectors sense position as well as energy. They have a finite life, especially when detecting heavy particles, because of radiation damage. Silicon and germanium are quite different in their ability to convert gamma rays to electron showers. Semiconductor detectors for high-energy particles are used in large numbers. Because of energy loss fluctuations, accurate measurement of the energy deposited is of less use. Temperature measurements. A diode can be used as a temperature measuring device, since the forward voltage drop across the diode depends on temperature, as in a silicon bandgap temperature sensor. From the Shockley ideal diode equation given above, it might "appear" that the voltage has a "positive" temperature coefficient (at a constant current), but usually the variation of the reverse saturation current term is more significant than the variation in the thermal voltage term. Most diodes therefore have a "negative" temperature coefficient, typically −2 mV/°C for silicon diodes. The temperature coefficient is approximately constant for temperatures above about 20 kelvin. Some graphs are given for the 1N400x series and the CY7 cryogenic temperature sensor. Current steering. Diodes will prevent currents in unintended directions. To supply power to an electrical circuit during a power failure, the circuit can draw current from a battery. An uninterruptible power supply may use diodes in this way to ensure that the current is only drawn from the battery when necessary. Likewise, small boats typically have two circuits, each with its own battery or batteries: one used for engine starting; one used for domestics. Normally, both are charged from a single alternator, and a heavy-duty split-charge diode is used to prevent the higher-charge battery (typically the engine battery) from discharging through the lower-charge battery when the alternator is not running. Diodes are also used in electronic musical keyboards. To reduce the amount of wiring needed in electronic musical keyboards, these instruments often use keyboard matrix circuits. The keyboard controller scans the rows and columns to determine which note the player has pressed. The problem with matrix circuits is that, when several notes are pressed at once, the current can flow backward through the circuit and trigger "phantom keys" that cause "ghost" notes to play. 
To avoid triggering unwanted notes, most keyboard matrix circuits have diodes soldered with the switch under each key of the musical keyboard. The same principle is also used for the switch matrix in solid-state pinball machines. Waveform clipper. Diodes can be used to limit the positive or negative excursion of a signal to a prescribed voltage. Clamper. A diode clamp circuit can take a periodic alternating current signal that oscillates between positive and negative values, and vertically displace it such that either the positive or the negative peaks occur at a prescribed level. The clamper does not restrict the peak-to-peak excursion of the signal; it moves the whole signal up or down so as to place the peaks at the reference level. Computing exponentials and logarithms. The diode's exponential current–voltage relationship is exploited to evaluate exponentiation and its inverse function, the logarithm, using analog voltage signals. Abbreviations. Diodes are usually referred to as "D" for diode on PCBs. Sometimes the abbreviation "CR" for "crystal rectifier" is used. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
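The rectification and clipping applications described in this article can be sketched numerically with a very crude diode model that is ideal apart from an assumed constant forward drop of 0.7 V (a typical figure for a silicon diode; the value and the circuit details here are illustrative assumptions).

```python
import math

V_F = 0.7  # assumed constant forward voltage drop of a silicon diode

def half_wave_rectifier(v_in):
    """Series diode into a resistive load: conducts only on positive half-cycles."""
    return v_in - V_F if v_in > V_F else 0.0

def clipper(v_in, v_clip):
    """A diode returned to a reference voltage v_clip limits the positive excursion
    of the output to roughly v_clip + V_F."""
    return min(v_in, v_clip + V_F)

for step in range(8):
    v_in = 5.0 * math.sin(2 * math.pi * step / 8)   # a 5 V amplitude test signal
    print(f"vin={v_in:+.2f} V  rectified={half_wave_rectifier(v_in):.2f} V  "
          f"clipped={clipper(v_in, 3.0):+.2f} V")
```

The rectified output reproduces only the positive peaks of the input (minus the drop), while the clipper passes the waveform unchanged until it reaches the prescribed level and then holds it there.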
[ { "math_id": 0, "text": "I = I_\\text{S} e^{V_\\text{D}/(n V_\\text{T})}" } ]
https://en.wikipedia.org/wiki?curid=8254
825694
Eyepiece
Type of lens attached to a variety of optical devices such as telescopes and microscopes An eyepiece, or ocular lens, is a type of lens that is attached to a variety of optical devices such as telescopes and microscopes. It is named because it is usually the lens that is closest to the eye when someone looks through an optical device to observe an object or sample. The objective lens or mirror collects light from an object or sample and brings it to focus creating an image of the object. The eyepiece is placed near the focal point of the objective to magnify this image to the eyes. (The eyepiece and the eye together make an image of the image created by the objective, on the retina of the eye.) The amount of magnification depends on the focal length of the eyepiece. An eyepiece consists of several "lens elements" in a housing, with a "barrel" on one end. The barrel is shaped to fit in a special opening of the instrument to which it is attached. The image can be focused by moving the eyepiece nearer and further from the objective. Most instruments have a focusing mechanism to allow movement of the shaft in which the eyepiece is mounted, without needing to manipulate the eyepiece directly. The eyepieces of binoculars are usually permanently mounted in the binoculars, causing them to have a pre-determined magnification and field of view. With telescopes and microscopes, however, eyepieces are usually interchangeable. By switching the eyepiece, the user can adjust what is viewed. For instance, eyepieces will often be interchanged to increase or decrease the magnification of a telescope. Eyepieces also offer varying fields of view, and differing degrees of eye relief for the person who looks through them. Properties. Several properties of an eyepiece are likely to be of interest to a user of an optical instrument, when comparing eyepieces and deciding which eyepiece suits their needs. Design distance to entrance pupil. Eyepieces are optical systems where the entrance pupil is invariably located outside of the system. They must be designed for optimal performance for a specific distance to this entrance pupil (i.e. with minimum aberrations for this distance). In a refracting astronomical telescope the entrance pupil is identical with the objective. This may be several feet distant from the eyepiece; whereas with a microscope eyepiece the entrance pupil is close to the back focal plane of the objective, mere inches from the eyepiece. Microscope eyepieces may be corrected differently from telescope eyepieces; however, most are also suitable for telescope use. Elements and groups. "Elements" are the individual lenses, which may come as simple lenses or "singlets" and cemented doublets or (rarely) triplets. When lenses are cemented together in pairs or triples, the combined elements are called "groups" (of lenses). The first eyepieces had only a single lens element, which delivered highly distorted images. Two and three-element designs were invented soon after, and quickly became standard due to the improved image quality. Today, engineers assisted by computer-aided drafting software have designed eyepieces with seven or eight elements that deliver exceptionally large, sharp views. Internal reflection and scatter. Internal reflections, sometimes called "scatter", cause the light passing through an eyepiece to disperse and reduce the contrast of the image projected by the eyepiece. When the effect is particularly bad, "ghost images" are seen, called "ghosting". 
For many years, simple eyepiece designs with a minimum number of internal air-to-glass surfaces were preferred to avoid this problem. One solution to scatter is to use thin film coatings over the surface of the element. These thin coatings are only one or two wavelengths deep, and work to reduce reflections and scattering by changing the refraction of the light passing through the element. Some coatings may also absorb light that is not being passed through the lens in a process called total internal reflection where the light incident on the film is at a shallow angle. Chromatic aberration. "Lateral" or "transverse" chromatic aberration is caused because the refraction at glass surfaces differs for light of different wavelengths. Blue light, seen through an eyepiece element, will not focus to the same point but along the same axis as red light. The effect can create a ring of false colour around point sources of light and results in a general blurriness to the image. One solution is to reduce the aberration by using multiple elements of different types of glass. Achromats are lens groups that bring two different wavelengths of light to the same focus and exhibit greatly reduced false colour. Low dispersion glass may also be used to reduce chromatic aberration. "Longitudinal" chromatic aberration is a pronounced effect of optical telescope objectives, because the focal lengths are so long. Microscopes, whose focal lengths are generally shorter, do not tend to suffer from this effect. Focal length. The focal length of an eyepiece is the distance from the principal plane of the eyepiece to where parallel rays of light converge to a single point. When in use, the focal length of an eyepiece, combined with the focal length of the telescope or microscope objective, to which it is attached, determines the magnification. It is usually expressed in millimetres when referring to the eyepiece alone. When interchanging a set of eyepieces on a single instrument, however, some users prefer to identify each eyepiece by the magnification produced. For a telescope, the approximate angular magnification formula_0 produced by the combination of a particular eyepiece and objective can be calculated with the following formula: formula_1 where: Magnification increases, therefore, when the focal length of the eyepiece is shorter or the focal length of the objective is longer. For example, a 25 mm eyepiece in a telescope with a 1200 mm focal length would magnify objects 48 times. A 4 mm eyepiece in the same telescope would magnify 300 times. Amateur astronomers tend to refer to telescope eyepieces by their focal length in millimeters. These typically range from about 3 mm to 50 mm. Some astronomers, however, prefer to specify the resulting magnification power rather than the focal length. It is often more convenient to express magnification in observation reports, as it gives a more immediate impression of what view the observer actually saw. Due to its dependence on properties of the particular telescope in use, however, magnification power alone is meaningless for describing a telescope eyepiece. For a compound microscope the corresponding formula is formula_4 where By convention, microscope eyepieces are usually specified by "power" instead of focal length. 
Microscope eyepiece power formula_7 and objective power formula_8 are defined by formula_9 Thus, from the expression given earlier for the angular magnification of a compound microscope, formula_10 The total angular magnification of a microscope image is then simply calculated by multiplying the eyepiece power by the objective power. For example, a 10× eyepiece with a 40× objective will magnify the image 400 times. This definition of lens power relies upon an arbitrary decision to split the angular magnification of the instrument into separate factors for the eyepiece and the objective. Historically, Abbe described microscope eyepieces differently, in terms of angular magnification of the eyepiece and 'initial magnification' of the objective. While convenient for the optical designer, this turned out to be less convenient from the viewpoint of practical microscopy and was thus subsequently abandoned. The generally accepted visual distance of closest focus formula_5 is 250 mm, and eyepiece power is normally specified assuming this value. Common eyepiece powers are 8×, 10×, 15×, and 20×. The focal length of the eyepiece (in mm) can thus be determined if required by dividing 250 mm by the eyepiece power. Modern instruments often use objectives optically corrected for an infinite tube length rather than 160 mm, and these require an auxiliary correction lens in the tube. Location of focal plane. In some eyepiece types, such as Ramsden eyepieces (described in more detail below), the eyepiece behaves as a magnifier, and its focal plane is located outside of the eyepiece in front of the field lens. This plane is therefore accessible as a location for a graticule or micrometer crosswires. In the Huygenian eyepiece, the focal plane is located between the eye and field lenses, inside the eyepiece, and is hence not accessible. Field of view. The field of view, often abbreviated FOV, describes the area of a target (measured as an angle from the location of viewing) that can be seen when looking through an eyepiece. The field of view seen through an eyepiece varies, depending on the magnification achieved when connected to a particular telescope or microscope, and also on properties of the eyepiece itself. Eyepieces are differentiated by their "field stop", which is the narrowest aperture that light entering the eyepiece must pass through to reach the field lens of the eyepiece. Due to the effects of these variables, the term "field of view" nearly always refers to one of two meanings: the apparent field of view of the eyepiece itself, or the actual (true) field of view seen when the eyepiece is used with a particular telescope. It is common for users of an eyepiece to want to calculate the actual field of view, because it indicates how much of the sky will be visible when the eyepiece is used with their telescope. The most convenient method of calculating the actual field of view depends on whether the apparent field of view is known. "If the apparent field of view is known," the actual field of view can be calculated from the following approximate formula: formula_11 The formula is accurate to 4% or better up to 40° apparent field of view, and has a 10% error for 60°. Since formula_16 the true field of view can be found even without knowing the apparent field of view; it is given by: formula_19 The "focal length" of the telescope objective, formula_20, is the diameter of the objective times the focal ratio. It represents the distance at which the mirror or objective lens will cause light from a star to converge onto a single point (aberrations excepted). 
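The relations described so far can be gathered into a short numerical sketch: telescope angular magnification from the two focal lengths, the approximate true field of view from a known apparent field, and microscope eyepiece power from the 250 mm convention. The 50° apparent field used here is an assumed, typical value, while the 1200 mm telescope and 25 mm eyepiece match the example given earlier.

```python
def telescope_magnification(f_objective_mm, f_eyepiece_mm):
    """Angular magnification M = objective focal length / eyepiece focal length."""
    return f_objective_mm / f_eyepiece_mm

def true_field_of_view(apparent_fov_deg, magnification):
    """Approximate true field of view = apparent field / magnification."""
    return apparent_fov_deg / magnification

def microscope_eyepiece_power(f_eyepiece_mm, closest_focus_mm=250.0):
    """Eyepiece 'power' by the usual 250 mm convention, e.g. 25 mm -> 10x."""
    return closest_focus_mm / f_eyepiece_mm

# The 1200 mm telescope with a 25 mm eyepiece used as an example in the text:
m = telescope_magnification(1200.0, 25.0)
print(f"magnification: {m:.0f}x")                                   # 48x

# Assume a 50 degree apparent field (typical of a Ploessl-style design):
print(f"true field of view: {true_field_of_view(50.0, m):.2f} deg")  # about 1.04 deg

print(f"25 mm microscope eyepiece power: {microscope_eyepiece_power(25.0):.0f}x")  # 10x
```

Because the field-of-view formula is only approximate (within a few percent up to moderate apparent fields, as noted above), the result should be read as an estimate rather than an exact value.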
"If the apparent field of view is unknown," the actual field of view can be approximately found using: formula_21 where: The second formula is actually more accurate, but field stop size is not usually specified by most manufacturers. The first formula will not be accurate if the field is not flat, or is higher than 60° which is common for most ultra-wide eyepiece design. The above formulas are approximations. The ISO 14132-1:2002 standard gives the exact calculation for apparent field of view, formula_23 from the true field of view, formula_24 as: formula_25 If a diagonal or Barlow lens is used before the eyepiece, the eyepiece's field of view may be slightly restricted. This occurs when the preceding lens has a narrower field stop than the eyepiece's, causing the obstruction in the front to act as a smaller field stop in front of the eyepiece. The exact relationship is given by formula_26 An occasionally used approximation is formula_27 This formula also indicates that, for an eyepiece design with a given apparent field of view, the barrel diameter will determine the maximum focal length possible for that eyepiece, as no field stop can be larger than the barrel itself. For example, a Plössl with 45° apparent field of view in a 1.25 inch barrel would yield a maximum focal length of 35 mm. Anything longer requires larger barrel or the view is restricted by the edge, effectively making the field of view less than 45°. Barrel diameter. Eyepieces for telescopes and microscopes are usually interchanged to increase or decrease the magnification, and to enable the user to select a type with certain performance characteristics. To allow this, eyepieces come in standardized "Barrel diameters". Telescope eyepieces. There are six standard barrel diameters for telescopes. The barrel sizes (usually expressed in inches) are: Microscope eyepieces. Eyepieces for microscopes have a variety of barrel diameters, usually given in millimeters, such as 23.2 mm and 30 mm. Eye relief. The eye needs to be held at a certain distance behind the eye lens of an eyepiece to see images properly through it. This distance is called the eye relief. A larger eye relief means that the optimum position is farther from the eyepiece, making it easier to view an image. However, if the eye relief is too large it can be uncomfortable to hold the eye in the correct position for an extended period of time, for which reason some eyepieces with long eye relief have cups behind the eye lens to aid the observer in maintaining the correct observing position. The eye pupil should coincide with the exit pupil, the image of the entrance pupil, which in the case of an astronomical telescope corresponds to the object glass. Eye relief typically ranges from about 2 mm to 20 mm, depending on the construction of the eyepiece. Long focal-length eyepieces usually have ample eye relief, but short focal-length eyepieces are more problematic. Until recently, and still quite commonly, eyepieces of a short-focal length have had a short eye relief. Good design guidelines suggest a minimum of 5–6 mm to accommodate the eyelashes of the observer to avoid discomfort. Modern designs with many lens elements, however, can correct for this, and viewing at high power becomes more comfortable. This is especially the case for spectacle wearers, who may need up to 20 mm of eye relief to accommodate their glasses. Designs. 
Technology has developed over time and there are a variety of eyepiece "designs" for use with telescopes, microscopes, gun-sights, and other devices. Some of these designs are described in more detail below. Negative lens or "Galilean". The simple negative lens placed before the focus of the objective has the advantage of presenting an erect image but with limited field of view better suited to low magnification. It is suspected this type of lens was used in some of the first refracting telescopes that appeared in the Netherlands in about 1608. It was also used in Galileo Galilei's 1609 telescope design which gave this type of eyepiece arrangement the name "Galilean". This type of eyepiece is still used in very cheap telescopes, binoculars and in opera glasses. Convex lens. A simple convex lens placed after the focus of the objective lens presents the viewer with a magnified inverted image. This configuration may have been used in the first refracting telescopes from the Netherlands and was proposed as a way to have a much wider field of view and higher magnification in telescopes in Johannes Kepler's 1611 book "Dioptrice". Since the lens is placed after the focal plane of the objective it also allowed for use of a micrometer at the focal plane (used for determining the angular size and/or distance between objects observed). Huygens. Huygens eyepieces consist of two plano-convex lenses with the plane sides towards the eye separated by an air gap. The lenses are called the eye lens and the field lens. The focal plane is located between the two lenses. It was invented by Christiaan Huygens in the late 1660s and was the first compound (multi-lens) eyepiece. Huygens discovered that two air spaced lenses can be used to make an eyepiece with zero transverse chromatic aberration. If the lenses are made of glass of the same Abbe number, to be used with a relaxed eye and a telescope with an infinitely distant objective then the separation is given by: formula_28 where formula_29 and formula_30 are the focal lengths of the component lenses. These eyepieces work well with the very long focal length telescopes. This optical design is now considered obsolete since with today's shorter focal length telescopes the eyepiece suffers from short eye relief, high image distortion, axial chromatic aberration, and a very narrow apparent field of view. Since these eyepieces are cheap to make they can often be found on inexpensive telescopes and microscopes. Because Huygens eyepieces do not contain cement to hold the lens elements, telescope users sometimes use these eyepieces in the role of "solar projection", i.e. projecting an image of the Sun onto a screen for prolonged periods of time. Cemented eyepieces are traditionally regarded as potentially vulnerable to heat damage by the intense concentrations of light involved. Ramsden. The Ramsden eyepiece comprises two plano-convex lenses of the same glass and similar focal lengths, placed less than one eye-lens focal length apart, a design created by astronomical and scientific instrument maker Jesse Ramsden in 1782. 
The lens separation varies between different designs, but is typically a substantial fraction of the focal length of the eye lens, the choice being a trade-off: smaller separations leave more residual transverse chromatic aberration, while larger separations risk the field lens touching the focal plane when used by an observer who works with a close virtual image, such as a myopic observer or a young person whose accommodation is able to cope with a close virtual image (this is a serious problem when used with a micrometer, as it can result in damage to the instrument). A separation of exactly 1 focal length is also inadvisable since it renders the dust on the field lens disturbingly in focus. The two curved surfaces face inwards. The focal plane is thus located outside of the eyepiece and is hence accessible as a location where a graticule, or micrometer crosshairs, may be placed. Because a separation of exactly one focal length would be required to correct transverse chromatic aberration, it is not possible to correct the Ramsden design completely for transverse chromatic aberration. The design is slightly better than Huygens but still not up to today's standards. It remains highly suitable for use with instruments operating using near-monochromatic light sources, "e.g." polarimeters. Kellner or "Achromat". In a Kellner eyepiece an achromatic doublet is used in place of the simple plano-convex eye lens in the Ramsden design to correct the residual transverse chromatic aberration. Carl Kellner designed this first modern achromatic eyepiece in 1849, also called an "achromatized Ramsden". Kellner eyepieces are a 3-lens design. They are inexpensive, have a fairly good image from low to medium power, and are far superior to the Huygenian or Ramsden designs. The eye relief is better than that of the Huygenian and worse than that of the Ramsden eyepieces. The biggest problem of Kellner eyepieces was internal reflections. Today's anti-reflection coatings make these usable, economical choices for small to medium aperture telescopes with focal ratio f/6 or longer. The typical apparent field of view is 40–50°. Plössl or "Symmetrical". The Plössl is an eyepiece usually consisting of two sets of doublets, designed by Georg Plössl in 1860. Since the two doublets can be identical this design is sometimes called a "symmetrical eyepiece". The compound Plössl lens provides a large 50° or more "apparent" field of view, along with the proportionally large true FOV. This makes this eyepiece ideal for a variety of observational purposes including deep-sky and planetary viewing. The chief disadvantage of the Plössl optical design is short eye relief compared to an orthoscopic, since the Plössl eye relief is restricted to about 70–80% of focal length. The short eye relief is more critical in short focal lengths below about 10 mm, when viewing can become uncomfortable – especially for people wearing glasses. The Plössl eyepiece was an obscure design until the 1980s when astronomical equipment manufacturers started selling redesigned versions of it. Today it is a very popular design on the amateur astronomical market, where the name "Plössl" covers a range of eyepieces with at least four optical elements, sometimes overlapping with the Erfle design. This eyepiece is one of the more expensive to manufacture because of the quality of glass, and the need for well matched convex and concave lenses to prevent internal reflections. Due to this fact, the quality of different Plössl eyepieces varies. 
There are notable differences between cheap Plössls with the simplest anti-reflection coatings and well made ones. Orthoscopic or "Abbe". The 4-element orthoscopic eyepiece consists of a plano-convex singlet eye lens and a cemented convex-convex achromatic triplet field lens. This gives the eyepiece a nearly perfect image quality and good eye relief, but a narrow apparent field of view — about 40°–45°. It was invented by Ernst Abbe in 1880. It is called "orthoscopic" or "orthographic" because of its low degree of distortion and is also sometimes called an "ortho" or "Abbe". Until the advent of multicoatings and the popularity of the Plössl, orthoscopics were the most popular design for telescope eyepieces. Even today these eyepieces are considered good eyepieces for planetary and lunar viewing. They are preferred for reticle eyepieces, since they are one of the wide-field, long eye-relief designs with an external focal plane; slowly being supplanted by the König. Due to their low degree of distortion and the corresponding globe effect, they are less suitable for applications which require an extensive panning of the instrument. Monocentric. A Monocentric is an achromatic triplet lens with two pieces of crown glass cemented on both sides of a flint glass element. The elements are thick, strongly curved, and their surfaces have a common center giving it the name "monocentric". It was invented by H.A. Steinheil around 1883. This design, like the solid eyepiece designs of Tolles, Hastings, and Taylor, is free from ghost reflections and gives a bright contrasty image, a desirable feature when it was invented (before anti-reflective coatings). It has a narrow apparent field of view around 25° but was favored by planetary observers. Erfle. An Erfle is a 5 element eyepiece consisting of 2 achromatic doublets with an extra simple lens between them. They were invented by Heinrich Erfle during World War I for military use. The design is an elementary extension of 4 element eyepieces such as Plössls, enhanced for wider fields. Erfle eyepieces are designed to have wide field of view (about 60°), but are unusable at high powers because they suffer from astigmatism and ghost images. However, with lens coatings at low powers (focal lengths of 20~30 mm and up) they are acceptable, and at 40 mm they can be excellent. Erfles are very popular for wide-field views, because they have large eye lenses, and can be very comfortable to use because of their good eye relief in longer focal lengths. König. The König eyepiece has a concave-convex positive doublet and a plano-convex singlet. The strongly convex surfaces of the doublet and singlet face and (nearly) touch each other. The doublet has its concave surface facing the light source and the singlet has its almost flat (slightly convex) surface facing the eye. It was designed in 1915 by German optician Albert König (1871−1946) and is effectively a simplified Abbe. The design allows for high magnification with remarkably high eye relief – the longest eye relief proportional to focal length of any design before the Nagler, in 1979. The field of view of about 55° is slightly superior to the Plössl, with the further advantages of better eye relief and requiring one less lens element. Modern improvements typically have fields of view of 60°−70°. 
König design revisions use exotic glass and / or add more lens groups; the most typical adaptation is to add a simple positive, concave-convex lens before the doublet, with the concave face towards the light source and the convex surface facing the doublet. RKE. An RKE eyepiece has an achromatic field lens and double convex eye lens, a reversed adaptation of the Kellner eyepiece, with its lens layout similar to the König. It was designed by Dr. David Rank for the Edmund Scientific Corporation, which marketed it throughout the late 1960s and early 1970s. This design provides a slightly wider field of view than the classic Kellner design and is similar to a widely spaced version of the König. According to Edmund Scientific Corporation, "RKE" stands for "Rank Kellner Eyepiece". In an amendment to their trademark application on 16 January 1979 it was given as "Rank-Kaspereit-Erfle", the three designs from which the eyepiece was derived. "Edmund Astronomy News" (March 1978) called the eyepiece the "Rank-Kaspereit-Erfle" (RKE), a "redesign[ed] ... type II Kellner". However, the RKE design does not resemble a Kellner, and is closer to a modified König. There is some speculation that at some point the "K" was mistakenly interpreted as the name of the more common Kellner, instead of the fairly rarely seen König. Nagler. Invented by Albert Nagler and patented in 1979, the Nagler eyepiece is a design optimized for astronomical telescopes to give an ultra-wide field of view (82°) that has good correction for astigmatism and other aberrations. Introduced in 2007, the Ethos is an enhanced ultra-wide field design developed principally by Paul Dellechiaie under Albert Nagler's guidance at Tele Vue Optics and claims a 100–110° AFOV. This is achieved using exotic high-index glass and up to eight optical elements in four or five groups; there are several similar designs called the "Nagler", "Nagler type 2", "Nagler type 4", "Nagler type 5", and "Nagler type 6". The newer Delos design is a modified Ethos design with a FOV of 'only' 72 degrees but with a long 20 mm eye relief. The number of elements in a Nagler makes them seem complex, but the idea of the design is fairly simple: every Nagler has a negative doublet field lens, which increases magnification, followed by several positive groups. The positive groups, considered separate from the first negative group, combine to have long focal length, and form a positive lens. That allows the design to take advantage of the many good qualities of low power lenses. In effect, a Nagler is a superior version of a Barlow lens combined with a long focal length eyepiece. This design has been widely copied in other wide field or long eye relief eyepieces. The main disadvantage to Naglers is in their weight; they are often ruefully referred to as ‘hand grenades’ because of their heft and large size. Long focal length versions exceed , which is enough to unbalance small to medium-sized telescopes. Another disadvantage is a high purchase cost, with large Naglers' prices comparable to the cost of a small telescope. Hence these eyepieces are regarded by many amateur astronomers as a luxury. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ M_\\mathsf{A}\\ " }, { "math_id": 1, "text": "\\ M_\\mathsf{A} \\approx \\frac{\\ f_\\mathsf{O}\\ }{\\ f_\\mathsf{E}\\ }\\ " }, { "math_id": 2, "text": "\\ f_\\mathsf{O}\\ " }, { "math_id": 3, "text": "\\ f_\\mathsf{E}\\ " }, { "math_id": 4, "text": "\\ M_\\mathsf{A} = \\frac{~~~ D\\ \\cdot \\ D_\\mathsf{EO}\\ }{\\ f_\\mathsf{E}\\ \\cdot \\ f_\\mathsf{O}\\ } = \\frac{D}{~ f_\\mathsf{E}\\ } \\cdot \\frac{~~ D_\\mathsf{EO}\\ }{~ f_\\mathsf{O}\\ }\\ " }, { "math_id": 5, "text": "\\ D\\ " }, { "math_id": 6, "text": "\\ D_\\mathsf{EO}\\ " }, { "math_id": 7, "text": "\\ P_\\mathrm{E}\\ " }, { "math_id": 8, "text": "\\ P_\\mathsf{O}\\ " }, { "math_id": 9, "text": "\\ P_\\mathsf{E} = \\frac{D}{~ f_\\mathsf{E}\\ }\\ , \\qquad P_\\mathsf{O} = \\frac{~~ D_\\mathsf{EO}\\ }{~ f_\\mathsf{O}\\ }\\ " }, { "math_id": 10, "text": "\\ M_\\mathsf{A} = P_\\mathsf{E} \\times P_\\mathsf{O}\\ " }, { "math_id": 11, "text": " T_\\mathsf{FOV} \\approx \\frac{\\ A_\\mathsf{FOV}\\ }{ M }" }, { "math_id": 12, "text": "\\ T_\\mathsf{FOV}\\ " }, { "math_id": 13, "text": " A_\\mathsf{FOV}\\ " }, { "math_id": 14, "text": "\\ A_\\mathsf{FOV}\\ " }, { "math_id": 15, "text": "\\ M\\ " }, { "math_id": 16, "text": "\\ M = \\frac{\\ f_\\mathsf{T}\\ }{ f_\\mathsf{E} }\\ ," }, { "math_id": 17, "text": "\\ f_\\mathsf{T}\\ " }, { "math_id": 18, "text": "\\ f_\\mathsf{T}\\ ;" }, { "math_id": 19, "text": " T_\\mathsf{FOV} \\approx \\frac{ A_\\mathsf{FOV} }{\\ \\left[ \\frac{ f_\\mathsf{T} }{\\ f_\\mathsf{E}\\ } \\right]\\ } = A_\\mathsf{FOV} \\times \\frac{\\ f_\\mathsf{E}\\ }{ f_\\mathsf{T} } ~." }, { "math_id": 20, "text": "\\ f_\\mathsf{T}\\ ," }, { "math_id": 21, "text": "\\ T_\\mathsf{FOV} ~\\approx~ \\frac{\\ 57.3\\ d\\ }{ f_\\mathsf{T} }\\ " }, { "math_id": 22, "text": "\\ d\\ " }, { "math_id": 23, "text": "\\ A_\\mathsf{FOV}\\ ," }, { "math_id": 24, "text": "\\ T_\\mathsf{FOV}\\ ," }, { "math_id": 25, "text": "\\ \\tan\\left( \\frac{\\ A_\\mathsf{FOV}\\ }{2} \\right) = M \\times \\tan\\left( \\frac{\\ T_\\mathsf{FOV}\\ }{2} \\right) ~." }, { "math_id": 26, "text": " A_\\mathsf{FOV} ~=~ 2 \\times \\arctan\\left( \\frac{ d }{\\ 2 \\times f_\\mathsf{E}\\ } \\right) ~." }, { "math_id": 27, "text": " A_\\mathsf{FOV} ~~\\approx~~ 57.3^\\circ \\times \\frac{ d }{\\ f_\\mathsf{E}\\ } ~." }, { "math_id": 28, "text": " d = \\tfrac{1}{2} \\left( f_\\mathsf{A} + f_\\mathsf{B} \\right) " }, { "math_id": 29, "text": "\\ f_\\mathsf{A}\\ " }, { "math_id": 30, "text": "\\ f_\\mathsf{B}\\ " } ]
https://en.wikipedia.org/wiki?curid=825694
825735
Verlet integration
Numerical integration algorithm Verlet integration () is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics. The algorithm was first used in 1791 by Jean Baptiste Delambre and has been rediscovered many times since then, most recently by Loup Verlet in the 1960s for use in molecular dynamics. It was also used by P. H. Cowell and A. C. C. Crommelin in 1909 to compute the orbit of Halley's Comet, and by Carl Størmer in 1907 to study the trajectories of electrical particles in a magnetic field (hence it is also called Størmer's method). The Verlet integrator provides good numerical stability, as well as other properties that are important in physical systems such as time reversibility and preservation of the symplectic form on phase space, at no significant additional computational cost over the simple Euler method. Basic Størmer–Verlet. For a second-order differential equation of the type formula_0 with initial conditions formula_1 and formula_2, an approximate numerical solution formula_3 at the times formula_4 with step size formula_5 can be obtained by the following method: first set formula_6, then for n = 1, 2, ... iterate formula_7. Equations of motion. Newton's equation of motion for conservative physical systems is formula_8 or individually formula_9 where formula_10 is the time, formula_11 is the ensemble of position vectors of formula_12 interacting objects, formula_13 is the scalar potential function, formula_14 is the negative gradient of the potential, i.e. the ensemble of forces on the particles, formula_15 is the mass matrix, and formula_16 is the mass of the k-th particle. This equation, for various choices of the potential function formula_13, can be used to describe the evolution of diverse physical systems, from the motion of interacting molecules to the orbit of the planets. After a transformation to bring the mass to the right side and forgetting the structure of multiple particles, the equation may be simplified to formula_0 with some suitable vector-valued function formula_17 representing the position-dependent acceleration. Typically, an initial position formula_18 and an initial velocity formula_19 are also given. Verlet integration (without velocities). To discretize and numerically solve this initial value problem, a time step formula_5 is chosen, and the sampling-point sequence formula_20 considered. The task is to construct a sequence of points formula_21 that closely follow the points formula_22 on the trajectory of the exact solution. Where Euler's method uses the forward difference approximation to the first derivative in differential equations of order one, Verlet integration can be seen as using the central difference approximation to the second derivative: formula_23 "Verlet integration" in the form used as the "Størmer method" uses this equation to obtain the next position vector from the previous two without using the velocity as formula_24 Discretisation error. The time symmetry inherent in the method reduces the level of local errors introduced into the integration by the discretization by removing all odd-degree terms, here the terms in formula_25 of degree three. The local error is quantified by inserting the exact values formula_26 into the iteration and computing the Taylor expansions at time formula_27 of the position vector formula_28 in different time directions: formula_29 where formula_30 is the position, formula_31 the velocity, formula_32 the acceleration, and formula_33 the jerk (third derivative of the position with respect to the time). Adding these two expansions gives formula_34 We can see that the first- and third-order terms from the Taylor expansion cancel out, thus making the Verlet integrator an order more accurate than integration by simple Taylor expansion alone. 
Caution should be applied to the fact that the acceleration here is computed from the exact solution, formula_35, whereas in the iteration it is computed at the central iteration point, formula_36. In computing the global error, that is the distance between exact solution and approximation sequence, those two terms do not cancel exactly, influencing the order of the global error. A simple example. To gain insight into the relation of local and global errors, it is helpful to examine simple examples where the exact solution, as well as the approximate solution, can be expressed in explicit formulas. The standard example for this task is the exponential function. Consider the linear differential equation formula_37 with a constant formula_38. Its exact basis solutions are formula_39 and formula_40. The Størmer method applied to this differential equation leads to a linear recurrence relation formula_41 or formula_42 It can be solved by finding the roots of its characteristic polynomial formula_43. These are formula_44 The basis solutions of the linear recurrence are formula_45 and formula_46. To compare them with the exact solutions, Taylor expansions are computed: formula_47 The quotient of this series with the one of the exponential formula_48 starts with formula_49, so formula_50 From there it follows that for the first basis solution the error can be computed as formula_51 That is, although the local discretization error is of order 4, due to the second order of the differential equation the global error is of order 2, with a constant that grows exponentially in time. Starting the iteration. Note that at the start of the Verlet iteration at step formula_52, time formula_53, computing formula_54, one already needs the position vector formula_55 at time formula_56. At first sight, this could give problems, because the initial conditions are known only at the initial time formula_57. However, from these the acceleration formula_58 is known, and a suitable approximation for the position at the first time step can be obtained using the Taylor polynomial of degree two: formula_59 The error on the first time step then is of order formula_60. This is not considered a problem because on a simulation over a large number of time steps, the error on the first time step is only a negligibly small amount of the total error, which at time formula_61 is of the order formula_62, both for the distance of the position vectors formula_21 to formula_22 as for the distance of the divided differences formula_63 to formula_64. Moreover, to obtain this second-order global error, the initial error needs to be of at least third order. Non-constant time differences. A disadvantage of the Størmer–Verlet method is that if the time step (formula_25) changes, the method does not approximate the solution to the differential equation. This can be corrected using the formula formula_65 A more exact derivation uses the Taylor series (to second order) at formula_66 for times formula_67 and formula_68 to obtain after elimination of formula_69 formula_70 so that the iteration formula becomes formula_71 Computing velocities – Størmer–Verlet method. The velocities are not explicitly given in the basic Størmer equation, but often they are necessary for the calculation of certain physical quantities like the kinetic energy. 
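To make the basic Størmer–Verlet update above concrete, here is a minimal sketch in C for a one-dimensional harmonic oscillator; the force law, step size and step count are arbitrary illustrative choices rather than anything prescribed by the method. Note that the loop stores only the two most recent positions and never produces a velocity.

#include <stdio.h>

/* Position-dependent acceleration A(x); here a unit-mass harmonic
   oscillator with spring constant 1, so A(x) = -x (illustrative). */
static double accel(double x) {
    return -x;
}

int main(void) {
    const double dt = 0.01;    /* step size (illustrative)            */
    const int steps = 1000;    /* number of iterations (illustrative) */
    double x0 = 1.0, v0 = 0.0; /* initial position and velocity       */

    /* First step from the Taylor polynomial of degree two:
       x1 = x0 + v0*dt + 0.5*A(x0)*dt^2                               */
    double x_prev = x0;
    double x_curr = x0 + v0 * dt + 0.5 * accel(x0) * dt * dt;

    /* Main iteration: x_{n+1} = 2*x_n - x_{n-1} + A(x_n)*dt^2        */
    for (int n = 1; n < steps; n++) {
        double x_next = 2.0 * x_curr - x_prev + accel(x_curr) * dt * dt;
        x_prev = x_curr;
        x_curr = x_next;
    }

    printf("position after %d steps: %f\n", steps, x_curr);
    return 0;
}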
This can create technical challenges in molecular dynamics simulations, because kinetic energy and instantaneous temperatures at time formula_10 cannot be calculated for a system until the positions are known at time formula_72. This deficiency can either be dealt with using the velocity Verlet algorithm or by estimating the velocity using the position terms and the mean value theorem: formula_73 Note that this velocity term is a step behind the position term, since this is for the velocity at time formula_10, not formula_72, meaning that formula_74 is a second-order approximation to formula_75. With the same argument, but halving the time step, formula_76 is a second-order approximation to formula_77, with formula_78. One can shorten the interval to approximate the velocity at time formula_72 at the cost of accuracy: formula_79 Velocity Verlet. A related, and more commonly used, algorithm is the velocity Verlet algorithm, similar to the leapfrog method, except that the velocity and position are calculated at the same value of the time variable (leapfrog does not, as the name suggests). This uses a similar approach, but explicitly incorporates velocity, solving the problem of the first time step in the basic Verlet algorithm: formula_80 It can be shown that the error in the velocity Verlet is of the same order as in the basic Verlet. Note that the velocity algorithm is not necessarily more memory-consuming, because, in basic Verlet, we keep track of two vectors of position, while in velocity Verlet, we keep track of one vector of position and one vector of velocity. The standard implementation scheme of this algorithm is: 1. Calculate formula_81. 2. Calculate formula_82. 3. Derive formula_83 from the interaction potential using formula_84. 4. Calculate formula_85. This algorithm also works with variable time steps, and is identical to the 'kick-drift-kick' form of leapfrog method integration. Eliminating the half-step velocity, this algorithm may be shortened to: 1. Calculate formula_86. 2. Derive formula_83 from the interaction potential using formula_84. 3. Calculate formula_87. Note, however, that this algorithm assumes that acceleration formula_83 only depends on position formula_84 and does not depend on velocity formula_88. One might note that the long-term results of velocity Verlet, and similarly of leapfrog, are one order better than the semi-implicit Euler method. The algorithms are almost identical up to a shift by half a time step in the velocity. This can be proven by rotating the above loop to start at step 3 and then noticing that the acceleration term in step 1 could be eliminated by combining steps 2 and 4. The only difference is that the midpoint velocity in velocity Verlet is considered the final velocity in semi-implicit Euler method. The global error of all Euler methods is of order one, whereas the global error of this method is, similar to the midpoint method, of order two. Additionally, if the acceleration indeed results from the forces in a conservative mechanical or Hamiltonian system, the energy of the approximation essentially oscillates around the constant energy of the exactly solved system, with a global error bound again of order one for semi-explicit Euler and order two for Verlet-leapfrog. The same goes for all other conserved quantities of the system like linear or angular momentum, that are always preserved or nearly preserved in a symplectic integrator. The velocity Verlet method is a special case of the Newmark-beta method with formula_89 and formula_90. Error terms. The global truncation error of the Verlet method is formula_91, both for position and velocity. This is in contrast with the fact that the local error in position is only formula_92 as described above. 
The difference is due to the accumulation of the local truncation error over all of the iterations. The global error can be derived by noting the following: formula_93 and formula_94 Therefore formula_95 Similarly: formula_96 which can be generalized to (it can be shown by induction, but it is given here without proof): formula_97 If we consider the global error in position between formula_98 and formula_99, where formula_100, it is clear that formula_101 and therefore, the global (cumulative) error over a constant interval of time is given by formula_102 Because the velocity is determined in a non-cumulative way from the positions in the Verlet integrator, the global error in velocity is also formula_91. In molecular dynamics simulations, the global error is typically far more important than the local error, and the Verlet integrator is therefore known as a second-order integrator. Constraints. Systems of multiple particles with constraints are simpler to solve with Verlet integration than with Euler methods. Constraints between points may be, for example, potentials constraining them to a specific distance or attractive forces. They may be modeled as springs connecting the particles. Using springs of infinite stiffness, the model may then be solved with a Verlet algorithm. In one dimension, the relationship between the unconstrained positions formula_103 and the actual positions formula_104 of points formula_105 at time formula_10, given a desired constraint distance of formula_106, can be found with the algorithm formula_107 Verlet integration is useful because it directly relates the force to the position, rather than solving the problem using velocities. Problems, however, arise when multiple constraining forces act on each particle. One way to solve this is to loop through every point in a simulation, so that at every point the constraint relaxation of the last is already used to speed up the spread of the information. In a simulation this may be implemented by using small time steps for the simulation, using a fixed number of constraint-solving steps per time step, or solving constraints until they are met by a specific deviation. When approximating the constraints locally to first order, this is the same as the Gauss–Seidel method. For small matrices it is known that LU decomposition is faster. Large systems can be divided into clusters (for example, each ragdoll = cluster). Inside clusters the LU method is used, between clusters the Gauss–Seidel method is used. The matrix code can be reused: The dependency of the forces on the positions can be approximated locally to first order, and the Verlet integration can be made more implicit. Sophisticated software, such as SuperLU exists to solve complex problems using sparse matrices. Specific techniques, such as using (clusters of) matrices, may be used to address the specific problem, such as that of force propagating through a sheet of cloth without forming a sound wave. Another way to solve holonomic constraints is to use constraint algorithms. Collision reactions. One way of reacting to collisions is to use a penalty-based system, which basically applies a set force to a point upon contact. The problem with this is that it is very difficult to choose the force imparted. Use too strong a force, and objects will become unstable, too weak, and the objects will penetrate each other. 
Another way is to use projection collision reactions, which takes the offending point and attempts to move it the shortest distance possible to move it out of the other object. The Verlet integration would automatically handle the velocity imparted by the collision in the latter case; however, note that this is not guaranteed to do so in a way that is consistent with collision physics (that is, changes in momentum are not guaranteed to be realistic). Instead of implicitly changing the velocity term, one would need to explicitly control the final velocities of the objects colliding (by changing the recorded position from the previous time step). The two simplest methods for deciding on a new velocity are perfectly elastic and inelastic collisions. A slightly more complicated strategy that offers more control would involve using the coefficient of restitution.
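Two further sketches may help tie the preceding sections together; both are illustrative C fragments with invented parameters, not reference implementations. The first follows the four-step velocity Verlet scheme given earlier, again for a one-dimensional unit-mass harmonic oscillator, and assumes (as that scheme requires) that the acceleration depends only on position.

#include <stdio.h>

static double accel(double x) {
    return -x;    /* acceleration from position only (illustrative force law) */
}

int main(void) {
    const double dt = 0.01;   /* step size (illustrative)  */
    const int steps = 1000;   /* step count (illustrative) */
    double x = 1.0, v = 0.0;  /* initial state             */
    double a = accel(x);

    for (int n = 0; n < steps; n++) {
        double v_half = v + 0.5 * a * dt;  /* step 1: half-step velocity    */
        x += v_half * dt;                  /* step 2: full-step position    */
        a = accel(x);                      /* step 3: acceleration at new x */
        v = v_half + 0.5 * a * dt;         /* step 4: complete the velocity */
    }

    printf("x = %f, v = %f after %d steps\n", x, v, steps);
    return 0;
}

The second sketch shows one relaxation pass of the pairwise distance constraint described in the Constraints section, in one dimension. It measures the offset from the positions currently being corrected, a common simplification of the formula given above, and the rest distance and starting positions are made up for the example.

#include <math.h>
#include <stdio.h>

/* Pull two points toward the desired separation r by splitting the
   correction equally between them, as in the relaxation step above. */
static void relax_pair(double *x1, double *x2, double r) {
    double d1 = *x2 - *x1;       /* current offset            */
    double d2 = fabs(d1);        /* current separation        */
    double d3 = (d2 - r) / d2;   /* relative constraint error */
    *x1 += 0.5 * d1 * d3;
    *x2 -= 0.5 * d1 * d3;
}

int main(void) {
    double x1 = 0.0, x2 = 1.5;   /* unconstrained positions (illustrative) */
    const double r = 1.0;        /* desired constraint distance            */

    /* Several passes let coupled constraints settle; a lone pair
       already satisfies the constraint after one pass.              */
    for (int k = 0; k < 3; k++)
        relax_pair(&x1, &x2, r);

    printf("x1 = %f, x2 = %f, separation = %f\n", x1, x2, x2 - x1);
    return 0;
}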
[ { "math_id": 0, "text": "\\ddot{\\mathbf x}(t) = \\mathbf A\\bigl(\\mathbf x(t)\\bigr)" }, { "math_id": 1, "text": "\\mathbf x(t_0) = \\mathbf x_0" }, { "math_id": 2, "text": "\\dot{\\mathbf x}(t_0) = \\mathbf v_0" }, { "math_id": 3, "text": "\\mathbf x_n \\approx \\mathbf x(t_n)" }, { "math_id": 4, "text": "t_n = t_0 + n\\,\\Delta t" }, { "math_id": 5, "text": "\\Delta t > 0" }, { "math_id": 6, "text": "\\mathbf x_1 = \\mathbf x_0 + \\mathbf v_0\\,\\Delta t + \\tfrac 1 2 \\mathbf A(\\mathbf x_0)\\,\\Delta t^2" }, { "math_id": 7, "text": "\n\\mathbf x_{n+1} = 2 \\mathbf x_n - \\mathbf x_{n-1} + \\mathbf A(\\mathbf x_n)\\,\\Delta t^2.\n" }, { "math_id": 8, "text": "\\boldsymbol M \\ddot{\\mathbf x}(t) = F\\bigl(\\mathbf x(t)\\bigr) = -\\nabla V\\bigl(\\mathbf x(t)\\bigr)," }, { "math_id": 9, "text": "m_k \\ddot{\\mathbf x}_k(t) = F_k\\bigl(\\mathbf x(t)\\bigr) = -\\nabla_{\\mathbf x_k} V\\left(\\mathbf x(t)\\right)," }, { "math_id": 10, "text": "t" }, { "math_id": 11, "text": "\\mathbf x(t) = \\bigl(\\mathbf x_1(t), \\ldots, \\mathbf x_N(t)\\bigr)" }, { "math_id": 12, "text": "N" }, { "math_id": 13, "text": "V" }, { "math_id": 14, "text": "F" }, { "math_id": 15, "text": "\\boldsymbol M" }, { "math_id": 16, "text": "m_k" }, { "math_id": 17, "text": "\\mathbf A(\\mathbf x)" }, { "math_id": 18, "text": "\\mathbf x(0) = \\mathbf x_0" }, { "math_id": 19, "text": "\\mathbf v(0) = \\dot{\\mathbf x}(0) = \\mathbf v_0" }, { "math_id": 20, "text": "t_n = n\\,\\Delta t" }, { "math_id": 21, "text": "\\mathbf x_n" }, { "math_id": 22, "text": "\\mathbf x(t_n)" }, { "math_id": 23, "text": "\\begin{align}\n\\frac{\\Delta^2\\mathbf x_n}{\\Delta t^2}\n&= \\frac{\\frac{\\mathbf x_{n+1} - \\mathbf x_n}{\\Delta t} - \\frac{\\mathbf x_n - \\mathbf x_{n-1}}{\\Delta t}}{\\Delta t}\\\\[6pt]\n&= \\frac{\\mathbf x_{n+1} - 2 \\mathbf x_n + \\mathbf x_{n-1}}{\\Delta t^2} = \\mathbf a_n = \\mathbf A(\\mathbf x_n).\n\\end{align}" }, { "math_id": 24, "text": "\\begin{align}\n\\mathbf x_{n+1} &= 2 \\mathbf x_n - \\mathbf x_{n-1} + \\mathbf a_n\\,\\Delta t^2, \\\\[6pt]\n\\mathbf a_n &= \\mathbf A(\\mathbf x_n).\n\\end{align}" }, { "math_id": 25, "text": "\\Delta t" }, { "math_id": 26, "text": "\\mathbf x(t_{n-1}), \\mathbf x(t_n), \\mathbf x(t_{n+1})" }, { "math_id": 27, "text": "t = t_n" }, { "math_id": 28, "text": "\\mathbf{x}(t \\pm \\Delta t)" }, { "math_id": 29, "text": "\\begin{align}\n\\mathbf{x}(t + \\Delta t)\n&= \\mathbf{x}(t) + \\mathbf{v}(t)\\Delta t + \\frac{\\mathbf{a}(t) \\Delta t^2}{2}\n+ \\frac{\\mathbf{b}(t) \\Delta t^3}{6} + \\mathcal{O}\\left(\\Delta t^4\\right)\\\\\n\\mathbf{x}(t - \\Delta t)\n&= \\mathbf{x}(t) - \\mathbf{v}(t)\\Delta t + \\frac{\\mathbf{a}(t) \\Delta t^2}{2}\n- \\frac{\\mathbf{b}(t) \\Delta t^3}{6} + \\mathcal{O}\\left(\\Delta t^4\\right),\n\\end{align}" }, { "math_id": 30, "text": "\\mathbf{x}" }, { "math_id": 31, "text": "\\mathbf{v} = \\dot{\\mathbf x}" }, { "math_id": 32, "text": "\\mathbf{a} = \\ddot{\\mathbf x}" }, { "math_id": 33, "text": "\\mathbf{b} = \\dot{\\mathbf a} = \\overset{\\dots}{\\mathbf x}" }, { "math_id": 34, "text": "\\mathbf{x}(t + \\Delta t) = 2\\mathbf{x}(t) - \\mathbf{x}(t - \\Delta t) + \\mathbf{a}(t) \\Delta t^2 + \\mathcal{O}\\left(\\Delta t^4\\right)." 
}, { "math_id": 35, "text": "\\mathbf a(t) = \\mathbf A\\bigl(\\mathbf x(t)\\bigr)" }, { "math_id": 36, "text": "\\mathbf a_n = \\mathbf A(\\mathbf x_n)" }, { "math_id": 37, "text": "\\ddot x(t) = w^2 x(t)" }, { "math_id": 38, "text": "w" }, { "math_id": 39, "text": "e^{wt}" }, { "math_id": 40, "text": "e^{-wt}" }, { "math_id": 41, "text": "x_{n+1} - 2x_n + x_{n-1} = h^2 w^2 x_n," }, { "math_id": 42, "text": "x_{n+1} - 2\\left(1 + \\tfrac12(wh)^2\\right) x_n + x_{n-1} = 0." }, { "math_id": 43, "text": "q^2 - 2\\left(1 + \\tfrac12(wh)^2\\right)q + 1 = 0" }, { "math_id": 44, "text": "q_\\pm = 1 + \\tfrac 1 2 (wh)^2 \\pm wh \\sqrt{1 + \\tfrac 1 4 (wh)^2}." }, { "math_id": 45, "text": "x_n = q_+^n" }, { "math_id": 46, "text": "x_n = q_-^n" }, { "math_id": 47, "text": "\\begin{align}\nq_+ &= 1 + \\tfrac12(wh)^2 + wh\\left(1 + \\tfrac18(wh)^2 - \\tfrac{3}{128}(wh)^4 + \\mathcal O\\left(h^6\\right)\\right)\\\\\n &= 1 + (wh) + \\tfrac12(wh)^2 + \\tfrac18(wh)^3 - \\tfrac{3}{128}(wh)^5 + \\mathcal O\\left(h^7\\right).\n\\end{align}" }, { "math_id": 48, "text": "e^{wh}" }, { "math_id": 49, "text": "1 - \\tfrac1{24}(wh)^3 + \\mathcal O\\left(h^5\\right)" }, { "math_id": 50, "text": "\\begin{align}\nq_+ &= \\left(1 - \\tfrac1{24}(wh)^3 + \\mathcal O\\left(h^5\\right)\\right)e^{wh}\\\\\n &= e^{-\\frac{1}{24}(wh)^3 + \\mathcal O\\left(h^5\\right)}\\,e^{wh}.\n\\end{align}" }, { "math_id": 51, "text": "\\begin{align}\nx_n = q_+^{n}\n &= e^{-\\frac{1}{24}(wh)^2\\,wt_n + \\mathcal O\\left(h^4\\right)}\\,e^{wt_n}\\\\\n &= e^{wt_n}\\left(1 - \\tfrac{1}{24}(wh)^2\\,wt_n + \\mathcal O(h^4)\\right)\\\\\n &= e^{wt_n} + \\mathcal O\\left(h^2 t_n e^{wt_n}\\right).\n\\end{align}" }, { "math_id": 52, "text": "n = 1" }, { "math_id": 53, "text": "t = t_1 = \\Delta t" }, { "math_id": 54, "text": "\\mathbf x_2" }, { "math_id": 55, "text": "\\mathbf x_1" }, { "math_id": 56, "text": "t = t_1" }, { "math_id": 57, "text": "t_0 = 0" }, { "math_id": 58, "text": "\\mathbf a_0 = \\mathbf A(\\mathbf x_0)" }, { "math_id": 59, "text": "\\mathbf x_1 = \\mathbf{x}_0 + \\mathbf{v}_0 \\Delta t + \\tfrac12 \\mathbf a_0 \\Delta t^2\n\\approx \\mathbf{x}(\\Delta t) + \\mathcal{O}\\left(\\Delta t^3\\right)." 
}, { "math_id": 60, "text": "\\mathcal O\\left(\\Delta t^3\\right)" }, { "math_id": 61, "text": "t_n" }, { "math_id": 62, "text": "\\mathcal O\\left(e^{Lt_n} \\Delta t^2\\right)" }, { "math_id": 63, "text": "\\tfrac{\\mathbf x_{n+1} - \\mathbf x_n}{\\Delta t}" }, { "math_id": 64, "text": "\\tfrac{\\mathbf x(t_{n+1}) - \\mathbf x(t_n)}{\\Delta t}" }, { "math_id": 65, "text": "\n\\mathbf x_{i+1}\n= \\mathbf x_i + \\left(\\mathbf x_i - \\mathbf x_{i-1}\\right) \\frac{\\Delta t_i}{\\Delta t_{i-1}} + \\mathbf a_i \\Delta t_i^2.\n" }, { "math_id": 66, "text": "t_i" }, { "math_id": 67, "text": "t_{i+1} = t_i + \\Delta t_i" }, { "math_id": 68, "text": "t_{i-1} = t_i - \\Delta t_{i-1}" }, { "math_id": 69, "text": "\\mathbf v_i" }, { "math_id": 70, "text": "\n\\frac{\\mathbf x_{i+1} - \\mathbf x_i}{\\Delta t_i}\n+ \\frac{\\mathbf x_{i-1} - \\mathbf x_i}{\\Delta t_{i-1}}\n= \\mathbf a_i\\,\\frac{\\Delta t_{i} + \\Delta t_{i-1}}2,\n" }, { "math_id": 71, "text": "\n\\mathbf x_{i+1}\n= \\mathbf x_i\n+ (\\mathbf x_i - \\mathbf x_{i-1}) \\frac{\\Delta t_i}{\\Delta t_{i-1}}\n+ \\mathbf a_i\\,\\frac{\\Delta t_{i} + \\Delta t_{i-1}}2\\,\\Delta t_i.\n" }, { "math_id": 72, "text": "t + \\Delta t" }, { "math_id": 73, "text": "\n\\mathbf{v}(t) =\n\\frac{\\mathbf{x}(t + \\Delta t) - \\mathbf{x}(t - \\Delta t)}{2\\Delta t}\n+ \\mathcal{O}\\left(\\Delta t^2\\right).\n" }, { "math_id": 74, "text": "\\mathbf v_n = \\tfrac{\\mathbf x_{n+1} - \\mathbf x_{n-1}}{2\\Delta t}" }, { "math_id": 75, "text": "\\mathbf{v}(t_n)" }, { "math_id": 76, "text": "\\mathbf v_{n+\\frac12} = \\tfrac{\\mathbf x_{n+1} - \\mathbf x_n}{\\Delta t}" }, { "math_id": 77, "text": "\\mathbf{v}\\left(t_{n+\\frac12}\\right)" }, { "math_id": 78, "text": "t_{n+\\frac12} = t_n + \\tfrac12 \\Delta t" }, { "math_id": 79, "text": "\\mathbf{v}(t + \\Delta t) = \\frac{\\mathbf{x}(t + \\Delta t) - \\mathbf{x}(t)}{\\Delta t} + \\mathcal{O}(\\Delta t)." 
}, { "math_id": 80, "text": "\\begin{align}\n\\mathbf{x}(t + \\Delta t) &= \\mathbf{x}(t) + \\mathbf{v}(t)\\, \\Delta t + \\tfrac{1}{2} \\,\\mathbf{a}(t) \\Delta t^2, \\\\[6pt]\n\\mathbf{v}(t + \\Delta t) &= \\mathbf{v}(t) + \\frac{\\mathbf{a}(t) + \\mathbf{a}(t + \\Delta t)}{2} \\Delta t.\n\\end{align}" }, { "math_id": 81, "text": "\\mathbf{v}\\left(t + \\tfrac12\\,\\Delta t\\right) = \\mathbf{v}(t) + \\tfrac12\\,\\mathbf{a}(t)\\,\\Delta t" }, { "math_id": 82, "text": "\\mathbf{x}(t + \\Delta t) = \\mathbf{x}(t) + \\mathbf{v}\\left(t + \\tfrac12\\,\\Delta t\\right)\\, \\Delta t" }, { "math_id": 83, "text": "\\mathbf{a}(t + \\Delta t)" }, { "math_id": 84, "text": "\\mathbf{x}(t + \\Delta t)" }, { "math_id": 85, "text": "\\mathbf{v}(t + \\Delta t) = \\mathbf{v}\\left(t + \\tfrac12\\,\\Delta t\\right) + \\tfrac12\\,\\mathbf{a}(t + \\Delta t)\\Delta t" }, { "math_id": 86, "text": "\\mathbf{x}(t + \\Delta t) = \\mathbf{x}(t) + \\mathbf{v}(t)\\,\\Delta t + \\tfrac12 \\,\\mathbf{a}(t)\\,\\Delta t^2" }, { "math_id": 87, "text": "\\mathbf{v}(t + \\Delta t) = \\mathbf{v}(t) + \\tfrac12\\,\\bigl(\\mathbf{a}(t) + \\mathbf{a}(t + \\Delta t)\\bigr)\\Delta t" }, { "math_id": 88, "text": "\\mathbf{v}(t + \\Delta t)" }, { "math_id": 89, "text": "\\beta = 0" }, { "math_id": 90, "text": "\\gamma = \\tfrac12" }, { "math_id": 91, "text": "\\mathcal O\\left(\\Delta t^2\\right)" }, { "math_id": 92, "text": "\\mathcal O\\left(\\Delta t^4\\right)" }, { "math_id": 93, "text": "\\operatorname{error}\\bigl(x(t_0 + \\Delta t)\\bigr) = \\mathcal O\\left(\\Delta t^4\\right)" }, { "math_id": 94, "text": "x(t_0 + 2\\Delta t) = 2x(t_0 + \\Delta t) - x(t_0) + \\Delta t^2 \\ddot{x}(t_0 + \\Delta t) + \\mathcal O\\left(\\Delta t^4\\right)." }, { "math_id": 95, "text": "\\operatorname{error}\\bigl(x(t_0 + 2\\Delta t)\\bigr) = 2\\cdot\\operatorname{error}\\bigl(x(t_0 + \\Delta t)\\bigr) + \\mathcal O\\left(\\Delta t^4\\right) = 3\\,\\mathcal O\\left(\\Delta t^4\\right)." }, { "math_id": 96, "text": "\\begin{align}\n\\operatorname{error}\\bigl(x(t_0 + 3\\Delta t)\\bigl) &= 6\\,\\mathcal O\\left(\\Delta t^4\\right), \\\\[6px]\n\\operatorname{error}\\bigl(x(t_0 + 4\\Delta t)\\bigl) &= 10\\,\\mathcal O\\left(\\Delta t^4\\right), \\\\[6px]\n\\operatorname{error}\\bigl(x(t_0 + 5\\Delta t)\\bigl) &= 15\\,\\mathcal O\\left(\\Delta t^4\\right),\n\\end{align}" }, { "math_id": 97, "text": "\\operatorname{error}\\bigl(x(t_0 + n\\Delta t)\\bigr) = \\frac{n(n+1)}{2}\\,\\mathcal O\\left(\\Delta t^4\\right)." }, { "math_id": 98, "text": "x(t)" }, { "math_id": 99, "text": "x(t + T)" }, { "math_id": 100, "text": "T = n\\Delta t" }, { "math_id": 101, "text": "\\operatorname{error}\\bigl(x(t_0 + T)\\bigr) = \\left(\\frac{T^2}{2\\Delta t^2} + \\frac{T}{2\\Delta t}\\right) \\mathcal O\\left(\\Delta t^4\\right)," }, { "math_id": 102, "text": "\\operatorname{error}\\bigr(x(t_0 + T)\\bigl) = \\mathcal O\\left(\\Delta t^2\\right)." }, { "math_id": 103, "text": "\\tilde{x}_i^{(t)}" }, { "math_id": 104, "text": "x_i^{(t)}" }, { "math_id": 105, "text": "i" }, { "math_id": 106, "text": "r" }, { "math_id": 107, "text": "\\begin{align}\nd_1 &= x_2^{(t)} - x_1^{(t)}, \\\\[6px]\nd_2 &= \\|d_1\\|, \\\\[6px]\nd_3 &= \\frac{d_2 - r}{d_2}, \\\\[6px]\nx_1^{(t + \\Delta t)} &= \\tilde{x}_1^{(t + \\Delta t)} + \\tfrac{1}{2} d_1 d_3, \\\\[6px]\nx_2^{(t + \\Delta t)} &= \\tilde{x}_2^{(t + \\Delta t)} - \\tfrac{1}{2} d_1 d_3.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=825735
825767
Cyclopropene
Organic ring compound (C₃H₄) &lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound Cyclopropene is an organic compound with the formula . It is the simplest cycloalkene. Because the ring is highly strained, cyclopropene is difficult to prepare and highly reactive. This colorless gas has been the subject for many fundamental studies of bonding and reactivity. It does not occur naturally, but derivatives are known in some fatty acids. Derivatives of cyclopropene are used commercially to control ripening of some fruit. Structure and bonding. The molecule has a triangular structure. The reduced length of the double bond compared to a single bond causes the angle opposite the double bond to narrow to about 51° from the 60° angle found in cyclopropane. As with cyclopropane, the carbon–carbon bonding in the ring has increased p character: the alkene carbon atoms use sp2.68 hybridization for the ring. Synthesis of cyclopropene and derivatives. Early syntheses. The first confirmed synthesis of cyclopropene, carried out by Dem'yanov and Doyarenko, involved the thermal decomposition of trimethylcyclopropylammonium hydroxide over platinized clay at approximately 300 °C. This reaction produces mainly trimethylamine and dimethylcyclopropyl amine, together with about 5% of cyclopropene. Later Schlatter improved the pyrolytic reaction conditions using platinized asbestos as a catalyst at 320–330 °C and obtained cyclopropene in 45% yield. Cyclopropene can also be obtained in about 1% yield by thermolysis of the adduct of cycloheptatriene and dimethyl acetylenedicarboxylate. Modern syntheses from allyl chlorides. Allyl chloride undergoes dehydrohalogenation upon treatment with the base sodium amide at 80 °C to produce cyclopropene in about 10% yield. formula_0 The major byproduct of the reaction is allylamine. Adding allyl chloride to sodium bis(trimethylsilyl)amide in boiling toluene over a period of 45–60 minutes produces the targeted compound in about 40% yield with an improvement in purity: formula_1 1-Methylcyclopropene is synthesized similarly but at room temperature from methallylchloride using phenyllithium as the base: formula_2 Syntheses of derivatives. Treatment of nitrocyclopropanes with sodium methoxide eliminates the nitrite, giving the respective cyclopropene derivative. The synthesis of purely aliphatic cyclopropenes was first illustrated by the copper-catalyzed additions of carbenes to alkynes. In the presence of a copper catalyst, ethyl diazoacetate reacts with acetylenes to give cyclopropenes. 1,2-Dimethylcyclopropene-3-carboxylate arises via this method from 2-butyne. Copper, as copper sulfate and copper dust, are among the more popular forms of copper used to promote such reactions. Rhodium acetate has also been used. Reactions of cyclopropene. Studies on cyclopropene mainly focus on the consequences of its high ring strain. At 425 °C, cyclopropene isomerizes to methylacetylene (propyne). &lt;chem&gt;C3H4 -&gt; H3CC#CH&lt;/chem&gt; Attempted fractional distillation of cyclopropene at –36 °C (its predicted boiling point) results in polymerization. The mechanism is assumed to be a free-radical chain reaction, and the product, based on NMR spectra, is thought to be polycyclopropane. Cyclopropene undergoes the Diels–Alder reaction with cyclopentadiene to give endo-tricyclo[3.2.1.02,4]oct-6-ene. This reaction is commonly used to check for the presence of cyclopropene, following its synthesis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ce{CH2=CHCH2Cl + NaNH2} \\longrightarrow \\underset{\\text{cyclo-} \\atop \\text{propene}}{\\ce{C3H4}} + \\ce{NaCl + NH3}" }, { "math_id": 1, "text": "\\ce{CH2=CHCH2Cl + NaN(TMS)2} \\longrightarrow \\underset{\\text{cyclo-} \\atop \\text{propene}}{\\ce{C3H4}} + \\ce{NaCl + NH(TMS)2}" }, { "math_id": 2, "text": "\\ce{CH2=C(CH3)CH2Cl + LiC6H5} \\longrightarrow \\underset{\\text{1-methyl-} \\atop \\text{cyclopropene}}{\\ce{CH3C3H3}} + \\ce{LiCl + C6H6}" } ]
https://en.wikipedia.org/wiki?curid=825767
8260329
Polytope model
The polyhedral model (also called the polytope method) is a mathematical framework for programs that perform large numbers of operations -- too large to be explicitly enumerated -- thereby requiring a "compact" representation. Nested loop programs are the typical, but not the only example, and the most common use of the model is for loop nest optimization in program optimization. The polyhedral method treats each loop iteration within nested loops as lattice points inside mathematical objects called polyhedra, performs affine transformations or more general non-affine transformations such as tiling on the polytopes, and then converts the transformed polytopes into equivalent, but optimized (depending on targeted optimization goal), loop nests through polyhedra scanning. Simple example. Consider the following example written in C:

const int n = 100;
int i, j, a[n][n];
for (i = 1; i < n; i++) {
    for (j = 1; j < (i + 2) && j < n; j++) {
        a[i][j] = a[i - 1][j] + a[i][j - 1];
    }
}

The essential problem with this code is that each iteration of the inner loop on codice_0 requires that the previous iteration's result, codice_1, be available already. Therefore, this code cannot be parallelized or pipelined as it is currently written. An application of the polytope model, with the affine transformation formula_0 and the appropriate change in the boundaries, will transform the nested loops above into:

a[i - j][j] = a[i - j - 1][j] + a[i - j][j - 1];

In this case, no iteration of the inner loop depends on the previous iteration's results; the entire inner loop can be executed in parallel. Indeed, given codice_2 then codice_3 only depends on codice_4, with formula_1. (However, each iteration of the outer loop does depend on previous iterations.) Detailed example. The following C code implements a form of error-distribution dithering similar to Floyd–Steinberg dithering, but modified for pedagogical reasons. The two-dimensional array codice_5 contains codice_6 rows of codice_7 pixels, each pixel having a grayscale value between 0 and 255 inclusive. After the routine has finished, the output array codice_8 will contain only pixels with value 0 or value 255. During the computation, each pixel's dithering error is collected by adding it back into the codice_5 array. (Notice that codice_5 and codice_8 are both read and written during the computation; codice_5 is not read-only, and codice_8 is not write-only.) Each iteration of the inner loop modifies the values in codice_14 based on the values of codice_15, codice_16, and codice_17. (The same dependencies apply to codice_18. For the purposes of loop skewing, we can think of codice_14 and codice_18 as the same element.) We can illustrate the dependencies of codice_14 graphically, as in the diagram on the right. Performing the affine transformation formula_2 on the original dependency diagram gives us a new diagram, which is shown in the next image. We can then rewrite the code to loop on codice_22 and codice_23 instead of codice_24 and codice_25, obtaining the following "skewed" routine.
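The skewed dithering routine itself is not reproduced here. As a rough, hypothetical sketch of the same transformation applied to the simple example at the top of the article, the loop nest under formula_0, i.e. (i', j') = (i + j, j), could be written as follows; the bounds are derived here from the original constraints 1 <= i < n and 1 <= j < min(i + 2, n) rather than taken from any published version:

/* Skewed form of the simple example: the outer index now plays the
   role of i + j, so every iteration of the inner loop reads only
   values produced on earlier outer iterations, and the inner loop
   carries no dependence.                                            */
const int n = 100;
int i, j, a[n][n];
for (i = 2; i <= 2 * n - 2; i++) {
    int jmin = (i - n + 1 > 1) ? (i - n + 1) : 1;
    int jmax = (i + 1) / 2;              /* integer division */
    for (j = jmin; j <= jmax; j++) {
        a[i - j][j] = a[i - j - 1][j] + a[i - j][j - 1];
    }
}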
[ { "math_id": 0, "text": "(i',\\, j') = (i+j,\\, j)" }, { "math_id": 1, "text": "x \\in \\{j - 1,\\, j\\}" }, { "math_id": 2, "text": "(p,\\, t) = (i,\\, 2j+i)" } ]
https://en.wikipedia.org/wiki?curid=8260329
8260919
Putnam model
The Putnam model is an empirical software effort estimation model. The original paper by Lawrence H. Putnam published in 1978 is seen as pioneering work in the field of software process modelling. As a group, empirical models work by collecting software project data (for example, effort and size) and fitting a curve to the data. Future effort estimates are made by providing size and calculating the associated effort using the equation which fit the original data (usually with some error). Created by Lawrence Putnam, Sr., the Putnam model describes the "time" and "effort" required to finish a software project of specified "size". SLIM (Software LIfecycle Management) is the name given by Putnam to the proprietary suite of tools his company QSM, Inc. has developed based on his model. It is one of the earliest of these types of models developed, and is among the most widely used. Closely related software parametric models are Constructive Cost Model (COCOMO), Parametric Review of Information for Costing and Evaluation – Software (PRICE-S), and Software Evaluation and Estimation of Resources – Software Estimating Model (SEER-SEM). The software equation. While managing R&D projects for the Army and later at GE, Putnam noticed software staffing profiles followed the well-known Rayleigh distribution. Putnam used his observations about productivity levels to derive the software equation: formula_0 where "Size" is the estimated size of the software product (typically in source lines of code), B is a scaling factor that is a function of the project size, "Productivity" is the process productivity, "Effort" is the total effort applied to the project (in person-years), and "Time" is the total schedule of the project (in years). In practical use, when making an estimate for a software task the software equation is solved for "effort": formula_1 An estimated software size at project completion and organizational process productivity is used. Plotting "effort" as a function of "time" yields the "Time-Effort Curve". The points along the curve represent the estimated total effort to complete the project at some "time". One of the distinguishing features of the Putnam model is that total effort decreases as the time to complete the project is extended. This is normally represented in other parametric models with a schedule relaxation parameter. This estimating method is fairly sensitive to uncertainty in both "size" and "process productivity". Putnam advocates obtaining process productivity by calibration: formula_2 Putnam makes a sharp distinction between 'conventional productivity', i.e. "size" / "effort", and process productivity. One of the key advantages to this model is the simplicity with which it is calibrated. Most software organizations, regardless of maturity level, can easily collect "size", "effort" and duration ("time") for past projects. Process Productivity, being exponential in nature, is typically converted to a linear "productivity index" that an organization can use to track its own changes in productivity and apply in future effort estimates.
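As a small illustration of solving the software equation for "effort", the following C sketch plugs in entirely invented values for size, process productivity, schedule and the scaling factor B; none of these numbers come from Putnam's data or from a calibrated organization.

#include <math.h>
#include <stdio.h>

/* Effort = (Size / (Productivity * Time^(4/3)))^3 * B */
static double putnam_effort(double size, double productivity,
                            double time_years, double b) {
    double denom = productivity * pow(time_years, 4.0 / 3.0);
    return pow(size / denom, 3.0) * b;
}

int main(void) {
    double size = 40000.0;        /* estimated size, e.g. SLOC (made up)             */
    double productivity = 5000.0; /* process productivity from calibration (made up) */
    double time_years = 1.5;      /* planned schedule in years (made up)             */
    double b = 0.34;              /* scaling factor (made up)                        */

    printf("estimated effort: %.1f person-years\n",
           putnam_effort(size, productivity, time_years, b));
    return 0;
}

Because the schedule term is cubed along with the rest of the denominator, effort falls with the fourth power of the allotted time, which is the arithmetic behind the model's pronounced time-effort trade-off.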
[ { "math_id": 0, "text": "\\frac {B^{1/3} \\cdot \\text{Size}}{ \\text{Productivity} } = \\text{Effort}^{1/3} \\cdot \\text{Time}^{4/3}" }, { "math_id": 1, "text": "\\text{Effort} = \\left[ \\frac {\\text{Size}} {\\text{Productivity} \\cdot \\text{Time}^{4/3}}\\right]^3 \\cdot B " }, { "math_id": 2, "text": "\\text{Process Productivity} = \\frac {\\text{Size}} { \\left[ \\frac {\\text{Effort}}{B} \\right]^{1/3} \\cdot \\text{Time}^{4/3} } " } ]
https://en.wikipedia.org/wiki?curid=8260919
8262
Dominoes
Family of tile-based games Dominoes is a family of tile-based games played with gaming pieces. Each domino is a rectangular tile, usually with a line dividing its face into two square "ends". Each end is marked with a number of spots (also called "pips" or "dots") or is blank. The backs of the tiles in a set are indistinguishable, either blank or having some common design. The gaming pieces make up a domino set, sometimes called a "deck" or "pack". The traditional European domino set consists of 28 tiles, also known as pieces, bones, rocks, stones, men, cards or just dominoes, featuring all combinations of spot counts between zero and six. A domino set is a generic gaming device, similar to playing cards or dice, in that a variety of games can be played with a set. Another form of entertainment using domino pieces is the practice of domino toppling. The earliest mention of dominoes is from Song dynasty China found in the text "Former Events in Wulin" by Zhou Mi (1232–1298). Modern dominoes first appeared in Italy during the 18th century, but they differ from Chinese dominoes in a number of respects, and there is no confirmed link between the two. European dominoes may have developed independently, or Italian missionaries in China may have brought the game to Europe. The name "domino" is probably derived from the resemblance to a kind of carnival costume worn during the Venetian Carnival, often consisting of a black-hooded robe and a white mask. Despite the coinage of the word "polyomino" as a generalization, there is no connection between the word "domino" and the number 2 in any language. The most commonly played domino games are Domino Whist, Matador, and Muggins (All Fives). Other popular forms include Texas 42, Chicken Foot, Concentration, Double Fives, and Mexican Train. In Britain, the most popular league and pub game is Fives and Threes. Dominoes have sometimes been used for divination, such as bone throwing in Chinese culture and in the African diaspora. Construction and composition of domino sets. European-style dominoes are traditionally made of bone, silver lip ocean pearl oyster shell (mother of pearl), ivory, or a dark hardwood such as ebony, with contrasting black or white pips (inlaid or painted). Some sets feature the top half thickness in MOP, ivory, or bone, with the lower half in ebony. Alternatively, domino sets have been made from many different natural materials: stone (e.g., marble, granite or soapstone); other woods (e.g., ash, oak, redwood, and cedar); metals (e.g., brass or pewter); ceramic clay, or even frosted glass or crystal. These sets have a more novel look, and the often heavier weight makes them feel more substantial; also, such materials and the resulting products are usually much more expensive than polymer materials. Modern commercial domino sets are usually made of synthetic materials, such as ABS or polystyrene plastics, or Bakelite and other phenolic resins; many sets approximate the look and feel of ivory while others use colored or even translucent plastics to achieve a more contemporary look. Modern sets also commonly use a different color for the dots of each different end value (one-spots might have black pips while two-spots might be green, three red, etc.) to facilitate finding matching ends. Occasionally, one may find a domino set made of card stock like that for playing cards. Such sets are lightweight, compact, and inexpensive, and like cards are more susceptible to minor disturbances such as a sudden breeze. 
Sometimes, the tiles have a metal pin (called a spinner or pivot) in the middle. The traditional domino set contains one unique piece for each possible combination of two ends with zero to six spots, and is known as a double-six set because the highest-value piece has six pips on each end (the "double six"). The spots from one to six are generally arranged as they are on six-sided dice, but because blank ends having no spots are used, seven faces are possible, allowing 28 unique pieces in a double-six set. However, this is a relatively small number especially when playing with more than four people, so many domino sets are "extended" by introducing ends with greater numbers of spots, which increases the number of unique combinations of ends and thus of pieces. Each progressively larger set increases the maximum number of pips on an end by three; so the common extended sets are double-nine (55 tiles), double-12 (91 tiles), double-15 (136 tiles), and double-18 (190 tiles), which is the maximum in practice. Larger sets such as double-21 (253 tiles) could theoretically exist, but they seem to be extremely rare if not nonexistent, as that would be far more than is normally necessary for most domino games, even with eight players. As the set becomes larger, identifying the number of pips on each domino becomes more difficult, so some large domino sets use more readable Arabic numerals instead of pips. History. Chinese dominoes. In China, early "domino" tiles were functionally identical to playing cards. An identifiable version of Chinese dominoes developed in the 12th or 13th century. The oldest written mention of domino tiles in China dates to the 13th century and comes from Hangzhou where "pupai" (gambling plaques or tiles) and dice are listed as items sold by peddlers during the reign of Emperor Xiaozong of Song (r. 1162–1189). It is not entirely clear that "pupai" means dominoes, but the same term is used two centuries later by the Ming author Lu Rong (1436–1494) in a context that clearly describes domino tiles. The earliest known manual on dominoes is the "Manual of the Xuanhe Period" which purports to be written by Qu You (1341–1427), but some scholars believe it is a later forgery. The traditional 32-piece Chinese domino set, made to represent each possible face of two thrown dice and thus have no blank faces, differs from the 28-piece domino set found in the West during the mid 18th century, although Chinese dominoes with blank faces were known during the 17th century. Each domino originally represented one of the 21 results of throwing two six-sided dice (2d6). One half of each domino is set with the pips from one die and the other half contains the pips from the second die. Chinese sets also introduce duplicates of some throws and divide the tiles into two suits: military and civil. Chinese dominoes are also longer than typical European ones. Dominoes in Europe and North America. Modern dominoes first appeared in Italy during the 18th century, but they differ from Chinese dominoes in a number of respects, and there is no confirmed link between the two. European dominoes may have developed independently, or Italian missionaries in China may have brought the game to Europe. Having been established in Italy, the game of dominoes spread rapidly to Austria, southern Germany and France. The game became fashionable in France in the mid-18th century. The name "domino" does not appear before that time, being first recorded in 1771, in the "Dictionnaire de Trévoux". 
There are two earlier recorded meanings for the French word "domino", one referring to the masquerades of the period, derived from the term for the hooded garment of a priest, the other referring to crude and brightly colored woodcuts on paper formerly popular among French peasants. The way by which this word became the name of the game of domino remains unclear. The earliest game rules in Europe describe a simple block game for two or four players. Later French rules add the variant of "Domino à la Pêche" ("Fishing Domino"), an early draw game, as well as a three-hand game with a pool. From France, the game was introduced to England by the late 1700s, purportedly brought in by French prisoners-of-war. The early forms of the game in England were the "Block Game" and "Draw Game". The rules for these games were reprinted, largely unchanged, for over half a century. In 1863, a new game variously described as "All Fives", "Fives" or "Cribbage Dominoes" appeared for the first time in both English and American sources; this was the first scoring game and it borrowed the counting and scoring features of cribbage, but 5 domino spots instead of 15 card points became the basic scoring unit, worth 1 game point. The game was played to 31 and employed a cribbage board to keep score. In 1864, "The American Hoyle" describes three new variants: Muggins, Bergen and Rounce, alongside the Block Dominoes and Draw Dominoes. In Muggins, the cribbage board was dropped, 5 spots scored 5 points, and game was now 200 for two players and 150 for three or four. Despite the name, there was no 'muggins rule' as in cribbage to challenge a player who fails to declare his scoring combinations. This omission was rectified in the 1868 edition of "The Modern Pocket Hoyle", but reprints of both rule sets continued to be produced in parallel for around twenty years before the version with the muggins rule prevailed. From around 1871, however, the names of All Fives and Muggins became conflated and many publications issued rules for 'Muggins or All Fives' or 'Muggins or Fives' without making any distinction between the two. This confusion continues to the present day with some publications equating the names and others describing All Fives as a separate game. In 1889, dominoes was described as having spread worldwide, "but nowhere is it more popular than in the cafés of France and Belgium". From the outset, the European game was different from the Chinese one. European domino sets contain neither the military-civilian suit distinctions of Chinese dominoes nor the duplicates that went with them. Moreover, according to Michael Dummett, in the Chinese games it is only the identity of the tile that matters; there is no concept of matching. Instead, the basic set of 28 unique tiles contains seven additional pieces, six of them representing the values that result from throwing a single die with the other half of the tile left blank, and the seventh domino representing the blank-blank (0–0) combination. Subsequently, 45-piece (double eight) sets appeared in Austria and, in recent times, 55-piece (double nine) and 91-piece (double twelve) sets have been produced. All the early games are still played today alongside games that have sprung up in the last 60 years such as Five Up, Mexican Train and Chicken Foot, the last two taking advantage of the larger domino sets available. 
Some modern descriptions of All Fives are quite different from the original, having lost much of their cribbage character and incorporating a single spinner, making it identical, or closely related, to Sniff. Most published rule sets for Muggins include the rule that gives the game its name, but some modern publications omit it even though the muggins rule has been described as the unique feature of this game. Dominoes is now played internationally. It is recognized as an "ingrained cultural activity within the Caribbean" but is also popular with the Windrush generation (who have Caribbean heritage) in the UK. In the U.S. state of Alabama, although rarely prosecuted, it was illegal to play dominoes on Sunday within the state until the relevant section of the Alabama Criminal Code was repealed, effective April 21, 2015. Tiles and suits. Dominoes (also known as bones, cards, men, pieces or tiles) are normally twice as long as they are wide, which makes it easier to re-stack pieces after use. A domino usually features a line in the middle to divide it visually into two squares, called ends. The value of either side is the number of spots or pips. In the most common variant (double-six), the values range from six pips down to none or blank. The sum of the two values, i.e. the total number of pips, may be referred to as the rank or weight of a tile; a tile may be described as "heavier" than a "lighter" one that has fewer (or no) pips. Tiles are generally named after their two values. For instance, the tile bearing the values two and five is referred to as "2–5" or "5–2". A tile that has the same pips-value on each end is called a double or doublet, and is typically referred to as double-zero, double-one, and so on. Conversely, a tile bearing different values is called a single. Every tile which features a given number is a member of the suit of that number. A single tile is a member of two suits: for example, the 0–3 tile belongs both to the suit of threes and the suit of blanks, or 0 suit. In some versions the doubles can be treated as an additional suit of doubles. In these versions, the 6–6 belongs both to the suit of sixes and the suit of doubles. However, the dominant approach is that each double belongs to only one suit. The most common domino sets commercially available are double six (with 28 tiles) and double nine (with 55 tiles). Larger sets exist and are popular for games involving several players or for players looking for long domino games. The number of tiles in a double-n set obeys the following formula: formula_0 which is also the (n+1)th triangular number, as in the following table. This formula can be simplified a little bit when formula_1 is made equal to the "total number of doubles in the domino set": formula_2 The total number of pips in a double-n set is found by: formula_3 i.e. the number of tiles multiplied by the maximum pip-count (n); e.g. a 6-6 set has (7 × 8) / 2 = 56/2 = 28 tiles, the average number of pips per tile is 6 (range is from 0 to 12), giving a total pip count of 6 × 28 = 168. Rules. The most popular type of play is layout games, which fall into two main categories, blocking games and scoring games. Blocking game. The most basic domino variant is for two players and requires a double-six set. 
The 28 tiles are shuffled face down and form the "stock" or "boneyard". Each player draws seven tiles from the stock. Once the players begin drawing tiles, they are typically placed on-edge in front of the players, so players can see their own tiles, but not the value of their opponents' tiles. Players can thus see how many tiles remain in their opponents' hands at all times. One player begins by downing (playing the first tile) one of their tiles. This tile starts the line of play, in which values of adjacent pairs of tile ends must match. The players alternately extend the line of play with one tile at one of its two ends; if a player is unable to place a valid tile, they must continue drawing tiles from the stock until they are able to place a tile. The game ends when one player wins by playing their last tile, or when the game is blocked because neither player can play. If that occurs, whoever caused the block receives all of the remaining player points not counting their own. Middle Eastern version. A common variant of the blocking game that is played in the Middle East features four players with slightly altered rules. The stock is divided equally among all players, each having seven tiles in hand. After drawing the tiles, the player with the double-six tile starts by downing that tile on the table and the game then proceeds counter-clockwise. Since there is no boneyard, a player without a matching tile passes their turn. A player that is unable to play is called a downed or sitting player. A less common variation of the Middle Eastern game requires the player to the left of the sitting player to transfer one of their tiles (not necessarily a playable one) to the downed player. In this variant, if the transferred tile can be played, they have to down it. Similar to a normal blocking game, the game ends when a player empties their hand or the game is blocked. If the game is blocked, the player with the lightest hand receives points equal to the sum of all losing players' hands. A set of games ends when any player reaches a set amount of points, in which case they win. If no player has reached a winning score, the winning player from the previous round starts the next game with any tile in their hand and the game proceeds normally. Latin American version. Another variant of the blocking game is the Latin American version, which is played in teams of two. The stock is divided equally among all players, each having seven tiles in hand. Players sitting on opposite ends of the table are part of the same team. The game ends when one of the players has no tiles left or when the game is blocked. In the first case, the team of the player without any tiles left earns the sum of the points left in the opposing team's hands. When the game is blocked, the team with the least points in its hands earns the points left in the opposing team's hands. If both teams have the same points, the team that started wins the round. Scoring game. Players accrue points during game play for certain configurations, moves, or emptying one's hand. Most scoring games use variations of the draw game. If a player does not call "domino" before the tile is laid on the table, and another player says domino after the tile is laid, the first player must pick up an extra domino. Draw game. In a draw game (blocking or scoring), players are additionally allowed to draw as many tiles as desired from the stock before playing a tile, and they are not allowed to pass before the stock is (nearly) empty. 
The score of a game is the number of pips in the losing player's hand plus the number of pips in the stock. Most rules prescribe that two tiles need to remain in the stock. The draw game is often referred to as simply "dominoes". Adaptations of both games can accommodate more than two players, who may play individually or in teams. Line of play. The line of play is the configuration of played tiles on the table. It starts with a single tile and typically grows in two opposite directions when players add matching tiles. In practice, players often play tiles at right angles when the line of play gets too close to the edge of the table. The rules for the line of play often differ from one variant to another. In many rules, the doubles serve as spinners, i.e., they can be played on all four sides, causing the line of play to branch. Sometimes, the first tile is required to be a double, which serves as the only spinner. In some games such as Chicken Foot, all sides of a spinner must be occupied before anybody is allowed to play elsewhere. Matador has unusual rules for matching. Bendomino uses curved tiles, so one side of the line of play (or both) may be blocked for geometrical reasons. In Mexican Train and other train games, the game starts with a spinner from which various trains branch off. Most trains are owned by a player and in most situations players are allowed to extend only their own train. Scoring. In blocking games, scoring happens at the end of the game. After a player has emptied their hand, thereby winning the game for the team, the score consists of the total pip count of the losing team's hands. In some rules, the pip count of the remaining stock is added. If a game is blocked because no player can move, the winner is often determined by adding the pips in players' hands. In scoring games, each individual can potentially add to the score. For example, in Bergen, players score two points whenever they cause a configuration in which both open ends have the same value and three points if additionally one open end is formed by a double. In Muggins, players score by ensuring the total pip count of the open ends is a multiple of a certain number. In variants of Muggins, the line of play may branch due to spinners. In the common U.S. variant known as Fives players score by making the open ends a multiple of five. In British public houses and social clubs, a scoring version of "5s-and-3s" is used. The game is normally played in pairs (two against two) and is played as a series of "ends". In each "end", the objective is for players to attach a domino from their hand to one end of those already played so that the sum of the end tiles is divisible by five or three. One point is scored for each time five or three can be divided into the sum of the two tiles, i.e. four at one end and five at the other makes nine, which is divisible by three three times, resulting in three points. Double five at one end and five at the other makes 15, which is divisible by three five times (five points) and divisible by five three times (three points) for a total of eight points. An "end" stops when one of the players is out, i.e., has played all of their tiles. In the event no player is able to empty their hand, then the player with the lowest domino left in hand is deemed to be out and scores one point. A game consists of any number of ends with points scored in the ends accumulating towards a total. The game ends when one of the pair's total score exceeds a set number of points. 
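The 5s-and-3s scoring just described can be illustrated with a small Python sketch (the helper function is hypothetical and assumes nothing beyond the divisibility rule stated above):

def fives_and_threes_points(end_sum):
    # one point for each time 3 or 5 divides the sum of the two open ends
    points = 0
    if end_sum % 3 == 0:
        points += end_sum // 3
    if end_sum % 5 == 0:
        points += end_sum // 5
    return points

print(fives_and_threes_points(9))   # 4 and 5 at the ends: nine is three 3s, so 3 points
print(fives_and_threes_points(15))  # double-5 and 5: five 3s plus three 5s, so 8 points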
A running total score is often kept on a cribbage board. 5s-and-3s is played in a number of competitive leagues in the British Isles. Card games using domino sets. Apart from the usual blocking and scoring games, games of a very different character are also played with dominoes, such as solitaire or trick-taking games. Most of these are adaptations of card games and were once popular in certain areas to circumvent religious proscriptions against playing cards. A very simple example is a Concentration variant played with a double-six set; two tiles are considered to match if their total pip count is 12. A popular domino game in Texas is 42. The game is similar to the card game spades. It is played with four players paired into teams. Each player draws seven tiles, and the tiles are played into tricks. Each trick counts as one point, and any domino with a multiple of five dots counts toward the total of the hand. These 35 points of "five count" and seven tricks equals 42 points, hence the name. Competitive play. Dominoes is played at a professional level, similar to poker. Numerous organisations and clubs of amateur domino players exist around the world. Some organizations organize international competitions. Examples include the Anglo Caribbean Dominoes League (ACDL) in the UK which includes over 40 clubs including the Brixton Immortals. Dominoes in Unicode. Since April 2008, the character encoding standard Unicode includes characters that represent the double-six domino tiles. While a complete domino set has only 28 tiles, the Unicode set has "reversed" versions of the 21 tiles with different numbers on each end, a "back" image, and everything duplicated as horizontal and vertical orientations, for a total of 100 glyphs. Few fonts are known to support these glyphs. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{(n+1)(n+2)}{2}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\frac{(n)(n+1)}{2}" }, { "math_id": 3, "text": "\\frac{n(n+1)(n+2)}{2}" } ]
https://en.wikipedia.org/wiki?curid=8262
826216
Galaxy morphological classification
System for categorizing galaxies based on appearance Galaxy morphological classification is a system used by astronomers to divide galaxies into groups based on their visual appearance. There are several schemes in use by which galaxies can be classified according to their morphologies, the most famous being the Hubble sequence, devised by Edwin Hubble and later expanded by Gérard de Vaucouleurs and Allan Sandage. However, galaxy classification and morphology are now largely done using computational methods and physical morphology. Hubble sequence. The Hubble sequence is a morphological classification scheme for galaxies invented by Edwin Hubble in 1926. It is often known colloquially as the “Hubble tuning-fork” because of the shape in which it is traditionally represented. Hubble's scheme divides galaxies into three broad classes based on their visual appearance (originally on photographic plates): formula_0 These broad classes can be extended to enable finer distinctions of appearance and to encompass other types of galaxies, such as irregular galaxies, which have no obvious regular structure (either disk-like or ellipsoidal). The Hubble sequence is often represented in the form of a two-pronged fork, with the ellipticals on the left (with the degree of ellipticity increasing from left to right) and the barred and unbarred spirals forming the two parallel prongs of the fork on the right. Lenticular galaxies are placed between the ellipticals and the spirals, at the point where the two prongs meet the “handle”. To this day, the Hubble sequence is the most commonly used system for classifying galaxies, both in professional astronomical research and in amateur astronomy. Nonetheless, in June 2019, citizen scientists through Galaxy Zoo reported that the usual Hubble classification, particularly concerning spiral galaxies, may not be supported, and may need updating. De Vaucouleurs system. The de Vaucouleurs system for classifying galaxies is a widely used extension to the Hubble sequence, first described by Gérard de Vaucouleurs in 1959. De Vaucouleurs argued that Hubble's two-dimensional classification of spiral galaxies—based on the tightness of the spiral arms and the presence or absence of a bar—did not adequately describe the full range of observed galaxy morphologies. In particular, he argued that rings and lenses are important structural components of spiral galaxies. The de Vaucouleurs system retains Hubble's basic division of galaxies into ellipticals, lenticulars, spirals and irregulars. To complement Hubble's scheme, de Vaucouleurs introduced a more elaborate classification system for spiral galaxies, based on three morphological characteristics: The different elements of the classification scheme are combined — in the order in which they are listed — to give the complete classification of a galaxy. For example, a weakly barred spiral galaxy with loosely wound arms and a ring is denoted SAB(r)c. Visually, the de Vaucouleurs system can be represented as a three-dimensional version of Hubble's tuning fork, with stage (spiralness) on the "x"-axis, family (barredness) on the "y"-axis, and variety (ringedness) on the "z"-axis. Numerical Hubble stage. De Vaucouleurs also assigned numerical values to each class of galaxy in his scheme. Values of the numerical Hubble stage "T" run from −6 to +10, with negative numbers corresponding to early-type galaxies (ellipticals and lenticulars) and positive numbers to late types (spirals and irregulars). 
Thus, as a rough rule, lower values of "T" correspond to a larger fraction of the stellar mass contained in a spheroid/bulge relative to the disk. The approximate mapping between the spheroid-to-total stellar mass ratio (MB/MT) and the Hubble stage is MB/MT = (10 − T)²/256, based on local galaxies. Elliptical galaxies are divided into three 'stages': compact ellipticals (cE), normal ellipticals (E) and late types (E+). Lenticulars are similarly subdivided into early (S−), intermediate (S0) and late (S+) types. Irregular galaxies can be of type magellanic irregulars ("T" = 10) or 'compact' ("T" = 11). The use of numerical stages allows for more quantitative studies of galaxy morphology. Yerkes (or Morgan) scheme. The Yerkes scheme was created by American astronomer William Wilson Morgan. Together with Philip Keenan, Morgan also developed the MK system for the classification of stars through their spectra. The Yerkes scheme uses the spectra of stars in the galaxy; the shape, real and apparent; and the degree of the central concentration to classify galaxies. Thus, for example, the Andromeda Galaxy is classified as kS5.
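The elliptical-class formula quoted earlier (formula_0) and the approximate bulge-fraction mapping above can be illustrated with a short Python sketch (the function names are invented for this example, and only the two relations given in the text are assumed):

def elliptical_class(b_over_a):
    # Hubble's elliptical class number: E = 10 * (1 - b/a); observed classes run from E0 to about E7
    return 10 * (1 - b_over_a)

def bulge_fraction(T):
    # approximate spheroid-to-total stellar mass ratio for numerical Hubble stage T
    return (10 - T) ** 2 / 256.0

print(elliptical_class(0.5))  # 5.0, i.e. an E5 galaxy
print(bulge_fraction(-5))     # about 0.88: early types are bulge-dominated
print(bulge_fraction(5))      # about 0.10: late-type spirals are disk-dominated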
[ { "math_id": 0, "text": "E = 10 \\times \\left( 1-\\frac{b}{a} \\right)" } ]
https://en.wikipedia.org/wiki?curid=826216
826258
Scintillation (physics)
Production of light due to absorption of high-energy photons or particles In condensed matter physics, scintillation is the physical process where a material, called a scintillator, emits ultraviolet or visible light under excitation from high energy photons (X-rays or gamma rays) or energetic particles (such as electrons, alpha particles, neutrons, or ions). See scintillator and scintillation counter for practical applications. Overview. Scintillation is an example of luminescence, whereby light of a characteristic spectrum is emitted following the absorption of radiation. The scintillation process can be summarized in three main stages: (A) conversion, (B) transport and energy transfer to the luminescence center, and (C) luminescence. The emitted radiation is usually less energetic than the absorbed radiation, hence scintillation is generally a down-conversion process. Conversion processes. The first stage of scintillation, conversion, is the process where the energy from the incident radiation is absorbed by the scintillator and highly energetic electrons and holes are created in the material. The energy absorption mechanism by the scintillator depends on the type and energy of radiation involved. For highly energetic photons such as X-rays (0.1 keV &lt; formula_0 &lt; 100 keV) and γ-rays (formula_0 &gt; 100 keV), three types of interactions are responsible for the energy conversion process in scintillation: photoelectric absorption, Compton scattering, and pair production, which only occurs when formula_0 &gt; 1022 keV, i.e. the photon has enough energy to create an electron-positron pair. These processes have different attenuation coefficients, which depend mainly on the energy of the incident radiation, the average atomic number of the material and the density of the material. Generally, the absorption of high energy radiation is described by: formula_1 where formula_2 is the intensity of the incident radiation, formula_3 is the thickness of the material, and formula_4 is the linear attenuation coefficient, which is the sum of the attenuation coefficients of the various contributions: formula_5, which will be explained below. At lower X-ray energies (formula_6 60 keV), the most dominant process is the photoelectric effect, where the photons are fully absorbed by bound electrons in the material, usually core electrons in the K- or L-shell of the atom, and then ejected, leading to the ionization of the host atom. The linear attenuation coefficient contribution for the photoelectric effect is given by: formula_7 where formula_8 is the density of the scintillator, formula_9 is the average atomic number, formula_10 is a constant that varies between 3 and 4, and formula_0 is the energy of the photon. At low X-ray energies, scintillator materials with atoms with high atomic numbers and densities are favored for more efficient absorption of the incident radiation. At higher energies (formula_0 formula_11 60 keV), Compton scattering, the inelastic scattering of photons by bound electrons, often also leading to ionization of the host atom, becomes the more dominant conversion process. The linear attenuation coefficient contribution for Compton scattering is given by: formula_12 Unlike the photoelectric effect, the absorption resulting from Compton scattering is independent of the atomic number of the atoms present in the crystal, but depends linearly on their density. At γ-ray energies formula_0 &gt; 1022 keV, i.e. 
energies greater than twice the rest-mass energy of the electron, pair production starts to occur. Pair production is the relativistic phenomenon where the energy of a photon is converted into an electron-positron pair. The created electron and positron will then further interact with the scintillating material to generate energetic electrons and holes. The attenuation coefficient contribution for pair production is given by: formula_13 where formula_14 is the rest mass of the electron and formula_15 is the speed of light. Hence, at high γ-ray energies, the energy absorption depends both on the density and average atomic number of the scintillator. In addition, unlike for the photoelectric effect and Compton scattering, pair production becomes more probable as the energy of the incident photons increases, and pair production becomes the most dominant conversion process above formula_0 ~ 8 MeV. The formula_16 term includes other (minor) contributions, such as Rayleigh (coherent) scattering at low energies and photonuclear reactions at very high energies, which also contribute to the conversion; however, the contribution from Rayleigh scattering is almost negligible and photonuclear reactions become relevant only at very high energies. After the energy of the incident radiation is absorbed and converted into so-called hot electrons and holes in the material, these energetic charge carriers will interact with other particles and quasi-particles in the scintillator (electrons, plasmons, phonons), leading to an "avalanche event", where a great number of secondary electron–hole pairs are produced until the hot electrons and holes have lost sufficient energy. The large number of energetic electrons and holes that result from this process will then undergo thermalization, i.e. dissipation of part of their energy; this occurs via interaction with phonons for electrons and via Auger processes for holes. The average timescale for conversion, including energy absorption and thermalization, has been estimated to be on the order of 1 ps, which is much faster than the average decay time in photoluminescence. Charge transport of excited carriers. The second stage of scintillation is the charge transport of thermalized electrons and holes towards luminescence centers and the energy transfer to the atoms involved in the luminescence process. In this stage, the large number of electrons and holes that have been generated during the conversion process migrate inside the material. This is probably one of the most critical phases of scintillation, since it is generally in this stage where most of the efficiency losses occur, due to effects such as trapping or non-radiative recombination. These are mainly caused by the presence of defects in the scintillator crystal, such as impurities, ionic vacancies, and grain boundaries. The charge transport can also become a bottleneck for the timing of the scintillation process. The charge transport phase is also one of the least understood parts of scintillation and depends strongly on the type of material involved and its intrinsic charge conduction properties. Luminescence. Once the electrons and holes reach the luminescence centers, the third and final stage of scintillation occurs: luminescence. 
In this stage the electrons and holes are captured by the luminescence center, and the electron and hole then recombine radiatively. The exact details of the luminescence phase also depend on the type of material used for scintillation. Inorganic Crystals. For photons such as gamma rays, thallium activated NaI crystals (NaI(Tl)) are often used. For a faster response (but only 5% of the output) CsF crystals can be used. Organic scintillators. In organic molecules scintillation is a product of π-orbitals. Organic materials form molecular crystals where the molecules are loosely bound by Van der Waals forces. The ground state of ¹²C is 1s² 2s² 2p². In valence bond theory, when carbon forms compounds, one of the 2s electrons is excited into the 2p state resulting in a configuration of 1s² 2s¹ 2p³. To describe the different valencies of carbon, the four valence electron orbitals, one 2s and three 2p, are considered to be mixed or hybridized in several alternative configurations. For example, in a tetrahedral configuration the s and p³ orbitals combine to produce four hybrid orbitals. In another configuration, known as trigonal configuration, one of the p-orbitals (say pz) remains unchanged and three hybrid orbitals are produced by mixing the s, px and py orbitals. The orbitals that are symmetrical about the bonding axes and plane of the molecule (sp²) are known as σ-electrons and the bonds are called σ-bonds. The pz orbital is called a π-orbital. A π-bond occurs when two π-orbitals interact. This occurs when their nodal planes are coplanar. In certain organic molecules π-orbitals interact to produce a common nodal plane. These form delocalized π-electrons that can be excited by radiation. The de-excitation of the delocalized π-electrons results in luminescence. The excited states of π-electron systems can be explained by the perimeter free-electron model (Platt 1949). This model is used for describing polycyclic hydrocarbons consisting of condensed systems of benzenoid rings in which no C atom belongs to more than two rings and every C atom is on the periphery. The ring can be approximated as a circle with circumference l. The wave-function of the electron orbital must satisfy the condition of a plane rotator: formula_17 The corresponding solutions to the Schrödinger wave equation are: formula_18 where q is the orbital ring quantum number, i.e. the number of nodes of the wave-function. Since the electron can have spin up and spin down and can rotate about the circle in both directions, all of the energy levels except the lowest are doubly degenerate. The above shows the π-electronic energy levels of an organic molecule. Absorption of radiation is followed by molecular vibration to the S1 state. This is followed by a de-excitation to the S0 state called fluorescence. The population of triplet states is also possible by other means. The triplet states decay with a much longer decay time than singlet states, which results in what is called the slow component of the decay process (the fluorescence process is called the fast component). Depending on the particular energy loss of a certain particle (dE/dx), the "fast" and "slow" states are occupied in different proportions. The relative intensities in the light output of these states thus differ for different dE/dx. This property of scintillators allows for pulse shape discrimination: it is possible to identify which particle was detected by looking at the pulse shape. 
Of course, the difference in shape is visible in the trailing side of the pulse, since it is due to the decay of the excited states. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E_{\\gamma}" }, { "math_id": 1, "text": "I= I_0\\cdot e^{-\\mu d}" }, { "math_id": 2, "text": "I_0" }, { "math_id": 3, "text": "d" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "\\mu = \\mu_{pe} + \\mu_{cs} + \\mu_{pp} + \\mu_{oc}" }, { "math_id": 6, "text": "E_{\\gamma} \\lesssim" }, { "math_id": 7, "text": "\\mu_{pe} \\propto {\\rho Z^n \\over E_{\\gamma}^{3.5}}" }, { "math_id": 8, "text": "\\rho" }, { "math_id": 9, "text": "Z" }, { "math_id": 10, "text": "n" }, { "math_id": 11, "text": "\\gtrsim" }, { "math_id": 12, "text": "\\mu_{cs} \\propto {\\rho \\over \\sqrt{E_{\\gamma}}}" }, { "math_id": 13, "text": "\\mu_{pp} \\propto \\rho Z \\ln \\Bigl( {2 E_{\\gamma} \\over m_e c^2}\\Bigr)" }, { "math_id": 14, "text": "m_e" }, { "math_id": 15, "text": "c " }, { "math_id": 16, "text": "\\mu_{oc}" }, { "math_id": 17, "text": "\\psi(x)=\\psi(x+l) \\," }, { "math_id": 18, "text": "\\begin{align}\n \\psi_0 &= \\left( \\frac{1}{l} \\right)^{\\frac{1}{2}} \\\\\n \\psi_{q1} &= \\left( \\frac{2}{l} \\right)^{\\frac{1}{2}} \\cos{\\left( \\frac{2\\pi\\ qx}{l} \\right)} \\\\\n \\psi_{q2} &= \\left( \\frac{2}{l} \\right)^{\\frac{1}{2}} \\sin{\\left( \\frac{2\\pi\\ qx}{l} \\right)} \\\\\n E_q &= \\frac{q^2\\hbar^2}{2m_0l^2}\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=826258
8263
Dissociation constant
Chemical property In chemistry, biochemistry, and pharmacology, a dissociation constant ("K"D) is a specific type of equilibrium constant that measures the propensity of a larger object to separate (dissociate) reversibly into smaller components, as when a complex falls apart into its component molecules, or when a salt splits up into its component ions. The dissociation constant is the inverse of the association constant. In the special case of salts, the dissociation constant can also be called an ionization constant. For a general reaction: &lt;chem&gt; A_\mathit{x} B_\mathit{y} &lt;=&gt; \mathit{x} A{} + \mathit{y} B &lt;/chem&gt; in which a complex formula_0 breaks down into "x" A subunits and "y" B subunits, the dissociation constant is defined as formula_1 where [A], [B], and [A"x" B"y"] are the equilibrium concentrations of A, B, and the complex A"x" B"y", respectively. One reason for the popularity of the dissociation constant in biochemistry and pharmacology is that in the frequently encountered case where "x" = "y" = 1, "K"D has a simple physical interpretation: when [A] = "K"D, then [B] = [AB] or, equivalently, formula_2. That is, "K"D, which has the dimensions of concentration, equals the concentration of free A at which half of the total molecules of B are associated with A. This simple interpretation does not apply for higher values of "x" or "y". It also presumes the absence of competing reactions, though the derivation can be extended to explicitly allow for and describe competitive binding. It is useful as a quick description of the binding of a substance, in the same way that EC50 and IC50 describe the biological activities of substances. Concentration of bound molecules. Molecules with one binding site. Experimentally, the concentration of the molecule complex [AB] is obtained indirectly from the measurement of the concentration of the free molecules, either [A] or [B]. In principle, the total amounts of molecule [A]0 and [B]0 added to the reaction are known. They separate into free and bound components according to the mass conservation principle: formula_3 To track the concentration of the complex [AB], one substitutes the concentration of the free molecules ([A] or [B]) in the respective conservation equations with the definition of the dissociation constant, formula_4 This yields the concentration of the complex related to the concentration of either one of the free molecules formula_5 Macromolecules with identical independent binding sites. Many biological proteins and enzymes can possess more than one binding site. Usually, when a ligand L binds with a macromolecule M, it can influence binding kinetics of other ligands L binding to the macromolecule. A simplified mechanism can be formulated if the affinity of all binding sites can be considered independent of the number of ligands bound to the macromolecule. This is valid for macromolecules composed of more than one, mostly identical, subunits. It can then be assumed that each of these n subunits is identical, symmetric and that it possesses only a single binding site. 
Then the concentration of bound ligands &lt;chem&gt;[L]_{bound}&lt;/chem&gt; becomes formula_6 In this case, formula_7, but comprises all partially saturated forms of the macromolecule: formula_8 where the saturation occurs stepwise formula_9 For the derivation of the general binding equation a saturation function formula_10 is defined as the quotient from the portion of bound ligand to the total amount of the macromolecule: formula_11 "K′n" are so-called macroscopic or apparent dissociation constants and can result from multiple individual reactions. For example, if a macromolecule "M" has three binding sites, "K′"1 describes a ligand being bound to any of the three binding sites. In this example, "K′"2 describes two molecules being bound and "K′3" three molecules being bound to the macromolecule. The microscopic or individual dissociation constant describes the equilibrium of ligands binding to specific binding sites. Because we assume identical binding sites with no cooperativity, the microscopic dissociation constant must be equal for every binding site and can be abbreviated simply as "K"D. In our example, "K′"1 is the amalgamation of a ligand binding to either of the three possible binding sites (I, II and III), hence three microscopic dissociation constants and three distinct states of the ligand–macromolecule complex. For "K′"2 there are six different microscopic dissociation constants (I–II, I–III, II–I, II–III, III–I, III–II) but only three distinct states (it does not matter whether you bind pocket I first and then II or II first and then I). For "K′"3 there are three different dissociation constants — there are only three possibilities for which pocket is filled last (I, II or III) — and one state (I–II–III). Even when the microscopic dissociation constant is the same for each individual binding event, the macroscopic outcome ("K′"1, "K′"2 and "K′"3) is not equal. This can be understood intuitively for our example of three possible binding sites. "K′"1 describes the reaction from one state (no ligand bound) to three states (one ligand bound to either of the three binding sides). The apparent "K′"1 would therefore be three times smaller than the individual "K"D. "K′"2 describes the reaction from three states (one ligand bound) to three states (two ligands bound); therefore, "K′"2 would be equal to "K"D. "K′"3 describes the reaction from three states (two ligands bound) to one state (three ligands bound); hence, the apparent dissociation constant "K′"3 is three times bigger than the microscopic dissociation constant "K"D. The general relationship between both types of dissociation constants for "n" binding sites is formula_12 Hence, the ratio of bound ligand to macromolecules becomes formula_13 where formula_14 is the binomial coefficient. Then the first equation is proved by applying the binomial rule formula_15 Protein–ligand binding. The dissociation constant is commonly used to describe the affinity between a ligand &lt;chem&gt;L&lt;/chem&gt; (such as a drug) and a protein &lt;chem&gt;P&lt;/chem&gt;; i.e., how tightly a ligand binds to a particular protein. Ligand–protein affinities are influenced by non-covalent intermolecular interactions between the two molecules such as hydrogen bonding, electrostatic interactions, hydrophobic and van der Waals forces. Affinities can also be affected by high concentrations of other macromolecules, which causes macromolecular crowding. 
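The relation between the microscopic constant KD and the macroscopic constants K′i for identical, independent binding sites, discussed above, can be checked with a small Python sketch (the function names are invented; only the relation K′i = KD·i/(n − i + 1) and the saturation function r = n[L]/(KD + [L]) from the text are assumed):

def macroscopic_constants(Kd, n):
    # K'_i = K_D * i / (n - i + 1) for i = 1..n
    return [Kd * i / (n - i + 1) for i in range(1, n + 1)]

def bound_ligand_per_macromolecule(L, Kd, n):
    # saturation function r for n identical, independent binding sites
    return n * L / (Kd + L)

Kd = 1.0  # arbitrary units; only the ratios matter
print(macroscopic_constants(Kd, 3))               # [0.333..., 1.0, 3.0], i.e. K_D/3, K_D and 3*K_D as in the three-site example
print(bound_ligand_per_macromolecule(Kd, Kd, 3))  # 1.5: at [L] = K_D, half of the three sites are occupied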
The formation of a ligand–protein complex &lt;chem&gt;LP&lt;/chem&gt; can be described by a two-state process &lt;chem&gt; L + P &lt;=&gt; LP &lt;/chem&gt; the corresponding dissociation constant is defined as formula_16 where &lt;chem&gt;[P], [L]&lt;/chem&gt;, and &lt;chem&gt;[LP]&lt;/chem&gt; represent molar concentrations of the protein, ligand, and protein–ligand complex, respectively. The dissociation constant has molar units (M) and corresponds to the ligand concentration &lt;chem&gt;[L]&lt;/chem&gt; at which half of the proteins are occupied at equilibrium, i.e., the concentration of ligand at which the concentration of protein with ligand bound &lt;chem&gt;[LP]&lt;/chem&gt; equals the concentration of protein with no ligand bound &lt;chem&gt;[P]&lt;/chem&gt;. The smaller the dissociation constant, the more tightly bound the ligand is, or the higher the affinity between ligand and protein. For example, a ligand with a nanomolar (nM) dissociation constant binds more tightly to a particular protein than a ligand with a micromolar (μM) dissociation constant. Sub-picomolar dissociation constants as a result of non-covalent binding interactions between two molecules are rare. Nevertheless, there are some important exceptions. Biotin and avidin bind with a dissociation constant of roughly 10⁻¹⁵ M = 1 fM = 0.000001 nM. Ribonuclease inhibitor proteins may also bind to ribonuclease with a similar 10⁻¹⁵ M affinity. The dissociation constant for a particular ligand–protein interaction can change with solution conditions (e.g., temperature, pH and salt concentration). The effect of different solution conditions is to effectively modify the strength of any intermolecular interactions holding a particular ligand–protein complex together. Drugs can produce harmful side effects through interactions with proteins with which they were not meant or designed to interact. Therefore, much pharmaceutical research is aimed at designing drugs that bind to only their target proteins (negative design) with high affinity (typically 0.1–10 nM) or at improving the affinity between a particular drug and its "in vivo" protein target (positive design). Antibodies. In the specific case of antibodies (Ab) binding to antigen (Ag), usually the term affinity constant refers to the association constant. &lt;chem&gt; Ab + Ag &lt;=&gt; AbAg &lt;/chem&gt; formula_17 This chemical equilibrium is also the ratio of the on-rate ("k"forward or "k"a) and off-rate ("k"back or "k"d) constants. Two antibodies can have the same affinity, but one may have both a high on- and off-rate constant, while the other may have both a low on- and off-rate constant. formula_18 Acid–base reactions. For the deprotonation of acids, "K" is known as "K"a, the acid dissociation constant. Strong acids, such as sulfuric or phosphoric acid, have large dissociation constants; weak acids, such as acetic acid, have small dissociation constants. The symbol "K"a, used for the acid dissociation constant, can lead to confusion with the association constant, and it may be necessary to see the reaction or the equilibrium expression to know which is meant. Acid dissociation constants are sometimes expressed by p"K"a, which is defined by formula_19 This formula_20 notation is seen in other contexts as well; it is mainly used for covalent dissociations (i.e., reactions in which chemical bonds are made or broken) since such dissociation constants can vary greatly. A molecule can have several acid dissociation constants. 
Depending on the number of protons they can give up, acids are defined as "monoprotic", "diprotic" and "triprotic". The first (e.g., acetic acid or ammonium) have only one dissociable group, the second (e.g., carbonic acid, bicarbonate, glycine) have two dissociable groups and the third (e.g., phosphoric acid) have three dissociable groups. In the case of multiple p"K" values they are designated by indices: p"K"1, p"K"2, p"K"3 and so on. For amino acids, the p"K"1 constant refers to its carboxyl (–COOH) group, p"K"2 refers to its amino (–NH2) group and the p"K"3 is the p"K" value of its side chain. formula_21 Dissociation constant of water. The dissociation constant of water is denoted "K"w: formula_22 The concentration of water, [H2O], is omitted by convention, which means that the value of "K"w differs from the value of "K"eq that would be computed using that concentration. The value of "K"w varies with temperature, as shown in the table below. This variation must be taken into account when making precise measurements of quantities such as pH. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ce{A}_x \\ce{B}_y" }, { "math_id": 1, "text": "\n K_\\mathrm{D} = \\frac{[\\ce A]^x [\\ce B]^y}{[\\ce A_x \\ce B_y]}\n" }, { "math_id": 2, "text": "\\tfrac {[\\ce{AB}]}{{[\\ce B]} + [\\ce{AB}]} = \\tfrac{1}{2}" }, { "math_id": 3, "text": "\\begin{align}\n \\ce{[A]_0} &= \\ce{{[A]} + [AB]} \\\\\n \\ce{[B]_0} &= \\ce{{[B]} + [AB]}\n\\end{align}" }, { "math_id": 4, "text": "\n [\\ce A]_0 = K_\\mathrm{D} \\frac{[\\ce{AB}]}{[\\ce B]} + [\\ce{AB}]\n" }, { "math_id": 5, "text": "\n \\ce{[AB]} = \\frac\\ce{[A]_0 [B]}{K_\\mathrm{D} + [\\ce B]} = \\frac\\ce{[B]_0 [A]}{K_\\mathrm{D} + [\\ce A]}\n" }, { "math_id": 6, "text": "\n \\ce{[L]}_\\text{bound} = \\frac{n\\ce{[M]}_0 \\ce{[L]}}{K_\\mathrm{D} + \\ce{[L]}}\n" }, { "math_id": 7, "text": "\\ce{[L]}_\\text{bound} \\neq \\ce{[LM]}" }, { "math_id": 8, "text": "\n \\ce{[L]}_\\text{bound} = \\ce{[LM]} + \\ce{2[L_2 M]} + \\ce{3[L_3 M]} + \\ldots + n \\ce{[L_\\mathit{n} M]} \n" }, { "math_id": 9, "text": "\\begin{align}\n \\ce{{[L]} + [M]} &\\ce{{} <=> {[LM]}} & K'_1 &= \\frac\\ce{[L][M]}{[LM]} & \\ce{[LM]} &= \\frac\\ce{[L][M]}{K'_1} \\\\\n \\ce{{[L]} + [LM]} &\\ce{{} <=> {[L2 M]}} & K'_2 &= \\frac\\ce{[L][LM]}{[L_2 M]} & \\ce{[L_2 M]} &= \\frac\\ce{[L]^2[M]}{K'_1 K'_2} \\\\\n \\ce{{[L]} + [L2 M]} &\\ce{{} <=> {[L3 M]}} & K'_3 &= \\frac\\ce{[L][L_2 M]}{[L_3 M]} & \\ce{[L_3 M]} &= \\frac\\ce{[L]^3[M]}{K'_1 K'_2 K'_3} \\\\\n & \\vdots & & \\vdots & & \\vdots \\\\\n \\ce{{[L]} + [L_\\mathit{n - 1} M]} &\\ce{{} <=> {[L_\\mathit{n} M]}} & K'_n &= \\frac\\ce{[L][L_{n - 1} M]}{[L_n M]} & [\\ce L_n \\ce M] &= \\frac{[\\ce L]^n[\\ce M]}{K'_1 K'_2 K'_3 \\cdots K'_n}\n\\end{align}" }, { "math_id": 10, "text": "r" }, { "math_id": 11, "text": "\n r = \\frac\\ce{[L]_{bound}}\\ce{[M]_0} \n = \\frac\\ce{{[LM]} + {2[L_2 M]} + {3[L_3 M]} + ... + \\mathit n[L_\\mathit{n} M]}\\ce{{[M]} + {[LM]} + {[L_2 M]} + {[L_3 M]} + ... 
+ [L_\\mathit{n} M]}\n = \\frac{\\sum_{i=1}^n \\left( \\frac{i [\\ce L]^i}{\\prod_{j=1}^i K_j'} \\right) }{1 + \\sum_{i=1}^n \\left( \\frac{[\\ce L]^i}{\\prod_{j=1}^i K_j'} \\right)}\n" }, { "math_id": 12, "text": "\n K_i' = K_\\mathrm{D} \\frac{i}{n - i + 1}\n" }, { "math_id": 13, "text": "\n r = \\frac{\\sum_{i=1}^n i \\left( \\prod_{j=1}^i \\frac{n - j + 1}{j} \\right) \\left( \\frac\\ce{[L]}{K_\\mathrm{D}} \\right)^i }{1 + \\sum_{i=1}^n \\left( \\prod_{j=1}^i \\frac{n - j + 1}{j} \\right) \\left( \\frac{[L]}{K_\\mathrm{D}} \\right)^i}\n = \\frac{\\sum_{i=1}^n i \\binom{n}{i} \\left( \\frac{[L]}{K_\\mathrm{D}} \\right)^i }{1 + \\sum_{i=1}^n \\binom{n}{i} \\left( \\frac\\ce{[L]}{K_\\mathrm{D}} \\right)^i}\n" }, { "math_id": 14, "text": "\\binom{n}{i} = \\frac{n!}{(n - i)!i!}" }, { "math_id": 15, "text": "\n r = \\frac{n \\left( \\frac\\ce{[L]}{K_\\mathrm{D}} \\right) \\left(1 + \\frac\\ce{[L]}{K_\\mathrm{D}} \\right)^{n - 1} }{\\left(1 + \\frac\\ce{[L]}{K_\\mathrm{D}} \\right)^n}\n = \\frac{n \\left( \\frac\\ce{[L]}{K_\\mathrm{D}} \\right) }{\\left(1 + \\frac\\ce{[L]}{K_\\mathrm{D}} \\right)}\n = \\frac{n [\\ce L]}{K_\\mathrm{D} + [\\ce L]}\n = \\frac\\ce{[L]_{bound}}\\ce{[M]_0}\n" }, { "math_id": 16, "text": "\n K_\\mathrm{D} = \\frac{\\left[ \\ce{L} \\right] \\left[ \\ce{P} \\right]}{\\left[ \\ce{LP} \\right]}\n" }, { "math_id": 17, "text": "\n K_\\mathrm{A} = \\frac{\\left[ \\ce{AbAg} \\right]}{\\left[ \\ce{Ab} \\right] \\left[ \\ce{Ag} \\right]} = \\frac{1}{K_\\mathrm{D}} \n" }, { "math_id": 18, "text": "\n K_A = \\frac{k_\\text{forward}}{k_\\text{back}} = \\frac{\\mbox{on-rate}}{\\mbox{off-rate}}\n" }, { "math_id": 19, "text": "\n \\text{p}K_\\text{a} = -\\log_{10}{K_\\mathrm{a}}\n" }, { "math_id": 20, "text": "\\mathrm{p}K" }, { "math_id": 21, "text": "\\begin{align}\n \\ce{H3 B} &\\ce{{} <=> {H+} + {H2 B^-}} & K_1 &= \\ce{[H+] . [H2 B^-] \\over [H3 B]} & \\mathrm{p}K_1 &= -\\log K_1 \\\\\n \\ce{H2 B^-} &\\ce{{} <=> {H+} + {H B^{2-}}} & K_2 &= \\ce{[H+] . [H B ^{2-}] \\over [H2 B^-]} & \\mathrm{p}K_2 &= -\\log K_2 \\\\\n \\ce{H B^{-2}} &\\ce{{} <=> {H+} + {B^{3-}}} & K_3 &= \\ce{[H+] . [B^{3-}] \\over [H B^{2-}]} & \\mathrm{p}K_3 &= -\\log K_3 \n\\end{align}" }, { "math_id": 22, "text": "K_\\mathrm{w} = [\\ce{H}^+] [\\ce{OH}^-]" } ]
https://en.wikipedia.org/wiki?curid=8263
8263115
Penning ionization
Ionization process Penning ionization is a form of chemi-ionization, an ionization process involving reactions between neutral atoms or molecules. The Penning effect is put to practical use in applications such as gas-discharge neon lamps and fluorescent lamps, where the lamp is filled with a Penning mixture to improve the electrical characteristics of the lamps. History. The process is named after the Dutch physicist Frans Michel Penning who first reported it in 1927. Penning started to work at the Philips Natuurkundig Laboratorium at Eindhoven to continue the investigation of electric discharge in rare gases. Later, he started measurements on the liberation of electrons from metal surfaces by positive ions and metastable atoms, and especially on the effects related to ionization by metastable atoms. Reaction. Penning ionization refers to the interaction between an electronically excited gas-phase atom G* and a target molecule M. The collision results in the ionization of the molecule yielding a cation M+., an electron e−, and a neutral gas molecule, G, in the ground state. Penning ionization occurs via formation of a high energy collision complex, which evolves toward the formation of a cationic species by ejecting a high energy electron. &lt;chem&gt;{G^\ast} + M -&gt; {M^{+\bullet}} + {e^-} + G&lt;/chem&gt; Penning ionization occurs when the target molecule has an ionization potential lower than the excitation energy of the excited-state atom or molecule. Variants. When the total electronic excitation energy of the colliding particles is sufficient, the bonding energy of the two associated particles can also contribute to the associative Penning ionization process. Associative Penning ionization can also occur: &lt;chem&gt;{G^\ast} + M -&gt; {MG^{+\bullet}} + e^-&lt;/chem&gt; Surface Penning ionization (Auger Deexcitation) refers to the interaction of the excited-state gas with a surface S, resulting in the release of an electron: &lt;chem&gt;{G^\ast} + S -&gt; {G} + {S} + e^-&lt;/chem&gt; The positive charge symbol &lt;chem&gt;S+&lt;/chem&gt; that would appear to be required for charge conservation is omitted, because S is a macroscopic surface and the loss of one electron has a negligible effect. Applications. Electron spectroscopy. Penning ionization has been applied in Penning ionization electron spectroscopy (PIES) and in glow-discharge detectors for gas chromatography, using the reaction with He* or Ne*. The kinetic energy of the electrons ejected in collisions between the target (gas or solid) and the metastable atoms is analyzed by scanning the retarding field in a flight tube of the analyzer in the presence of a weak magnetic field. The electron produced by the reaction has a kinetic energy E determined by: formula_0 The Penning ionization electron energy does not depend on the experimental conditions or on any other species, since both Em, the excitation energy of He*, and IE, the ionization energy of the species, are atomic or molecular constants. Penning ionization electron spectroscopy has also been applied to organic solids. It enables the study of the local electron distribution of individual molecular orbitals exposed at the outermost surface layers. Mass spectrometry. Multiple mass spectrometric techniques, including glow discharge mass spectrometry and direct analysis in real time mass spectrometry, rely on Penning ionization. Glow discharge mass spectrometry is used for the direct determination of trace elements in solid samples. 
Ionization in the glow discharge occurs via two mechanisms: direct electron impact ionization and Penning ionization. Processes inherent to the glow discharge, namely cathodic sputtering coupled with Penning ionization, yield an ion population from which semi-quantitative results can be directly obtained. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E = E_\\text{m} + IE" } ]
https://en.wikipedia.org/wiki?curid=8263115
8265280
12 Ophiuchi
Star in the constellation Ophiuchus 12 Ophiuchi is a variable star in the constellation Ophiuchus. No companions have yet been detected in orbit around this star, and it remains uncertain whether or not it possesses a dust ring. This star is categorized as a BY Draconis variable, with variable star designation V2133. The variability is attributed to large-scale magnetic activity on the chromosphere (in the form of starspots) combined with a rotational period that moved the active regions into (and out of) the line of sight. This results in low amplitude variability of 12 Ophiuchi's luminosity. The star also appears to display rapid variation in luminosity, possibly due to changes in the starspots. Measurements of the long-term variability show two overlapping cycles of starspot activity (compared to the Sun's single, 11-year cycle). The periods of these two cycles are 4.0 and 17.4 years. This star is among the top 100 target stars for NASA's planned Terrestrial Planet Finder mission. However, the mission is now postponed indefinitely. Its abundance of heavy elements (elements heavier than helium) is nearly identical to that of the Sun. The surface gravity is equal to formula_0, which is somewhat higher than the Sun's. The space velocity is 30 km/s relative to the Solar System. The high rotation rate and active chromosphere are indicative of a relatively young star. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\log(g) = 4.6" } ]
https://en.wikipedia.org/wiki?curid=8265280
8265328
Proto-Indo-European nominals
Category of words in Proto-Indo-European Proto-Indo-European nominals include nouns, adjectives, and pronouns. Their grammatical forms and meanings have been reconstructed by modern linguists, based on similarities found across all Indo-European languages. This article discusses nouns and adjectives; Proto-Indo-European pronouns are treated elsewhere. The Proto-Indo-European language (PIE) had eight or nine cases, three numbers (singular, dual and plural) and probably originally two genders (animate and neuter), with the animate later splitting into the masculine and the feminine. Nominals fell into multiple different declensions. Most of them had word stems ending in a consonant (called athematic stems) and exhibited a complex pattern of accent shifts and/or vowel changes (ablaut) among the different cases. Two declensions ended in a vowel (*"-o/e-") and are called "thematic"; they were more regular and became more common during the history of PIE and its older daughter languages. PIE very frequently derived nominals from verbs. Just as English "giver" and "gift" are ultimately related to the verb "give", *"déh₃tors" 'giver' and *"déh₃nom" 'gift' are derived from *"deh₃-" 'to give', but the practice was much more common in PIE. For example, *"pṓds" 'foot' was derived from *"ped-" 'to tread', and *"dómh₂s" 'house' from *"demh₂-" 'to build'. Morphology. The basic structure of Proto-Indo-European nouns and adjectives was the same as that of PIE verbs. A lexical word (as would appear in a dictionary) was formed by adding a "suffix" (S) onto a "root" (R) to form a "stem". The word was then inflected by adding an ending (E) to the stem. The root indicates a basic concept, often a verb (e.g. *"deh₃-" 'give'), while the stem carries a more specific nominal meaning based on the combination of root and suffix (e.g. *"déh₃-tor-" 'giver', *"déh₃-o-" 'gift'). Some stems cannot clearly be broken up into root and suffix altogether, as in *"h₂r̥tḱo-" 'bear'. The ending carries grammatical information, including case, number, and gender. Gender is an inherent property of a noun but is part of the inflection of an adjective, because it must agree with the gender of the noun it modifies. Thus, the general morphological form of such words is R+S+E: formula_0 The process of forming a lexical stem from a root is known in general as derivational morphology, while the process of inflecting that stem is known as inflectional morphology. As in other languages, the possible suffixes that can be added to a given root, and the meaning that results, are not entirely predictable, while the process of inflection is largely predictable in both form and meaning. Originally, extensive ablaut (vowel variation, between *"e", *"o", *"ē", *"ō" and "Ø", i.e. no vowel) occurred in PIE, in both derivation and inflection and in the root, suffix, and ending. Variation in the position of the accent likewise occurred in both derivation and inflection, and is often considered part of the ablaut system (which is described in more detail below). For example, the nominative form *"léymons" 'lake' (composed of the root *"ley-" in the ablaut form *"léy-", the suffix in the form *"-mon-" and the ending in the form *"-s") had the genitive *"limnés" (root form *"li-", suffix *"-mn-" and ending *"-és"). In this word, the nominative has the ablaut vowels *"é–o–Ø" while the genitive has the ablaut vowels *"Ø–Ø–é" — i.e. all three components have different ablaut vowels, and the stress position has also moved. 
A large number of different patterns of ablaut variation existed; speakers had to both learn the ablaut patterns and memorize which pattern went with which word. There was a certain regularity of which patterns occurred with which suffixes and formations, but with many exceptions. Already by late PIE times, this system was extensively simplified, and daughter languages show a steady trend towards more and more regularization and simplification. Far more simplification occurred in the late PIE nominal system than in the verbal system, where the original PIE ablaut variations were maintained essentially intact well into the recorded history of conservative daughter languages such as Sanskrit and Ancient Greek, as well as in the Germanic languages (in the form of strong verbs). Root nouns. PIE also had a class of monosyllabic "root nouns" which lack a suffix, the ending being directly added to the root (as in *"dómh₂-s" 'house', derived from *"demh₂-" 'build'). These nouns can also be interpreted as having a zero suffix or one without a phonetic body (*"dóm-Ø-s"). Verbal stems have corresponding morphological features, the root present and the root aorist. Complex nominals. Not all nominals fit the basic R+S+E pattern. Some were formed with additional prefixes. An example is *"ni-sd-ó-s" 'nest', derived from the verbal root *"sed-" 'sit' by adding a local prefix and thus meaning "where [the bird] sits down" or the like. A special kind of prefixation, called "reduplication", uses the first part of the root plus a vowel as a prefix. For example, *"kʷelh₁-" 'turn' gives *kʷe-kʷl(h₁)-ó-s" 'wheel', and *"bʰrew-" 'brown' gives *bʰé-bʰru-s" 'beaver'. This type of derivation is also found in verbs, mainly to form the perfect. As with PIE verbs, a distinction is made between "primary formations", which are words formed directly from a root as described above, and "secondary formations", which are formed from existing words (whether primary or secondary themselves). Athematic and thematic nominals. A fundamental distinction is made between "thematic" and "athematic" nominals. The stem of athematic nominals ends in a consonant. They have the original complex system of accent/ablaut alternations described above and are generally held as more archaic. Thematic nominals, which became more and more common during the times of later PIE and its younger daughter languages, have a stem ending in a "thematic vowel", *"-o-" in almost all grammatical cases, sometimes ablauting to *"-e-". Since all roots end in a consonant, all thematic nominals have suffixes ending in a vowel, and none are root nouns. The accent is fixed on the same syllable throughout the inflection. From the perspective of the daughter languages, a distinction is often made between "vowel" stems (that is, stems ending in a vowel: "i-", "u-", "(y)ā-", "(y)o-"stems) and "consonantic" stems (the rest). However, from the PIE perspective, only the thematic ("o-")stems are truly vocalic. Stems ending in *"i" or *"u" such as *"men-ti-" are consonantic (i.e. athematic) because the *"i" is just the vocalic form of the glide *"y", the full grade of the suffix being *"-tey-". Post-PIE "ā" was actually *"eh₂" in PIE. Among the most common athematic stems are root stems, "i"-stems, "u"-stems, "eh₂"-stems, "n"-stems, "nt"-stems, "r"-stems and "s"-stems. Within each of these, numerous subclasses with their own inflectional peculiarities developed by late PIE times. Grammatical categories. 
PIE nouns and adjectives (as well as pronouns) are subject to the system of PIE nominal inflection with eight or nine cases: nominative, accusative, vocative, genitive, dative, instrumental, ablative, locative, and possibly a directive or allative. The so-called "strong" or "direct" cases are the nominative and the vocative for all numbers, and the accusative case for singular and dual (and possibly plural as well), and the rest are the "weak" or "oblique" cases. This classification is relevant for inflecting the athematic nominals of different accent and ablaut classes. Number. Three numbers were distinguished: singular, dual and plural. Many (possibly all) athematic neuter nouns had a special collective form instead of the plural, which inflected with singular endings, but with the ending *"-h₂" in the direct cases, and an amphikinetic accent/ablaut pattern (see below). Gender. Late PIE had three genders, traditionally called "masculine", "feminine" and "neuter". Gender or "noun class" is an inherent (lexical) property of each noun; all nouns in a language that has grammatical genders are assigned to one of its classes. There was probably originally only an animate (masculine/feminine) versus an inanimate (neuter) distinction. This view is supported by the existence of certain classes of Latin and Ancient Greek adjectives which inflect only for two sets of endings: one for masculine and feminine, the other for neuter. Further evidence comes from the Anatolian languages such as Hittite which exhibit only the animate and the neuter genders. The feminine ending is thought to have developed from a collective/abstract suffix *"-h₂" that also gave rise to the neuter collective. The existence of combined collective and abstract grammatical forms can be seen in English words such as "youth" = "the young people (collective)" or "young age (abstract)". Remnants of this period exist in (for instance) the "eh₂"-stems, "ih₂"-stems, "uh₂"-stems and bare "h₂"-stems, which are found in daughter languages as "ā-", "ī-", "ū-" and "a-"stems, respectively. They originally were the feminine equivalents of the "o"-stems, "i"-stems, "u"-stems and root nouns. Already by late PIE times, however, this system was breaking down. *"-eh₂" became generalized as the feminine suffix, and "eh₂"-stem nouns evolved more and more in the direction of thematic "o"-stems, with fixed ablaut and accent, increasingly idiosyncratic endings and frequent borrowing of endings from the "o"-stems. Nonetheless, clear traces of the earlier system are seen especially in Sanskrit, where "ī"-stems and "ū"-stems still exist as distinct classes comprising largely feminine nouns. Over time, these stem classes merged with "i"-stems and "u"-stems, with frequent crossover of endings. Grammatical gender correlates only partially with sex, and almost exclusively when it relates to humans and domesticated animals. Even then, those correlations may not be consistent: nouns referring to adult males are usually masculine ("father", "brother", "priest"), nouns referring to adult females ("mother", "sister", "priestess") are usually feminine, but diminutives may be neuter regardless of referent, as in both Greek and German. Gender may have also had a grammatical function, a change of gender within a sentence signaling the end of a noun phrase (a head noun and its agreeing adjectives) and the start of a new one. 
An alternative hypothesis to the two-gender view is that Proto-Anatolian inherited a three-gender PIE system, and subsequently Hittite and other Anatolian languages eliminated the feminine by merging it with the masculine. Case endings. Some endings are difficult to reconstruct and not all authors reconstruct the same sets of endings. For example, the original form of the genitive plural is a particular thorny issue, because different daughter languages appear to reflect different proto-forms. It is variously reconstructed as *"-ōm", *"-om", *"-oHom", and so on. Meanwhile, the dual endings of cases other than the merged nominative/vocative/accusative are often considered impossible to reconstruct because these endings are attested sparsely and diverge radically in different languages. The following shows three modern mainstream reconstructions. Sihler (1995) remains closest to the data, often reconstructing multiple forms when daughter languages show divergent outcomes. Ringe (2006) is somewhat more speculative, willing to assume analogical changes in some cases to explain divergent outcomes from a single source form. Fortson (2004) is between Sihler and Ringe. The thematic vowel *"-o-" ablauts to *"-e-" only in word-final position in the vocative singular, and before *"h₂" in the neuter nominative and accusative plural. The vocative singular is also the only case for which the thematic nouns show "accent retraction", a leftward shift of the accent, denoted by *"-ĕ". †The dative, instrumental and ablative plural endings probably contained a *"bʰ" but are of uncertain structure otherwise. They might also have been of post-PIE date. §For athematic nouns, an "endingless locative" is reconstructed in addition to the ordinary locative singular in *"-i". In contrast to the other weak cases, it typically has full or lengthened grade of the stem. An alternative reconstruction is found in Beekes (1995). This reconstruction does not give separate tables for the thematic and athematic endings, assuming that they were originally the same and only differentiated in daughter languages. Athematic accent/ablaut classes. There is a general consensus as to which nominal accent-ablaut patterns must be reconstructed for Proto-Indo-European. Given that the foundations for the system were laid by a group of scholars (Schindler, Eichner, Rix, and Hoffmann) during the 1964 "Erlanger Kolloquium", which discussed the works of Pedersen and Kuiper on nominal accent-ablaut patterns in PIE, the system is sometimes referred to as the "Erlangen model". Early PIE. Early PIE nouns had complex patterns of ablation according to which the root, the stem and the ending all showed ablaut variations. Polysyllabic athematic nominals (type R+S+E) exhibit four characteristic patterns, which include accent and ablaut alternations throughout the paradigm between the root, the stem and the ending. Root nouns (type R+E) show a similar behavior but with only two patterns. The patterns called "Narten" are, at least formally, analogous to the Narten presents in verbs, as they alternate between full (*"e") and lengthened grades (*"ē"). Notes: The classification of the amphikinetic root nouns is disputed. Since those words have no suffix, they differ from the amphikinetic polysyllables in the strong cases (no "o"-grade) and in the locative singular (no "e"-grade suffix). Some scholars prefer to call them amphikinetic and the corresponding polysyllables "holokinetic" (or "holodynamic", from holos = whole). 
Some also list "mesostatic" (meso = middle) and "teleutostatic" types, with the accent fixed on the suffix and the ending, respectively, but their existence in PIE is disputed. The classes can then be grouped into three "static" (acrostatic, mesostatic, teleutostatic) and three or four "mobile" (proterokinetic, hysterokinetic, amphikinetic, holokinetic) paradigms. "Late PIE". By late PIE, the above system had been already significantly eroded, with one of the root ablaut grades tending to be extended throughout the paradigm. The erosion is much more extensive in all the daughter languages, with only the oldest stages of most languages showing any root ablaut and typically only in a small number of irregular nouns: The most extensive remains are in Vedic Sanskrit and Old Avestan (the oldest recorded stages of the oldest Indic and Iranian languages, c. 1700–1300 BC); the younger stages of the same languages already show extensive regularization. In many cases, a former ablauting paradigm was generalized in the daughter languages but in different ways in each language. For example, Ancient Greek "dóru" 'spear' &lt; PIE nominative *"dóru" 'wood, tree' and Old English "trēo" 'tree' &lt; PIE genitive *"dreu-s" reflect different stems of a PIE ablauting paradigm *"dóru", *"dreus", which is still reflected directly in Vedic Sanskrit nom. "dā́ru" 'wood', gen. "drṓs". Similarly, PIE *"ǵónu", *"ǵnéus" can be reconstructed for 'knee' from Ancient Greek "gónu" and Old English "cnēo". In that case, there is no extant ablauting paradigm in a single language, but Avestan accusative "žnūm" and Modern Persian "zānū" are attested, which strongly implies that Proto-Iranian had an ablauting paradigm. That is quite possible for Avestan as well, but that cannot be certain since the nominative is not extant. An ablauting paradigm *"pōds", *"ped-" can also clearly be reconstructed from 'foot', based on Greek "pous" gen. "podós" (&lt; *"pō(d)s", *"pod-") vs. Latin "pēs" gen. "pedis" (&lt; *"ped-") vs. Old English "fōt" (&lt; *"pōd-"), with differing ablaut grades among cognate forms in different languages. In some cases, ablaut would be expected based on the form (given numerous other examples of ablauting nouns of the same form), but a single ablaut variant is found throughout the paradigm. In such cases, it is often assumed that the noun had showed ablaut in early PIE but was generalized to a single form by late PIE or shortly afterwards. An example is Greek "génus" 'chin, jaw', Sanskrit "hánus" 'jaw', Latin "gena" 'cheek', Gothic "kinnus" 'cheek'. All except the Latin form suggest a masculine "u"-stem with non-ablauting PIE root *"ǵen-", but certain irregularities (the position of the accent, the unexpected feminine "ā"-stem form in Latin, the unexpected Gothic stem "kinn-" &lt; "ǵenw-", the ablaut found in Greek "gnáthos" 'jaw' &lt; PIE *"ǵnHdʰ-", Lithuanian "žándas" 'jawbone' &lt; *"ǵonHdʰ-os") suggest an original ablauting neuter noun *"ǵénu", *"ǵnéus" in early PIE. It generalized the nominative ablaut in late PIE and switched to the masculine "u"-stem in the post-PIE period. Another example is *"nokʷts" 'night'; an acrostatic root paradigm might be expected based on the form, but the consistent stem *"nokʷt-" is found throughout the family. With the discovery of Hittite, however, the form *"nekʷts" 'in the evening' was found, which is evidently a genitive; it indicates that early PIE actually had an acrostatic paradigm that was regularized by late PIE but after the separation of Hittite. Leiden model. 
Kuiper's student Beekes, together with his colleague Kortlandt, developed an alternative model on the basis of Pedersen's and Kuiper's works, described in detail in . Since the scholars who developed it and generally accept it are mostly from the University of Leiden, it is generally dubbed the "Leiden model". It states that for earlier PIE, three accent types of inflection of consonant stems are to be reconstructed, and from them, all of the attested types can be derived: For root nouns, accentuation could have been either static or mobile: The thematic stem type was a recent innovation, with a thematic vowel *-o- originating from the hysterodynamic genitive singular form of athematic inflection, which had in pre-PIE the function of ergative. That is why there are "o"-stems but no "e"-stems and is suggested to be why thematic nouns show no ablaut or accentual mobility in inflection (for other theories on the origin of thematic vowel see Thematic vowel: Origin in nouns). The general points of departure to the Erlangen model are: Heteroclitic stems. Some athematic noun stems have different final consonants in different cases and are termed "heteroclitic stems". Most of the stems end in *"-r-" in the nominative and accusative singular, and in *"-n-" in the other cases. An example of such "r/n"-stems is the acrostatic neuter *"wód-r̥" 'water', genitive *"wéd-n̥-s". The suffixes *"-mer/n-", *"-ser/n-", *"-ter/n-" and *"-wer/n-" are also attested, as in the probably-proterokinetic *"péh₂-wr̥" 'fire', genitive *"ph₂-wén-s" or similar. An "l/n"-stem is *"séh₂-wl̥" or *"seh₂-wōl" 'sun', genitive *"sh₂-wén-s" or the like. Derivation. PIE had a number of ways to derive nominals from verbs or from other nominals. These included Accent/ablaut alternations. From athematic nouns, derivatives could be created by shifting the accent to the right and thus switching to another accent/ablaut class: acrostatic to proterokinetic or amphikinetic, proterokinetic to amphikinetic or hysterokinetic, and so on. Such derivations signified "possessing, associated with". An example is proterokinetic *"bʰléǵʰ-mn̥", *"bʰl̥ǵʰ-mén-s" 'sacred formulation' (Vedic "bráhmaṇ-"), from which amphikinetic *"bʰléǵʰ-mō(n)", *"bʰl̥ǵʰ-mn-es" 'priest' (Vedic "brahmáṇ-") was derived. Another ablaut alternation is *"ḱernes" 'horned' from *"ḱernos" 'horn, roe'. Many PIE adjectives formed this way were subsequently nominalized in daughter languages. Thematic nominals could also be derived by accent or ablaut changes. Leftward shift of the accent could turn an agentive word into a resultative one, for example *"tomós" 'sharp', but *"tómos" 'a slice' (from *"tem-" 'to cut'); *"bʰorós" 'carrier', but *"bʰóros" 'burden' (from *"bʰer-" 'carry'). A special type of ablaut alternation was vṛddhi derivation, which typically lengthened a vowel, signifying "of, belonging to, descended from". Affixation. These are some of the nominal affixes found in Proto-Indo-European Compounding. PIE had a number of possibilities to compound nouns. Endocentric or determinative compounds denote subclasses of their head (usually the second part), as in English "small"talk"" or "black"bird"". Exocentric or possessive compounds, usually called bahuvrihis, denote something possessing something, as in "Flatfoot = [somebody] having flat feet" or "redthroat = [a bird] with a red throat". This type was much more common in old Indo-European languages; some doubt the existence of determinative compounds in PIE altogether. 
Compounds consisting of a nominal plus a verb (akin to English "cowherd") were common; those of a verb plus a nominal ("pickpocket"), less so. Other parts of speech also occurred as first part of compounds, such as prepositions, numerals (*"tri-" from *"tréyes" 'three'), other particles (*"n̥-", zero grade of *"ne" 'not', seen in English "un-", Latin "in-", Greek "a(n)-") and adjectives (*"drḱ-h₂ḱru" 'tear', literally 'bitter-eye'). Adjectives. Adjectives in PIE generally have the same form as nouns, although when paradigms are gender-specific more than one may be combined to form an adjectival paradigm, which must be declined for gender as well as number and case. The main example of this is the "o/eh₂"-stem adjectives, which have masculine forms following masculine "o"-stems (*"-os"), feminine forms following "eh₂"-stems and neuter forms following neuter "o"-stems (*"-om"). Caland-system adjectives. A number of adjectival roots form part of the Caland system, named after Dutch Indologist Willem Caland, who first formulated part of the system. The cognates derived from these roots in different daughter languages often do not agree in formation, but show certain characteristic properties: Comparison. Comparative. The comparative form ("bigger, more beautiful") could be formed by replacing an adjective's suffix with *"-yos-"; the resulting word is amphikinetic: *"meǵ-no-" 'big' (Latin "magnus") → *"méǵ-yos-" 'bigger' (Latin "maior, maius"), weak cases *"meǵ-is-". A second suffix, *"-tero-", originally expressed contrast, as in Ancient Greek "pó-tero-s" 'which (of two)' or "deksi-teró-s" 'right (as opposed to left)'. It later attained comparative function. For example, the meaning of Ancient Greek "sophṓteros" 'wiser, the wiser one' developed from 'the wise one (of the two)'. English "far-ther" also contains this suffix. Superlative. PIE probably expressed the superlative ("biggest, most beautiful") by adding a genitive plural noun to the adjective. Instead of 'the greatest of the gods', people said 'great of (=among) the gods'. Still, two suffixes have been reconstructed that have superlative meaning in daughter languages: one is *"-m̥mo-" or *"-m̥h₂o-", the other *"-isto-" or *"-isth₂o-", composed of the zero grade of the comparative suffix plus an additional syllable. They are generalisations of the ordinal numbers. Sample declensions. The following are example declensions of a number of different types of nouns, based on the reconstruction of Ringe (2006). The last two declensions, the o-stems, are thematic, and all others are athematic. Morpheme boundaries (boundaries between root, suffix, and ending) are given only in the nominative singular. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\underbrace{\\underbrace{\\mathrm{root+suffix}}_{\\mathrm{stem}} + \\mathrm{ending}}_{\\mathrm{word}}\n" } ]
https://en.wikipedia.org/wiki?curid=8265328
826539
YDbDr
Colour space used in the SECAM analog color TV standard YDbDr, sometimes written formula_3, is the colour space used in the SECAM (adopted in France and some countries of the former Eastern Bloc) analog colour television broadcasting standard. It is very close to YUV (used on the PAL system) and its related colour spaces such as YIQ (used on the NTSC system), YPbPr and YCbCr. formula_3 is composed of three components: formula_0, formula_1 and formula_2. formula_0 is the luminance, formula_1 and formula_2 are the chrominance components, representing the red and blue colour differences. Formulas. The three component signals are created from an original formula_4 (red, green and blue) source. The weighted values of formula_5, formula_6 and formula_7 are added together to produce a single formula_0 signal, representing the overall brightness, or luminance, of that spot. The formula_1 signal is then created by subtracting the formula_0 from the blue signal of the original formula_4, and then scaling; and formula_2 by subtracting the formula_0 from the red, and then scaling by a different factor. These formulae approximate the conversion between the RGB colour space and formula_3. formula_8 From RGB to YDbDr: formula_9 From YDbDr to RGB: formula_10 You may note that the formula_0 component of formula_3 is the same as the formula_0 component of formula_0formula_11formula_12. formula_1 and formula_2 are related to the formula_11 and formula_12 components of the YUV colour space as follows: formula_13 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
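A minimal Python sketch of the conversion, using the coefficients from the RGB/YDbDr formulas in this article (the function names are illustrative and not part of any standard library):

# Minimal sketch: RGB <-> YDbDr conversion with the coefficients quoted in the article.
# R, G, B and Y are expected in [0, 1]; Db and Dr come out in roughly [-1.333, 1.333].

def rgb_to_ydbdr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    db = -0.450 * r - 0.883 * g + 1.333 * b
    dr = -1.333 * r + 1.116 * g + 0.217 * b
    return y, db, dr

def ydbdr_to_rgb(y, db, dr):
    r = y + 0.000092303716148 * db - 0.525912630661865 * dr
    g = y - 0.129132898890509 * db + 0.267899328207599 * dr
    b = y + 0.664679059978955 * db - 0.000079202543533 * dr
    return r, g, b

if __name__ == "__main__":
    # Pure red should give Y ≈ 0.299, Db ≈ -0.450, Dr ≈ -1.333,
    # and the round trip should recover the original values up to rounding.
    y, db, dr = rgb_to_ydbdr(1.0, 0.0, 0.0)
    print(y, db, dr)
    print(ydbdr_to_rgb(y, db, dr))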
[ { "math_id": 0, "text": "Y" }, { "math_id": 1, "text": "D_B" }, { "math_id": 2, "text": "D_R" }, { "math_id": 3, "text": "YD_BD_R" }, { "math_id": 4, "text": "RGB" }, { "math_id": 5, "text": "R" }, { "math_id": 6, "text": "G" }, { "math_id": 7, "text": "B" }, { "math_id": 8, "text": "\\begin{align}\nR, G, B, Y &\\in \\left[ 0, 1 \\right]\\\\\nD_B, D_R &\\in \\left[ -1.333, 1.333 \\right]\\end{align}" }, { "math_id": 9, "text": "\\begin{align}\nY &= +0.299 R +0.587 G +0.114 B\\\\\nD_B &= -0.450 R -0.883 G +1.333 B\\\\\nD_R &= -1.333 R +1.116 G +0.217B\\\\\n\\begin{bmatrix} Y \\\\ D_B \\\\ D_R \\end{bmatrix} &=\n\\begin{bmatrix} 0.299 & 0.587 & 0.114 \\\\ \n-0.450 & -0.883 & 1.333 \\\\ \n-1.333 & 1.116 & 0.217 \\end{bmatrix}\n\\begin{bmatrix} R \\\\ G \\\\ B \\end{bmatrix}\\end{align}" }, { "math_id": 10, "text": "\\begin{align}\nR &= Y +0.000092303716148 D_B -0.525912630661865 D_R\\\\\nG &= Y -0.129132898890509 D_B +0.267899328207599 D_R\\\\\nB &= Y +0.664679059978955 D_B -0.000079202543533 D_R\\\\\n\\begin{bmatrix} R \\\\ G \\\\ B \\end{bmatrix} &=\n\\begin{bmatrix} 1 & 0.000092303716148 & -0.525912630661865 \\\\\n1 & -0.129132898890509 & 0.267899328207599 \\\\\n1 & 0.664679059978955 & -0.000079202543533 \\end{bmatrix}\n\\begin{bmatrix} Y \\\\ D_B \\\\ D_R \\end{bmatrix}\\end{align}" }, { "math_id": 11, "text": "U" }, { "math_id": 12, "text": "V" }, { "math_id": 13, "text": "\\begin{align}\nD_B &= + 3.059 U\\\\\nD_R &= - 2.169 V\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=826539
826546
YPbPr
Color space used in analog video YPbPr or Y'PbPr, also written as YPBPR, is a color space used in video electronics, in particular in reference to component video cables. Like YCBCR, it is based on gamma corrected RGB primaries; the two are numerically equivalent but YPBPR is designed for use in analog systems while YCBCR is intended for digital video. The EOTF (gamma correction) may be different from common sRGB EOTF and BT.1886 EOTF. Sync is carried on the Y channel and is a bi-level sync signal, but in HD formats a tri-level sync is used and is typically carried on all channels. YPBPR is commonly referred to as "component video" by manufacturers; however, there are many types of component video, most of which are some form of RGB. Some video cards come with video-in video-out (VIVO) ports for connecting to component video devices. Technical details. YPBPR is converted from the RGB video signal, which is split into three components: "Y", "PB", and "PR". There are other standards of YPBPR components derivation available: 1920x1035 uses SMPTE 240M (240M defined EOTF and uses SMPTE 170M primaries and white point) and 525 lines 60/1.001 Hz (SMPTE 273M) and 625 lines 50 Hz (ITU-R BT.1358) BT.601 matrix is used. To send a green signal as a fourth component is redundant, as it can be derived using the blue, red and luma information. When color signals were first added to the NTSC-encoded black and white video standard, the hue was represented by a phase shift of a color reference sub-carrier. "P" for phase information or phase shift has carried through to represent color information even in the case where there is no longer a phase shift used to represent hue. Thus, the YPBPR nomenclature derives from engineering metrics developed for the NTSC color standard. The same cables can be used for YPBPR and composite video. This means that the yellow, red, and white RCA connector cables commonly packaged with most audio/visual equipment can be used in place of the YPBPR connectors, provided the end user is careful to connect each cable to corresponding components at both ends. Also, many TVs use the green connection either for luma only or for composite video input. Since YPBPR is backwards compatible with the luminance portion of composite video even with just component video decoding one can still use composite video via this input, but only luma information will be displayed, along with the chroma dots. The same goes the other way around so long as 480i or 576i is used. YPBPR advantages. Signals using YPBPR offer enough separation that no color multiplexing is needed, so the quality of the extracted image is nearly identical to the pre-encoded signal. S-Video and composite video mix the signals together by means of electronic multiplexing. Signal degradation is typical for composite video, as most display systems are unable to completely separate the signals, though HDTVs tend to perform such separation better than most CRT units (see dot crawl). S-Video can mitigate some of these potential issues, as its luma is transmitted separately from chroma. Among consumer analog interfaces, only YPBPR and analog RGB component video are capable of carrying non-interlaced video and resolutions higher than 480i or 576i, up to 1080p for YPBPR. See also. Graphic chipsets that generate color internally based on YPBPR: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
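A minimal Python sketch of the derivation from gamma-corrected R'G'B' values in [0, 1], assuming BT.709 luma coefficients and the conventional scaling that keeps PB and PR within ±0.5; the scaling factors are an assumption here, and the helper names are illustrative:

# Sketch of Y'PbPr derivation, assuming BT.709 luma weights (Kr = 0.2126, Kb = 0.0722)
# and the usual scaling that places Pb and Pr in [-0.5, +0.5].

KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB  # 0.7152

def rgb_to_ypbpr(r, g, b):
    y = KR * r + KG * g + KB * b       # luma: weighted sum of R', G', B'
    pb = 0.5 * (b - y) / (1.0 - KB)    # scaled blue-difference signal
    pr = 0.5 * (r - y) / (1.0 - KR)    # scaled red-difference signal
    return y, pb, pr

def ypbpr_to_rgb(y, pb, pr):
    b = y + 2.0 * (1.0 - KB) * pb
    r = y + 2.0 * (1.0 - KR) * pr
    g = (y - KR * r - KB * b) / KG     # green is recovered, not transmitted
    return r, g, b

if __name__ == "__main__":
    print(rgb_to_ypbpr(1.0, 1.0, 1.0))  # white -> (1.0, 0.0, 0.0)
    print(rgb_to_ypbpr(0.0, 0.0, 1.0))  # blue  -> (0.0722, 0.5, ...)

Recovering green from Y, PB and PR in the inverse function illustrates why sending a separate green signal would be redundant.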
[ { "math_id": 0, "text": "Y = 0.2126 R + 0.7152 G + 0.0722 B" } ]
https://en.wikipedia.org/wiki?curid=826546
826617
Euclid's lemma
A prime divisor of a product divides one of the factors In algebra and number theory, Euclid's lemma is a lemma that captures a fundamental property of prime numbers: if a prime "p" divides the product "ab" of two integers "a" and "b", then "p" must divide at least one of those integers. For example, if "p" = 19, "a" = 133, "b" = 143, then "ab" = 133 × 143 = 19019, and since this is divisible by 19, the lemma implies that one or both of 133 or 143 must be as well. In fact, 133 = 19 × 7. The lemma first appeared in Euclid's "Elements", and is a fundamental result in elementary number theory. If the premise of the lemma does not hold, that is, if "p" is a composite number, its consequent may be either true or false. For example, in the case of "p" = 10, "a" = 4, "b" = 15, the composite number 10 divides "ab" = 4 × 15 = 60, but 10 divides neither 4 nor 15. This property is the key in the proof of the fundamental theorem of arithmetic. It is used to define prime elements, a generalization of prime numbers to arbitrary commutative rings. Euclid's lemma shows that in the integers irreducible elements are also prime elements. The proof uses induction so it does not apply to all integral domains. Formulations. Euclid's lemma is commonly used in the following equivalent form: if "p" is a prime number that divides the product "ab" and does not divide "a", then it divides "b". Euclid's lemma can be generalized as follows from prime numbers to any integers: if "n" divides the product "ab" and is coprime with "a", then "n" divides "b". This is a generalization because a prime number p is coprime with an integer a if and only if p does not divide a. History. The lemma first appears as proposition 30 in Book VII of Euclid's "Elements". It is included in practically every book that covers elementary number theory. The generalization of the lemma to integers appeared in Jean Prestet's textbook "Nouveaux Elémens de Mathématiques" in 1681. In Carl Friedrich Gauss's treatise "Disquisitiones Arithmeticae", the statement of the lemma is Euclid's Proposition 14 (Section 2), which he uses to prove the uniqueness of the decomposition of an integer into a product of prime factors (Theorem 16), admitting the existence as "obvious". From this existence and uniqueness he then deduces the generalization of prime numbers to integers. For this reason, the generalization of Euclid's lemma is sometimes referred to as Gauss's lemma, but some believe this usage is incorrect due to confusion with Gauss's lemma on quadratic residues. Proofs. The first two subsections are proofs of the generalized version of Euclid's lemma, namely that: "if n divides ab and is coprime with a then it divides b." The original Euclid's lemma follows immediately: if n is prime, then it either divides a or is coprime with a, and in the latter case the generalized version shows that it divides b. Using Bézout's identity. In modern mathematics, a common proof involves Bézout's identity, which was unknown at Euclid's time. Bézout's identity states that if "x" and "y" are coprime integers (i.e. they share no common divisors other than 1 and −1) there exist integers "r" and "s" such that formula_0 Let "a" and "n" be coprime, and assume that "n"|"ab". By Bézout's identity, there are "r" and "s" such that formula_1 Multiply both sides by "b": formula_2 The first term on the left is divisible by "n", and the second term is divisible by "ab", which by hypothesis is divisible by "n". Therefore their sum, "b", is also divisible by "n". By induction. The following proof is inspired by Euclid's version of the Euclidean algorithm, which proceeds by using only subtractions. Suppose that formula_3 and that n and a are coprime (that is, their greatest common divisor is 1). One has to prove that n divides b. 
Since formula_4 there is an integer q such that formula_5 Without loss of generality, one can suppose that n, q, a, and b are positive, since the divisibility relation is independent from the signs of the involved integers. For proving this by strong induction, we suppose that the result has been proved for all positive lower values of ab. There are three cases: If "n" = "a", coprimality implies "n" = 1, and n divides b trivially. If "n" &lt; "a", one has formula_6 The positive integers "a" – "n" and n are coprime: their greatest common divisor d must divide their sum, and thus divides both n and a. It results that "d" = 1, by the coprimality hypothesis. So, the conclusion follows from the induction hypothesis, since 0 &lt; ("a" – "n") "b" &lt; "ab". Similarly, if "n" &gt; "a" one has formula_7 and the same argument shows that "n" – "a" and a are coprime. Therefore, one has 0 &lt; "a" ("b" − "q") &lt; "ab", and the induction hypothesis implies that "n" − "a" divides "b" − "q"; that is, formula_8 for some integer. So, formula_9 and, by dividing by "n" − "a", one has formula_10 Therefore, formula_11 and by dividing by a, one gets formula_12 the desired result. Proof of "Elements". Euclid's lemma is proved at the Proposition 30 in Book VII of Euclid's "Elements". The original proof is difficult to understand as is, so we quote the commentary from . If four numbers be proportional, the number produced from the first and fourth is equal to the number produced from the second and third; and, if the number produced from the first and fourth be equal to that produced from the second and third, the four numbers are proportional. The least numbers of those that have the same ratio with them measures those that have the same ratio the same number of times—the greater the greater and the less the less. Numbers prime to one another are the least of those that have the same ratio with them. Any prime number is prime to any number it does not measure. If two numbers, by multiplying one another, make the same number, and any prime number measures the product, it also measures one of the original numbers. If "c", a prime number, measure "ab", "c" measures either "a" or "b".Suppose "c" does not measure "a".Therefore "c", "a" are prime to one another. [VII. 29]Suppose "ab"="mc".Therefore "c" : "a" = "b" : "m". [VII. 19]Hence [VII. 20, 21] "b"="nc", where "n" is some integer.Therefore "c" measures "b".Similarly, if "c" does not measure "b", "c" measures "a".Therefore "c" measures one or other of the two numbers "a", "b".Q.E.D. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Footnotes. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
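The Bézout-based proof above is effectively algorithmic: the extended Euclidean algorithm produces the integers "r" and "s", and the identity then exhibits "b" as a multiple of "n". A short Python sketch of this (a numerical illustration of the argument, not a proof; the function names are ad hoc):

# Numerical illustration of the Bézout-style argument: if gcd(n, a) == 1 and n | a*b,
# then from r*n + s*a == 1 we get b == r*n*b + s*(a*b), and both terms are multiples of n.

def extended_gcd(x, y):
    """Return (g, r, s) with g = gcd(x, y) and r*x + s*y == g."""
    if y == 0:
        return x, 1, 0
    g, r, s = extended_gcd(y, x % y)
    return g, s, r - (x // y) * s

def euclid_lemma_witness(n, a, b):
    """Given n | a*b with gcd(n, a) == 1, return b // n as built in the proof."""
    assert (a * b) % n == 0, "n must divide a*b"
    g, r, s = extended_gcd(n, a)
    assert g == 1, "n and a must be coprime"
    quotient = r * b + s * (a * b // n)   # b = n * quotient
    assert n * quotient == b
    return quotient

if __name__ == "__main__":
    # The article's example: 19 divides 133 * 143 = 19019; 19 is coprime with 143,
    # so it must divide 133 (indeed 133 = 19 * 7).
    print(euclid_lemma_witness(19, 143, 133))   # -> 7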
[ { "math_id": 0, "text": "\nrx+sy = 1.\n" }, { "math_id": 1, "text": "\nrn+sa = 1.\n" }, { "math_id": 2, "text": "\nrnb+sab = b.\n" }, { "math_id": 3, "text": "n \\mid ab" }, { "math_id": 4, "text": "n \\mid ab," }, { "math_id": 5, "text": "nq=ab." }, { "math_id": 6, "text": "n(q-b)=(a-n)b." }, { "math_id": 7, "text": "(n-a)q=a(b-q)," }, { "math_id": 8, "text": "b-q=r(n-a)" }, { "math_id": 9, "text": "(n-a)q=ar(n-a)," }, { "math_id": 10, "text": "q=ar." }, { "math_id": 11, "text": "ab=nq=anr," }, { "math_id": 12, "text": "b=nr," } ]
https://en.wikipedia.org/wiki?curid=826617
826647
Lebesgue's number lemma
Given a cover of a compact metric space, every sufficiently small subset is contained in some cover set In topology, the Lebesgue covering lemma is a useful tool in the study of compact metric spaces. Given an open cover of a compact metric space, a Lebesgue number of the cover is a number formula_0 such that every subset of formula_1 having diameter less than formula_2 is contained in some member of the cover. The existence of Lebesgue numbers for compact metric spaces is given by the Lebesgue covering lemma: If the metric space formula_3 is compact and an open cover of formula_1 is given, then the cover admits some Lebesgue number formula_0. The notion of a Lebesgue number itself is useful in other applications as well. Proof. Direct Proof. Let formula_4 be an open cover of formula_1. Since formula_1 is compact we can extract a finite subcover formula_5. If any one of the formula_6's equals formula_1 then any formula_7 will serve as a Lebesgue number. Otherwise for each formula_8, let formula_9, note that formula_10 is not empty, and define a function formula_11 by formula_12 Since formula_13 is continuous on a compact set, it attains a minimum formula_2. The key observation is that, since every formula_14 is contained in some formula_6, the extreme value theorem shows formula_0. Now we can verify that this formula_2 is the desired Lebesgue number. If formula_15 is a subset of formula_1 of diameter less than formula_2, choose formula_16 as any point in formula_15; then by definition of diameter, formula_17, where formula_18 denotes the ball of radius formula_2 centered at formula_16. Since formula_19 there must exist at least one formula_20 such that formula_21. But this means that formula_22 and so, in particular, formula_23. Proof by Contradiction. Suppose for contradiction that formula_1 is sequentially compact, formula_24 is an open cover of formula_1, and the Lebesgue number formula_2 does not exist. That is: for all formula_0, there exists formula_25 with formula_26 such that there does not exist formula_27 with formula_28. This enables us to perform the following construction: formula_29 formula_30 formula_31 formula_32 formula_31 Note that formula_33 for all formula_34, since formula_35. It is therefore possible by the axiom of choice to construct a sequence formula_36 in which formula_37 for each formula_20. Since formula_1 is sequentially compact, there exists a subsequence formula_38 (with formula_39) that converges to formula_40. Because formula_41 is an open cover, there exists some formula_42 such that formula_43. As formula_44 is open, there exists formula_45 with formula_46. Now we invoke the convergence of the subsequence formula_47: there exists formula_48 such that formula_49 implies formula_50. Furthermore, there exists formula_51 such that formula_52. Hence for all formula_53, we have formula_54 implies formula_55. Finally, define formula_56 such that formula_57 and formula_58. For all formula_59, notice that: formula_60 formula_61 formula_62 Hence formula_63 by the triangle inequality, which implies that formula_64. This yields the desired contradiction.
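The function formula_12 from the direct proof can be evaluated numerically for a concrete cover. The Python sketch below takes the interval [0, 1] with the two relatively open cover sets [0, 0.7) and (0.3, 1], approximates each complement by grid points, and minimises formula_13 over the grid; the reported value (about 0.2) is a Lebesgue number for this cover. The grid approximation is purely illustrative.

# Numerical sketch of the function from the direct proof, for X = [0, 1] covered by
# A1 = [0, 0.7) and A2 = (0.3, 1].  The minimum of f over X is a Lebesgue number.

def dist_to_set(x, points):
    return min(abs(x - p) for p in points)

def lebesgue_number(grid, complements):
    def f(x):
        return sum(dist_to_set(x, c) for c in complements) / len(complements)
    return min(f(x) for x in grid)

if __name__ == "__main__":
    n = 1001
    grid = [i / (n - 1) for i in range(n)]
    # Complements of the cover sets inside X: C1 = [0.7, 1], C2 = [0, 0.3].
    c1 = [x for x in grid if x >= 0.7]
    c2 = [x for x in grid if x <= 0.3]
    print(lebesgue_number(grid, [c1, c2]))   # ≈ 0.2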
[ { "math_id": 0, "text": "\\delta > 0" }, { "math_id": 1, "text": "X" }, { "math_id": 2, "text": "\\delta" }, { "math_id": 3, "text": "(X, d)" }, { "math_id": 4, "text": "\\mathcal U" }, { "math_id": 5, "text": "\\{A_1, \\dots, A_n\\} \\subseteq \\mathcal U" }, { "math_id": 6, "text": "A_i" }, { "math_id": 7, "text": " \\delta > 0 " }, { "math_id": 8, "text": "i \\in \\{1, \\dots, n\\}" }, { "math_id": 9, "text": "C_i := X \\smallsetminus A_i" }, { "math_id": 10, "text": "C_i" }, { "math_id": 11, "text": "f : X \\rightarrow \\mathbb R" }, { "math_id": 12, "text": "f(x) := \\frac{1}{n} \\sum_{i=1}^n d(x,C_i). " }, { "math_id": 13, "text": "f" }, { "math_id": 14, "text": "x" }, { "math_id": 15, "text": "Y" }, { "math_id": 16, "text": "x_0" }, { "math_id": 17, "text": "Y\\subseteq B_\\delta(x_0)" }, { "math_id": 18, "text": "B_\\delta(x_0)" }, { "math_id": 19, "text": "f(x_0)\\geq \\delta" }, { "math_id": 20, "text": "i" }, { "math_id": 21, "text": "d(x_0,C_i)\\geq \\delta" }, { "math_id": 22, "text": "B_\\delta(x_0)\\subseteq A_i" }, { "math_id": 23, "text": "Y\\subseteq A_i" }, { "math_id": 24, "text": "\\{ U_{\\alpha} \\mid \\alpha \\in J \\}" }, { "math_id": 25, "text": "A \\subset X" }, { "math_id": 26, "text": "\\operatorname{diam} (A) < \\delta" }, { "math_id": 27, "text": "\\beta \\in J" }, { "math_id": 28, "text": "A \\subset U_{\\beta}" }, { "math_id": 29, "text": "\\delta_{1} = 1, \\quad \\exists A_{1} \\subset X \\quad \\text{where} \\quad \\operatorname{diam} (A_{1}) < \\delta_{1} \\quad \\text {and} \\quad \\neg\\exists \\beta (A_{1} \\subset U_{\\beta})" }, { "math_id": 30, "text": "\\delta_{2} = \\frac{1}{2}, \\quad \\exists A_{2} \\subset X \\quad \\text{where} \\quad \\operatorname{diam} (A_{2}) < \\delta_{2} \\quad \\text{and} \\quad \\neg\\exists \\beta (A_{2} \\subset U_{\\beta})" }, { "math_id": 31, "text": "\\vdots" }, { "math_id": 32, "text": "\\delta_{k}=\\frac{1}{k}, \\quad \\exists A_{k} \\subset X \\quad \\text{where} \\quad \\operatorname{diam} (A_{k}) < \\delta_{k} \\quad \\text{and} \\quad \\neg\\exists \\beta (A_{k} \\subset U_{\\beta})" }, { "math_id": 33, "text": "A_{n} \\neq \\emptyset" }, { "math_id": 34, "text": " n \\in \\mathbb{Z}^{+}" }, { "math_id": 35, "text": "A_{n} \\not\\subset U_{\\beta}" }, { "math_id": 36, "text": "(x_{n})" }, { "math_id": 37, "text": "x_{i} \\in A_{i}" }, { "math_id": 38, "text": "\\{x_{n_{k}}\\}" }, { "math_id": 39, "text": "k \\in \\mathbb{Z}_{> 0}" }, { "math_id": 40, "text": "x_{0}" }, { "math_id": 41, "text": "\\{ U_{\\alpha} \\}" }, { "math_id": 42, "text": "\\alpha_{0} \\in J" }, { "math_id": 43, "text": "x_{0} \\in U_{\\alpha_{0}}" }, { "math_id": 44, "text": "U_{\\alpha_{0}}" }, { "math_id": 45, "text": "r > 0" }, { "math_id": 46, "text": "B_{d}(x_{0},r) \\subset U_{\\alpha_{0}}" }, { "math_id": 47, "text": " \\{ x_{n_{k}} \\} " }, { "math_id": 48, "text": " L \\in \\mathbb{Z}^{+}" }, { "math_id": 49, "text": " L \\le k" }, { "math_id": 50, "text": "x_{n_{k}} \\in B_{r/2} (x_{0})" }, { "math_id": 51, "text": "M \\in \\mathbb{Z}_{> 0}" }, { "math_id": 52, "text": " \\delta_{M}= \\tfrac{1}{M} < \\tfrac{r}{2} " }, { "math_id": 53, "text": "z \\in \\mathbb{Z}_{> 0}" }, { "math_id": 54, "text": "M \\le z" }, { "math_id": 55, "text": "\\operatorname{diam} (A_{M}) < \\tfrac{r}{2}" }, { "math_id": 56, "text": "q \\in \\mathbb{Z}_{> 0}" }, { "math_id": 57, "text": "n_{q} \\geq M" }, { "math_id": 58, "text": "q \\geq L" }, { "math_id": 59, "text": "x' \\in A_{n_{q}}" }, { "math_id": 60, "text": " d(x_{n_{q}},x') \\leq 
\\operatorname{diam} (A_{n_{q}})<\\frac{r}{2}" }, { "math_id": 61, "text": "d(x_{n_{q}},x_{0})<\\frac{r}{2}" }, { "math_id": 62, "text": "x_{n_{q}} \\in B_{r/2}\\left(x_{0}\\right)" }, { "math_id": 63, "text": "d(x_{0},x')<r" }, { "math_id": 64, "text": "A_{n_{q}} \\subset U_{\\alpha_{0}}" } ]
https://en.wikipedia.org/wiki?curid=826647
8267
Dimensional analysis
Analysis of the relationships between different physical quantities In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae. Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. "Incommensurable" physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless. Any physically meaningful equation, or inequality, "must" have the same dimensions on its left and right sides, a property known as "dimensional homogeneity". Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation. The concept of physical dimension, and of dimensional analysis, was introduced by Joseph Fourier in 1822. Formulation. The Buckingham π theorem describes how every physically meaningful equation involving "n" variables can be equivalently rewritten as an equation of "n" − "m" dimensionless parameters, where "m" is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables. A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis, and involves scaling quantities by characteristic units of a system or physical constants of nature. This may give insight into the fundamental properties of the system, as illustrated in the examples below. The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The "dimension" of a physical quantity is more fundamental than some "scale" or unit used to express the amount of that physical quantity. For example, "mass" is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary". There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols: time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J). The symbols are by convention usually written in roman sans serif typeface. 
Mathematically, the dimension of the quantity "Q" is given by formula_0 where "a", "b", "c", "d", "e", "f", "g" are the dimensional exponents. Other physical quantities could be defined as the base quantities, as long as they form a linearly independent basis – for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since Q = TI. A quantity that has only "b" ≠ 0 (with all other exponents zero) is known as a geometric quantity. A quantity that has only both "a" ≠ 0 and "b" ≠ 0 is known as a kinematic quantity. A quantity that has only all of "a" ≠ 0, "b" ≠ 0, and "c" ≠ 0 is known as a dynamic quantity. A quantity that has all exponents null is said to have dimension one. The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, 1 in = 2.54 cm; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity. There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity, although this does not invalidate the usefulness of dimensional analysis. Simple cases. As examples, the dimension of the physical quantity speed "v" is formula_1 The dimension of the physical quantity acceleration "a" is formula_2 The dimension of the physical quantity force "F" is formula_3 The dimension of the physical quantity pressure "P" is formula_4 The dimension of the physical quantity energy "E" is formula_5 The dimension of the physical quantity power "P" is formula_6 The dimension of the physical quantity electric charge "Q" is formula_7 The dimension of the physical quantity voltage "V" is formula_8 The dimension of the physical quantity capacitance "C" is formula_9 Rayleigh's method. In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh. The method involves the following steps: As a drawback, Rayleigh's method does not provide any information regarding number of dimensionless groups to be obtained as a result of dimensional analysis. Concrete numbers and base units. Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h. Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m2 for square metres), or combinations thereof. A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed. 
For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units. Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s−2). The newton is defined as 1 N = 1 kg⋅m⋅s−2. Percentages, derivatives and integrals. Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since 1% = 1/100. Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to. Thus: Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator. In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year). In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance) and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged. Dimensional homogeneity (commensurability). The most basic rule of dimensional analysis is that of dimensional homogeneity. &lt;templatestyles src="Block indent/styles.css"/&gt;Only commensurable quantities (physical quantities having the same dimension) may be "compared", "equated", "added", or "subtracted". However, the dimensions form an abelian group under multiplication, so: &lt;templatestyles src="Block indent/styles.css"/&gt;One may take "ratios" of "incommensurable" quantities (quantities with different dimensions), and "multiply" or "divide" them. For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h. The rule implies that in a physically meaningful "expression" only quantities of the same dimension can be added, subtracted, or compared. For example, if "m"man, "m"rat and "L"man denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression "m"man + "m"rat is meaningful, but the heterogeneous expression "m"man + "L"man is meaningless. However, "m"man/"L"2man is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions. 
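These rules translate directly into code: represent the dimension of a quantity as a tuple of exponents over a chosen basis, add the tuples on multiplication, subtract them on division, and allow addition or comparison only when the tuples agree. A minimal Python sketch over the basis (T, L, M), not a full units library:

# Minimal dimensional bookkeeping: a quantity is a numerical value plus a tuple of
# exponents over the base dimensions (T, L, M).  Multiplication adds the exponent
# tuples, division subtracts them, and addition requires identical tuples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple  # exponents of (T, L, M)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        tuple(a - b for a, b in zip(self.dims, other.dims)))

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"incommensurable dimensions {self.dims} and {other.dims}")
        return Quantity(self.value + other.value, self.dims)

if __name__ == "__main__":
    metre  = Quantity(1.0, (0, 1, 0))
    second = Quantity(1.0, (1, 0, 0))
    kg     = Quantity(1.0, (0, 0, 1))

    speed = Quantity(100.0, (0, 1, 0)) / Quantity(2.0, (1, 0, 0))
    print(speed)                     # 50 with dims (-1, 1, 0), i.e. T^-1 L
    force = kg * metre / (second * second)
    print(force)                     # dims (-2, 1, 1), i.e. T^-2 L M
    # metre + second raises TypeError: a length and a time are incommensurable.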
Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension , they are fundamentally different physical quantities. To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use 1 yard = 0.9144 m to convert 35 yards to 32.004 m. A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables. For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that a conversion factor between two units that measure the same dimension must take multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres. Conversion factor. In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a "conversion factor". For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, 5 bar × 100 kPa / 1 bar = 500 kPa because 5 × 100 / 1 = 500, and bar/bar cancels out, so 5 bar = 500 kPa. Applications. Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well. Mathematics. A simple application of dimensional analysis to mathematics is in computing the form of the volume of an "n"-ball (the solid ball in "n" dimensions), or the area of its surface, the "n"-sphere: being an "n"-dimensional figure, the volume scales as "x""n", while the surface area, being ("n" − 1)-dimensional, scales as "x""n"−1. Thus the volume of the "n"-ball in terms of the radius is "C""n""r""n", for some constant "C""n". Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone. Finance, economics, and accounting. In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios. Fluid mechanics. In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships. In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include: History. The origins of dimensional analysis have been disputed by historians. 
The first written application of dimensional analysis has been credited to François Daviet, a student of Lagrange, in a 1799 article at the Turin Academy of Science. This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually later formalized in the Buckingham π theorem. Simeon Poisson also treated the same problem of the parallelogram law by Daviet, in his treatise of 1811 and 1833 (vol I, p. 39). In the second edition of 1833, Poisson explicitly introduces the term "dimension" instead of the Daviet "homogeneity". In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions based on the idea that physical laws like "F" = "ma" should be independent of the units employed to measure the physical variables. James Clerk Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived. Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant "G" is taken as unity, thereby defining M = T−2L3. By assuming a form of Coulomb's law in which the Coulomb constant "k"e is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were Q = T−1L3/2M1/2, which, after substituting his M = T−2L3 equation for mass, results in charge having the same dimensions as mass, viz. Q = T−2L3. Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book "The Theory of Sound". The original meaning of the word "dimension", in Fourier's "Theorie de la Chaleur", was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time. This was slightly changed by Maxwell, who said the dimensions of acceleration are T−2L, instead of just the exponents. Examples. A simple example: period of a harmonic oscillator. What is the period of oscillation "T" of a mass m attached to an ideal linear spring with spring constant "k" suspended in gravity of strength "g"? That period is the solution for "T" of some dimensionless equation in the variables "T", "m", "k", and "g". The four quantities have the following dimensions: T [T]; m [M]; k [M/T2]; and "g" [L/T2]. From these we can form only one dimensionless product of powers of our chosen variables, "G"1 = "T"2"k"/"m" [T2 · M/T2 / M = 1], and putting "G"1 = "C" for some dimensionless constant "C" gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well. The variable g does not occur in the group. 
It is easy to see that it is impossible to form a dimensionless product of powers that combines g with k, m, and T, because g is the only quantity that involves the dimension L. This implies that in this problem the "g" is irrelevant. Dimensional analysis can sometimes yield strong statements about the "irrelevance" of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of "g": it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: "T" = "κ"√("m"/"k"), for some dimensionless constant "κ" (equal to formula_14 from the original dimensionless equation). When faced with a case where dimensional analysis rejects a variable ("g", here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here. When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as "κ". A more complex example: energy of a vibrating wire. Consider the case of a vibrating wire of length "ℓ" (L) vibrating with an amplitude "A" (L). The wire has a linear density "ρ" (M/L) and is under tension "s" (LM/T2), and we want to know the energy "E" (L2M/T2) in the wire. Let "π"1 and "π"2 be two dimensionless products of powers of the variables chosen, given by formula_15 The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation formula_16 where "F" is some unknown function, or, equivalently as formula_17 where "f" is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical work, we might proceed to experiments to discover the form for the unknown function "f". But our experiments are simpler than in the absence of dimensional analysis. We would need none of them to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to "ℓ", and so infer that "E" = "ℓs". The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident. The power of dimensional analysis really becomes apparent when it is applied to situations that are more complicated than those given above, where the set of variables involved is not apparent and the underlying equations are hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. 
In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis. A third example: demand versus capacity for a rotating disc. Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness "t" (L) and radius "R" (L). The disc has a density "ρ" (M/L3), rotates at an angular velocity "ω" (T−1) and this leads to a stress "S" (T−2L−1M) in the material. There is a theoretical linear elastic solution, given by Lame, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius then the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined though consideration of three-dimensional elasticity and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following (5 − 3 = 2) non-dimensional groups: demand/capacity = "ρR"2"ω"2/"S" thickness/radius or aspect ratio = "t"/"R" Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs. Properties. Mathematical properties. The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: The identity is written as 1; L0 = 1, and the inverse of L is 1/L or L−1. L raised to any integer power "p" is a member of the group, having an inverse of L−"p" or 1/L"p". The operation of the group is multiplication, having the usual rules for handling exponents (L"n" × L"m" = L"n"+"m"). Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second). An abelian group is equivalent to a module over the integers, with the dimensional symbol corresponding to the tuple ("i", "j", "k"). When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one other, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module. A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa). The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, (0, 0, 0). In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like "V""L"1/2. 
However, arbitrary fractional powers of units are not possible in general, due to representation-theoretic obstructions. One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions "M" and "L", one has the vector spaces "V""M" and "V""L", and can define "V""ML" := "V""M" ⊗ "V""L" as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions. This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar. The set of units of the physical quantities involved in a problem corresponds to a set of vectors (or a matrix). The nullity describes some number (e.g., "m") of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, {π1, ..., π"m"}. (In fact these ways completely span the null subspace of another, different space, namely that of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity "X" can be expressed in the general form formula_18 Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form formula_19 Knowing this restriction can be a powerful tool for obtaining new insight into the system. Mechanics. The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space, and be linearly independent. For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T2], L, M, while the latter can be expressed as [T = (LM/F)1/2], L, M. On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons: first, there is no way to obtain mass (or anything derived from it, such as force) without introducing another base dimension, so the set does not span the space; second, velocity, being expressible in terms of length and time (V = L/T), is redundant, so the set is not linearly independent. A small rank computation illustrating both points is sketched below.
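As an illustrative sketch (sympy is assumed here merely as a convenient tool; any exact linear-algebra routine would do), each candidate base dimension is written as an exponent vector over (T, L, M), and the candidate set is tested for full rank:
<syntaxhighlight lang="python">
# Illustrative sketch: rank test for candidate sets of base dimensions over (T, L, M).
from sympy import Matrix

# Columns are candidate base dimensions as exponent vectors (rows: T, L, M).
F_L_M = Matrix([[-2, 0, 0],    # F = T^-2 L M,  L,  M
                [ 1, 1, 0],
                [ 1, 0, 1]])
T_L_V = Matrix([[ 1, 0, -1],   # T,  L,  V = T^-1 L
                [ 0, 1,  1],
                [ 0, 0,  0]])  # no candidate involves M, so mass is unreachable

print(F_L_M.rank())  # 3: the set spans the space and is independent, hence a valid basis
print(T_L_V.rank())  # 2: V = L/T is redundant and M is missing, so not a basis
</syntaxhighlight>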
The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communication are common and necessary features. Polynomials and transcendental functions. Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor. This excludes polynomials of more than one term or transcendental functions not of that form. Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the squares of certain dimensioned quantities are dimensionless.) While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity log("a"/"b") = log "a" − log "b", where the logarithm is taken in any base, holds for dimensionless numbers "a" and "b", but it does "not" hold if "a" and "b" are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not. Similarly, while one can evaluate monomials ("x""n") of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for "x"2, the expression (3 m)2 = 9 m2 makes sense (as an area), while for "x"2 + "x", the expression (3 m)2 + 3 m = 9 m2 + 3 m does not make sense. However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example, formula_20 This is the height to which an object rises in time "t" if the acceleration of gravity is 9.8 metres per second per second and the initial upward speed is 500 metres per second. It is not necessary for "t" to be in "seconds". For example, suppose "t" = 0.01 minutes. Then the first term would be formula_21 Combining units and numerical values. The value of a dimensional physical quantity "Z" is written as the product of a unit ["Z"] within the dimension and a dimensionless numerical value or numerical factor, "n". formula_22 When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed: formula_23 is identical to formula_24 The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted. A toy sketch of this bookkeeping is given below.
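As a toy illustration (the function and conversion table below are hypothetical helpers written for this sketch, not a standard library), adding like-dimensioned quantities expressed in different units amounts to converting through a factor that is numerically the dimensionless 1, here 0.3048 m/ft:
<syntaxhighlight lang="python">
# Illustrative sketch: adding lengths expressed in different units via a conversion
# factor (0.3048 m/ft), a ratio of like-dimensioned quantities equal to the dimensionless 1.
TO_METRES = {"m": 1.0, "ft": 0.3048}  # length units only, for brevity

def add_lengths(value_a, unit_a, value_b, unit_b, result_unit="m"):
    """Convert both addends to metres, add the numerical values, express the result."""
    total_in_metres = value_a * TO_METRES[unit_a] + value_b * TO_METRES[unit_b]
    return total_in_metres / TO_METRES[result_unit], result_unit

print(add_lengths(1, "m", 1, "ft"))  # (1.3048, 'm') -- not "2" of anything
</syntaxhighlight>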
Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units. Quantity equations. A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities. In contrast, in a "numerical-value equation", just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical value is referenced to a specific unit. For example, a quantity equation for displacement "d" as speed "s" multiplied by time difference "t" would be: "d" = "s" "t", for "s" = 5 m/s, where "t" and "d" may be expressed in any units, converted if necessary. In contrast, a corresponding numerical-value equation would be: "D" = 5 "T", where "T" is the numeric value of "t" when expressed in seconds and "D" is the numeric value of "d" when expressed in metres. Generally, the use of numerical-value equations is discouraged. Dimensionless concepts. Constants. The dimensionless constants that arise in the results obtained, such as the "C" in the Poiseuille's Law problem and the "κ" in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc. Formalisms. Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point more and more closely, the distance over which the variables in the lattice model are correlated (the so-called correlation length, "χ") becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be ~ 1/"χ""d", where "d" is the dimension of the lattice. It has been argued by some physicists, e.g., Michael J. Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants "c", "ħ", and "G" that appear in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other. Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants "ħ", "c", and "G" (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit "c" → ∞, "ħ" → 0 and "G" → 0.
In problems involving a gravitational field the latter limit should be taken such that the field stays finite. Dimensional equivalences. Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force. Programming languages. Dimensional correctness as part of type checking has been studied since 1977. Implementations for Ada and C++ were described in 1985 and 1988. Kennedy's 1996 thesis describes an implementation in Standard ML, and later in F#. There are implementations for Haskell, OCaml, Rust, and Python, and a code checker for Fortran. Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices. McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure. Mathematica 13.2 has a function named NondimensionalizationTransform that applies a nondimensionalization transform to an equation. Mathematica also has a function named UnitDimensions that finds the dimensions of a unit such as 1 J. Mathematica also has a function named DimensionalCombinations that will find dimensionally equivalent combinations of a subset of physical quantities. Mathematica can also factor out certain dimensions with UnitDimensions by specifying an argument to the function UnityDimensions. For example, you can use UnityDimensions to factor out angles. In addition to UnitDimensions, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions. Geometry: position vs. displacement. Affine quantities. Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. In mathematics, scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin. While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change). Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable: adding two displacements yields a displacement, adding a displacement to a position yields a position, and subtracting two positions yields a displacement, but adding two positions is not meaningful. This illustrates the subtle distinction between "affine" quantities (ones modeled by an affine space, such as position) and "vector" quantities (ones modeled by a vector space, such as displacement). Properly then, positions have dimension of "affine" length, while displacements have dimension of "vector" length. To assign a number to an "affine" unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a "vector" unit only requires a unit of measurement. Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis. This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. A small numerical illustration of this point for temperature is given below.
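The sketch below (an illustration only; the function names are ad hoc) shows the affine/vector distinction for temperature: converting an absolute temperature needs both a scale factor and an origin shift, while converting a temperature difference needs only the scale factor.
<syntaxhighlight lang="python">
# Illustrative sketch: absolute temperatures are affine (the origin matters),
# temperature differences are vector-like (only the scale factor matters).
def c_to_f_absolute(t_celsius):
    return t_celsius * 9 / 5 + 32  # origin shift required: 0 degC corresponds to 32 degF

def c_to_f_difference(dt_celsius):
    return dt_celsius * 9 / 5      # no shift: a change of 1 degC is a change of 1.8 degF

print(c_to_f_absolute(-273.15))    # -459.67 (absolute zero)
print(c_to_f_difference(1))        # 1.8
</syntaxhighlight>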
For absolute zero, −273.15 °C ≘ 0 K = 0 °R ≘ −459.67 °F, where the symbol ≘ means "corresponds to", since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated. For temperature differences, 1 K = 1 °C ≠ 1 °F = 1 °R. (Here °R refers to the Rankine scale, not the Réaumur scale). Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K (although the ratio of corresponding absolute temperature values is not constant). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C. Orientation and frame of reference. Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a "direction". (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference. This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis. Huntley's extensions. Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank formula_25 of the dimensional matrix. He introduced two approaches: treating the magnitudes of the components of a vector as dimensionally independent quantities (directed dimensions), and treating mass as a measure of the quantity of matter as dimensionally independent from mass as a measure of inertia (quantity of matter). Directed dimensions. As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component formula_26 and a horizontal velocity component "v"x, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then R, the distance travelled, with dimension L; "v"x and "v"y, both dimensioned as T−1L; and g, the downward acceleration of gravity, with dimension T−2L. With these four quantities, we may conclude that the equation for the range R may be written: formula_27 Or dimensionally formula_28 from which we may deduce that formula_29 and "a" + "b" + 2"c" = 0, which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions T and L, and four parameters, with one equation. However, if we use directed length dimensions, then formula_30 will be dimensioned as T−1Lx, formula_31 as T−1Ly, R as Lx and g as T−2Ly. The dimensional equation becomes: formula_32 and we may solve completely as "a" = 1, "b" = 1 and "c" = −1. The increase in deductive power gained by the use of directed length dimensions is apparent. Huntley's concept of directed length dimensions however has some serious limitations: it does not deal well with vector equations involving the cross product, nor does it handle well the use of angles as physical variables. It also is often quite difficult to assign the L, Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: it is unclear to which parts of the problem the notion of "symmetry" is being applied.
Is it the symmetry of the physical body that the forces act upon, or of the points, lines or areas at which the forces are applied? What if more than one body is involved with different symmetries? Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems. Quantity of matter. In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia ("inertial mass"), and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity only proportional to inertial mass, while not implicating inertial properties. No further restrictions are added to its definition. For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables the mass flow rate (dimension T−1M), the pressure gradient along the pipe (dimension T−2L−2M), the density (dimension L−3M), the dynamic viscosity (dimension T−1L−1M), and the radius of the pipe (dimension L). There are three fundamental dimensions, so these five quantities will yield two independent dimensionless variables: formula_33 formula_34 If we distinguish between inertial mass with dimension formula_35 and quantity of matter with dimension formula_36, then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written: formula_37 where now only C is an undetermined constant (found to be equal to formula_38 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law. Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable. Siano's extension: orientational analysis. Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested). As an example, consider again the projectile problem in which a point mass is launched from the origin ("x", "y") = (0, 0) at a speed "v" and angle "θ" above the "x"-axis, with the force of gravity directed along the negative "y"-axis. It is desired to find the range "R", at which point the mass returns to the "x"-axis. Conventional analysis will yield the dimensionless variable "π" = "R" "g"/"v"2, but offers no insight into the relationship between "R" and "θ". Siano has suggested that the directed dimensions of Huntley be replaced by using "orientational symbols" 1x, 1y, 1z to denote vector directions, and an orientationless symbol 10.
Thus, Huntley's Lx becomes L1x with L specifying the dimension of length, and 1x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1"i"−1 = 1"i", the following multiplication table for the orientation symbols results: 10 acts as the identity, each directed symbol is its own inverse (1x1x = 1y1y = 1z1z = 10), and the product of two distinct directed symbols is the third (1x1y = 1z, 1y1z = 1x, 1z1x = 1y). The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1z. For angles, consider an angle θ that lies in the z-plane (the plane perpendicular to the z-axis, i.e. the xy-plane). Form a right triangle in that plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1x and the side opposite has an orientation 1y. Since (using ~ to indicate orientational equivalence) tan("θ") = "θ" + ... ~ 1y/1x we conclude that an angle in the xy-plane must have an orientation 1y/1x = 1z, which is not unreasonable. Analogous reasoning forces the conclusion that sin("θ") has orientation 1z while cos("θ") has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form "a" cos("θ") + "b" sin("θ"), where a and b are real scalars. An expression such as formula_39 is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written: formula_40 which for formula_41 and formula_42 yields sin("θ"1z + [π/2]1z) = 1zcos("θ"1z). Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is 10. The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd. As an example, for the projectile problem, using orientational symbols, "θ", being in the xy-plane, will thus have dimension 1z and the range of the projectile R will be of the form: formula_43 Dimensional homogeneity will now correctly yield "a" = −1 and "b" = 2, and orientational homogeneity requires that 1x = 1y"a"1z"c", which with "a" = −1 forces 1z"c" = 1z; in other words, "c" must be an odd integer. In fact, the required function of "θ" will be sin("θ")cos("θ"), which is a series consisting of odd powers of "θ". It is seen that the Taylor series of sin("θ") and cos("θ") are orientationally homogeneous using the above multiplication table, while expressions like cos("θ") + sin("θ") and exp("θ") are not, and are (correctly) deemed unphysical. A sketch of this orientational bookkeeping for the projectile example is given below.
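The orientational bookkeeping can be checked mechanically. The sketch below (an illustration only, not Siano's own notation) encodes the multiplication table of the orientational symbols, the Klein four-group, and confirms that with "a" = −1 and "b" = 2 fixed by dimensional homogeneity, the range formula is orientationally homogeneous only for odd powers "c" of "θ".
<syntaxhighlight lang="python">
# Illustrative sketch: orientational symbols as the Klein four-group, and the
# orientational constraint on the projectile range R = g^a v^b theta^c.
MULT = {
    ("1_0", "1_0"): "1_0", ("1_0", "1_x"): "1_x", ("1_0", "1_y"): "1_y", ("1_0", "1_z"): "1_z",
    ("1_x", "1_x"): "1_0", ("1_y", "1_y"): "1_0", ("1_z", "1_z"): "1_0",
    ("1_x", "1_y"): "1_z", ("1_y", "1_z"): "1_x", ("1_z", "1_x"): "1_y",
}

def mult(a, b):
    return MULT.get((a, b)) or MULT[(b, a)]  # the group is commutative

def power(symbol, n):
    result = "1_0"
    for _ in range(abs(n)):  # every symbol is its own inverse, so only |n| matters
        result = mult(result, symbol)
    return result

# R has orientation 1_x; g has 1_y; v has 1_0; theta has 1_z.  With a = -1, b = 2:
for c in range(5):
    orientation = mult(power("1_y", -1), power("1_z", c))
    print(c, orientation, orientation == "1_x")  # True only for odd c
</syntaxhighlight>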
Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\operatorname{dim}Q = \\mathsf{T}^a\\mathsf{L}^b\\mathsf{M}^c\\mathsf{I}^d\\mathsf{\\Theta}^e\\mathsf{N}^f\\mathsf{J}^g" }, { "math_id": 1, "text": "\\operatorname{dim}v\n= \\frac{\\text{length}}{\\text{time}}\n= \\frac{\\mathsf{L}}{\\mathsf{T}}\n= \\mathsf{T}^{-1}\\mathsf{L} ." }, { "math_id": 2, "text": "\\operatorname{dim}a\n= \\frac{\\text{speed}}{\\text{time}}\n= \\frac{\\mathsf{T}^{-1}\\mathsf{L}}{\\mathsf{T}}\n= \\mathsf{T}^{-2}\\mathsf{L} ." }, { "math_id": 3, "text": "\\operatorname{dim}F\n= \\text{mass} \\times \\text{acceleration}\n= \\mathsf{M} \\times \\mathsf{T}^{-2}\\mathsf{L}\n= \\mathsf{T}^{-2}\\mathsf{L}\\mathsf{M} ." }, { "math_id": 4, "text": "\\operatorname{dim}P\n= \\frac{\\text{force}}{\\text{area}}\n= \\frac{\\mathsf{T}^{-2}\\mathsf{L}\\mathsf{M}}{\\mathsf{L}^2}\n= \\mathsf{T}^{-2}\\mathsf{L}^{-1}\\mathsf{M} ." }, { "math_id": 5, "text": "\\operatorname{dim}E\n= \\text{force} \\times \\text{displacement}\n= \\mathsf{T}^{-2}\\mathsf{L}\\mathsf{M} \\times \\mathsf{L}\n= \\mathsf{T}^{-2}\\mathsf{L}^2\\mathsf{M} ." }, { "math_id": 6, "text": "\\operatorname{dim}P\n= \\frac{\\text{energy}}{\\text{time}}\n= \\frac{\\mathsf{T}^{-2}\\mathsf{L}^2\\mathsf{M}}{\\mathsf{T}}\n= \\mathsf{T}^{-3}\\mathsf{L}^2\\mathsf{M} ." }, { "math_id": 7, "text": "\\operatorname{dim}Q\n= \\text{current} \\times \\text{time}\n= \\mathsf{T}\\mathsf{I} ." }, { "math_id": 8, "text": "\\operatorname{dim}V\n= \\frac{\\text{power}}{\\text{current}}\n= \\frac{\\mathsf{T}^{-3}\\mathsf{L}^2\\mathsf{M}}{\\mathsf{I}}\n= \\mathsf{T^{-3}}\\mathsf{L}^2\\mathsf{M} \\mathsf{I}^{-1} ." }, { "math_id": 9, "text": "\\operatorname{dim}C\n= \\frac{\\text{electric charge}}{\\text{electric potential difference}}\n= \\frac {\\mathsf{T}\\mathsf{I}}{\\mathsf{T}^{-3}\\mathsf{L}^2\\mathsf{M}\\mathsf{I}^{-1}}\n= \\mathsf{T^4}\\mathsf{L^{-2}}\\mathsf{M^{-1}}\\mathsf{I^2} ." }, { "math_id": 10, "text": "\\mathrm{Re} = \\frac{\\rho\\,ud}{\\mu}." }, { "math_id": 11, "text": "\\mathrm{Fr} = \\frac{u}{\\sqrt{g\\,L}}." }, { "math_id": 12, "text": "\\mathrm{Eu} = \\frac{\\Delta p}{\\rho u^2}." }, { "math_id": 13, "text": "\\mathrm{Ma} = \\frac{u}{c}," }, { "math_id": 14, "text": "\\sqrt{C}" }, { "math_id": 15, "text": "\\begin{align}\n \\pi_1 &= \\frac{E}{As} \\\\\n \\pi_2 &= \\frac{\\ell}{A}.\n\\end{align}" }, { "math_id": 16, "text": "F\\left(\\frac{E}{As}, \\frac{\\ell}{A}\\right) = 0 ," }, { "math_id": 17, "text": "E = As f\\left(\\frac{\\ell}{A}\\right) ," }, { "math_id": 18, "text": "X = \\prod_{i=1}^m (\\pi_i)^{k_i}\\,." }, { "math_id": 19, "text": "f(\\pi_1,\\pi_2, ..., \\pi_m)=0\\,." }, { "math_id": 20, "text": " \\tfrac{1}{2} \\cdot (\\mathrm{-9.8~m/s^2}) \\cdot t^2 + (\\mathrm{500~m/s}) \\cdot t. " }, { "math_id": 21, "text": "\\begin{align}\n &\\tfrac{1}{2} \\cdot (\\mathrm{-9.8~m/s^2}) \\cdot (\\mathrm{0.01~min})^2 \\\\[10pt]\n ={} &\\tfrac{1}{2} \\cdot -9.8 \\cdot \\left(0.01^2\\right) (\\mathrm{min/s})^2 \\cdot \\mathrm{m} \\\\[10pt]\n ={} &\\tfrac{1}{2} \\cdot -9.8 \\cdot \\left(0.01^2\\right) \\cdot 60^2 \\cdot \\mathrm{m}.\n\\end{align}" }, { "math_id": 22, "text": "Z = n \\times [Z] = n [Z]" }, { "math_id": 23, "text": " \\mathrm{1\\,ft} = \\mathrm{0.3048\\,m}" }, { "math_id": 24, "text": " 1 = \\frac{\\mathrm{0.3048\\,m}}{\\mathrm{1\\,ft}}." }, { "math_id": 25, "text": "m" }, { "math_id": 26, "text": "v_\\text{y}" }, { "math_id": 27, "text": "R \\propto v_\\text{x}^a\\,v_\\text{y}^b\\,g^c ." 
}, { "math_id": 28, "text": "\\mathsf{L} = \\left(\\mathsf{T}^{-1}\\mathsf{L}\\right)^{a+b} \\left(\\mathsf{T}^{-2}\\mathsf{L}\\right)^c" }, { "math_id": 29, "text": "a + b + c = 1" }, { "math_id": 30, "text": "v_\\mathrm{x}" }, { "math_id": 31, "text": "v_\\mathrm{y}" }, { "math_id": 32, "text": "\n \\mathsf{L}_\\mathrm{x} =\n \\left({\\mathsf{T}^{-1}}{\\mathsf{L}_\\mathrm{x}}\\right)^a\n \\left({\\mathsf{T}^{-1}}{\\mathsf{L}_\\mathrm{y}}\\right)^b \n \\left({\\mathsf{T}^{-2}}{\\mathsf{L}_\\mathrm{y}}\\right)^c\n" }, { "math_id": 33, "text": "\\pi_1 = \\frac{\\dot{m}}{\\eta r}" }, { "math_id": 34, "text": "\\pi_2 = \\frac{p_\\mathrm{x}\\rho r^5}{\\dot{m}^2}" }, { "math_id": 35, "text": "M_\\text{i}" }, { "math_id": 36, "text": "M_\\text{m}" }, { "math_id": 37, "text": "C = \\frac{p_\\mathrm{x}\\rho r^4}{\\eta \\dot{m}}" }, { "math_id": 38, "text": "\\pi/8" }, { "math_id": 39, "text": "\\sin(\\theta+\\pi/2)=\\cos(\\theta)" }, { "math_id": 40, "text": "\n \\sin\\left(a\\,1_\\text{z} + b\\,1_\\text{z}\\right) =\n \\sin\\left(a\\,1_\\text{z}) \\cos(b\\,1_\\text{z}\\right) +\n \\sin\\left(b\\,1_\\text{z}) \\cos(a\\,1_\\text{z}\\right),\n" }, { "math_id": 41, "text": "a = \\theta" }, { "math_id": 42, "text": "b = \\pi/2" }, { "math_id": 43, "text": "R = g^a\\,v^b\\,\\theta^c\\text{ which means }\\mathsf{L}\\,1_\\mathrm{x} \\sim\n\\left(\\frac{\\mathsf{L}\\,1_\\text{y}}{\\mathsf{T}^2}\\right)^a \\left(\\frac{\\mathsf{L}}{\\mathsf{T}}\\right)^b\\,1_\\mathsf{z}^c.\\," } ]
https://en.wikipedia.org/wiki?curid=8267