id | title | text | formulas | url
---|---|---|---|---
855056 | Forward osmosis | Water purification process
Forward osmosis (FO) is an osmotic process that, like reverse osmosis (RO), uses a semi-permeable membrane to effect separation of water from dissolved solutes. The driving force for this separation is an osmotic pressure gradient: a "draw" solution of high concentration (relative to that of the feed solution) is used to induce a net flow of water through the membrane into the draw solution, thus effectively separating the feed water from its solutes. In contrast, the reverse osmosis process uses hydraulic pressure as the driving force for separation, which serves to counteract the osmotic pressure gradient that would otherwise favor water flux from the permeate to the feed. Hence significantly more energy is required for reverse osmosis than for forward osmosis.
The simplest equation describing the relationship between osmotic and hydraulic pressures and water (solvent) flux is:
formula_0
where formula_1 is water flux, A is the hydraulic permeability of the membrane, Δπ is the difference in osmotic pressures on the two sides of the membrane, and ΔP is the difference in hydrostatic pressure (negative values of formula_1 indicating reverse osmotic flow). The modeling of these relationships is in practice more complex than this equation indicates, with flux depending on the membrane, feed, and draw solution characteristics, as well as the fluid dynamics within the process itself.
The solute flux (formula_2) for each individual solute can be modelled by Fick's law:
formula_3
where formula_4 is the solute permeability coefficient and formula_5 is the trans-membrane concentration differential for the solute. This governing equation makes clear that, if solutes can diffuse across the membrane at all, they will diffuse from areas of high concentration to areas of low concentration. This is well known in reverse osmosis, where solutes from the feedwater diffuse into the product water; in forward osmosis, however, the situation can be far more complicated.
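As a minimal numerical sketch of these two relations, the snippet below evaluates formula_0 and formula_3 for assumed, illustrative values of the permeability coefficients and driving forces; the values are not data for any particular membrane, and the function names are hypothetical.

```python
def water_flux(A, delta_pi, delta_P):
    """J_w = A * (delta_pi - delta_P); a negative result indicates reverse osmotic flow."""
    return A * (delta_pi - delta_P)

def solute_flux(B, delta_c):
    """Fick's-law solute flux J_s = B * delta_c."""
    return B * delta_c

# FO operation: no applied hydraulic pressure, large osmotic pressure difference.
A = 1.0e-12        # hydraulic permeability, m/(s*Pa) -- assumed value
delta_pi = 2.5e6   # osmotic pressure difference, Pa (~25 bar) -- assumed value
delta_P = 0.0      # FO applies no hydraulic pressure

print(water_flux(A, delta_pi, delta_P))  # positive: water flows toward the draw solution
```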
In FO processes, solute diffusion may occur in both directions, depending on the composition of the draw solution, the type of membrane used and the feed water characteristics. Reverse solute flux (formula_2) works in both senses: draw solution solutes may diffuse into the feed solution, and feed solution solutes may diffuse into the draw solution. These phenomena clearly have consequences for the selection of the draw solution for any particular FO process. For instance, the loss of draw solutes into the feed may cause environmental problems or contaminate the feed stream, such as in osmotic membrane bioreactors.
An additional distinction between the reverse osmosis (RO) and forward osmosis (FO) processes is that the permeate water resulting from an RO process is in most cases fresh water ready for use. In FO, an additional process is required to separate fresh water from a diluted draw solution. Types of processes used are reverse osmosis, solvent extraction, magnetic and thermolytic. Depending on the concentration of solutes in the feed (which dictates the necessary concentration of solutes in the draw) and the intended use of the product of the FO process, the addition of a separation step may not be required. The membrane separation of the FO process in effect results in a "trade" between the solutes of the feed solution and the draw solution.
The forward osmosis process is also known simply as osmosis or, in the terminology coined by a number of companies, as 'engineered osmosis' or 'manipulated osmosis'.
Applications.
Emergency drinks.
One example of an application of this type may be found in "hydration bags", which use an ingestible draw solute and are intended for separation of water from dilute feeds. This allows, for example, the ingestion of water from surface waters (streams, ponds, puddles, etc.) that may be expected to contain pathogens or toxins that are readily rejected by the FO membrane. With sufficient contact time, such water will permeate the membrane bag into the draw solution, leaving the undesirable feed constituents behind. The diluted draw solution may then be ingested directly. Typically, the draw solutes are sugars such as glucose or fructose, which provide the additional benefit of nutrition to the user of the FO device. A point of additional interest with such bags is that they may be readily used to recycle urine, greatly extending the ability of a backpacker or soldier to survive in arid environments. This process may also, in principle, be employed with highly concentrated saline feedwater sources such as seawater, as one of the first intended uses of FO with ingestible solutes was for survival in life rafts at sea.
Desalination.
Desalinated water can be produced from the diluted draw / osmotic agent solution using a second process. This may be membrane separation, a thermal method, physical separation or a combination of these processes. The process has the feature of inherently low fouling because of the forward osmosis first step, unlike conventional reverse osmosis desalination plants, where fouling is often a problem. Modern Water has deployed forward osmosis based desalination plants in Gibraltar and Oman.
In March 2010, National Geographic magazine cited forward osmosis as one of three technologies that promised to reduce the energy requirements of desalination.
Evaporative cooling tower – make-up water.
One other application developed, in which only the forward osmosis step is used, is in evaporative cooling make-up water. In this case the cooling water is the draw solution, and the water lost by evaporation is simply replaced with water produced by forward osmosis from a suitable source, such as seawater, brackish water, treated sewage effluent or industrial waste water. Compared with other 'desalination' processes that may be used for make-up water, the energy consumption is thus only a fraction, with the added advantage of the low fouling propensity of a forward osmosis process.
Landfill leachate treatment.
In the case where the desired product is fresh water that does not contain draw solutes, a second separation step is required. The first separation step of FO, driven by an osmotic pressure gradient, does not require a significant energy input (only unpressurized stirring or pumping of the solutions involved). The second separation step, however does typically require energy input. One method used for the second separation step is to employ RO. This approach has been used, for instance, in the treatment of landfill leachate. An FO membrane separation is used to draw water from the leachate feed into a saline (NaCl) brine. The diluted brine is then passed through a RO process to produce fresh water and a reusable brine concentrate. The advantage of this method is not a savings in energy, but rather in the fact that the FO process is more resistant to fouling from the leachate feed than a RO process alone would be. A similar FO/RO hybrid has been used for the concentration of food products, such as fruit juice.
Brine concentration.
Brine concentration using forward osmosis may be achieved using a high osmotic pressure draw solution with a means to recover and regenerate it. One such process uses the ammonia-carbon dioxide (NH3/CO2) forward osmosis process invented at Yale University by Rob McGinnis, who subsequently founded Oasys Water to commercialize the technology. Because ammonia and carbon dioxide readily dissociate into gases using heat, the draw solutes can effectively be recovered and reused in a closed loop system, achieving separation through the conversion between thermal energy and osmotic pressure. NH3/CO2 FO brine concentration was initially demonstrated in the oil and gas industry to treat produced water in the Permian Basin area of Texas, and is currently being used in power and manufacturing plants in China.
Feed water 'softening' / pre-treatment for thermal desalination.
One unexploited application is to 'soften' or pre-treat the feedwater to multi-stage flash (MSF) or multiple-effect distillation (MED) plants by osmotically diluting the recirculating brine with the cooling water. This reduces the concentrations of scale-forming calcium carbonate and calcium sulphate compared to the normal process, thus allowing an increase in top brine temperature (TBT), output and gained output ratio (GOR). Darwish et al. showed that the TBT could be raised from 110 °C to 135 °C whilst maintaining the same scaling index for calcium sulphate.
Food processing.
Although osmotic treatment of food products (e.g., preserved fruits and meats) is very common in the food industry, FO treatment for concentration of beverages and liquid foods has been studied at laboratory-scale only. FO has several advantages as a process for concentrating beverages and liquid foods, including operation at low temperatures and low pressures that promote high retention of sensory (e.g., taste, aroma, color) and nutritional (e.g., vitamin) value, high rejection, and potentially low membrane fouling compared to pressure-driven membrane processes.
Osmotic power.
In 1954 Pattle suggested that there was an untapped source of power, in terms of the lost osmotic pressure, when a river mixes with the sea; however, it was not until the mid-1970s that a practical method of exploiting it using selectively permeable membranes was outlined by Loeb and, independently, by Jellinek. This process was referred to by Loeb as pressure retarded osmosis (PRO). Situations that may be envisaged to exploit it include using the differential osmotic pressure between a brackish river flowing into the sea, or between brine and seawater. The worldwide theoretical potential for osmotic power has been estimated at 1,650 TWh/year.
In more recent times a significant amount of research and development work has been undertaken and funded by Statkraft, the Norwegian state energy company. A prototype plant built in Norway generated a gross output between 2 and 4 kW; see Statkraft osmotic power prototype in Hurum. A much larger plant with an output of 1–2 MW at Sunndalsøra, 400 km north of Oslo, was considered but subsequently dropped. The New Energy and Industrial Technology Development Organisation (NEDO) in Japan is funding work on osmotic power.
Industrial usage.
Advantages.
Forward osmosis (FO) has many positive aspects in the treatment of industrial effluents containing many different kinds of contaminants, and also in the treatment of salty waters. When these effluents have moderate to low concentrations of the agents to be removed, FO membranes are very efficient, and the membrane can be adapted flexibly depending on the quality desired for the product water.
FO systems are also very useful in combination with other kinds of treatment systems, as they compensate for deficiencies that the other systems may have. This is also helpful in processes where the recovery of a certain product is essential to minimize costs or to improve efficiency, such as biogas production processes.
Disadvantages.
The main disadvantage of FO processes is the high degree of fouling that they may experience. This occurs when treating a highly saturated effluent, and results in the membrane becoming obstructed and no longer performing its function. The process then has to be stopped and the membrane cleaned. This issue occurs less in other kinds of membrane treatment, where applied hydraulic pressure forces water through the membrane, reducing the fouling effect.
There is also the issue of still-immature membrane technology. This affects FO processes because the membranes used are expensive and not highly efficient or ideal for the desired function, which means that cheaper and simpler systems are often used instead of membranes.
Industrial market and future.
Currently, industry uses few FO membrane processes (and membrane technologies in general), as they are complex and expensive, require frequent cleaning procedures, and sometimes work only under conditions that industry cannot always ensure. The focus for the future of membranes is therefore to improve the technology so that it is more flexible and suitable for general industrial use. This will be done by investing in research and by gradually bringing these developments to market, so that production costs fall as more membranes are produced.
If current development continues, membranes can be expected within a few years to be in widespread use in many different industrial processes (not only water treatment), and many new fields in which FO processes can be applied are likely to emerge.
Research.
An area of current research in FO involves direct removal of draw solutes, in this case by means of a magnetic field. Small (nanoscale) magnetic particles are suspended in solution creating osmotic pressures sufficient for the separation of water from a dilute feed. Once the draw solution containing these particles has been diluted by the FO water flux, they may be separated from that solution by use of a magnet (either against the side of a hydration bag, or around a pipe in-line in a steady state process).
| [
{
"math_id": 0,
"text": "J_w = A \\left(\\Delta \\pi - \\Delta P \\right)"
},
{
"math_id": 1,
"text": "J_w"
},
{
"math_id": 2,
"text": "J_s"
},
{
"math_id": 3,
"text": "J_s = B \\Delta c "
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": "\\Delta c"
}
] | https://en.wikipedia.org/wiki?curid=855056 |
855077 | 2.4 Metre | The International 2.4mR is a one-person keelboat. The class is a development class governed by the 2.4mR rule, which is controlled by World Sailing (formerly known as ISAF); the 2.4mR is one of the few classes designated as an International Class. The International 2.4mR class rule is closely related to the International 12mR class rule that was used at the America's Cup.
While there is a small but active group of amateur or professional designers and builders around the world, around 90% of the 2.4mR boats are the commercially produced Norlin Mark III designed by Swedish yacht designer Peter Norlin. Over the years, new 2.4mR designs such as the Stradivari III, the Proton and the Super 3 have come into production.
The 2.4mR boats are primarily used for racing and the class holds highly competitive national events in many countries. World and European championships can attract as many as 100 boats at a time.
The 2.4mR is ideal for adaptive sailing since the sailor barely moves in the boat, and all settings can be adjusted from a forward-facing seated position. Both hand-steering and foot-steering are possible. The boat is sailed without a spinnaker, but it is equipped with a whisker-pole that extends outward to hold the shape of the jib when sailing downwind. The boat's capability as a truly inclusive sailing boat has been demonstrated over many years at multiple Open World Championships.
History.
After the 1980 America's Cup, people in the Newport, RI area started sailing boats called Mini-12s. They were named after the 12-Metre yachts that were used at the America's Cup. As the fleet started to grow, the word spread to Sweden, home of the yacht designer Peter Norlin. Peter Norlin refined the original designs, and along with other naval architects, they collectively initiated the International 2.4mR Class that we know today. Although the 2.4mR is a development class, Peter Norlin has become the dominant designer, and the class is therefore often mistaken as a one-design class.
One-design.
In recent years attempts have been made to develop a one-design class based on the 2.4 Norlin Mark III. This was primarily because the competition within the Paralympics was meant to be more about the sailors' competitiveness and less about the equipment. This led to the introduction of Appendix K to the Class rules and a group of individuals started to work on a set of stand-alone one-design rules. This is still at the early stages but this effort is likely to lead to the emergence of a new one-design 2.4mR class alongside the existing development 2.4mR class.
Rating formula.
As an open class rather than a one-design, all boat designs must meet the following formula.
formula_0
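As a rough illustration only, the rating check can be written out directly. By analogy with the related 12mR rule, assume L is the rated length, d the girth difference and F the freeboard (in metres), and S the sail area (in square metres); these symbol meanings, the example measurements and the function name are assumptions, not the official measurement instructions.

```python
import math

def rating_2point4mR(L, d, S, F):
    """Evaluate (L + 2d + sqrt(S) - F) / 2.37; a legal design must not exceed 2.4 m."""
    return (L + 2 * d + math.sqrt(S) - F) / 2.37

r = rating_2point4mR(L=3.6, d=0.1, S=5.0, F=0.4)  # assumed example measurements
print(r, r <= 2.4)  # ~2.38, True: this hypothetical design rates as a 2.4mR
```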
Events.
Para World Sailing Championships.
The 2.4 metre has been used a number of times as equipment for the One-Person Technical Disabled discipline, which holds an annual World Championship.
Paralympics.
From 2000 to 2016, the 2.4 Metre was the official single-crew class boat for sailing at the Summer Paralympics although it was used in a more one-design form utilising the Norlin Mk3 design. | [
{
"math_id": 0,
"text": "\n\\frac{L + 2d + \\sqrt{S} - F}{2.37} \\leq 2.4 \\mbox{ metres}\n"
}
] | https://en.wikipedia.org/wiki?curid=855077 |
855138 | Line element | Line segment of infinitesimally small length
In geometry, the line element or length element can be informally thought of as a line segment associated with an infinitesimal displacement vector in a metric space. The length of the line element, which may be thought of as a differential arc length, is a function of the metric tensor and is denoted by "formula_0".
Line elements are used in physics, especially in theories of gravitation (most notably general relativity) where spacetime is modelled as a curved Pseudo-Riemannian manifold with an appropriate metric tensor.
General formulation.
Definition of the line element and arclength.
The coordinate-independent definition of the square of the line element "ds" in an "n"-dimensional Riemannian or Pseudo Riemannian manifold (in physics usually a Lorentzian manifold) is the "square of the length" of an infinitesimal displacement formula_1 (in pseudo Riemannian manifolds possibly negative) whose square root should be used for computing curve length:
formula_2
where "g" is the metric tensor, · denotes inner product, and "d"q an infinitesimal displacement on the (pseudo) Riemannian manifold. By parametrizing a curve formula_3, we can define the arc length of the curve length of the curve between formula_4, and formula_5 as the integral:
formula_6
To compute a sensible length of curves in pseudo Riemannian manifolds, it is best to assume that the infinitesimal displacements have the same sign everywhere. E.g. in physics, the square of a line element along a timelike curve would (in the formula_7 signature convention) be negative, and the square root of the negative of the square of the line element along the curve would measure the proper time passing for an observer moving along the curve.
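As a concrete illustration of the arc-length integral, the sketch below evaluates it symbolically for one assumed example, a helix in Euclidean 3-space, where the metric is the identity and the integrand reduces to the norm of the velocity.

```python
import sympy as sp

lam = sp.symbols('lambda', real=True)
# An example curve q(lambda): a helix in Euclidean R^3.
q = sp.Matrix([sp.cos(lam), sp.sin(lam), lam])
dq = q.diff(lam)
# With g_ij = delta_ij the integrand is sqrt(dq . dq).
integrand = sp.sqrt(dq.dot(dq))
s = sp.integrate(integrand, (lam, 0, 2 * sp.pi))
print(sp.simplify(s))  # 2*sqrt(2)*pi
```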
From this point of view, the metric also defines in addition to line element the surface and volume elements etc.
Identification of the square of the line element with the metric tensor.
Since formula_1 is arbitrary, formula_8 completely defines the metric, and it is therefore usually best to consider the expression for formula_8 as a definition of the metric tensor itself, written in a suggestive but non-tensorial notation:
formula_9
This identification of the square of arc length formula_8 with the metric is even easier to see in "n"-dimensional general curvilinear coordinates q = ("q"1, "q"2, "q"3, ..., "qn"), where it is written as a symmetric rank 2 tensor coinciding with the metric tensor:
formula_10
Here the indices "i" and "j" take values 1, 2, 3, ..., "n" and Einstein summation convention is used. Common examples of (pseudo) Riemannian spaces include three-dimensional space (no inclusion of time coordinates), and indeed four-dimensional spacetime.
Line elements in Euclidean space.
Following are examples of how the line elements are found from the metric.
Cartesian coordinates.
The simplest line element is in Cartesian coordinates, in which case the metric is just the Kronecker delta:
formula_11
(here "i, j" = 1, 2, 3 for space) or in matrix form ("i" denotes row, "j" denotes column):
formula_12
The general curvilinear coordinates reduce to Cartesian coordinates:
formula_13
so
formula_14
Orthogonal curvilinear coordinates.
For all orthogonal coordinates the metric is given by:
formula_15
where
formula_16
for "i" = 1, 2, 3 are scale factors, so the square of the line element is:
formula_17
Some examples of line elements in these coordinates are below.
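As one such example, the sketch below derives the spherical-coordinate line element from the scale-factor formula; the coordinate parametrization and the use of sympy are illustrative choices, not part of the original text.

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
# Position vector expressed in spherical coordinates (q^1, q^2, q^3) = (r, theta, phi).
R = sp.Matrix([r * sp.sin(theta) * sp.cos(phi),
               r * sp.sin(theta) * sp.sin(phi),
               r * sp.cos(theta)])

# Squared scale factors h_i^2 = (dR/dq^i) . (dR/dq^i), the diagonal metric entries.
g_diag = [sp.trigsimp(R.diff(q).dot(R.diff(q))) for q in (r, theta, phi)]
print(g_diag)  # [1, r**2, r**2*sin(theta)**2]
# Hence ds^2 = dr^2 + r^2 dtheta^2 + r^2 sin^2(theta) dphi^2.
```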
General curvilinear coordinates.
Given an arbitrary basis of a space of dimension formula_18, the metric is defined as the inner product of the basis vectors.
formula_19
where formula_20, and the inner product is taken with respect to the ambient space (usually the Euclidean one, formula_21).
In a coordinate basis, formula_22.
The coordinate basis is a special type of basis that is regularly used in differential geometry.
Line elements in 4d spacetime.
Minkowski spacetime.
The Minkowski metric is:
formula_23
where one sign or the other is chosen; both conventions are used. This applies only to flat spacetime. The coordinates are given by the 4-position:
formula_24
so the line element is:
formula_25
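A short numerical sketch, with assumed event separations, showing how the sign of the Minkowski line element (taking the upper sign, i.e. the (+,-,-,-) convention) classifies the separation between two nearby events:

```python
c = 299_792_458.0  # speed of light, m/s

def interval_sq(dt, dx, dy, dz):
    """ds^2 = c^2 dt^2 - (dx^2 + dy^2 + dz^2) in the (+,-,-,-) convention."""
    return (c * dt) ** 2 - (dx ** 2 + dy ** 2 + dz ** 2)

ds2 = interval_sq(dt=1e-6, dx=100.0, dy=0.0, dz=0.0)  # assumed example separations
print('timelike' if ds2 > 0 else 'spacelike' if ds2 < 0 else 'lightlike')  # timelike
```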
Schwarzschild coordinates.
In Schwarzschild coordinates, the coordinates are formula_26, and the general metric has the form:
formula_27
(note the similarities with the metric in 3D spherical polar coordinates).
so the line element is:
formula_28
General spacetime.
The coordinate-independent definition of the square of the line element d"s" in spacetime is:
formula_29
In terms of coordinates:
formula_30
where for this case the indices "α" and "β" run over 0, 1, 2, 3 for spacetime.
This is the spacetime interval - the measure of separation between two arbitrarily close events in spacetime. In special relativity it is invariant under Lorentz transformations. In general relativity it is invariant under arbitrary invertible differentiable coordinate transformations.
| [
{
"math_id": 0,
"text": "ds"
},
{
"math_id": 1,
"text": "d\\mathbf{q}"
},
{
"math_id": 2,
"text": " ds^2 = d\\mathbf{q}\\cdot d\\mathbf{q} = g(d\\mathbf{q},d\\mathbf{q})"
},
{
"math_id": 3,
"text": "q(\\lambda)"
},
{
"math_id": 4,
"text": "q(\\lambda_1)"
},
{
"math_id": 5,
"text": "q(\\lambda_2)"
},
{
"math_id": 6,
"text": " s = \\int_{\\lambda_1}^{\\lambda_2} d\\lambda \\sqrt{ \\left|ds^2\\right|} = \\int_{\\lambda_1}^{\\lambda_2} d\\lambda \\sqrt{ \\left|g\\left(\\frac{dq}{d\\lambda},\\frac{dq}{d\\lambda}\\right)\\right|} = \\int_{\\lambda_1}^{\\lambda_2} d\\lambda \\sqrt{ \\left|g_{ij}\\frac{dq^i}{d\\lambda}\\frac{dq^j}{d\\lambda}\\right|} "
},
{
"math_id": 7,
"text": "-+++"
},
{
"math_id": 8,
"text": "ds^2"
},
{
"math_id": 9,
"text": "ds^2 = g"
},
{
"math_id": 10,
"text": " ds^2= g_{ij} dq^i dq^j = g ."
},
{
"math_id": 11,
"text": "g_{ij} = \\delta_{ij}"
},
{
"math_id": 12,
"text": "[g_{ij}] = \\begin{pmatrix}\n1 & 0 & 0\\\\\n0 & 1 & 0\\\\\n0 & 0 & 1\n\\end{pmatrix}"
},
{
"math_id": 13,
"text": "(q^1,q^2,q^3) = (x, y, z)\\,\\Rightarrow\\,d\\mathbf{r}=(dx,dy,dz)"
},
{
"math_id": 14,
"text": " ds^2 = g_{ij} dq^i dq^j = dx^2 +dy^2 +dz^2 "
},
{
"math_id": 15,
"text": "[g_{ij}] = \\begin{pmatrix}\nh_1^2 & 0 & 0\\\\\n0 & h_2^2 & 0\\\\\n0 & 0 & h_3^2\n\\end{pmatrix}"
},
{
"math_id": 16,
"text": "h_i = \\left|\\frac{\\partial\\mathbf{r}}{\\partial q^i}\\right|"
},
{
"math_id": 17,
"text": "ds^2 = h_1^2(dq^1)^2 + h_2^2(dq^2)^2 + h_3^2(dq^3)^2 "
},
{
"math_id": 18,
"text": " n, \\{\\hat{b}_{i}\\}"
},
{
"math_id": 19,
"text": "g_{ij}=\\langle\\hat{b}_{i},\\hat{b}_{j}\\rangle"
},
{
"math_id": 20,
"text": "1\\leq i,j\\leq n"
},
{
"math_id": 21,
"text": "\\delta_{ij}"
},
{
"math_id": 22,
"text": "\\hat{b}_{i} = \\frac{\\partial}{\\partial x^{i}}"
},
{
"math_id": 23,
"text": "[g_{ij}] = \\pm \\begin{pmatrix}\n1 & 0 & 0 & 0 \\\\\n0 & -1 & 0 & 0 \\\\\n0 & 0 & -1 & 0 \\\\\n0 & 0 & 0 & -1 \\\\\n\\end{pmatrix}"
},
{
"math_id": 24,
"text": "\\mathbf{x} = (x^0,x^1,x^2,x^3) = (ct,\\mathbf{r}) \\,\\Rightarrow\\, d\\mathbf{x} = (c dt, d\\mathbf{r})"
},
{
"math_id": 25,
"text": "ds^2 = \\pm (c^2 dt^2 - d\\mathbf{r} \\cdot d\\mathbf{r}) ."
},
{
"math_id": 26,
"text": " \\left(t, r, \\theta, \\phi \\right)"
},
{
"math_id": 27,
"text": "[g_{ij}] = \\begin{pmatrix}\n-a(r)^2 & 0 & 0 & 0 \\\\\n0 & b(r)^2 & 0 & 0 \\\\\n0 & 0 & r^2 & 0 \\\\\n0 & 0 & 0 & r^2 \\sin^2\\theta \\\\\n\\end{pmatrix}"
},
{
"math_id": 28,
"text": "ds^2 = -a(r)^2 \\, dt^2 + b(r)^2 \\, dr^2 + r^2 \\, d\\theta^2 + r^2 \\sin^2\\theta \\, d\\phi^2 ."
},
{
"math_id": 29,
"text": " ds^2 = d\\mathbf{x} \\cdot d\\mathbf{x} = g(d\\mathbf{x},d\\mathbf{x}) "
},
{
"math_id": 30,
"text": " ds^2 = g_{\\alpha\\beta} dx^\\alpha dx^\\beta "
}
] | https://en.wikipedia.org/wiki?curid=855138 |
8551999 | Diffraction topography | X-ray imaging technique
Diffraction topography (short: "topography") is an imaging technique based on Bragg diffraction.
Diffraction topographic images ("topographies") record the intensity profile of a beam of X-rays (or, sometimes, neutrons) diffracted by a crystal.
A topography thus represents a two-dimensional spatial intensity mapping (image) of the X-rays diffracted in a specific direction, so regions which diffract substantially will appear brighter than those which do not. This is equivalent to the spatial fine structure of a Laue reflection.
Topographs often reveal the irregularities in a non-ideal crystal lattice.
X-ray diffraction topography is one variant of X-ray imaging, making use of diffraction contrast rather than absorption contrast which is usually used in radiography and computed tomography (CT). Topography is exploited to a lesser extent with neutrons, and is the same concept as dark field imaging in an electron microscope.
Topography is used for monitoring crystal quality and visualizing defects in many different crystalline materials.
It has proved helpful e.g. when developing new crystal growth methods, for monitoring growth and the crystal quality achieved, and for iteratively optimizing growth conditions.
In many cases, topography can be applied without preparing or otherwise damaging the sample; it is therefore one variant of non-destructive testing.
History.
After the discovery of x-rays by Wilhelm Röntgen in 1895, and of the principles of X-ray diffraction by Laue and the Bragg family, it took several decades for the benefits of diffraction "imaging" to be fully recognized, and the first useful experimental techniques to be developed. The first systematic reports of laboratory topography techniques date from the early 1940s. In the 1950s and 1960s, topographic investigations played a role in detecting the nature of defects and improving crystal growth methods for germanium and (later) silicon as materials for semiconductor microelectronics.
For a more detailed account of the historical development of topography, see J.F. Kelly – "A brief history of X-ray diffraction topography".
From about the 1970s on, topography profited from the advent of synchrotron x-ray sources, which provided considerably more intense x-ray beams, making it possible to achieve shorter exposure times, better contrast and higher spatial resolution, and to investigate smaller samples or rapidly changing phenomena.
Initial applications of topography were mainly in the field of metallurgy, controlling the growth of better crystals of various metals. Topography was later extended to semiconductors, and generally to materials for microelectronics. A related field is the investigation of materials and devices for X-ray optics, such as monochromator crystals made of silicon, germanium or diamond, which need to be checked for defects prior to being used. Extensions of topography to organic crystals are somewhat more recent.
Topography is applied today not only to volume crystals of any kind, including semiconductor wafers, but also to thin layers, entire electronic devices, as well as to organic materials such as protein crystals and others.
Basic principle of topography.
The basic working principle of diffraction topography is as follows:
An incident, spatially extended beam (mostly of X-rays, or neutrons) impinges on a sample.
The beam may be either monochromatic, i.e. consisting of a single wavelength of X-rays or neutrons, or polychromatic, i.e. composed of a mixture of wavelengths ("white beam" topography). Furthermore, the incident beam may be either parallel, consisting only of "rays" all propagating along nearly the same direction, or divergent/convergent, containing several strongly differing directions of propagation.
When the beam hits the crystalline sample, Bragg diffraction occurs, i.e. the incident wave is reflected by the atoms on certain lattice planes of the sample, if it hits those planes at the right Bragg angle.
Diffraction from the sample can take place either in reflection geometry (Bragg case), with the beam entering and leaving through the same surface, or in transmission geometry (Laue case).
Diffraction gives rise to a diffracted beam, which will leave the sample and propagate along a direction differing from the incident direction by the scattering angle formula_0.
The cross section of the diffracted beam may or may not be identical to the one of the incident beam. In the case of strongly asymmetric reflections, the beam size (in the diffraction plane) is considerably expanded or compressed, with expansion occurring if the incidence angle is much smaller than the exit angle, and vice versa. Independently of this beam expansion, the relation of sample size to image size is given by the exit angle alone: The apparent lateral size of sample features parallel to the exit surface is downscaled in the image by the projection effect of the exit angle.
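The scattering angle formula_0 follows from Bragg's law, λ = 2d·sin(θ_B). The sketch below evaluates it for one assumed example (Cu Kα radiation on the Si 111 reflection); the choice of material and wavelength is purely illustrative.

```python
import math

wavelength = 1.5406e-10  # m, Cu K-alpha1 -- assumed example
d_spacing = 3.1356e-10   # m, Si(111) lattice-plane spacing -- assumed example

theta_B = math.asin(wavelength / (2 * d_spacing))  # Bragg angle
print(math.degrees(2 * theta_B))  # scattering angle 2*theta_B, about 28.4 degrees
```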
A homogeneous sample (with a regular crystal lattice) would yield a homogeneous intensity distribution in the topograph (a "flat" image with no contrast). Intensity modulations (topographic contrast) arise from irregularities in the crystal lattice, originating from various kinds of defects, such as dislocations, strain fields, and phase, twin or grain boundaries.
In many cases of defects such as dislocations, topography is not directly sensitive to the defects themselves (atomic structure of the dislocation core), but predominantly to the strain field surrounding the defect region.
Theory.
Theoretical descriptions of contrast formation in X-ray topography are largely based on the dynamical theory of diffraction. This framework is helpful in the description of many aspects of topographic image formation: entrance of an X-ray wavefield into a crystal, propagation of the wavefield inside the crystal, interaction of wavefield with crystal defects, altering of wavefield propagation by local lattice strains, diffraction, multiple scattering, absorption.
The theory is therefore often helpful in the interpretation of topographic images of crystal defects. The exact nature of a defect often cannot be deduced directly from the observed image (i.e., a "backwards calculation" is problematic). Instead, one has to make assumptions about the structure of the defect, deduce a hypothetical image from the assumed structure ("forward calculation", based on theory), and compare with the experimental image. If the match between both is not good enough, the assumptions have to be varied until sufficient correspondence is reached. Theoretical calculations, and in particular numerical simulations by computer based on this theory, are thus a valuable tool for the interpretation of topographic images.
Contrast mechanisms.
The topographic image of a uniform crystal with a perfectly regular lattice, illuminated by a homogeneous beam, is uniform (no contrast). Contrast arises when distortions of the lattice (defects, tilted crystallites, strain) occur; when the crystal is composed of several different materials or phases; or when the thickness of the crystal changes across the image domain.
Structure factor contrast.
The diffraction from a crystalline material, and thus the intensity of the diffracted beam, changes with the type and number of atoms inside the crystal unit cell. This fact is quantitatively expressed by the structure factor. Different materials have different structure factors, and similarly for different phases of the same material (e.g. for materials crystallizing in several different space groups). In samples composed of a mixture of materials/phases in spatially adjacent domains, the geometry of these domains can be resolved by topography. This is true, for example, also for twinned crystals, ferroelectric domains, and many others.
Orientation contrast.
When a crystal is composed of crystallites with varying lattice orientation, topographic contrast arises: In plane-wave topography, only selected crystallites will be in diffracting position, thus yielding diffracted intensity only in some parts of the image. Upon sample rotation, these will disappear, and other crystallites will appear in the new topograph as strongly diffracting. In white-beam topography, all misoriented crystallites will be diffracting simultaneously (each at a different wavelength). However, the exit angles of the respective diffracted beams will differ, leading to overlapping regions of enhanced intensity as well as to shadows in the image, thus again giving rise to contrast.
While in the case of tilted crystallites, domain walls, grain boundaries etc. orientation contrast occurs on a macroscopic scale, it can also be generated more locally around defects, e.g. due to curved lattice planes around a dislocation core.
Extinction contrast.
Another type of topographic contrast, extinction contrast, is slightly more complex. While the two above variants are explicable in simple terms based on geometrical theory (basically, the Bragg law) or kinematical theory of X-ray diffraction, extinction contrast can be understood based on dynamical theory.
Qualitatively, extinction contrast arises e.g. when the thickness of a sample, compared to the respective extinction length (Bragg case) or Pendellösung length (Laue case), changes across the image. In this case, diffracted beams from areas of different thickness, having suffered different degrees of extinction, are recorded within the same image, giving rise to contrast. Topographists have systematically investigated this effect by studying wedge-shaped samples of linearly varying thickness, making it possible to record directly in one image the dependence of diffracted intensity on sample thickness as predicted by dynamical theory.
In addition to mere thickness changes, extinction contrast also arises when parts of a crystal are diffracting with different strengths, or when the crystal contains deformed (strained) regions.
The governing quantity for an overall theory of extinction contrast in deformed crystals is called the "effective misorientation"
formula_1
where formula_2 is the displacement vector field, and formula_3 and formula_4 are the directions of the incident and diffracted beam, respectively.
In this way, different kinds of disturbances are "translated" into equivalent misorientation values, and contrast formation can be understood analogously to orientation contrast.
For instance, a compressively strained material requires larger Bragg angles for diffraction at unchanged wavelength. To compensate for this and to reach diffraction conditions, the sample needs to be rotated, similarly as in the case of lattice tilts.
A simplified and more "transparent" formula taking into account the combined effect of tilts and strains onto contrast is the following:
formula_5
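The simplified formula can be evaluated directly to translate a given strain and tilt into an equivalent misorientation; the numbers below are assumed, order-of-magnitude examples.

```python
import math

theta_B = math.radians(14.2)   # Bragg angle -- assumed example
strain = -1.0e-4               # Delta d / d, compressive -- assumed example
tilt = math.radians(0.001)     # local lattice tilt, Delta phi -- assumed example

delta_theta = -math.tan(theta_B) * strain + tilt  # taking the '+' sign in the formula
print(delta_theta)  # effective misorientation in radians
```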
Visibility of defects; types of defect images.
To discuss the visibility of defects in topographic images according to theory, consider the exemplary case of a single dislocation: It will give rise to contrast in topography only if the lattice planes involved in diffraction are distorted in some way by the existence of the dislocation. This is true in the case of an edge dislocation if the scattering vector of the Bragg reflection used is parallel to the Burgers vector of the dislocation, or at least has a component in the plane perpendicular to the dislocation line, but not if it is parallel to the dislocation line. In the case of a screw dislocation, the scattering vector has to have a component along the Burgers vector, which is now parallel to the dislocation line. As a rule of thumb, a dislocation will be invisible in a topograph if the scalar product
formula_6
is zero.
(A more precise rule will have to distinguish between screw and edge dislocations, and also to take the direction of the dislocation line formula_7 into account.)
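The rule of thumb can be stated compactly in code. This sketch uses assumed example vectors (components in lattice units); as noted above, a more precise criterion would also account for the dislocation type and line direction.

```python
def dislocation_visible(g, b, tol=1e-12):
    """Rule of thumb: the dislocation is invisible if the scalar product g . b vanishes."""
    return abs(sum(gi * bi for gi, bi in zip(g, b))) > tol

g = (2, 2, 0)           # scattering vector -- assumed example
b = (0.5, -0.5, 0.0)    # Burgers vector (1/2)[1 -1 0] -- assumed example
print(dislocation_visible(g, b))  # g . b = 0 -> False: invisible in this reflection
```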
If a defect is visible, there often occur not just one but several distinct images of it on the topograph. Theory predicts three images of single defects: the so-called direct image, the kinematical image, and the intermediary image.
Spatial resolution; limiting effects.
The spatial resolution achievable in topographic images can be limited by one or several of three factors:
the resolution (grain or pixel size) of the detector, the experimental geometry, and intrinsic diffraction effects.
First, the spatial resolution of an image can obviously not be better than the grain size (in the case of film) or the pixel size (in the case of digital detectors) with which it was recorded. This is the reason why topography requires high-resolution X-ray films or CCD cameras with the smallest pixel sizes available today. Secondly, resolution can be additionally blurred by a geometric projection effect. If one point of the sample is a "hole" in an otherwise opaque mask, then the X-ray source, of finite lateral size S, is imaged through the hole onto a finite image domain given by the formula
formula_8
where I is the spread of the image of one sample point in the image plane, D is the source-to-sample distance, and d is the sample-to-image distance. The ratio S/D corresponds to the angle (in radians) under which the source appears from the position of the sample (the angular source size, equivalent to the incident divergence at one sample point). The achievable resolution is thus best for small sources, large sample distances, and small detector distances. This is why the detector (film) needed to be placed very close to the sample in the early days of topography; only at synchrotrons, with their small S and (very) large D, could larger values of d finally be afforded, introducing much more flexibility into topography experiments.
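The projection formula is easy to evaluate for typical configurations; the source sizes and distances below are assumed, order-of-magnitude values, not the specifications of any particular instrument.

```python
def geometric_blur(S, D, d):
    """Image spread I = S * d / D of one sample point (all lengths in metres)."""
    return S * d / D

# Laboratory tube: S ~ 0.4 mm, D ~ 0.5 m, film very close to the sample (d = 5 mm).
print(geometric_blur(0.4e-3, 0.5, 5e-3))   # ~4e-6 m, i.e. ~4 micrometres

# Synchrotron: S ~ 0.1 mm and D ~ 40 m allow a much larger detector distance (d = 0.2 m).
print(geometric_blur(1.0e-4, 40.0, 0.2))   # ~5e-7 m, i.e. ~0.5 micrometres
```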
Thirdly, even with perfect detectors and ideal geometric conditions, the visibility of special contrast features, such as the images of single dislocations, can be additionally limited by diffraction effects.
A dislocation in a perfect crystal matrix gives rise to contrast only in those regions where the local orientation of the crystal lattice differs from the average orientation by more than about the Darwin width of the Bragg reflection used. A quantitative description is provided by the dynamical theory of X-ray diffraction. As a result, and somewhat counter-intuitively, the widths of dislocation images become narrower when the associated rocking curves are large. Thus, strong reflections of low diffraction order are particularly appropriate for topographic imaging. They permit topographists to obtain narrow, well-resolved images of dislocations, and to separate single dislocations even when the dislocation density in a material is rather high. In more unfavourable cases (weak, high-order reflections, higher photon energies), dislocation images become broad, diffuse, and overlap for high and medium dislocation densities. Highly ordered, strongly diffracting materials – like minerals or semiconductors – are generally unproblematic, whereas e.g. protein crystals are particularly challenging for topographic imaging.
Apart from the Darwin width of the reflection, the width of single dislocation images may additionally depend on the Burgers vector of the dislocation, i.e. both its length and its orientation (relative to the scattering vector), and, in plane wave topography, on the angular departure from the exact Bragg angle. The latter dependence follows a reciprocity law, meaning that dislocations images become narrower inversely as the angular distance grows. So-called weak beam conditions are thus favourable in order to obtain narrow dislocation images.
Experimental realization – instrumentation.
To conduct a topographic experiment, three groups of instruments are required: an x-ray source, potentially including appropriate x-ray optics; a sample stage with sample manipulator (diffractometer); and a two-dimensionally resolving detector (most often X-ray film or camera).
X-ray source.
The x-ray beam used for topography is generated by an x-ray source, typically either a laboratory x-ray tube (fixed or rotating) or a synchrotron source. The latter offers advantages due to its higher beam intensity, lower divergence, and its continuous wavelength spectrum. X-ray tubes are still useful, however, due to easier access and continuous availability, and are often used for initial screening of samples and/or training of new staff.
For white beam topography, not much more is required: most often, a set of slits to precisely define the beam shape and a (well polished) vacuum exit window will suffice. For those topography techniques requiring a monochromatic x-ray beam, an additional crystal monochromator is mandatory. A typical configuration at synchrotron sources is a combination of two Silicon crystals, both with surfaces oriented parallel to [111]-lattice planes, in geometrically opposite orientation. This guarantees relatively high intensity, good wavelength selectivity (about 1 part in 10000) and the possibility to change the target wavelength without having to change the beam position ("fixed exit").
Sample stage.
To place the sample under investigation into the x-ray beam, a sample holder is required. While in white-beam techniques a simple fixed holder is sometimes sufficient, experiments with monochromatic techniques typically require one or more degrees of rotational freedom. Samples are therefore placed on a diffractometer, which allows the sample to be oriented along one, two or three axes. If the sample needs to be displaced, e.g. in order to scan its surface through the beam in several steps, additional translational degrees of freedom are required.
Detector.
After being scattered by the sample, the profile of the diffracted beam needs to be detected by a two-dimensionally resolving X-ray detector. The classical "detector" is X-ray sensitive film, with nuclear plates as a traditional alternative. The first step beyond these "offline" detectors were the so-called image plates, although limited in readout speed and spatial resolution. Since about the mid-1990s, CCD cameras have emerged as a practical alternative, offering many advantages such as fast online readout and the possibility to record entire image series in place. X-ray sensitive CCD cameras, especially those with spatial resolution in the micrometer range, are now well established as electronic detectors for topography. A promising further option for the future may be pixel detectors, although their limited spatial resolution may restrict their usefulness for topography.
General criteria for judging the practical usefulness of detectors for topography applications include spatial resolution, sensitivity, dynamic range ("color depth", in black-white mode), readout speed, weight (important for mounting on diffractometer arms), and price.
Systematic overview of techniques and imaging conditions.
The manifold topographic techniques can be categorized according to several criteria.
One of them is the distinction between restricted-beam techniques on the one hand (such as section topography or pinhole topography) and extended-beam techniques on the other hand, which use the full width and intensity of the incoming beam. Another, independent distinction is between integrated-wave topography, making use of the full spectrum of incoming X-ray wavelengths and divergences, and plane-wave (monochromatic) topography, more selective in both wavelengths and divergence. Integrated-wave topography can be realized as either single-crystal or double-crystal topography. Further distinctions include the one between topography in reflection geometry (Bragg case) and in transmission geometry (Laue case).
For a full discussion and a graphical hierarchy of topographic techniques, see
Experimental techniques I – Some classical topographic techniques.
The following is an exemplary list of some of the most important experimental techniques for topography:
White-beam.
White-beam topography uses the full bandwidth of X-ray wavelengths in the incoming beam, without any wavelength filtering (no monochromator). The technique is particularly useful in combination with synchrotron radiation sources, due to their wide and continuous wavelength spectrum. In contrast to the monochromatic case, in which accurate sample adjustment is often necessary in order to reach diffraction conditions, the Bragg equation is always and automatically fulfilled in the case of a white X-ray beam: Whatever the angle at which the beam hits a specific lattice plane, there is always one wavelength in the incident spectrum for which the Bragg angle is fulfilled just at this precise angle (on condition that the spectrum is wide enough). White-beam topography is therefore a very simple and fast technique. Disadvantages include the high X-ray dose, possibly leading to radiation damage to the sample, and the necessity to carefully shield the experiment.
White-beam topography produces a pattern of several diffraction spots, each spot being related to one specific lattice plane in the crystal. This pattern, typically recorded on X-ray film, corresponds to a Laue pattern and shows the symmetry of the crystal lattice. The fine structure of each single spot (topograph) is related to defects and distortions in the sample. The distance between spots, and the details of contrast within one single spot, depend on the distance between sample and film; this distance is therefore an important degree of freedom for white-beam topography experiments.
Deformation of the crystal will cause variation in the size of the diffraction spot. For a cylindrically bent crystal the Bragg planes in the crystal lattice will lie on Archimedean spirals (with the exception of those orientated tangentially and radially to the curvature of the bend, which are respectively cylindrical and planar), and the degree of curvature can be determined in a predictable way from the length of the spots and the geometry of the set-up.
White-beam topographs are useful for fast and comprehensive visualization of crystal defect and distortions. They are, however, rather difficult to analyze in any quantitative way, and even a qualitative interpretation often requires considerable experience and time.
Plane-wave topography.
Plane-wave topography is in some sense the opposite of white-beam topography, making use of monochromatic (single-wavelength) and parallel incident beam. In order to achieve diffraction conditions, the sample under study must be precisely aligned. The contrast observed strongly depends on the exact position of the angular working point on the rocking curve of the sample, i.e. on the angular distance between the actual sample rotation position and the theoretical position of the Bragg peak. A sample rotation stage is therefore an essential instrumental precondition for controlling and varying the contrast conditions.
Section topography.
While the above techniques use a spatially extended, wide incident beam, section topography is based on a narrow beam on the order of some 10 micrometers (in one or, in the case of pinhole topography with a pencil beam, in both lateral dimensions). Section topographs therefore investigate only a restricted volume of the sample.
On its path through the crystal, the beam is diffracted at different depths, each one contributing to image formation on a different location on the detector (film). Section topography can therefore be used for depth-resolved defect analysis.
In section topography, even perfect crystals display fringes. The technique is very sensitive to crystalline defects and strain, as these distort the fringe pattern in the topograph. Quantitative analysis can be performed with the help of image simulation by computer algorithms, usually based on the Takagi-Taupin equations.
An enlarged synchrotron X-ray transmission section topograph shows a diffraction image of the section of a sample having a gallium nitride (GaN) layer grown by metal-organic vapour phase epitaxy on a sapphire wafer. Both the epitaxial GaN layer and the sapphire substrate show numerous defects. The GaN layer actually consists of small-angle grains about 20 micrometers wide, connected to each other. Strain in the epitaxial layer and substrate is visible as elongated streaks parallel to the diffraction vector direction. The defects on the underside of the sapphire wafer section image are surface defects on the unpolished backside of the sapphire wafer. Between the sapphire and the GaN, the defects are interfacial defects.
Projection topography.
The setup for projection topography (also called "traverse topography") is essentially identical to section topography, the difference being that both sample and film are now scanned laterally (synchronously) with respect to the narrow incident beam. A projection topograph therefore corresponds to the superposition of many adjacent section topographs, able to investigate not just a restricted portion, but the entire volume of a crystal.
The technique is rather simple and has been in routine use at "Lang cameras" in many research laboratories.
Berg-Barrett.
Berg-Barrett topography uses a narrow incident beam that is reflected from the surface of the sample under study under conditions of high asymmetry (grazing incidence, steep exit). To achieve sufficient spatial resolution, the detector (film) needs to be placed rather close to the sample surface. Berg-Barrett topography is another routine technique in many X-ray laboratories.
Experimental techniques II – Advanced topographic techniques.
Topography at synchrotron sources.
The advent of synchrotron X-ray sources has been beneficial to X-ray topography techniques. Several of the properties of synchrotron radiation are advantageous also for topography applications: The high collimation (more precisely, the small angular source size) makes it possible to reach higher geometrical resolution in topographs, even at larger sample-to-detector distances. The continuous wavelength spectrum facilitates white-beam topography. The high beam intensities available at synchrotrons make it possible to investigate small sample volumes, to work at weaker reflections or further off Bragg conditions (weak beam conditions), and to achieve shorter exposure times. Finally, the discrete time structure of synchrotron radiation permits topographists to use stroboscopic methods to efficiently visualize time-dependent, periodically recurrent structures (such as acoustic waves on crystal surfaces).
Neutron topography.
Diffraction topography with neutron radiation has been in use for several decades, mainly at research reactors with high neutron beam intensities. Neutron topography can make use of contrast mechanisms that are partially different from the X-ray case, and thus serve e.g. to visualize magnetic structures. However, due to the comparatively low neutron intensities, neutron topography requires long exposure times. Its use is therefore rather limited in practice.
Topography applied to organic crystals.
Topography is "classically" applied to inorganic crystals, such a metals and semiconductors. However, it is nowadays applied more and more often also to organic crystals, most notably proteins. Topographic investigations can help to understand and optimize crystal growth processes also for proteins. Numerous studies have been initiated in the last 5–10 years, using both white-beam and plane-wave topography.
Although considerable progress has been achieved, topography on protein crystals remains a difficult discipline: Due to large unit cells, small structure factors and high disorder, diffracted intensities are weak. Topographic imaging therefore requires long exposure times, which may lead to radiation damage of the crystals, generating in the first place the defects which are then imaged. In addition, the low structure factors lead to small Darwin widths and thus to broad dislocation images, i.e. rather low spatial resolution.
Nevertheless, in some cases, protein crystals were reported to be perfect enough to achieve images of single dislocations.
Topography on thin layered structures.
Not only volume crystals can be imaged by topography, but also crystalline layers on a foreign substrate. For very thin layers, the scattering volume and thus the diffracted intensities are very low. In these cases, topographic imaging is therefore a rather demanding task, unless incident beams with very high intensities are available.
Experimental techniques III – Special techniques and recent developments.
Reticulography.
A relatively new topography-related technique (first published in 1996) is so-called "reticulography". Based on white-beam topography, the new aspect consists in placing a fine-scaled metallic grid ("reticule") between sample and detector. The metallic grid lines are highly absorbing, producing dark lines in the recorded image. While for a flat, homogeneous sample the image of the grid is rectilinear, just as the grid itself, strongly deformed grid images may occur in the case of a tilted or strained sample. The deformation results from Bragg angle changes (and thus different directions of propagation of the diffracted beams) due to lattice parameter differences (or tilted crystallites) in the sample. The grid serves to split the diffracted beam into an array of microbeams, and to backtrace the propagation of each individual microbeam onto the sample surface. By recording reticulographic images at several sample-to-detector distances, and appropriate data processing, local distributions of misorientation across the sample surface can be derived.
Digital topography.
The use of electronic detectors such as X-ray CCD cameras, replacing traditional X-ray film, facilitates topography in many ways. CCDs achieve online readout in (almost) real-time, freeing experimentalists from the need to develop films in a dark room. Drawbacks with respect to films are the limited dynamic range and, above all, the moderate spatial resolution of commercial CCD cameras, making the development of dedicated CCD cameras necessary for high-resolution imaging. A further, decisive advantage of digital topography is the possibility to record series of images without changing detector position, thanks to online readout. This makes it possible, without complicated image registration procedures, to observe time-dependent phenomena, to perform kinetic studies, to investigate processes of device degradation and radiation damage, and to realize sequential topography (see below).
Time-resolved (stroboscopic) topography; Imaging of surface acoustic waves.
To image time-dependent, periodically fluctuating phenomena, topography can be combined with stroboscopic exposure techniques. In this way, one selected phase of a sinusoidally varying movement is selectively imaged as a "snapshot". First applications were in the field of surface acoustic waves on semiconductor surfaces.
Topo-tomography; 3D dislocation distributions.
By combining topographic image formation with tomographic image reconstruction, distributions of defects can be resolved in three dimensions. Unlike "classical" computed tomography (CT), image contrast is not based on differences in absorption (absorption contrast), but on the usual contrast mechanisms of topography (diffraction contrast). In this way, three-dimensional distributions of dislocations in crystals have been imaged.
Sequential topography / Rocking Curve Imaging.
Plane-wave topography can be made to extract an additional wealth of information from a sample by recording not just one image, but an entire sequence of topographs all along the sample's rocking curve. By following the diffracted intensity in one pixel across the entire sequence of images, local rocking curves from very small areas of sample surface can be reconstructed.
Although the required post-processing and numerical analysis is sometimes moderately demanding, the effort is often compensated by very comprehensive information on the sample's local properties. Quantities that become quantitatively measurable in this way include local scattering power, local lattice tilts (crystallite misorientation), and local lattice quality and perfection. Spatial resolution is, in many cases, essentially given by the detector pixel size.
The technique of sequential topography, in combination with appropriate data analysis methods also called "rocking curve imaging", constitutes a method of "microdiffraction imaging", i.e. a combination of X-ray imaging with X-ray diffractometry.
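A minimal sketch of the per-pixel evaluation (synthetic data and illustrative parameters, not from an actual beamline): from a stack of topographs recorded at successive rocking angles, maps of local scattering power (integrated intensity) and local lattice tilt (rocking-curve peak position) are extracted with simple array operations.

```python
import numpy as np

# Synthetic stack: n_angles topographs of n_y x n_x pixels, recorded while
# rocking the sample; 'angles' holds the rocking angle of each frame (deg).
n_angles, n_y, n_x = 51, 64, 64
angles = np.linspace(-0.05, 0.05, n_angles)
rng = np.random.default_rng(0)
true_tilt = 0.01 * rng.standard_normal((n_y, n_x))      # local peak shifts
stack = np.exp(-((angles[:, None, None] - true_tilt)**2) / (2 * 0.01**2))

# Integrated intensity per pixel -> map of local scattering power.
integrated = stack.sum(axis=0)

# Centre of mass of each pixel's rocking curve -> map of local lattice tilt.
peak_pos = (stack * angles[:, None, None]).sum(axis=0) / integrated

print("mean recovered tilt error:", np.abs(peak_pos - true_tilt).mean())
```

In a real experiment the per-pixel curves would additionally be fitted for width, giving a map of local lattice perfection as described above.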
MAXIM.
The "MAXIM" (MAterials X-ray IMaging) method is another method combining diffraction analysis with spatial resolution. It can be viewed as serial topography with additional angular resolution in the exit beam. In contrast to the Rocking Curve Imaging method, it is more appropriate for more highly disturbed (polycrystalline) materials with lower crystalline perfection. The difference on the instrumental side is that MAXIM uses an array of slits / small channels (a so-called "multi-channel plate" (MCP), the two-dimensional equivalent of a Soller slit system) as an additional X-ray optical element between sample and CCD detector. These channels transmit intensity only in specific, parallel directions, and thus guarantee a one-to-one-relation between detector pixels and points on the sample surface, which would otherwise not be given in the case of materials with high strain and/or a strong mosaicity. The spatial resolution of the method is limited by a combination of detector pixel size and channel plate periodicity, which in the ideal case are identical. The angular resolution is mostly given by the aspect ratio (length over width) of the MCP channels.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " 2 \\theta_B "
},
{
"math_id": 1,
"text": "\n\\Delta \\theta(\\vec r) = \\frac{1}{\\vec h \\cdot \\cos \\theta_B} \\frac{\\partial}{\\partial s_{\\vec h}} \\left[ \\vec h \\cdot \\vec u(\\vec r)\\right]\n"
},
{
"math_id": 2,
"text": " \\vec u(\\vec r)"
},
{
"math_id": 3,
"text": " \\vec s_0 "
},
{
"math_id": 4,
"text": "\\vec s_{h}"
},
{
"math_id": 5,
"text": "\n\\Delta \\theta(\\vec r) = -\\tan \\theta_B \\frac{\\Delta d}{d}(\\vec r) \\pm \\Delta \\varphi(\\vec r)\n"
},
{
"math_id": 6,
"text": " \\mathbf{g} \\cdot \\mathbf{b} "
},
{
"math_id": 7,
"text": " l "
},
{
"math_id": 8,
"text": "\n\\Delta x = S \\cdot \\frac{d}{D} = \\frac{S}{D} \\cdot d\n"
}
] | https://en.wikipedia.org/wiki?curid=8551999 |
855214 | Beach ball | Inflatable ball for beach and water games
A beach ball is an inflatable ball for beach and water games. Their large size and light weight require little effort to propel them.
They became popular in the beach-themed films of the 1960s starring Annette Funicello and Frankie Avalon. These movies include "Beach Party", "Muscle Beach Party", "Beach Blanket Bingo" and "How to Stuff a Wild Bikini".
Design.
Beach balls range from hand-sized to much larger. They generally consist of a set of soft plastic panels, with two circular end panels, one carrying an oral inflation valve, intended to be inflated by lung power or a pump. A common design is vertical solid-colored stripes alternating with white stripes. There are also other designs, including beach balls in a single solid colour, promotional beach balls with advertisements or company slogans, and balls printed as globes or as emojis.
Some manufacturers specify the size of their beach balls (which is often confused with the diameter) as the tip-to-tip length of a deflated ball (approximately half the circumference), or even the length of the panels before they have their ends cut and joined into a beach ball. Thus the actual diameter may be about formula_0 (≈ 0.6366…) of the nominal size.
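For example, a quick check of this relation (a rough illustration; the 36-inch figure is an arbitrary example size):

```python
import math

nominal_size = 36.0                      # inches, deflated tip-to-tip length
diameter = nominal_size * 2 / math.pi    # about 0.6366 of the nominal size
print(f"approximate inflated diameter: {diameter:.1f} in")  # ~22.9 in
```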
Other sizes of beach balls exist, ranging from much smaller to much larger ones.
The world's largest beach ball was made in London, England on May 30, 2017. It was carried by a barge on the River Thames and had the word "Baywatch" written all over it. It was produced by Paramount Pictures to promote the 2017 movie "Baywatch". The record was registered by Guinness and the certificates were given to the members of the film's cast.
Uses.
Beach ball sports include water polo and volleyball. While they are much less expensive than the balls used in professional sports, they are also much less durable, as most of them are made of soft plastic. Giant beach balls may be tossed between crowd members at concerts, festivals, and sporting events. Many graduates use beach balls as a prank during ceremonies, hitting them around the crowd. They are bounced around crowds at cricket, baseball and football games, but are frequently confiscated and popped by security. Some security personnel at these events might inspect the ball's interior after tearing it, most likely searching for illegal items (e.g. narcotics) that might be transported inside the beach ball. Guards may also do this so that the ball cannot enter the field and obstruct or distract players. This happened in August 1999, in a baseball game between the Cleveland Indians and the Los Angeles Angels, where the distraction caused by a beach ball on the field resulted in the Angels' defeat.
Their light weight and stability make beach balls ideal for trained seals to balance on their noses, which has become an iconic image. Beach balls are also a popular prop used in swimsuit photography and to promote or represent beach-themed events or locations.
Some more basic uses of beach balls include serving as a game tool, a toy, or simply a decoration at certain events. Another basic use is for sitting; sizes ranging from 24 to 48 inches are commonly used for this, for example as seats at parties (birthday parties, pool parties, house parties, etc.). However, beach balls are not as reinforced as exercise balls: excessive sitting and bouncing can slowly damage a beach ball and eventually cause it to leak or even pop, so it is recommended to observe the weight limit implied by the ball's size. For example, 24-inch beach balls are suitable for kids and young teens, with an approximate weight limit of 110 lbs, while larger sizes such as 36 or 48 inches are suitable for teens and adults, with an approximate weight limit of 176 lbs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{2}{\\pi}"
}
] | https://en.wikipedia.org/wiki?curid=855214 |
855329 | Turn (angle) | Unit of plane angle where a full circle equals 1
The turn (symbol tr or pla) is a unit of plane angle measurement that is the angular measure subtended by a complete circle at its center. It is equal to 2"π" radians, 360 degrees or 400 gradians. As an angular unit, one turn also corresponds to one cycle (symbol cyc or c) or to one revolution (symbol rev or r). Common related units of frequency are "cycles per second" (cps) and "revolutions per minute" (rpm). The angular unit of the turn is useful in connection with, among other things, electromagnetic coils (e.g., transformers), rotating objects, and the winding number of curves.
Subdivisions of a turn include the half-turn and quarter-turn, spanning a straight angle and a right angle, respectively; metric prefixes can also be used as in, e.g., centiturns (ctr), milliturns (mtr), etc.
Because one turn is formula_0 radians, some have proposed representing 2"π" with a single letter. In 2010, Michael Hartl proposed using the Greek letter formula_1 (tau), equal to the ratio of a circle's circumference to its radius (formula_0) and corresponding to one turn, for greater conceptual simplicity when stating angles in radians. This proposal did not initially gain widespread acceptance in the mathematical community, but the constant has become more widespread, having been added to several major programming languages and calculators.
In the ISQ, an arbitrary "number of turns" (also known as "number of revolutions" or "number of cycles") is formalized as a dimensionless quantity called rotation, defined as the ratio of a given angle and a full turn. It is represented by the symbol "N". <templatestyles src="Crossreference/styles.css" />
Unit symbols.
There are several unit symbols for the turn.
EU and Switzerland.
The German standard DIN 1315 (March 1974) proposed the unit symbol "pla" (from Latin: 'full angle') for turns. Covered in DIN 1301-1 (October 2010), the so-called "Vollwinkel" ('full angle') is not an SI unit. However, it is a legal unit of measurement in the EU and Switzerland.
Calculators.
The scientific calculators HP 39gII and HP Prime support the unit symbol "tr" for turns since 2011 and 2013, respectively. Support for "tr" was also added to newRPL for the HP 50g in 2016, and for the hp 39g+, HP 49g+, HP 39gs, and HP 40gs in 2017. An angular mode TURN was suggested for the WP 43S as well, but the calculator instead implements "MULπ" ("multiples of π") as mode and unit since 2019.
Subdivisions.
A turn can be divided into 100 centiturns or 1000 milliturns, with each milliturn corresponding to an angle of 0.36°, which can also be written as 21′ 36″. A protractor divided into centiturns is normally called a "percentage protractor".
While percentage protractors have existed since 1922, the terms centiturns, milliturns and microturns were introduced much later by the British astronomer Fred Hoyle in 1962. Some measurement devices for artillery and satellite watching carry milliturn scales.
Binary fractions of a turn are also used. Sailors have traditionally divided a turn into 32 compass points, which implicitly have an angular separation of 1/32 turn. The "binary degree", also known as the "binary radian" (or "brad"), is 1/256 of a turn. The binary degree is used in computing so that an angle can be represented to the maximum possible precision in a single byte. Other measures of angle used in computing may be based on dividing one whole turn into 2"n" equal parts for other values of "n".
Proposals for a single letter to represent 2"π".
The number 2"π" (approximately 6.28) is the ratio of a circle's circumference to its radius, and the number of radians in one turn.
The meaning of the symbol formula_2 was not originally fixed to the ratio of the circumference and the diameter. In 1697, David Gregory used "π"/"ρ" (pi over rho) to denote the perimeter of a circle (i.e., the circumference) divided by its radius. However, earlier in 1647, William Oughtred had used "δ"/"π" (delta over pi) for the ratio of the diameter to perimeter. The first use of the symbol "π" on its own with its present meaning (of perimeter divided by diameter) was in 1706 by the Welsh mathematician William Jones.
The first known usage of a single letter to denote the 6.28... constant was in Leonhard Euler's 1727 "Essay Explaining the Properties of Air", where it was denoted by the letter π. Euler would later use the letter π for the 3.14... constant in his 1736 "Mechanica" and 1748 "Introductio in analysin infinitorum," though defined as half the circumference of a circle of radius 1—a unit circle—rather than the ratio of circumference to diameter. Elsewhere in "Introductio in analysin infinitorum", Euler instead used the letter π for one-fourth of the circumference of a unit circle, or 1.57... . Usage of the letter π for the circle constant became widespread, though it varied between 3.14... and 6.28... up until 1761; afterward, π was standardized as being equal to 3.14... .
Several people have independently proposed using 𝜏 = 2π, including:
In 2001, Robert Palais proposed using the number of radians in a turn as the fundamental circle constant instead of π, which amounts to the number of radians in half a turn, in order to make mathematics simpler and more intuitive. His proposal used a "π with three legs" symbol to denote the constant (formula_3).
In 2008, Robert P. Crease proposed the idea of defining a constant as the ratio of circumference to radius, a proposal supported by John Horton Conway. Crease used the Greek letter psi: formula_4.
The same year, Thomas Colignatus proposed the uppercase Greek letter theta, Θ, to represent 2π.
The Greek letter theta derives from the Phoenician and Hebrew letter teth, 𐤈 or ט, and it has been observed that the older version of the symbol, which means wheel, resembles a wheel with four spokes. It has also been proposed to use the wheel symbol, teth, to represent the value 2π, and more recently a connection has been made among other ancient cultures on the existence of a wheel, sun, circle, or disk symbol—i.e. other variations of teth—as representation for 2π.
In 2010, Michael Hartl proposed to use the Greek letter tau to represent the circle constant: "τ" = 2"π". He offered several reasons for the choice of constant, primarily that it allows fractions of a turn to be expressed more directly: for instance, a quarter turn would be represented as "τ"/4 rad instead of "π"/2 rad. As for the choice of notation, he offered two reasons. First, "τ" is the number of radians in one "turn", and both "τ" and "turn" begin with the same sound. Second, "τ" visually resembles "π", whose association with the circle constant is unavoidable. Hartl's "Tau Manifesto" gives many examples of formulas that are asserted to be clearer where "τ" is used instead of "π". For example, Hartl asserts that replacing Euler's identity "e""iπ" = −1 by "e""iτ" = 1 (which Hartl also calls "Euler's identity") is more fundamental and meaningful. He also claims that the formula for circular area in terms of "τ", "A" = "τ""r"2/2, contains a natural factor of 1/2 arising from integration.
Initially, this proposal did not receive significant acceptance by the mathematical and scientific communities. However, the use of "τ" has become more widespread. For example:
The following table shows how various identities appear when "τ" = 2"π" is used instead of π. For a more complete list, see "List of formulae involving π".
Unit conversion.
One turn is equal to 2"π" (≈ 6.283185307179586) radians, 360 degrees, or 400 gradians.
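A minimal sketch of these conversions in Python, which provides the constant math.tau = 2"π" (the helper function names are arbitrary):

```python
import math

def turns_to_radians(turns: float) -> float:
    return turns * math.tau            # math.tau == 2 * math.pi

def turns_to_degrees(turns: float) -> float:
    return turns * 360.0

def turns_to_gradians(turns: float) -> float:
    return turns * 400.0

quarter = 0.25                          # a quarter turn (a right angle)
print(turns_to_radians(quarter))        # 1.5707963... (= pi/2)
print(turns_to_degrees(quarter))        # 90.0
print(turns_to_gradians(quarter))       # 100.0
```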
In the ISQ/SI.
In the International System of Quantities (ISQ), rotation (symbol N) is a physical quantity defined as number of revolutions:
"N" is the number (not necessarily an integer) of revolutions, for example, of a rotating body about a given axis. Its value is given by:
formula_5
where 𝜑 denotes the measure of rotational displacement.
The above definition is part of the ISQ, formalized in the international standard ISO 80000-3 (Space and time), and adopted in the International System of Units (SI).
Rotation count or number of revolutions is a quantity of dimension one, resulting from a ratio of angular displacement.
It can be negative and also greater than 1 in modulus.
The relationship between quantity rotation, "N", and unit turns, tr, can be expressed as:
formula_6
where {𝜑}tr is the numerical value of the angle 𝜑 in units of turns (see Physical quantity#Components).
In the ISQ/SI, rotation is used to derive rotational frequency (the rate of change of rotation with respect to time), denoted by n:
formula_7
The SI unit of rotational frequency is the reciprocal second (s−1). Common related units of frequency are "hertz" (Hz), "cycles per second" (cps), and "revolutions per minute" (rpm).
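For instance (simple unit arithmetic, with an arbitrary example value of 3000 rpm): a shaft turning at 3000 rpm has a rotational frequency of 3000/60 = 50 s−1, corresponding to an angular frequency of 2"π" × 50 ≈ 314.16 rad/s.

```python
import math

rpm = 3000.0
n = rpm / 60.0          # rotational frequency in s^-1 (revolutions per second)
omega = math.tau * n    # corresponding angular frequency in rad/s
print(n, omega)         # 50.0  314.159...
```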
The superseded version ISO 80000-3:2006 defined "revolution" as a special name for the dimensionless unit "one",
which also received other special names, such as the radian.
Despite their dimensional homogeneity, these two specially named dimensionless units are applicable for non-comparable kinds of quantity: rotation and angle, respectively.
"Cycle" is also mentioned in ISO 80000-3, in the definition of "period".
In programming languages.
The following table documents various programming languages that have implemented the circle constant for converting between turns and radians. All of the languages below support the name "Tau" in some casing, but Processing also supports "TWO_PI" and Raku also supports the symbol "τ" for accessing the same value.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2\\pi"
},
{
"math_id": 1,
"text": "\\tau"
},
{
"math_id": 2,
"text": "\\pi"
},
{
"math_id": 3,
"text": "\\pi\\!\\;\\!\\!\\!\\pi = 2\\pi"
},
{
"math_id": 4,
"text": "\\psi = 2 \\pi"
},
{
"math_id": 5,
"text": "N = \\frac{\\varphi}{2 \\pi \\text{ rad}}"
},
{
"math_id": 6,
"text": "N = \\frac \\varphi \\text{tr} = \\{ \\varphi \\}_\\text{tr}"
},
{
"math_id": 7,
"text": "n = \\frac{\\mathrm{d}N}{\\mathrm{d}t}"
}
] | https://en.wikipedia.org/wiki?curid=855329 |
8556260 | Thermal oxidation | Process creating a thin layer of (usually) silicon dioxide
In microfabrication, thermal oxidation is a way to produce a thin layer of oxide (usually silicon dioxide) on the surface of a wafer. The technique forces an oxidizing agent to diffuse into the wafer at high temperature and react with it. The rate of oxide growth is often predicted by the Deal–Grove model. Thermal oxidation may be applied to different materials, but most commonly involves the oxidation of silicon substrates to produce silicon dioxide.
The chemical reaction.
Thermal oxidation of silicon is usually performed at a temperature between 800 and 1200 °C, resulting in a so-called high temperature oxide (HTO) layer. It may use either water vapor (usually UHP steam) or molecular oxygen as the oxidant; it is consequently called either "wet" or "dry" oxidation. The reaction is one of the following:
formula_0
formula_1
The oxidizing ambient may also contain several percent of hydrochloric acid (HCl). The chlorine neutralizes metal ions that may occur in the oxide.
Thermal oxide incorporates silicon consumed from the substrate and oxygen supplied from the ambient. Thus, it grows both down into the wafer and up out of it. For every unit thickness of silicon consumed, 2.17 unit thicknesses of oxide will appear. If a bare silicon surface is oxidized, 46% of the oxide thickness will lie below the original surface, and 54% above it.
Deal-Grove model.
According to the commonly used Deal-Grove model, the time "t" required to grow an oxide of thickness "Xo", at a constant temperature, on a bare silicon surface, is:
formula_2
where the constants A and B relate to properties of the reaction and the oxide layer, respectively. This model has further been adapted to account for self-limiting oxidation processes, as used for the fabrication and morphological design of Si nanowires and other nanostructures.
If a wafer that already contains oxide is placed in an oxidizing ambient, this equation must be modified by adding a corrective term τ, the time that would have been required to grow the pre-existing oxide under current conditions. This term may be found using the equation for "t" above.
Solving the quadratic equation for "Xo" yields:
formula_3
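A minimal numerical sketch of the model (the rate constants A and B below are illustrative placeholders; real values depend strongly on temperature, oxidant, and crystal orientation):

```python
import math

# Illustrative Deal-Grove constants (units: micrometres and hours).
A = 0.165   # um      - linear-regime constant (placeholder value)
B = 0.117   # um^2/h  - parabolic rate constant (placeholder value)

def growth_time(Xo: float) -> float:
    """Time t to grow an oxide of thickness Xo on a bare silicon surface."""
    return Xo**2 / B + Xo / (B / A)

def thickness(t: float, tau: float = 0.0) -> float:
    """Oxide thickness after time t; tau corrects for pre-existing oxide."""
    return (A / 2) * (math.sqrt(1 + 4 * B / A**2 * (t + tau)) - 1)

# Round trip: the two relations are inverses of each other.
t = growth_time(0.3)
print(t, thickness(t))      # thickness(t) recovers 0.3 um
```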
Oxidation technology.
Most thermal oxidation is performed in furnaces, at temperatures between 800 and 1200 °C. A single furnace accepts many wafers at the same time, in a specially designed quartz rack (called a "boat"). Historically, the boat entered the oxidation chamber from the side (this design is called "horizontal"), and held the wafers vertically, beside each other. However, many modern designs hold the wafers horizontally, above and below each other, and load them into the oxidation chamber from below.
Because vertical furnaces stand higher than horizontal furnaces, they may not fit into some microfabrication facilities. They help to prevent dust contamination. Unlike horizontal furnaces, in which falling dust can contaminate any wafer, vertical furnaces use enclosed cabinets with air filtration systems to prevent dust from reaching the wafers.
Vertical furnaces also eliminate an issue that plagued horizontal furnaces: non-uniformity of the grown oxide across the wafer. Horizontal furnaces typically have convection currents inside the tube, which cause the bottom of the tube to be slightly colder than the top. As the wafers lie vertically in the tube, this convection and the accompanying temperature gradient cause the top of each wafer to grow a thicker oxide than the bottom. Vertical furnaces solve this problem by having the wafers sit horizontally, with the gas flow in the furnace directed from top to bottom, significantly damping thermal convection.
Vertical furnaces also allow the use of load locks to purge the wafers with nitrogen before oxidation to limit the growth of native oxide on the Si surface.
Oxide quality.
Wet oxidation is preferred to dry oxidation for growing thick oxides, because of the higher growth rate. However, fast oxidation leaves more dangling bonds at the silicon interface, which produce quantum states for electrons and allow current to leak along the interface. (This is called a "dirty" interface.) Wet oxidation also yields a lower-density oxide, with lower dielectric strength.
The long time required to grow a thick oxide in dry oxidation makes this process impractical. Thick oxides are usually grown with a long wet oxidation bracketed by short dry ones (a "dry-wet-dry" cycle). The beginning and ending dry oxidations produce films of high-quality oxide at the outer and inner surfaces of the oxide layer, respectively.
Mobile metal ions can degrade performance of MOSFETs (sodium is of particular concern). However, chlorine can immobilize sodium by forming sodium chloride. Chlorine is often introduced by adding hydrogen chloride or trichloroethylene to the oxidizing medium. Its presence also increases the rate of oxidation.
Other notes.
Thermal oxidation can be performed on selected areas of a wafer, and blocked on others. This process, first developed at Philips, is commonly referred to as the local oxidation of silicon (LOCOS) process. Areas which are not to be oxidized are covered with a film of silicon nitride, which blocks diffusion of oxygen and water vapor due to its oxidation at a much slower rate. The nitride is removed after oxidation is complete. This process cannot produce sharp features, because lateral (parallel to the surface) diffusion of oxidant molecules under the nitride mask causes the oxide to protrude into the masked area.
Because impurities dissolve differently in silicon and oxide, a growing oxide will selectively take up or reject dopants. This redistribution is governed by the segregation coefficient, which determines how strongly the oxide absorbs or rejects the dopant, and the diffusivity.
The orientation of the silicon crystal affects oxidation. A <100> wafer (see Miller indices) oxidizes more slowly than a <111> wafer, but produces an electrically cleaner oxide interface.
Thermal oxidation of any variety produces a higher-quality oxide, with a much cleaner interface, than chemical vapor deposition of oxide, which results in a low temperature oxide layer (reaction of TEOS at about 600 °C). However, the high temperatures required to produce a high temperature oxide (HTO) restrict its usability. For instance, in MOSFET processes, thermal oxidation is never performed after the source and drain terminals have been doped, because it would disturb the placement of the dopants.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rm Si + 2H_2O \\rightarrow SiO_2 + 2H_{2\\ (g)}"
},
{
"math_id": 1,
"text": "\\rm Si + O_2 \\rightarrow SiO_2 \\,"
},
{
"math_id": 2,
"text": "\\tau = \\frac{X_o^2}{B} + \\frac{X_o}{(\\frac{B}{A})}"
},
{
"math_id": 3,
"text": "X_o(t) = A/2 \\cdot \\left[ \\sqrt{1+\\frac{4B}{A^2}(t+\\tau)} - 1 \\right]"
}
] | https://en.wikipedia.org/wiki?curid=8556260 |
8556497 | Contraposition | Mathematical logic concept
In logic and mathematics, contraposition, or "transposition", refers to the inference of going from a conditional statement to its logically equivalent contrapositive, and an associated proof method known as proof by contrapositive. The contrapositive of a statement has its antecedent and consequent negated and swapped.
For a conditional statement formula_0, the contrapositive is formula_1.
If "P", then "Q". — If not "Q", then not "P". "If it is raining, then I wear my coat." — "If I don't wear my coat, then it isn't raining."
The law of contraposition says that a conditional statement is true if, and only if, its contrapositive is true.
The contrapositive (formula_1) can be compared with three other statements: the "inverse" (formula_2), the "converse" (formula_3), and the "negation" (formula_4). Of these three, only the contrapositive is logically equivalent to the original conditional statement.
Note that if formula_0 is true and one is given that "formula_5" is false (i.e., formula_6), then it can logically be concluded that "formula_7" must also be false (i.e., formula_8). This is often called the "law of contrapositive", or the "modus tollens" rule of inference.
Intuitive explanation.
In the Euler diagram shown, if something is in A, it must be in B as well. So we can interpret "all of A is in B" as:
formula_9
It is also clear that anything that is "not" within B (the blue region) "cannot" be within A, either. This statement, which can be expressed as:
formula_10
is the contrapositive of the above statement. Therefore, one can say that
formula_11
In practice, this equivalence can be used to make proving a statement easier. For example, if one wishes to prove that every girl in the United States (A) has brown hair (B), one can either try to directly prove formula_9 by checking that all girls in the United States do indeed have brown hair, or try to prove formula_10 by checking that all girls without brown hair are indeed all outside the US. In particular, if one were to find at least one girl without brown hair within the US, then one would have disproved formula_10, and equivalently formula_9.
In general, for any statement where "A" implies "B", "not B" always implies "not A". As a result, proving or disproving either one of these statements automatically proves or disproves the other, as they are logically equivalent to each other.
Formal definition.
A proposition "Q" is implicated by a proposition "P" when the following relationship holds:
formula_12
This states that, "if formula_7, then formula_5", or, "if "Socrates is a man", then "Socrates is human"." In a conditional such as this, formula_7 is the antecedent, and formula_5 is the consequent. One statement is the contrapositive of the other only when its antecedent is the negated consequent of the other, and vice versa. Thus a contrapositive generally takes the form of:
formula_13
That is, "If not-formula_5, then not-formula_7", or, more clearly, "If formula_5 is not the case, then "P" is not the case." Using our example, this is rendered as "If "Socrates is not human", then "Socrates is not a man"." This statement is said to be "contraposed" to the original and is logically equivalent to it. Due to their logical equivalence, stating one effectively states the other; when one is true, the other is also true, and when one is false, the other is also false.
Strictly speaking, a contraposition can only exist in two simple conditionals. However, a contraposition may also exist in two complex, universal conditionals, if they are similar. Thus, formula_14, or "All formula_7s are formula_5s," is contraposed to formula_15, or "All non-formula_5s are non-formula_7s."
Sequent notation.
The "transposition" rule may be expressed as a sequent:
formula_16
where formula_17 is a metalogical symbol meaning that formula_18 is a syntactic consequence of formula_12 in some logical system; or as a rule of inference:
formula_19
where the rule is that wherever an instance of "formula_20" appears on a line of a proof, it can be replaced with "formula_21"; or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in "Principia Mathematica" as
formula_22
where formula_7 and formula_5 are propositions expressed in some formal system.
Proofs.
Simple proof by definition of a conditional.
In first-order logic, the conditional is defined as:
formula_23
which can be made equivalent to its contrapositive, as follows:
formula_24
Simple proof by contradiction.
Let:
formula_25
It is given that, if A is true, then B is true, and it is also given that B is not true. We can then show that A must not be true by contradiction. For if A were true, then B would have to also be true (by Modus Ponens). However, it is given that B is not true, so we have a contradiction. Therefore, A is not true (assuming that we are dealing with bivalent statements that are either true or false):
formula_26
We can apply the same process the other way round, starting with the assumptions that:
formula_27
Here, we also know that B is either true or not true. If B is not true, then A is also not true. However, it is given that A is true, so the assumption that B is not true leads to a contradiction, which means that it is not the case that B is not true. Therefore, B must be true:
formula_28
Combining the two proved statements together, we obtain the sought-after logical equivalence between a conditional and its contrapositive:
formula_29
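The equivalence can also be checked mechanically by enumerating all truth assignments; this brute-force verification is illustrative only and does not replace the proofs above:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b          # material conditional

# Check that (A -> B) and (not B -> not A) agree on every assignment.
for A, B in product([True, False], repeat=2):
    assert implies(A, B) == implies(not B, not A)

print("A -> B is equivalent to not B -> not A on all assignments")
```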
More rigorous proof of the equivalence of contrapositives.
Logical equivalence between two propositions means that they are true together or false together. To prove that contrapositives are logically equivalent, we need to understand when material implication is true or false.
formula_20
This is only false when formula_7 is true and formula_5 is false. Therefore, we can reduce this proposition to the statement "False when formula_7 and not-formula_5" (i.e. "True when it is not the case that formula_7 and not-formula_5"):
formula_30
The elements of a conjunction can be reversed with no effect (by commutativity):
formula_31
We define formula_32 as equal to "formula_6", and formula_33 as equal to formula_8 (from this, formula_34 is equal to formula_35, which is equal to just formula_7):
formula_36
This reads "It is not the case that ("R" is true and "S" is false)", which is the definition of a material conditional. We can then make this substitution:
formula_37
By reverting "R" and "S" back into formula_7 and formula_5, we then obtain the desired contrapositive:
formula_21
In classical propositional calculus system.
In Hilbert-style deductive systems for propositional logic, only one side of the transposition is taken as an axiom, and the other is a theorem. We describe a proof of this theorem in the system of three axioms proposed by Jan Łukasiewicz:
A1. formula_38
A2. formula_39
A3. formula_40
(A3) already gives one of the directions of the transposition. The other side, formula_41, is proven below, using the following lemmas proven here:
(DN1) formula_42 - Double negation (one direction)
(DN2) formula_43 - Double negation (another direction)
(HS1) formula_44 - one form of Hypothetical syllogism
(HS2) formula_45 - another form of Hypothetical syllogism.
We also use the method of the hypothetical syllogism metatheorem as a shorthand for several proof steps.
The proof is as follows:
(1) formula_46 (instance of (DN2))
(2) formula_47 (instance of (HS1))
(3) formula_48 (from (1) and (2) by modus ponens)
(4) formula_49 (instance of (DN1))
(5) formula_50 (instance of (HS2))
(6) formula_51 (from (4) and (5) by modus ponens)
(7) formula_52 (from (3) and (6) by the hypothetical syllogism metatheorem)
(8) formula_53 (instance of (A3))
(9) formula_54 (from (7) and (8) by the hypothetical syllogism metatheorem)
Comparisons.
For the conditional "if "P", then "Q"": the "converse" is "if "Q", then "P""; the "inverse" is "if not "P", then not "Q""; the "contrapositive" is "if not "Q", then not "P""; and the "negation" is ""P" and not "Q"".
Examples.
Take the statement "All red objects have color." This can be equivalently expressed as "If an object is red, then it has color."
In other words, the contrapositive is logically equivalent to a given conditional statement, though not sufficient for a biconditional.
Similarly, take the statement "All quadrilaterals have four sides," or equivalently expressed "If a polygon is a quadrilateral, then it has four sides."
Since the statement and the converse are both true, it is called a biconditional, and can be expressed as "A polygon is a quadrilateral "if, and only if," it has four sides." (The phrase "if and only if" is sometimes abbreviated as "iff".) That is, having four sides is both necessary to be a quadrilateral, and alone sufficient to deem it a quadrilateral.
Traditional logic.
In traditional logic, contraposition is a form of immediate inference in which a proposition is inferred from another and where the former has for its subject the contradictory of the original logical proposition's predicate. In some cases, contraposition involves a change of the former's quality (i.e. affirmation or negation). For its symbolic expression in modern logic, see the rule of transposition. Contraposition also has philosophical application distinct from the other traditional inference processes of conversion and obversion where equivocation varies with different proposition types.
In traditional logic, the process of contraposition is a schema composed of several steps of inference involving categorical propositions and classes. A categorical proposition contains a subject and predicate where the existential impact of the copula implies the proposition as referring to a class "with at least one member", in contrast to the conditional form of hypothetical or materially implicative propositions, which are compounds of other propositions, e.g. "If P, then Q" (P and Q are both propositions), and their existential impact is dependent upon further propositions where quantification existence is instantiated (existential instantiation), not on the hypothetical or materially implicative propositions themselves.
"Full contraposition" is the simultaneous interchange and negation of the subject and predicate, and is valid only for the type "A" and type "O" propositions of Aristotelian logic, while it is conditionally valid for "E" type propositions if a change in quantity from universal to particular is made ("partial contraposition"). Since the valid obverse is obtained for all the four types (A, E, I, and O types) of traditional propositions, yielding propositions with the contradictory of the original predicate, (full) contraposition is obtained by converting the obvert of the original proposition. For "E" statements, partial contraposition can be obtained by additionally making a change in quantity. Because "nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition", it can be either the original subject, or its contradictory, resulting in two contrapositives which are the obverts of one another in the "A", "O", and "E" type propositions.
By example: from an original, 'A' type categorical proposition,
"All residents are voters",
which presupposes that all classes have members and the existential import presumed in the form of categorical propositions, one can derive first by obversion the 'E' type proposition,
"No residents are non-voters".
The contrapositive of the original proposition is then derived by conversion to another 'E' type proposition,
"No non-voters are residents".
The process is completed by further obversion resulting in the 'A' type proposition that is the obverted contrapositive of the original proposition,
"All non-voters are non-residents".
The schema of contraposition: from the original A type proposition, "All "S" is "P"", obversion yields the E type proposition "No "S" is non-"P""; conversion of that obverse yields "No non-"P" is "S""; and a further obversion yields the obverted contrapositive, "All non-"P" is non-"S"".
Notice that contraposition is a valid form of immediate inference only when applied to "A" and "O" propositions. It is not valid for "I" propositions, where the obverse is an "O" proposition which has no valid converse. The contraposition of the "E" proposition is valid only with limitations ("per accidens"). This is because the obverse of the "E" proposition is an "A" proposition which cannot be validly converted except by limitation, that is, contraposition plus a change in the quantity of the proposition from universal to particular.
Also, notice that contraposition is a method of inference which may require the use of other rules of inference. The contrapositive is the product of the method of contraposition, with different outcomes depending upon whether the contraposition is full, or partial. The successive applications of conversion and obversion within the process of contraposition may be given by a variety of names.
The process of the logical equivalence of a statement and its contrapositive as defined in traditional class logic is "not" one of the axioms of propositional logic. In traditional logic there is more than one contrapositive inferred from each original statement. In regard to the "A" proposition this is circumvented in the symbolism of modern logic by the rule of transposition, or the law of contraposition. In its technical usage within the field of philosophic logic, the term "contraposition" may be limited by logicians (e.g. Irving Copi, Susan Stebbing) to traditional logic and categorical propositions. In this sense the use of the term "contraposition" is usually referred to by "transposition" when applied to hypothetical propositions or material implications.
Form of transposition.
In the inferred proposition, the consequent is the contradictory of the antecedent in the original proposition, and the antecedent of the inferred proposition is the contradictory of the consequent of the original proposition. The symbol for material implication signifies the proposition as a hypothetical, or the "if–then" form, e.g. "if "P", then "Q"".
The biconditional statement of the rule of transposition (↔) refers to the relation between hypothetical (→) "propositions", with each proposition including an antecedent and consequential term. As a matter of logical inference, to transpose or convert the terms of one proposition requires the conversion of the terms of the propositions on both sides of the biconditional relationship, meaning that transposing or converting ("P" → "Q") to ("Q" → "P") requires that the other proposition, (¬"Q" → ¬"P"), to be transposed or converted to (¬"P" → ¬"Q"). Otherwise, converting the terms of one proposition and not the other renders the rule invalid, violating the sufficient condition and necessary condition of the terms of the propositions, where the violation is that the changed proposition commits the fallacy of denying the antecedent or affirming the consequent by means of illicit conversion.
The truth of the rule of transposition is dependent upon the relations of sufficient condition and necessary condition in logic.
Sufficient condition.
In the proposition "If "P", then "Q"", the occurrence of "P" is sufficient reason for the occurrence of "Q". "P", as an individual or a class, materially implicates "Q", but the relation of "Q" to "P" is such that the converse proposition "If "Q", then "P"" does not necessarily have sufficient condition. The rule of inference for sufficient condition is "modus ponens", which is an argument for conditional implication:
Premise (1): If "P", then "Q"
Premise (2): "P"
Conclusion: Therefore, "Q"
Necessary condition.
Since the converse of premise (1) is not valid, all that can be stated of the relationship of "P" and "Q" is that in the absence of "Q", "P" does not occur, meaning that "Q" is the necessary condition for "P". The rule of inference for necessary condition is "modus tollens":
Premise (1): If "P", then "Q"
Premise (2): Not "Q"
Conclusion: Therefore, not "P"
Necessity and sufficiency example.
An example traditionally used by logicians contrasting sufficient and necessary conditions is the statement "If there is fire, then oxygen is present". An oxygenated environment is necessary for fire or combustion, but simply because there is an oxygenated environment does not necessarily mean that fire or combustion is occurring. While one can infer that fire stipulates the presence of oxygen, from the presence of oxygen the converse "If there is oxygen present, then fire is present" cannot be inferred. All that can be inferred from the original proposition is that "If oxygen is not present, then there cannot be fire".
Relationship of propositions.
The symbol for the biconditional ("↔") signifies the relationship between the propositions is both necessary and sufficient, and is verbalized as "if and only if", or, according to the example "If "P", then "Q" 'if and only if' if not "Q", then not "P"".
Necessary and sufficient conditions can be explained by analogy in terms of the concepts and the rules of immediate inference of traditional logic. In the categorical proposition "All "S" is "P"", the subject term "S" is said to be distributed, that is, all members of its class are exhausted in its expression. Conversely, the predicate term "P" cannot be said to be distributed, or exhausted in its expression because it is indeterminate whether every instance of a member of "P" as a class is also a member of "S" as a class. All that can be validly inferred is that "Some "P" are "S"". Thus, the type "A" proposition "All "P" is "S"" cannot be inferred by conversion from the original type "A" proposition "All "S" is "P"". All that can be inferred is the type "A" proposition "All non-"P" is non-"S"" (note that ("P" → "Q") and (¬"Q" → ¬"P") are both type "A" propositions). Grammatically, one cannot infer "all mortals are men" from "All men are mortal". A type "A" proposition can only be immediately inferred by conversion when both the subject and predicate are distributed, as in the inference "All bachelors are unmarried men" from "All unmarried men are bachelors".
Distinguished from transposition.
While most authors use the terms for the same thing, some authors distinguish transposition from contraposition. In traditional logic the reasoning process of transposition as a rule of inference is applied to categorical propositions through contraposition and obversion, a series of immediate inferences where the rule of obversion is first applied to the original categorical proposition "All "S" is "P""; yielding the obverse "No "S" is non-"P"". In the obversion of the original proposition to a type "E" proposition, both terms become distributed. The obverse is then converted, resulting in "No non-"P" is "S"", maintaining distribution of both terms. The "No non-"P" is "S"" is again obverted, resulting in the [contrapositive] "All non-"P" is non-"S"". Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory, and the predicate term of the resulting type "A" proposition is again undistributed. This results in two contrapositives, one where the predicate term is distributed, and another where the predicate term is undistributed.
Contraposition is a type of immediate inference in which from a given categorical proposition another categorical proposition is inferred which has as its subject the contradictory of the original predicate. Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory. This is in contradistinction to the form of the propositions of transposition, which may be material implication, or a hypothetical statement. The difference is that in its application to categorical propositions the result of contraposition is two contrapositives, each being the obvert of the other, i.e. "No non-"P" is "S"" and "All non-"P" is non-"S"". The distinction between the two contrapositives is absorbed and eliminated in the principle of transposition, which presupposes the "mediate inferences" of contraposition and is also referred to as the "law of contraposition".
Proof by contrapositive.
Because the contrapositive of a statement always has the same truth value (truth or falsity) as the statement itself, it can be a powerful tool for proving mathematical theorems (especially if the truth of the contrapositive is easier to establish than the truth of the statement itself). A proof by contrapositive is a direct proof of the contrapositive of a statement. However, indirect methods such as proof by contradiction can also be used with contraposition, as, for example, in the proof of the irrationality of the square root of 2. By the definition of a rational number, the statement can be made that "If formula_55 is rational, then it can be expressed as an irreducible fraction". This statement is true because it is a restatement of a definition. The contrapositive of this statement is "If formula_55 cannot be expressed as an irreducible fraction, then it is not rational". This contrapositive, like the original statement, is also true. Therefore, if it can be proven that formula_55 cannot be expressed as an irreducible fraction, then it must be the case that formula_55 is not a rational number. The latter can be proved by contradiction.
The previous example employed the contrapositive of a definition to prove a theorem. One can also prove a theorem by proving the contrapositive of the theorem's statement. To prove that "if a positive integer "N" is a non-square number, its square root is irrational", we can equivalently prove its contrapositive, that "if a positive integer "N" has a square root that is rational, then "N" is a square number". This can be shown by setting √"N" equal to the rational expression "a"/"b", with "a" and "b" being positive integers with no common prime factor, and squaring to obtain "N" = "a"2/"b"2. Since "N" is a positive integer and "a" and "b" have no common factor, "b" must equal 1, so that "N" = "a"2, a square number.
In mathematics, proof by contrapositive, or proof by contraposition, is a rule of inference used in proofs, where one infers a conditional statement from its contrapositive. In other words, the conclusion "if "A", then "B"" is inferred by constructing a proof of the claim "if not "B", then not "A"" instead. More often than not, this approach is preferred if the contrapositive is easier to prove than the original conditional statement itself.
Logically, the validity of proof by contrapositive can be demonstrated by the use of the following truth table, where it is shown that "p" → "q" and formula_56"q" → formula_56"p" share the same truth values in all scenarios:

"p" | "q" | "p" → "q" | ¬"q" → ¬"p"
T | T | T | T
T | F | F | F
F | T | T | T
F | F | T | T
Difference with proof by contradiction.
Proof by contradiction: Assume (for contradiction) that formula_57 is true. Use this assumption to prove a contradiction. It follows that formula_57 is false, so formula_58 is true.
Proof by contrapositive: To prove formula_9, prove its contrapositive statement, which is formula_10.
Example.
Let formula_59 be an integer.
To prove: "If "formula_60 "is even, then "formula_59" is even.
Although a direct proof can be given, we choose to prove this statement by contraposition. The contrapositive of the above statement is:
"If "formula_59" is not even, then "formula_60 "is not even."
This latter statement can be proven as follows: suppose that "x" is not even, then "x" is odd. The product of two odd numbers is odd, hence formula_61 is odd. Thus formula_60 is not even.
Having proved the contrapositive, we can then infer that the original statement is true.
In nonclassical logics.
Intuitionistic logic.
In intuitionistic logic, the statement formula_20 cannot be proven to be equivalent to formula_62. We can prove that formula_20 implies formula_62 (see below), but the reverse implication, from formula_62 to formula_20, requires the law of the excluded middle or an equivalent axiom.
Assume formula_20 (initial assumption)
Assume formula_63
From formula_20 and formula_63, conclude formula_64
Discharge assumption; conclude formula_65
Turning formula_66 into formula_67, conclude formula_62
Discharge assumption; conclude formula_68.
Subjective logic.
"Contraposition" represents an instance of the subjective Bayes' theorem in subjective logic expressed as:
formula_69
where formula_70 denotes a pair of binomial conditional opinions given by source formula_58. The parameter formula_71 denotes the base rate (aka. the prior probability) of formula_7. The pair of derivative inverted conditional opinions is denoted formula_72. The conditional opinion formula_73 generalizes the logical statement formula_20, i.e. in addition to assigning TRUE or FALSE the source formula_58 can assign any subjective opinion to the statement. The case where formula_74 is an absolute TRUE opinion is equivalent to source formula_58 saying that formula_75 is TRUE, and the case where formula_74 is an absolute FALSE opinion is equivalent to source formula_58 saying that formula_75 is FALSE. In the case when the conditional opinion formula_73 is absolute TRUE the subjective Bayes' theorem operator formula_76 of subjective logic produces an absolute FALSE derivative conditional opinion formula_77 and thereby an absolute TRUE derivative conditional opinion formula_78 which is equivalent to formula_62 being TRUE. Hence, the subjective Bayes' theorem represents a generalization of both "contraposition" and Bayes' theorem.
In probability theory.
"Contraposition" represents an instance of Bayes' theorem which in a specific form can be expressed as:
formula_79
In the equation above the conditional probability formula_80 generalizes the logical statement formula_81, i.e. in addition to assigning TRUE or FALSE we can also assign any probability to the statement. The term formula_82 denotes the base rate (aka. the prior probability) of formula_7. Assume that formula_83 is equivalent to formula_84 being TRUE, and that formula_85 is equivalent to formula_81 being FALSE. It is then easy to see that formula_86 when formula_87 i.e. when formula_20 is TRUE. This is because formula_88 so that the fraction on the right-hand side of the equation above is equal to 1, and hence formula_89 which is equivalent to formula_62 being TRUE. Hence, Bayes' theorem represents a generalization of "contraposition".
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "P \\rightarrow Q"
},
{
"math_id": 1,
"text": " \\neg Q \\rightarrow \\neg P "
},
{
"math_id": 2,
"text": "\\neg P \\rightarrow \\neg Q"
},
{
"math_id": 3,
"text": "Q \\rightarrow P"
},
{
"math_id": 4,
"text": "\\neg (P \\rightarrow Q)"
},
{
"math_id": 5,
"text": "Q"
},
{
"math_id": 6,
"text": "\\neg Q"
},
{
"math_id": 7,
"text": "P"
},
{
"math_id": 8,
"text": "\\neg P"
},
{
"math_id": 9,
"text": "A \\to B"
},
{
"math_id": 10,
"text": "\\neg B \\to \\neg A"
},
{
"math_id": 11,
"text": "(A \\to B) \\leftrightarrow (\\neg B \\to \\neg A)."
},
{
"math_id": 12,
"text": "(P \\to Q)"
},
{
"math_id": 13,
"text": "(\\neg Q \\to \\neg P)."
},
{
"math_id": 14,
"text": "\\forall{x}(P{x} \\to Q{x})"
},
{
"math_id": 15,
"text": "\\forall{x}(\\neg Q{x} \\to \\neg P{x})"
},
{
"math_id": 16,
"text": "(P \\to Q) \\vdash (\\neg Q \\to \\neg P),"
},
{
"math_id": 17,
"text": "\\vdash"
},
{
"math_id": 18,
"text": "(\\neg Q \\to \\neg P)"
},
{
"math_id": 19,
"text": "\\frac{P \\to Q}{\\therefore \\neg Q \\to \\neg P},"
},
{
"math_id": 20,
"text": "P \\to Q"
},
{
"math_id": 21,
"text": "\\neg Q \\to \\neg P"
},
{
"math_id": 22,
"text": "(P \\to Q) \\to (\\neg Q \\to \\neg P),"
},
{
"math_id": 23,
"text": "A \\to B \\, \\leftrightarrow \\, \\neg A \\lor B"
},
{
"math_id": 24,
"text": "\n\\begin{align}\n\\neg A \\lor B \\,& \\, \\leftrightarrow B \\lor \\neg A \\\\ \\, & \\, \\leftrightarrow \\neg B \\to \\neg A\n\\end{align}\n"
},
{
"math_id": 25,
"text": "(A \\to B)\\land \\neg B"
},
{
"math_id": 26,
"text": "(A \\to B) \\to (\\neg B \\to \\neg A)"
},
{
"math_id": 27,
"text": "(\\neg B \\to \\neg A)\\land A"
},
{
"math_id": 28,
"text": "(\\neg B \\to \\neg A) \\to (A \\to B)"
},
{
"math_id": 29,
"text": "(A \\to B) \\equiv (\\neg B \\to \\neg A)"
},
{
"math_id": 30,
"text": "\\neg(P \\land \\neg Q)"
},
{
"math_id": 31,
"text": "\\neg(\\neg Q \\land P)"
},
{
"math_id": 32,
"text": "R"
},
{
"math_id": 33,
"text": "S"
},
{
"math_id": 34,
"text": "\\neg S"
},
{
"math_id": 35,
"text": "\\neg\\neg P"
},
{
"math_id": 36,
"text": "\\neg(R \\land \\neg S)"
},
{
"math_id": 37,
"text": "R \\to S"
},
{
"math_id": 38,
"text": "\\phi \\to \\left( \\psi \\to \\phi \\right) "
},
{
"math_id": 39,
"text": "\\left( \\phi \\to \\left( \\psi \\rightarrow \\xi \\right) \\right) \\to \\left( \\left( \\phi \\to \\psi \\right) \\to \\left( \\phi \\to \\xi \\right) \\right)"
},
{
"math_id": 40,
"text": "\\left ( \\lnot \\phi \\to \\lnot \\psi \\right) \\to \\left( \\psi \\to \\phi \\right) "
},
{
"math_id": 41,
"text": "( \\psi \\to \\phi ) \\to ( \\neg \\phi \\to \\neg \\psi)"
},
{
"math_id": 42,
"text": " \\neg \\neg p \\to p"
},
{
"math_id": 43,
"text": " p \\to \\neg \\neg p"
},
{
"math_id": 44,
"text": "(q \\to r) \\to ((p \\to q) \\to (p \\to r))"
},
{
"math_id": 45,
"text": "(p \\to q) \\to ((q \\to r) \\to (p \\to r))"
},
{
"math_id": 46,
"text": " q \\to \\neg\\neg q "
},
{
"math_id": 47,
"text": " (q \\to \\neg\\neg q) \\to ((p \\to q) \\to (p \\to \\neg\\neg q)) "
},
{
"math_id": 48,
"text": " (p \\to q) \\to (p \\to \\neg\\neg q) "
},
{
"math_id": 49,
"text": " \\neg\\neg p \\to p "
},
{
"math_id": 50,
"text": " (\\neg\\neg p \\to p) \\to ((p \\to \\neg\\neg q) \\to (\\neg\\neg p \\to \\neg\\neg q)) "
},
{
"math_id": 51,
"text": " (p \\to \\neg\\neg q) \\to (\\neg\\neg p \\to \\neg\\neg q) "
},
{
"math_id": 52,
"text": " (p \\to q) \\to (\\neg\\neg p \\to \\neg\\neg q) "
},
{
"math_id": 53,
"text": " (\\neg\\neg p \\to \\neg\\neg q) \\to (\\neg q \\to \\neg p) "
},
{
"math_id": 54,
"text": " (p \\to q) \\to (\\neg q \\to \\neg p) "
},
{
"math_id": 55,
"text": "\\sqrt{2}"
},
{
"math_id": 56,
"text": "\\lnot"
},
{
"math_id": 57,
"text": "\\neg A"
},
{
"math_id": 58,
"text": "A"
},
{
"math_id": 59,
"text": "x"
},
{
"math_id": 60,
"text": "x^2"
},
{
"math_id": 61,
"text": "x^2=x\\cdot x"
},
{
"math_id": 62,
"text": "\\lnot Q \\to \\lnot P"
},
{
"math_id": 63,
"text": "Q \\to \\bot"
},
{
"math_id": 64,
"text": "P \\to \\bot"
},
{
"math_id": 65,
"text": "(Q \\to \\bot) \\to (P \\to \\bot)"
},
{
"math_id": 66,
"text": "(A \\to \\bot)"
},
{
"math_id": 67,
"text": "\\lnot A"
},
{
"math_id": 68,
"text": "(P \\to Q) \\to (\\lnot Q \\to \\lnot P)"
},
{
"math_id": 69,
"text": "(\\omega^{A}_{P\\tilde{|}Q},\\omega^{A}_{P\\tilde{|}\\lnot Q}) = (\\omega^{A}_{Q|P},\\omega^{A}_{Q|\\lnot P})\\,\\widetilde{\\phi\\,}\\, a_{P}\\,,"
},
{
"math_id": 70,
"text": "(\\omega^{A}_{Q|P},\\omega^{A}_{Q|\\lnot P})"
},
{
"math_id": 71,
"text": "a_{P}"
},
{
"math_id": 72,
"text": "(\\omega^{A}_{P\\tilde{|}Q},\\omega^{A}_{P\\tilde{|}\\lnot Q})"
},
{
"math_id": 73,
"text": "\\omega^{A}_{Q|P}"
},
{
"math_id": 74,
"text": "\\omega^{A}_{Q\\mid P}"
},
{
"math_id": 75,
"text": "P\\to Q"
},
{
"math_id": 76,
"text": "\\widetilde{\\phi\\,}"
},
{
"math_id": 77,
"text": "\\omega^{A}_{P\\widetilde{|}\\lnot Q}"
},
{
"math_id": 78,
"text": "\\omega^{A}_{\\lnot P\\widetilde{|}\\lnot Q}"
},
{
"math_id": 79,
"text": "\\Pr(\\lnot P\\mid \\lnot Q) = \\frac{\\Pr(\\lnot Q \\mid \\lnot P)\\,a(\\lnot P)}{\\Pr(\\lnot Q\\mid \\lnot P)\\,a(\\lnot P)+\\Pr(\\lnot Q\\mid P)\\,a(P)}."
},
{
"math_id": 80,
"text": "\\Pr(\\lnot Q\\mid P)"
},
{
"math_id": 81,
"text": "P \\to \\lnot Q"
},
{
"math_id": 82,
"text": "a(P)"
},
{
"math_id": 83,
"text": "\\Pr(\\lnot Q \\mid P) = 1"
},
{
"math_id": 84,
"text": "P\\to \\lnot Q"
},
{
"math_id": 85,
"text": "\\Pr(\\lnot Q \\mid P) = 0"
},
{
"math_id": 86,
"text": "\\Pr(\\lnot P \\mid \\lnot Q) = 1"
},
{
"math_id": 87,
"text": "\\Pr(Q\\mid P) = 1"
},
{
"math_id": 88,
"text": "\\Pr(\\lnot Q\\mid P) = 1 - \\Pr(Q\\mid P) = 0"
},
{
"math_id": 89,
"text": "\\Pr(\\lnot P\\mid \\lnot Q) = 1"
}
] | https://en.wikipedia.org/wiki?curid=8556497 |
8557344 | Drude particle | Drude particles are model oscillators used to simulate the effects of electronic polarizability in the context of a classical molecular mechanics force field. They are inspired by the Drude model of mobile electrons and are used in the computational study of proteins, nucleic acids, and other biomolecules.
Classical Drude oscillator.
Most force fields in current practice represent individual atoms as point particles interacting according to the laws of Newtonian mechanics. To each atom, a single electric charge is assigned that doesn't change during the course of the simulation. However, such models cannot have induced dipoles or other electronic effects due to a changing local environment.
Classical Drude particles are massless virtual sites carrying a partial electric charge, attached to individual atoms via a harmonic spring. The spring constant and the relative partial charges on the atom and its associated Drude particle determine the response to the local electrostatic field, serving as a proxy for the changing distribution of the electronic charge of the atom or molecule. However, this response is limited to a changing dipole moment, which is not sufficient to model interactions in environments with large field gradients, since such fields couple to higher-order moments.
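In a uniform field, the induced dipole follows from balancing the spring force against the electrostatic force, which gives an isotropic polarizability of q_D²/k (a standard textbook result; the numerical values below are illustrative, not a fitted force-field parameterization):

```python
import numpy as np

k = 1000.0        # spring constant (kcal/mol/A^2), illustrative value
q_d = -1.0        # Drude particle partial charge (e), illustrative value
E = np.array([0.0, 0.0, 0.01])   # local electric field (arbitrary units)

# At equilibrium the spring force k*d balances the electric force q_d*E.
d = q_d * E / k                  # Drude displacement from its parent atom
mu_induced = q_d * d             # induced dipole moment = (q_d^2/k) * E
alpha = q_d**2 / k               # resulting isotropic polarizability

print(d, mu_induced, alpha)
```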
Efficiency of simulation.
The major computational cost of simulating classical Drude oscillators is the calculation of the local electrostatic field and the repositioning of the Drude particle at each step. Traditionally, this repositioning is done self-consistently. This cost can be reduced by assigning a small mass to each Drude particle, applying a Lagrangian transformation and evolving the simulation in the generalised coordinates. This method of simulation has been used to create water models incorporating classical Drude oscillators.
Quantum Drude oscillator.
Since the response of a classical Drude oscillator is limited, it is not enough to model interactions in heterogeneous media with large field gradients, where higher order electronic responses have significant contributions to the interaction energy. A quantum Drude oscillator (QDO) is a natural extension to the classical Drude oscillator. Instead of a classical point particle serving as a proxy for the charge distribution, a QDO uses a quantum harmonic oscillator, in the form of a pseudoelectron connected to an oppositely charged pseudonucleus by a harmonic spring.
A QDO has three free parameters: the spring's frequency formula_0, the pseudoelectron's charge formula_1 and the system's reduced mass formula_2. The ground state of a QDO is a Gaussian of width formula_3. Adding an external field perturbs the ground state of a QDO, which allows one to calculate its polarizability. To second order, the change in energy relative to the ground state is given by the following series:
formula_4
where the polarizabilities formula_5 are
formula_6
Furthermore, since QDOs are quantum mechanical objects, their electrons can correlate, giving rise to dispersion forces between them. The second order change in energy corresponding to such an interaction is:
formula_7
with the first three dispersion coefficients being (in the case of identical QDOs):
formula_8
formula_9
formula_10
Since the response coefficients of QDOs depend on only three parameters, they are all related. Thus, these response coefficients can be combined into dimensionless constants, all equal to unity:
formula_11
formula_12
formula_13
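As a consistency check on these relations, the sketch below (assuming Python; the QDO parameters are illustrative values in atomic units, not fitted to any real atom) evaluates the polarizabilities and dispersion coefficients defined above and confirms that the quoted dimensionless combinations come out equal to unity.

```python
from math import sqrt

def double_factorial(n):
    return 1 if n <= 1 else n * double_factorial(n - 2)

def qdo_polarizability(l, q, mu, omega, hbar=1.0):
    """alpha_l of a quantum Drude oscillator (atomic units by default)."""
    return (q**2 / (mu * omega**2)) * (double_factorial(2*l - 1) / l) \
           * (hbar / (2 * mu * omega))**(l - 1)

# Illustrative QDO parameters (atomic units, not a fitted atom):
q, mu, omega, hbar = 1.0, 1.0, 1.0, 1.0
a1, a2, a3 = (qdo_polarizability(l, q, mu, omega) for l in (1, 2, 3))

C6 = 0.75 * a1 * a1 * hbar * omega
C8 = 5.0 * a1 * a2 * hbar * omega
C10 = (10.5 * a1 * a3 + 17.5 * a2 * a2) * hbar * omega

# The dimensionless combinations from the text should both equal 1:
print(sqrt(20/9) * a2 / sqrt(a1 * a3))     # -> 1.0
print(sqrt(49/40) * C8 / sqrt(C6 * C10))   # -> 1.0
```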
The QDO representation of atoms is the basis of the many-body dispersion model, which is a popular way to account for long-range dispersion forces in molecular dynamics simulations. This representation allows describing biological ion transport, the permeation of small drug molecules across hydrophobic cell membranes, and the behavior of proteins in solution. | [
{
"math_id": 0,
"text": "\\omega"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": "\\sigma = 1/\\sqrt{2 \\mu \\omega}"
},
{
"math_id": 4,
"text": "E^{(2)} = \\sum_{l=1}^{\\infty} E_l^{(2)} = \\sum_{l=1}^{\\infty} - \\frac{Q^2 \\alpha_l}{2 R^{2l + 2}}"
},
{
"math_id": 5,
"text": "\\alpha_l"
},
{
"math_id": 6,
"text": "\\alpha_l = \\left[ \\frac{q^2}{\\mu \\omega^2} \\right] \\left[ \\frac{(2 l - 1)!!}{l} \\right] \\left( \\frac{\\hbar}{2 \\mu \\omega} \\right)^{l-1}"
},
{
"math_id": 7,
"text": "E^{(2)} = \\sum_{l=3}^{\\infty} C_{2 l} R^{-2 l}"
},
{
"math_id": 8,
"text": "C_6 = \\frac{3}{4} \\alpha_1 \\alpha_1 \\hbar \\omega"
},
{
"math_id": 9,
"text": "C_8 = 5 \\alpha_1 \\alpha_2 \\hbar \\omega"
},
{
"math_id": 10,
"text": "C_{10} = \\left( \\frac{21}{2} \\alpha_1 \\alpha_3 + \\frac{70}{4} \\alpha_2 \\alpha_2 \\right) \\hbar \\omega"
},
{
"math_id": 11,
"text": "\\sqrt{\\frac{20}{9}} \\frac{\\alpha_2}{\\sqrt{\\alpha_1 \\alpha_3}} = 1"
},
{
"math_id": 12,
"text": "\\sqrt{\\frac{49}{40}} \\frac{C_8}{\\sqrt{C_6 C_{10}}} = 1"
},
{
"math_id": 13,
"text": "\\frac{C_6 \\alpha_1}{4 C_9} = 1"
}
] | https://en.wikipedia.org/wiki?curid=8557344 |
855776 | Einstein–Hilbert action | Concept in general relativity
The Einstein–Hilbert action in general relativity is the action that yields the Einstein field equations through the stationary-action principle. With the (− + + +) metric signature, the gravitational part of the action is given as
formula_0
where formula_1 is the determinant of the metric tensor matrix, formula_2 is the Ricci scalar, and formula_3 is the Einstein gravitational constant (formula_4 is the gravitational constant and formula_5 is the speed of light in vacuum). If it converges, the integral is taken over the whole spacetime. If it does not converge, formula_6 is no longer well-defined, but a modified definition where one integrates over arbitrarily large, relatively compact domains, still yields the Einstein equation as the Euler–Lagrange equation of the Einstein–Hilbert action. The action was proposed by David Hilbert in 1915 as part of his application of the variational principle to a combination of gravity and electromagnetism.
Discussion.
Deriving equations of motion from an action has several advantages. First, it allows for easy unification of general relativity with other classical field theories (such as Maxwell theory), which are also formulated in terms of an action. In the process, the derivation identifies a natural candidate for the source term coupling the metric to matter fields. Moreover, symmetries of the action allow for easy identification of conserved quantities through Noether's theorem.
In general relativity, the action is usually assumed to be a functional of the metric (and matter fields), and the connection is given by the Levi-Civita connection. The Palatini formulation of general relativity assumes the metric and connection to be independent, and varies with respect to both independently, which makes it possible to include fermionic matter fields with non-integer spin.
The Einstein equations in the presence of matter are given by adding the matter action to the Einstein–Hilbert action.
Derivation of Einstein field equations.
Suppose that the full action of the theory is given by the Einstein–Hilbert term plus a term formula_7 describing any matter fields appearing in the theory.
The stationary-action principle then tells us that to recover a physical law, we must demand that the variation of this action with respect to the inverse metric be zero, yielding
formula_8.
Since this equation should hold for any variation formula_9, it implies that

$$\frac{\delta R}{\delta g^{\mu\nu}} + \frac{R}{\sqrt{-g}} \frac{\delta \sqrt{-g}}{\delta g^{\mu\nu}} = -2\kappa \frac{1}{\sqrt{-g}} \frac{\delta\left(\sqrt{-g}\,\mathcal{L}_\mathrm{M}\right)}{\delta g^{\mu\nu}} \qquad \text{(2)}$$

is the equation of motion for the metric field. The right hand side of this equation is (by definition) proportional to the stress–energy tensor,
formula_10.
To calculate the left hand side of the equation we need the variations of the Ricci scalar formula_2 and the determinant of the metric. These can be obtained by standard textbook calculations such as the one given below, which is strongly based on the one given in Carroll (2004).
Variation of the Ricci scalar.
The variation of the Ricci scalar follows from varying the Riemann curvature tensor, and then the Ricci curvature tensor.
The first step is captured by the Palatini identity
formula_11.
Using the product rule, the variation of the Ricci scalar formula_12 then becomes
formula_13
where we also used the metric compatibility formula_14, and renamed the summation indices formula_15 in the last term.
When multiplied by formula_16, the term formula_17 becomes a total derivative, since for any vector formula_18 and any tensor density formula_19, we have
formula_20 or formula_21.
By Stokes' theorem, this only yields a boundary term when integrated. The boundary term is in general non-zero, because the integrand depends not only on formula_22 but also on its partial derivatives formula_23; see the article Gibbons–Hawking–York boundary term for details. However, when the variation of the metric formula_9 vanishes in a neighbourhood of the boundary or when there is no boundary, this term does not contribute to the variation of the action. Thus, we can forget about this term and simply obtain
$$\frac{\delta R}{\delta g^{\mu\nu}} = R_{\mu\nu} \qquad \text{(3)}$$

at events not in the closure of the boundary.
Variation of the determinant.
Jacobi's formula, the rule for differentiating a determinant, gives:
formula_24,
or one could transform to a coordinate system where formula_25 is diagonal and then apply the product rule to differentiate the product of factors on the main diagonal. Using this we get
formula_26
In the last equality we used the fact that
formula_27
which follows from the rule for differentiating the inverse of a matrix
formula_28.
Thus we conclude that

$$\frac{1}{\sqrt{-g}} \frac{\delta \sqrt{-g}}{\delta g^{\mu\nu}} = -\frac{1}{2} g_{\mu\nu}. \qquad \text{(4)}$$
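The determinant identities above lend themselves to a quick numerical check. The sketch below, assuming Python with NumPy, perturbs a randomly deformed Minkowski metric (an arbitrary test metric, not part of the derivation) and verifies the first-order variation of formula_16 in both the covariant and contravariant forms.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary test metric: Minkowski plus a small random symmetric part,
# so det(g) stays negative (Lorentzian signature).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
h = 0.1 * rng.standard_normal((4, 4))
g = eta + 0.5 * (h + h.T)

def sqrt_minus_det(g):
    return np.sqrt(-np.linalg.det(g))

# A small symmetric perturbation of the (covariant) metric:
dg = 1e-7 * rng.standard_normal((4, 4))
dg = 0.5 * (dg + dg.T)

lhs = sqrt_minus_det(g + dg) - sqrt_minus_det(g)   # actual change
rhs = 0.5 * sqrt_minus_det(g) * np.einsum('mn,mn->', np.linalg.inv(g), dg)
print(lhs, rhs)   # agree to first order in dg

# Written via the inverse metric, delta g^{mn} = -g^{ma} dg_{ab} g^{bn},
# the same identity carries the opposite sign:
ginv = np.linalg.inv(g)
dginv = -ginv @ dg @ ginv
print(-0.5 * sqrt_minus_det(g) * np.einsum('mn,mn->', g, dginv))  # equals rhs
```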
Equation of motion.
Now that we have all the necessary variations at our disposal, we can insert (3) and (4) into the equation of motion (2) for the metric field to obtain
$$R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = \kappa\, T_{\mu\nu},$$

which is the Einstein field equations, and
formula_29
has been chosen such that the non-relativistic limit yields the usual form of Newton's gravity law, where formula_4 is the gravitational constant (see here for details).
Cosmological constant.
When a cosmological constant Λ is included in the Lagrangian, the action becomes:
formula_30
Taking variations with respect to the inverse metric:
formula_31
Using the action principle:
formula_32
Combining this expression with the results obtained before:
formula_33
we obtain:
formula_34
With formula_35, the expression becomes the field equations with a cosmological constant:
formula_36
| [
{
"math_id": 0,
"text": "S = {1 \\over 2\\kappa} \\int R \\sqrt{-g} \\, \\mathrm{d}^4x,"
},
{
"math_id": 1,
"text": "g=\\det(g_{\\mu\\nu})"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "\\kappa = 8\\pi Gc^{-4}"
},
{
"math_id": 4,
"text": "G"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "\\mathcal{L}_\\mathrm{M}"
},
{
"math_id": 8,
"text": "\\begin{align}\n0 &= \\delta S \\\\\n &= \\int \\left[ \\frac{1}{2\\kappa} \\frac{\\delta \\left(\\sqrt{-g}R\\right)}{\\delta g^{\\mu\\nu}} + \\frac{\\delta \\left(\\sqrt{-g} \\mathcal{L}_\\mathrm{M}\\right)}{\\delta g^{\\mu\\nu}}\n \\right] \\delta g^{\\mu\\nu} \\, \\mathrm{d}^4x \\\\\n &= \\int \\left[ \\frac{1}{2\\kappa} \\left( \\frac{\\delta R}{\\delta g^{\\mu\\nu}} + \\frac{R}{\\sqrt{-g}} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu\\nu}}\n \\right) + \\frac{1}{\\sqrt{-g}} \\frac{\\delta \\left(\\sqrt{-g} \\mathcal{L}_\\mathrm{M}\\right)}{\\delta g^{\\mu\\nu}} \\right] \\delta g^{\\mu\\nu} \\sqrt{-g}\\, \\mathrm{d}^4x\n\\end{align}"
},
{
"math_id": 9,
"text": "\\delta g^{\\mu\\nu}"
},
{
"math_id": 10,
"text": "T_{\\mu\\nu} := \\frac{-2}{\\sqrt{-g}}\\frac{\\delta (\\sqrt{-g} \\mathcal{L}_\\mathrm{M})}{\\delta g^{\\mu\\nu}} = -2 \\frac{\\delta \\mathcal{L}_\\mathrm{M}}{\\delta g^{\\mu\\nu}} + g_{\\mu\\nu} \\mathcal{L}_\\mathrm{M}"
},
{
"math_id": 11,
"text": "\n \\delta R_{\\sigma\\nu} \\equiv \\delta {R^\\rho}_{\\sigma\\rho\\nu} =\n \\nabla_\\rho \\left( \\delta \\Gamma^\\rho_{\\nu\\sigma} \\right) - \\nabla_\\nu \\left( \\delta \\Gamma^\\rho_{\\rho\\sigma} \\right)"
},
{
"math_id": 12,
"text": "R = g^{\\sigma\\nu} R_{\\sigma\\nu}"
},
{
"math_id": 13,
"text": "\\begin{align}\n\\delta R &= R_{\\sigma\\nu} \\delta g^{\\sigma\\nu} + g^{\\sigma\\nu} \\delta R_{\\sigma\\nu}\\\\\n &= R_{\\sigma\\nu} \\delta g^{\\sigma\\nu} + \\nabla_\\rho \\left( g^{\\sigma\\nu} \\delta\\Gamma^\\rho_{\\nu\\sigma} - g^{\\sigma\\rho} \\delta \\Gamma^\\mu_{\\mu\\sigma} \\right),\n\\end{align}"
},
{
"math_id": 14,
"text": "\\nabla_\\sigma g^{\\mu\\nu} = 0"
},
{
"math_id": 15,
"text": "(\\rho,\\nu) \\rightarrow (\\mu,\\rho)"
},
{
"math_id": 16,
"text": "\\sqrt{-g}"
},
{
"math_id": 17,
"text": "\\nabla_\\rho \\left( g^{\\sigma\\nu} \\delta\\Gamma^\\rho_{\\nu\\sigma} - g^{\\sigma\\rho}\\delta\\Gamma^\\mu_{\\mu\\sigma} \\right)"
},
{
"math_id": 18,
"text": "A^\\lambda"
},
{
"math_id": 19,
"text": "\\sqrt{-g}\\,A^\\lambda"
},
{
"math_id": 20,
"text": "\n \\sqrt{-g} \\, A^\\lambda_{;\\lambda} =\n \\left(\\sqrt{-g} \\, A^\\lambda\\right)_{;\\lambda} =\n \\left(\\sqrt{-g} \\, A^\\lambda\\right)_{,\\lambda}\n"
},
{
"math_id": 21,
"text": "\n \\sqrt{-g} \\, \\nabla_\\mu A^\\mu =\n \\nabla_\\mu\\left(\\sqrt{-g} \\, A^\\mu\\right) =\n \\partial_\\mu\\left(\\sqrt{-g} \\, A^\\mu\\right)\n"
},
{
"math_id": 22,
"text": "\\delta g^{\\mu\\nu},"
},
{
"math_id": 23,
"text": "\\partial_\\lambda\\, \\delta g^{\\mu\\nu} \\equiv \\delta\\, \\partial_\\lambda g^{\\mu\\nu}"
},
{
"math_id": 24,
"text": "\\delta g = \\delta \\det(g_{\\mu\\nu}) = g g^{\\mu\\nu} \\delta g_{\\mu\\nu}"
},
{
"math_id": 25,
"text": "g_{\\mu\\nu}"
},
{
"math_id": 26,
"text": "\\delta \\sqrt{-g} = -\\frac{1}{2\\sqrt{-g}}\\delta g = \\frac{1}{2} \\sqrt{-g} \\left( g^{\\mu\\nu} \\delta g_{\\mu\\nu} \\right) = -\\frac{1}{2} \\sqrt{-g} \\left( g_{\\mu\\nu} \\delta g^{\\mu\\nu} \\right)"
},
{
"math_id": 27,
"text": "g_{\\mu\\nu}\\delta g^{\\mu\\nu} = -g^{\\mu\\nu} \\delta g_{\\mu\\nu}"
},
{
"math_id": 28,
"text": "\\delta g^{\\mu\\nu} = - g^{\\mu\\alpha} \\left( \\delta g_{\\alpha\\beta} \\right) g^{\\beta\\nu}"
},
{
"math_id": 29,
"text": "\\kappa = \\frac{8\\pi G}{c^4}"
},
{
"math_id": 30,
"text": "S = \\int \\left[ \\frac{1}{2\\kappa} (R-2 \\Lambda ) + \\mathcal{L}_\\mathrm{M} \\right] \\sqrt{-g} \\, \\mathrm{d}^4 x "
},
{
"math_id": 31,
"text": "\\begin{align} \n \\delta S\n &= \\int \\left[ \\frac{\\sqrt{-g}}{2\\kappa} \\frac{\\delta R}{\\delta g^{\\mu \\nu}} + \\frac{R}{2\\kappa} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}} - \\frac{\\Lambda}{\\kappa} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}} + \\sqrt{-g}\\frac{\\delta \\mathcal{L}_\\mathrm{M}}{\\delta g^{\\mu \\nu}} + \\mathcal{L}_\\mathrm{M} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}} \\right] \\delta g^{\\mu \\nu} \\mathrm{d}^4 x \\\\\n &= \\int \\left[ \\frac{1}{2\\kappa} \\frac{\\delta R}{\\delta g^{\\mu \\nu}} + \\frac{R}{2\\kappa} \\frac{1}{\\sqrt{-g}} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}} - \\frac{\\Lambda}{\\kappa} \\frac{1}{\\sqrt{-g}} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}} + \\frac{\\delta \\mathcal{L}_\\mathrm{M}}{\\delta g^{\\mu \\nu}} + \\frac{\\mathcal{L}_\\mathrm{M}}{\\sqrt{-g}} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}} \\right] \\delta g^{\\mu \\nu} \\sqrt{-g} \\, \\mathrm{d}^4 x \n\\end{align}"
},
{
"math_id": 32,
"text": "\n 0 = \\delta S =\n \\frac{1}{2\\kappa} \\frac{\\delta R}{\\delta g^{\\mu \\nu}} + \\frac{R}{2\\kappa} \\frac{1}{\\sqrt{-g}} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}} - \n \\frac{\\Lambda}{\\kappa} \\frac{1}{\\sqrt{-g}} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}} + \\frac{\\delta \\mathcal{L}_\\mathrm{M}}{\\delta g^{\\mu \\nu}} + \n \\frac{\\mathcal{L}_\\mathrm{M}}{\\sqrt{-g}} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}}\n"
},
{
"math_id": 33,
"text": "\\begin{align}\n \\frac{\\delta R}{\\delta g^{\\mu \\nu}} &= R_{\\mu \\nu} \\\\\n \\frac{1}{\\sqrt{-g}} \\frac{\\delta \\sqrt{-g}}{\\delta g^{\\mu \\nu}} &= \\frac{-g_{\\mu \\nu}}{2} \\\\\n T_{\\mu \\nu} &= \\mathcal{L}_\\mathrm{M} g_{\\mu \\nu} - 2 \\frac{\\delta\\mathcal{L}_\\mathrm{M}}{\\delta g^{\\mu \\nu}}\n\\end{align}"
},
{
"math_id": 34,
"text": "\\begin{align}\n \\frac{1}{2\\kappa} R_{\\mu \\nu} + \\frac{R}{2\\kappa} \\frac{-g_{\\mu \\nu}}{2} - \\frac{\\Lambda}{\\kappa} \\frac{-g_{\\mu \\nu}}{2} + \\left(\\frac{\\delta \\mathcal{L}_\\mathrm{M}}{\\delta g^{\\mu \\nu}} + \\mathcal{L}_\\mathrm{M}\\frac{-g_{\\mu \\nu}}{2} \\right) &= 0 \\\\\n R_{\\mu \\nu} - \\frac{R}{2} g_{\\mu \\nu} + \\Lambda g_{\\mu \\nu} + \\kappa \\left(2 \\frac{\\delta \\mathcal{L}_\\mathrm{M}}{\\delta g^{\\mu \\nu}} - \\mathcal{L}_\\mathrm{M}g_{\\mu \\nu} \\right) &= 0 \\\\\n R_{\\mu \\nu} - \\frac{R}{2} g_{\\mu \\nu} + \\Lambda g_{\\mu \\nu} - \\kappa T_{\\mu \\nu} &= 0\n\\end{align} "
},
{
"math_id": 35,
"text": "\\kappa = \\frac{8 \\pi G}{c^4} "
},
{
"math_id": 36,
"text": "R_{\\mu \\nu} - \\frac{1}{2} g_{\\mu \\nu} R + \\Lambda g_{\\mu \\nu} = \\frac{8 \\pi G}{c^4} T_{\\mu \\nu}."
}
] | https://en.wikipedia.org/wiki?curid=855776 |
855856 | Nuclear quadrupole resonance | Nuclear quadrupole resonance spectroscopy or NQR is a chemical analysis technique related to nuclear magnetic resonance (NMR). Unlike NMR, NQR transitions of nuclei can be detected in the absence of a magnetic field, and for this reason NQR spectroscopy is referred to as "zero-field NMR". The NQR resonance is mediated by the interaction of the electric field gradient (EFG) with the quadrupole moment of the nuclear charge distribution. Unlike NMR, NQR is applicable only to solids and not liquids, because in liquids rapid molecular motion averages the electric field gradient at the nucleus to zero. Because the EFG at the location of a nucleus in a given substance is determined primarily by the valence electrons involved in the particular bond with other nearby nuclei, the NQR frequency at which transitions occur is unique for a given substance. A particular NQR frequency in a compound or crystal is proportional to the product of the nuclear quadrupole moment, a property of the nucleus, and the EFG in the neighborhood of the nucleus. It is this product which is termed the nuclear quadrupole coupling constant for a given isotope in a material and can be found in tables of known NQR transitions. In NMR, an analogous but not identical phenomenon is the coupling constant, which is also the result of an internuclear interaction between nuclei in the analyte.
Principle.
Any nucleus with more than one unpaired nuclear particle (protons or neutrons) will have a charge distribution which results in an electric quadrupole moment. Allowed nuclear energy levels are shifted unequally due to the interaction of the nuclear charge with an electric field gradient supplied by the non-uniform distribution of electron density (e.g. from bonding electrons) and/or surrounding ions. As in the case of NMR, irradiation of the nucleus with a burst of RF electromagnetic radiation may result in absorption of some energy by the nucleus which can be viewed as a perturbation of the quadrupole energy level. Unlike the NMR case, NQR absorption takes place in the absence of an external magnetic field. Application of an external static field to a quadrupolar nucleus splits the quadrupole levels by the energy predicted from the Zeeman interaction. The technique is very sensitive to the nature and symmetry of the bonding around the nucleus. It can characterize phase transitions in solids when performed at varying temperature. Due to symmetry, the shifts become averaged to zero in the liquid phase, so NQR spectra can only be measured for solids.
Analogy with NMR.
In the case of NMR, nuclei with spin ≥ 1/2 have a magnetic dipole moment so that their energies are split by a magnetic field, allowing resonance absorption of energy related to the Larmor frequency:
formula_0
where formula_1 is the gyromagnetic ratio and formula_2 is the (normally applied) magnetic field external to the nucleus.
In the case of NQR, nuclei with spin ≥ 1, such as 14N, 17O, 35Cl and 63Cu, also have an electric quadrupole moment. The nuclear quadrupole moment is associated with non-spherical nuclear charge distributions. As such it is a measure of the degree to which the nuclear charge distribution deviates from that of a sphere; that is, the prolate or oblate shape of the nucleus. NQR is a direct observation of the interaction of the quadrupole moment with the local electric field gradient (EFG) created by the electronic structure of its environment. The NQR transition frequencies are proportional to the product of the electric quadrupole moment of the nucleus and a measure of the strength of the local EFG:
formula_3
where q is related to the largest principal component of the EFG tensor at the nucleus. formula_4 is referred to as the quadrupole coupling constant.
In principle, the NQR experimenter could apply a specified EFG in order to influence formula_5, just as the NMR experimenter is free to choose the Larmor frequency by adjusting the magnetic field. However, in solids the EFG at the nucleus is fixed by the surrounding electronic structure and is far too strong to be rivaled by externally applied field gradients, making the application of EFGs for NQR in the manner that external magnetic fields are chosen for NMR impractical. Consequently, the NQR spectrum of a substance is specific to that substance; an NQR spectrum is a so-called "chemical fingerprint". Because NQR frequencies are not chosen by the experimenter, they can be difficult to find, making NQR technically difficult to carry out. Since NQR is done in an environment without a static (or DC) magnetic field, it is sometimes called "zero-field NMR". Many NQR transition frequencies depend strongly upon temperature.
Derivation of resonance frequency.
Consider a nucleus with a non-zero quadrupole moment formula_6 and charge density formula_7, which is surrounded by a potential formula_8. This potential may be produced by the electrons as stated above, whose probability distribution might be non-isotropic in general. The potential energy of this system is given by the integral of the charge distribution formula_7 against the potential formula_8 over a domain formula_9:
formula_10
One can write the potential as a Taylor expansion at the center of the considered nucleus. This method corresponds to the multipole expansion in Cartesian coordinates (note that the equations below use the Einstein summation convention):
formula_11
The first term involving formula_12 will not be relevant and can therefore be omitted. Since nuclei do not have an electric dipole moment formula_13, which would interact with the electric field formula_14, the first derivatives can also be neglected. One is therefore left with all nine combinations of second derivatives. However, if one deals with a homogeneous oblate or prolate nucleus, the matrix formula_15 will be diagonal and the elements with formula_16 vanish. This leads to a simplification because the equation for the potential energy now contains only the second derivatives with respect to the same variable:
formula_17
The remaining terms in the integral are related to the charge distribution and hence the quadrupole moment. The formula can be simplified even further by introducing the electric field gradient formula_18, choosing the z-axis as the one with the maximal principal component formula_19, and using the Laplace equation to obtain the proportionality written above. For a spin formula_20 nucleus, using the frequency–energy relation formula_21, one obtains:
formula_22
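For a concrete sense of the numbers, the relation above can be evaluated directly. A minimal sketch, assuming Python; the 30 MHz quadrupole coupling constant is a hypothetical input, not a tabulated value for any particular isotope.

```python
def nqr_frequency_spin_3_2(cq_hz):
    """NQR transition frequency of a spin-3/2 nucleus for an axially
    symmetric EFG (asymmetry parameter eta = 0): nu = Cq / 2,
    where Cq = e^2 q Q / h is the quadrupole coupling constant."""
    return cq_hz / 2.0

# Hypothetical coupling constant of 30 MHz:
print(nqr_frequency_spin_3_2(30e6) / 1e6, "MHz")  # -> 15.0 MHz
```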
Applications.
There are several research groups around the world currently working on ways to use NQR to detect explosives. Units designed to detect landmines and explosives concealed in luggage have been tested. A detection system consists of a radio frequency (RF) power source, a coil to produce the magnetic excitation field and a detector circuit which monitors for a RF NQR response coming from the explosive component of the object.
A fake device known as the ADE 651 claimed to exploit NQR to detect explosives but in fact could do no such thing. Nonetheless, the device was successfully sold for millions to dozens of countries, including the government of Iraq.
Another practical use for NQR is measuring the water/gas/oil mixture coming out of an oil well in real time.
This technique allows local or remote monitoring of the extraction process, calculation of the well's remaining capacity, and of the water/detergent ratio the input pump must deliver to extract oil efficiently.
Due to the strong temperature dependence of the NQR frequency, it can be used as a precise temperature sensor with resolution on the order of 10−4 °C.
| [
{
"math_id": 0,
"text": "\\omega_L = \\gamma B"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": " \\omega_Q \\sim \\frac{e^2 Q q}{\\hbar} = C_q"
},
{
"math_id": 4,
"text": "C_q"
},
{
"math_id": 5,
"text": "\\omega_Q"
},
{
"math_id": 6,
"text": "\\textbf{Q}"
},
{
"math_id": 7,
"text": "\\rho(\\textbf{r})"
},
{
"math_id": 8,
"text": "V(\\textbf{r})"
},
{
"math_id": 9,
"text": "\\mathcal{D}"
},
{
"math_id": 10,
"text": "U = - \\int_{\\mathcal{D}}d^3r \\rho(\\textbf{r})V(\\textbf{r})"
},
{
"math_id": 11,
"text": "V(\\textbf{r}) = V(0) + \\left[ \\left( \\frac{\\partial V}{\\partial x_i}\\right)\\Bigg\\vert_0 \\cdot x_i \\right] + \\frac{1}{2} \\left[ \\left( \\frac{\\partial^2 V}{\\partial x_i x_j}\\right) \\Bigg\\vert_0 \\cdot x_i x_j \\right] + ..."
},
{
"math_id": 12,
"text": "V(0)"
},
{
"math_id": 13,
"text": "\\textbf{p}"
},
{
"math_id": 14,
"text": "\\textbf{E} = - \\mathrm{grad} V(\\textbf{r})"
},
{
"math_id": 15,
"text": "Q_{ij}"
},
{
"math_id": 16,
"text": "i \\neq j"
},
{
"math_id": 17,
"text": "U = - \\frac{1}{2} \\int_{\\mathcal{D}}d^3r \\rho(\\textbf{r}) \\left[ \\left( \\frac{\\partial^2 V}{\\partial x_i^2}\\right) \\Bigg\\vert_0 \\cdot x_i^2 \\right] = - \\frac{1}{2} \\int_{\\mathcal{D}}d^3r \\rho(\\textbf{r}) \\left[ \\left( \\frac{\\partial E_i}{\\partial x_i}\\right) \\Bigg\\vert_0 \\cdot x_i^2 \\right] = - \\frac{1}{2} \\left( \\frac{\\partial E_i}{\\partial x_i}\\right) \\Bigg\\vert_0 \\cdot \\int_{\\mathcal{D}}d^3r \\left[\\rho(\\textbf{r}) \\cdot x_i^2 \\right]"
},
{
"math_id": 18,
"text": "V_{ii} = \\frac{\\partial^2 V}{\\partial x_i^2} = eq "
},
{
"math_id": 19,
"text": "Q_{zz} "
},
{
"math_id": 20,
"text": "I = 3/2"
},
{
"math_id": 21,
"text": "E = h\\nu"
},
{
"math_id": 22,
"text": "\\nu = \\frac{1}{2}\\left(\\frac{e^2qQ}{h}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=855856 |
8559342 | Deal–Grove model | Mathematical model of semiconductor oxidation
The Deal–Grove model mathematically describes the growth of an oxide layer on the surface of a material. In particular, it is used to predict and interpret thermal oxidation of silicon in semiconductor device fabrication. The model was first published in 1965 by Bruce Deal and Andrew Grove of Fairchild Semiconductor, building on Mohamed M. Atalla's work on silicon surface passivation by thermal oxidation at Bell Labs in the late 1950s. This served as a step in the development of CMOS devices and the fabrication of integrated circuits.
Physical assumptions.
The model assumes that the oxidation reaction occurs at the interface between the oxide layer and the substrate material, rather than between the oxide and the ambient gas. Thus, it considers three phenomena that the oxidizing species undergoes, in this order: transport from the bulk of the ambient gas to the oxide surface, diffusion through the existing oxide layer, and reaction with the substrate at the oxide–substrate interface.
The model assumes that each of these stages proceeds at a rate proportional to the oxidant's concentration. In the first step, this means Henry's law; in the second, Fick's law of diffusion; in the third, a first-order reaction with respect to the oxidant. It also assumes steady state conditions, i.e. that transient effects do not appear.
Results.
Given these assumptions, the flux of oxidant through each of the three phases can be expressed in terms of concentrations, material properties, and temperature.
formula_0
By setting the three fluxes equal to each other formula_1 the following relations can be derived:
formula_2
Assuming diffusion-controlled growth, i.e. that formula_3 determines the growth rate, and substituting formula_4 and formula_5 in terms of formula_6 from the above two relations into the formula_3 and formula_7 equations respectively, one obtains:
formula_8
If "N" is the concentration of the oxidant inside a unit volume of the oxide, then the oxide growth rate can be written in the form of a differential equation. The solution to this equation gives the oxide thickness at any time "t".
formula_9
where the constants formula_10 and formula_11 encapsulate the properties of the reaction and the oxide layer respectively, and formula_12 is the initial layer of oxide that was present at the surface. These constants are given as:
formula_13
where formula_14, with formula_15 being the gas solubility parameter of the Henry's law and formula_16 is the partial pressure of the diffusing gas.
Solving the quadratic equation for "x" yields:
formula_17
Taking the short and long time limits of the above equation reveals two main modes of operation. The first mode, where the growth is linear, occurs initially when formula_18 is small. The second mode gives parabolic growth and occurs once the oxide has thickened as the oxidation time increases.
formula_19
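The closed-form solution and its two limits are easy to evaluate numerically. The sketch below assumes Python with NumPy; the values of "A", "B", and the time grid are illustrative round numbers, not the tabulated rate constants.

```python
import numpy as np

def deal_grove_thickness(t, A, B, tau=0.0):
    """Oxide thickness x(t) from the Deal-Grove relation
    x**2 + A*x = B*(t + tau), taking the physical (positive) root."""
    return (-A + np.sqrt(A**2 + 4.0 * B * (t + tau))) / 2.0

# Illustrative rate constants (round numbers, not tabulated values):
A, B = 0.165, 0.0117                     # A in um, B in um^2/h
t = np.array([0.1, 1.0, 10.0, 100.0])    # oxidation times in hours
print(deal_grove_thickness(t, A, B))     # thickness in um
print(B / A * t)                         # short-time (linear) limit
print(np.sqrt(B * t))                    # long-time (parabolic) limit
```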
The quantities "B" and "B"/"A" are often called the "quadratic" and "linear reaction rate constants". They depend exponentially on temperature, like this:
formula_20
where formula_21 is the activation energy and formula_22 is the Boltzmann constant in eV. formula_21 differs from one equation to the other. The following table lists the values of the four parameters for single-crystal silicon under conditions typically used in industry (low doping, atmospheric pressure). The linear rate constant depends on the orientation of the crystal (usually indicated by the Miller indices of the crystal plane facing the surface). The table gives values for formula_23 and formula_24 silicon.
Validity for silicon.
The Deal–Grove model works very well for single-crystal silicon under most conditions. However, experimental data shows that very thin oxides (less than about 25 nanometres) grow much more quickly in formula_25 than the model predicts. In silicon nanostructures (e.g., silicon nanowires) this rapid growth is generally followed by diminishing oxidation kinetics in a process known as self-limiting oxidation, necessitating a modification of the Deal–Grove model.
If the oxide grown in a particular oxidation step greatly exceeds 25 nm, a simple adjustment accounts for the aberrant growth rate. The model yields accurate results for thick oxides if, instead of assuming zero initial thickness (or any initial thickness less than 25 nm), we assume that 25 nm of oxide exists before oxidation begins. However, for oxides near to or thinner than this threshold, more sophisticated models must be used.
In the 1980s, it became obvious that an update to the Deal–Grove model was necessary to model the aforementioned thin oxides (self-limiting cases). One such approach that more accurately models thin oxides is the Massoud model from 1985 [2]. The Massoud model is analytical and based on parallel oxidation mechanisms. It changes the parameters of the Deal–Grove model to better model the initial oxide growth with the addition of rate-enhancement terms.
The Deal–Grove model also fails for polycrystalline silicon ("poly-silicon"). First, the random orientation of the crystal grains makes it difficult to choose a value for the linear rate constant. Second, oxidant molecules diffuse rapidly along grain boundaries, so that poly-silicon oxidizes more rapidly than single-crystal silicon.
Dopant atoms strain the silicon lattice, and make it easier for silicon atoms to bond with incoming oxygen. This effect may be neglected in many cases, but heavily doped silicon oxidizes significantly faster. The pressure of the ambient gas also affects oxidation rate. | [
{
"math_id": 0,
"text": "\n\\begin{align}\nJ_\\text{gas} & = h_g (C_g- C_s) \\\\[8pt]\nJ_\\text{oxide} & = D_\\text{ox} \\frac{C_s- C_i}{x} \\\\[8pt]\nJ_\\text{reacting} & = k_i C_i\n\\end{align}\n"
},
{
"math_id": 1,
"text": "J_\\text{gas} = J_\\text{oxide} = J_\\text{reacting}, "
},
{
"math_id": 2,
"text": "\n\\begin{align}\n\\frac {C_i}{C_g} & = \\frac {1}{1+k_i/h_g+k_ix/D_\\text{ox}} \\\\[8pt]\n\\frac {C_s}{C_g} & = \\frac {1+k_ix/D_\\text{ox}}{1+k_i/h_g+k_ix/D_\\text{ox}}\n\\end{align}\n"
},
{
"math_id": 3,
"text": "J_\\text{oxide}"
},
{
"math_id": 4,
"text": "C_i"
},
{
"math_id": 5,
"text": "C_s"
},
{
"math_id": 6,
"text": "C_g"
},
{
"math_id": 7,
"text": "J_\\text{reacting}"
},
{
"math_id": 8,
"text": "J_\\text{oxide} = J_\\text{reacting} = \\frac {k_iC_g}{1+k_i/h_g+k_ix/D_\\text{ox}}"
},
{
"math_id": 9,
"text": "\n\\begin{align}\n& \\frac{dx}{dt} = \\frac{J_\\text{oxide}}{N} = \\frac {k_iC_g/N}{1+k_i/h_g+k_ix/D_\\text{ox}} \\\\[8pt]\n& x^2 + Ax = Bt + {x_i}^2 + Ax_i \\\\[8pt]\n& x^2 + Ax = B(t+\\tau)\n\\end{align}\n"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "B"
},
{
"math_id": 12,
"text": " x_i "
},
{
"math_id": 13,
"text": "\n\\begin{align}\nA=2 D_\\text{ox} \\left(\\frac{1}{k_i} + \\frac{1}{h_g} \\right) \\\\[8pt]\nB= \\frac {2D_\\text{ox} C_s}{N} \\\\[8pt]\n\\tau = \\frac{x_i^2 + A x_i}{B}\n\\end{align}\n"
},
{
"math_id": 14,
"text": " C_s = H P_g "
},
{
"math_id": 15,
"text": " H "
},
{
"math_id": 16,
"text": " P_g "
},
{
"math_id": 17,
"text": "x(t) = \\frac{-A+\\sqrt{A^2+4(B)(t+\\tau)}}{2}"
},
{
"math_id": 18,
"text": "t+\\tau"
},
{
"math_id": 19,
"text": "\n\\begin{align}\nt+\\tau \\ll \\frac{A^2}{4B} \\Rightarrow x(t) = \\frac{B}{A}(t+\\tau) \\\\[8pt]\nt+\\tau \\gg \\frac{A^2}{4B} \\Rightarrow x(t) = \\sqrt{B(t+\\tau)}\n\\end{align}\n"
},
{
"math_id": 20,
"text": "B = B_0 e^{-E_A/kT}; \\quad B/A = (B/A)_0 e^{-E_A/kT} "
},
{
"math_id": 21,
"text": "E_A"
},
{
"math_id": 22,
"text": "k"
},
{
"math_id": 23,
"text": " \\langle 100\\rangle "
},
{
"math_id": 24,
"text": " \\langle 111\\rangle "
},
{
"math_id": 25,
"text": "O_2"
}
] | https://en.wikipedia.org/wiki?curid=8559342 |
8559560 | Algebraic fraction | Sort of mathematical expression
In algebra, an algebraic fraction is a fraction whose numerator and denominator are algebraic expressions. Two examples of algebraic fractions are formula_0 and formula_1. Algebraic fractions are subject to the same laws as arithmetic fractions.
A rational fraction is an algebraic fraction whose numerator and denominator are both polynomials. Thus formula_0 is a rational fraction, but formula_2 is not, because its numerator contains a square root function.
Terminology.
In the algebraic fraction formula_3, the dividend "a" is called the "numerator" and the divisor "b" is called the "denominator". The numerator and denominator are called the "terms" of the algebraic fraction.
A "complex fraction" is a fraction whose numerator or denominator, or both, contains a fraction. A "simple fraction" contains no fraction either in its numerator or its denominator. A fraction is in "lowest terms" if the only factor common to the numerator and the denominator is 1.
An expression which is not in fractional form is an "integral expression". An integral expression can always be written in fractional form by giving it the denominator 1. A "mixed expression" is the algebraic sum of one or more integral expressions and one or more fractional terms.
Rational fractions.
If the expressions "a" and "b" are polynomials, the algebraic fraction is called a "rational algebraic fraction" or simply "rational fraction". Rational fractions are also known as rational expressions. A rational fraction formula_4 is called "proper" if formula_5, and "improper" otherwise. For example, the rational fraction formula_6 is proper, and the rational fractions formula_7 and formula_8 are improper. Any improper rational fraction can be expressed as the sum of a polynomial (possibly constant) and a proper rational fraction. In the first example of an improper fraction one has
formula_9
where the second term is a proper rational fraction. The sum of two proper rational fractions is a proper rational fraction as well. The reverse process, expressing a proper rational fraction as the sum of two or more fractions, is called resolving it into partial fractions. For example,
formula_10
Here, the two terms on the right are called partial fractions.
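Computer algebra systems perform this decomposition directly. As a small illustration, assuming Python with SymPy, the "apart" function resolves the example above into partial fractions, and also splits an improper fraction into its polynomial and proper parts:

```python
from sympy import symbols, apart, together

x = symbols('x')

# Resolving a proper rational fraction into partial fractions:
print(apart(2*x/(x**2 - 1), x))              # 1/(x + 1) + 1/(x - 1)

# An improper fraction splits into a polynomial plus a proper part:
print(apart((x**3 + x**2 + 1)/(x**2 - 5*x + 6), x))
# x + 6 + 37/(x - 3) - 13/(x - 2)

# together() reverses the decomposition:
print(together(apart(2*x/(x**2 - 1), x)))    # 2*x/((x - 1)*(x + 1))
```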
Irrational fractions.
An "irrational fraction" is one that contains the variable under a fractional exponent. An example of an irrational fraction is
formula_11
The process of transforming an irrational fraction to a rational fraction is known as rationalization. Every irrational fraction in which the radicals are monomials may be rationalized by finding the least common multiple of the indices of the roots, and substituting the variable for another variable with the least common multiple as exponent. In the example given, the least common multiple is 6, hence we can substitute formula_12 to obtain
formula_13
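The same substitution can be carried out mechanically. A short sketch, assuming Python with SymPy (the symbols are declared positive so the roots simplify cleanly):

```python
from sympy import symbols, sqrt, cbrt, Rational, simplify

x, z, a = symbols('x z a', positive=True)

frac = (sqrt(x) - Rational(1, 3)*a) / (cbrt(x) - sqrt(x))

# Root indices are 2 and 3, so their least common multiple is 6;
# substituting x = z**6 removes all fractional exponents:
rationalized = simplify(frac.subs(x, z**6))
print(rationalized)  # a rational fraction in z: (z**3 - a/3)/(z**2 - z**3)
```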
| [
{
"math_id": 0,
"text": "\\frac{3x}{x^2+2x-3}"
},
{
"math_id": 1,
"text": "\\frac{\\sqrt{x+2}}{x^2-3}"
},
{
"math_id": 2,
"text": "\\frac{\\sqrt{x+2}}{x^2-3},"
},
{
"math_id": 3,
"text": "\\tfrac{a}{b}"
},
{
"math_id": 4,
"text": "\\tfrac{f(x)}{g(x)}"
},
{
"math_id": 5,
"text": "\\deg f(x) < \\deg g(x)"
},
{
"math_id": 6,
"text": "\\tfrac{2x}{x^2-1}"
},
{
"math_id": 7,
"text": "\\tfrac{x^3+x^2+1}{x^2-5x+6}"
},
{
"math_id": 8,
"text": "\\tfrac{x^2-x+1}{5x^2+3}"
},
{
"math_id": 9,
"text": "\\frac{x^3+x^2+1}{x^2-5x+6} = (x+6) + \\frac{24x-35}{x^2-5x+6},"
},
{
"math_id": 10,
"text": "\\frac{2x}{x^2-1} = \\frac{1}{x-1} + \\frac{1}{x+1}."
},
{
"math_id": 11,
"text": "\\frac{x^{1/2} - \\tfrac13 a}{x^{1/3} - x^{1/2}}."
},
{
"math_id": 12,
"text": "x = z^6"
},
{
"math_id": 13,
"text": "\\frac{z^3 - \\tfrac13 a}{z^2 - z^3}."
}
] | https://en.wikipedia.org/wiki?curid=8559560 |
8559743 | Multi-layer insulation | Materials science product key to spacecraft thermal management and cryogenics
Multi-layer insulation (MLI) is thermal insulation composed of multiple layers of thin sheets and is often used on spacecraft and in cryogenics. Also referred to as superinsulation, MLI is one of the main items of the spacecraft thermal design, primarily intended to reduce heat loss by thermal radiation. In its basic form, it does not appreciably insulate against other thermal losses such as heat conduction or convection. It is therefore commonly used on satellites and other applications in vacuum where conduction and convection are much less significant and radiation dominates. MLI gives many satellites and other space probes the appearance of being covered with gold foil, which is in fact the effect of the amber-coloured Kapton layer laid over the silvery aluminized Mylar.
For non-spacecraft applications, MLI works only as part of a vacuum insulation system. For use in cryogenics, wrapped MLI can be installed inside the annulus of vacuum jacketed pipes. MLI may also be combined with advanced vacuum insulation for use in high temperature applications.
Function and design.
The principle behind MLI is radiation balance. To see why it works, start with a concrete example: imagine a square meter of a surface in outer space, held at a fixed temperature of 300 K, with an emissivity of 1, facing away from the sun or other heat sources. From the Stefan–Boltzmann law, this surface will radiate 460 W. Now imagine placing a thin (but opaque) layer a small distance away from the plate, also with an emissivity of 1. This new layer will cool until it is radiating 230 W from each side, at which point everything is in balance. The new layer receives 460 W from the original plate. 230 W is radiated back to the original plate, and 230 W to space. The original surface still radiates 460 W, but gets 230 W back from the new layer, for a net loss of 230 W. So overall, the radiation losses from the surface have been reduced by half by adding the additional layer.
More layers can be added to reduce the loss further. The blanket can be further improved by making the outside surfaces highly reflective to thermal radiation, which reduces both absorption and emission. The performance of a layer stack can be quantified in terms of its overall heat transfer coefficient "U", which defines the radiative heat flow rate "Q" between two parallel surfaces with a temperature difference formula_0 and area "A" as
formula_1
Theoretically, the heat transfer coefficient between two layers with emissivities formula_2 and formula_3, at absolute temperatures formula_4 and formula_5 under vacuum, is
formula_6
where formula_7 Wm−2K−4 is the Stefan–Boltzmann constant. If the temperature difference is not too large (formula_8), then a stack of "N" layers, all with the same emissivity formula_9 on both sides, will have an overall heat transfer coefficient
formula_10
where formula_11 is the average temperature of the layers. Clearly, increasing the number of layers and decreasing the emissivity both lower the heat transfer coefficient, which is equivalent to a higher insulation value. In space, where the apparent outside temperature could be 3 K (cosmic background radiation), the exact "U" value is different.
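These formulas are straightforward to evaluate. The sketch below, assuming Python, reproduces the 460 W single-surface figure from the example above and evaluates formula_10 for a hypothetical 40-layer blanket of low-emissivity film; the emissivity of 0.05 and the temperatures are illustrative choices, not measured blanket data.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(T, emissivity=1.0, area=1.0):
    """Power radiated by one surface, from the Stefan-Boltzmann law."""
    return emissivity * SIGMA * area * T**4

def mli_u_value(N, emissivity, T_mean):
    """Overall heat transfer coefficient of an N-layer stack whose layers
    all have the same emissivity on both sides (small delta-T limit)."""
    return 4.0 * SIGMA * T_mean**3 / ((N - 1) * (2.0 / emissivity - 1.0))

# Single bare surface from the example in the text: ~460 W at 300 K.
print(radiated_power(300.0))        # -> ~459 W

# Hypothetical 40-layer blanket of 0.05-emissivity film between
# surfaces at 290 K and 270 K (mean temperature 280 K):
U = mli_u_value(40, 0.05, 280.0)
print(U)                            # W m^-2 K^-1
print(U * 20.0)                     # radiative heat leak per m^2, in W
```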
The layers of MLI can be arbitrarily close to each other, as long as they are not in thermal contact. The separation space only needs to be minute, which is the function of the extremely thin scrim or polyester 'bridal veil'. To reduce weight and blanket thickness, the internal layers are made very thin, but they must be opaque to thermal radiation. Since they do not need much structural strength, these internal layers are usually made of very thin plastic film, only a few micrometres thick, such as Mylar or Kapton, coated on one or both sides with a thin layer of metal, typically silver or aluminium. For compactness, the layers are spaced as close to each other as possible, though without touching, since there should be little or no thermal conduction between the layers. A typical insulation blanket has 40 or more layers. The layers may be embossed or crinkled, so they only touch at a few points, or held apart by a thin cloth mesh, or scrim. The outer layers must be stronger, and are often thicker and stronger plastic, reinforced with a stronger scrim material such as fiberglass.
In satellite applications, the MLI will be full of air at launch time. As the rocket ascends, this air must be able to escape without damaging the blanket. This may require holes or perforations in the layers, even though this reduces their effectiveness.
In cryogenics, MLI is the most effective kind of insulation. Therefore, it is commonly used in liquefied-gas tanks (e.g. for LNG and cryogenic liquids such as hydrogen, oxygen or nitrogen), cryostats, cryogenic pipelines and superconducting devices. Additionally it is valued for its compact size and low weight: a blanket composed of 40 layers of MLI is still thin and lightweight.
Methods tend to vary between manufacturers with some MLI blankets being constructed primarily using sewing technology. The layers are cut, stacked on top of each other, and sewn together at the edges.
Other more recent methods include the use of Computer-aided design and Computer-aided manufacturing technology to weld a precise outline of the final blanket shape using Ultrasonic welding onto a "pack" (the final set of layers before the external "skin" is added by hand.)
Seams and gaps in the insulation are responsible for most of the heat leakage through MLI blankets. A new method is being developed to use polyetheretherketone (PEEK) tag pins (similar to plastic hooks used to attach price tags to garments) to fix the film layers in place instead of sewing to improve the thermal performance.
Additional properties.
Spacecraft also may use MLI as a first line of defence against dust impacts. This normally means spacing it a cm or so away from the surface it is insulating. Also, one or more of the layers may be replaced by a mechanically strong material, such as beta cloth.
In most applications the insulating layers must be grounded, so they cannot build up a charge and arc, causing radio interference. Since the normal construction results in electrical as well as thermal insulation, these applications may include aluminium spacers as opposed to cloth scrim at the points where the blankets are sewn together.
Using similar materials, single-layer insulation (SLI) and dual-layer insulation (DLI) are also commonplace on spacecraft.
| [
{
"math_id": 0,
"text": "\\Delta T"
},
{
"math_id": 1,
"text": "Q = U A \\Delta T."
},
{
"math_id": 2,
"text": "\\epsilon_1"
},
{
"math_id": 3,
"text": "\\epsilon_2"
},
{
"math_id": 4,
"text": "T_1"
},
{
"math_id": 5,
"text": "T_2"
},
{
"math_id": 6,
"text": "U = \\sigma (T_1^2+ T_2^2)(T_1+T_2)\\frac{1}{1/\\epsilon_1 + 1/\\epsilon_2 - 1},"
},
{
"math_id": 7,
"text": "\\sigma=5.7\\times10^{-8}"
},
{
"math_id": 8,
"text": "|\\Delta T|<<(T_1+T_2)/2"
},
{
"math_id": 9,
"text": "\\epsilon"
},
{
"math_id": 10,
"text": "U = 4\\sigma T^3 \\frac{1}{(N-1)(2/\\epsilon - 1)},"
},
{
"math_id": 11,
"text": "T=(T_1+T_2)/2"
}
] | https://en.wikipedia.org/wiki?curid=8559743 |
856005 | Rotation matrix | Matrix representing a Euclidean rotation
In linear algebra, a rotation matrix is a transformation matrix that is used to perform a rotation in Euclidean space. For example, using the convention below, the matrix
formula_0
rotates points in the xy plane counterclockwise through an angle θ about the origin of a two-dimensional Cartesian coordinate system. To perform the rotation on a plane point with standard coordinates v = ("x", "y"), it should be written as a column vector, and multiplied by the matrix R:
formula_1
If x and y are the endpoint coordinates of a vector, where x is the cosine and y is the sine of its angle, then the above equations become the trigonometric summation angle formulae. Indeed, a rotation matrix can be seen as the trigonometric summation angle formulae in matrix form. One way to understand this is to say we have a vector at an angle 30° from the x axis, and we wish to rotate it by a further 45°. We simply need to compute the vector endpoint coordinates at 75°.
The examples in this article apply to "active rotations" of vectors "counterclockwise" in a "right-handed coordinate system" (y counterclockwise from x) by "pre-multiplication" (R on the left). If any one of these is changed (such as rotating axes instead of vectors, a "passive transformation"), then the inverse of the example matrix should be used, which coincides with its transpose.
Since matrix multiplication has no effect on the zero vector (the coordinates of the origin), rotation matrices describe rotations about the origin. Rotation matrices provide an algebraic description of such rotations, and are used extensively for computations in geometry, physics, and computer graphics. In some literature, the term "rotation" is generalized to include improper rotations, characterized by orthogonal matrices with a determinant of −1 (instead of +1). These combine "proper" rotations with "reflections" (which invert orientation). In other cases, where reflections are not being considered, the label "proper" may be dropped. The latter convention is followed in this article.
Rotation matrices are square matrices, with real entries. More specifically, they can be characterized as orthogonal matrices with determinant 1; that is, a square matrix "R" is a rotation matrix if and only if "R"T = "R"−1 and det "R" = 1. The set of all orthogonal matrices of size n with determinant +1 is a representation of a group known as the special orthogonal group SO("n"), one example of which is the rotation group SO(3). The set of all orthogonal matrices of size n with determinant +1 or −1 is a representation of the (general) orthogonal group O("n").
In two dimensions.
In two dimensions, the standard rotation matrix has the following form:
formula_2
This rotates column vectors by means of the following matrix multiplication,
formula_3
Thus, the new coordinates ("x"′, "y"′) of a point ("x", "y") after rotation are
formula_4
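A direct implementation of this transformation, as a minimal sketch assuming Python with NumPy:

```python
import numpy as np

def rotate2d(point, theta):
    """Rotate a 2-D point counterclockwise by theta (radians) about the
    origin, using the standard rotation matrix on a column vector."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ np.asarray(point)

print(rotate2d([1.0, 0.0], np.pi / 2))  # -> [0, 1]: x-axis maps to y-axis
```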
Examples.
For example, when the vector
formula_5
is rotated by an angle θ, its new coordinates are
formula_6
and when the vector
formula_7
is rotated by an angle θ, its new coordinates are
formula_8
Direction.
The direction of vector rotation is counterclockwise if θ is positive (e.g. 90°), and clockwise if θ is negative (e.g. −90°) for formula_9. Thus the clockwise rotation matrix is found as
formula_10
The two-dimensional case is the only non-trivial (i.e. not one-dimensional) case where the rotation matrices group is commutative, so that it does not matter in which order multiple rotations are performed. An alternative convention uses rotating axes, and the above matrices also represent a rotation of the "axes clockwise" through an angle θ.
Non-standard orientation of the coordinate system.
If a standard right-handed Cartesian coordinate system is used, with the "x"-axis to the right and the "y"-axis up, the rotation "R"("θ") is counterclockwise. If a left-handed Cartesian coordinate system is used, with x directed to the right but y directed down, "R"("θ") is clockwise. Such non-standard orientations are rarely used in mathematics but are common in 2D computer graphics, which often have the origin in the top left corner and the "y"-axis down the screen or page.
See below for other alternative conventions which may change the sense of the rotation produced by a rotation matrix.
Common 2D rotations.
Particularly useful are the matrices
formula_11
for 90°, 180°, and 270° counter-clockwise rotations.
Relationship with complex plane.
Since
formula_12
the matrices of the form
formula_13
form a ring isomorphic to the field of complex numbers. Under this isomorphism, the rotation matrices correspond to the circle of unit complex numbers, the complex numbers of modulus 1.
If one identifies formula_14 with formula_15 through the linear isomorphism formula_16 the action of a matrix of the above form on vectors of formula_14 corresponds to the multiplication by the complex number "x" + "iy", and rotations correspond to multiplication by complex numbers of modulus 1.
As every rotation matrix can be written
formula_17
the above correspondence associates such a matrix with the complex number
formula_18
(this last equality is Euler's formula).
In three dimensions.
Basic 3D rotations.
A basic 3D rotation (also called elemental rotation) is a rotation about one of the axes of a coordinate system. The following three basic rotation matrices rotate vectors by an angle θ about the x-, y-, or z-axis, in three dimensions, using the right-hand rule—which codifies their alternating signs. Notice that the right-hand rule only works when multiplying formula_19. (The same matrices can also represent a clockwise rotation of the axes.)
formula_20
For column vectors, each of these basic vector rotations appears counterclockwise when the axis about which they occur points toward the observer, the coordinate system is right-handed, and the angle θ is positive. "R""z", for instance, would rotate toward the "y"-axis a vector aligned with the "x"-axis, as can easily be checked by operating with "R""z" on the vector (1,0,0):
formula_21
This is similar to the rotation produced by the above-mentioned two-dimensional rotation matrix. See below for alternative conventions which may apparently or actually invert the sense of the rotation produced by these matrices.
General 3D rotations.
Other 3D rotation matrices can be obtained from these three using matrix multiplication. For example, the product
formula_22
represents a rotation whose yaw, pitch, and roll angles are α, β and γ, respectively. More formally, it is an intrinsic rotation whose Tait–Bryan angles are α, β, γ, about axes z, y, x, respectively.
Similarly, the product
formula_23
represents an extrinsic rotation whose (improper) Euler angles are α, β, γ, about axes x, y, z.
These matrices produce the desired effect only if they are used to premultiply column vectors, and (since in general matrix multiplication is not commutative) only if they are applied in the specified order (see Ambiguities for more details). The order of rotation operations is from right to left; the matrix adjacent to the column vector is the first to be applied, and then the one to the left.
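The composition rules above can be checked numerically. A minimal sketch, assuming Python with NumPy, builds the three basic rotations, composes them in the stated right-to-left order, and verifies that the product is a proper rotation; the three angles are arbitrary examples.

```python
import numpy as np

def Rx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

alpha, beta, gamma = 0.3, -0.5, 1.1    # arbitrary yaw, pitch, roll angles
R = Rz(alpha) @ Ry(beta) @ Rx(gamma)   # applied right-to-left: Rx first

# R is a proper rotation: orthogonal, with determinant +1.
print(np.allclose(R.T @ R, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```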
Conversion from rotation matrix to axis–angle.
Every rotation in three dimensions is defined by its axis (a vector along this axis is unchanged by the rotation), and its angle — the amount of rotation about that axis (Euler rotation theorem).
There are several methods to compute the axis and angle from a rotation matrix (see also axis–angle representation). Here, we only describe the method based on the computation of the eigenvectors and eigenvalues of the rotation matrix. It is also possible to use the trace of the rotation matrix.
Determining the axis.
Given a 3 × 3 rotation matrix R, a vector u parallel to the rotation axis must satisfy
formula_24
since the rotation of u around the rotation axis must result in u. The equation above may be solved for u which is unique up to a scalar factor unless "R" = "I".
Further, the equation may be rewritten
formula_25
which shows that u lies in the null space of "R" − "I".
Viewed in another way, u is an eigenvector of R corresponding to the eigenvalue "λ" = 1. Every rotation matrix must have this eigenvalue, the other two eigenvalues being complex conjugates of each other. It follows that a general rotation matrix in three dimensions has, up to a multiplicative constant, only one real eigenvector.
One way to determine the rotation axis is by showing that:
formula_26
Since ("R" − "R"T) is a skew-symmetric matrix, we can choose u such that
formula_27
The matrix–vector product becomes a cross product of a vector with itself, ensuring that the result is zero:
formula_28
Therefore, if
formula_29
then
formula_30
The magnitude of u computed this way is 2 sin "θ", where "θ" is the angle of rotation.
This does not work if R is symmetric. Above, if "R" − "R"T is zero, then all subsequent steps are invalid. In this case, it is necessary to diagonalize R and find the eigenvector corresponding to an eigenvalue of 1.
Determining the angle.
To find the angle of a rotation, once the axis of the rotation is known, select a vector v perpendicular to the axis. Then the angle of the rotation is the angle between v and "R"v.
A more direct method, however, is to simply calculate the trace: the sum of the diagonal elements of the rotation matrix. Care should be taken to select the right sign for the angle θ to match the chosen axis:
formula_31
from which follows that the angle's absolute value is
formula_32
For the rotation axis formula_33, the correct angle can be obtained from
formula_34
where
formula_35
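The two recipes above, the axis from the skew-symmetric part of "R" and the angle from the trace, combine into a short routine. A minimal sketch, assuming Python with NumPy (and, as noted above, assuming "R" is not symmetric):

```python
import numpy as np

def axis_angle_from_matrix(R):
    """Axis and angle of a 3x3 proper rotation matrix: the axis from the
    skew-symmetric part R - R^T (whose dual vector has magnitude
    2*sin(theta)), the angle from the trace via 1 + 2*cos(theta) = tr R.
    Assumes R is not symmetric (angle neither 0 nor 180 degrees)."""
    u = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]])
    theta = np.arccos((np.trace(R) - 1.0) / 2.0)
    return u / np.linalg.norm(u), theta

# Round trip against a rotation of 1 radian about the z-axis:
c, s = np.cos(1.0), np.sin(1.0)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(axis_angle_from_matrix(R))   # -> (array([0., 0., 1.]), 1.0)
```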
Rotation matrix from axis and angle.
The matrix of a proper rotation "R" by angle "θ" around the axis u = ("ux", "uy", "uz"), a unit vector with "ux"² + "uy"² + "uz"² = 1, is given by:
formula_36
This rotation matrix can be derived from first principles; the basic idea is to divide the problem into a few simple known steps.
This can be written more concisely as
formula_37
where [u]× is the cross product matrix of u; the expression u ⊗ u is the outer product, and I is the identity matrix. Alternatively, the matrix entries are:
formula_38
where εjkl is the Levi-Civita symbol with "ε"123 = 1. This is a matrix form of Rodrigues' rotation formula (or the equivalent, differently parametrized Euler–Rodrigues formula) with
formula_39
In formula_40 the rotation of a vector x around the axis u by an angle θ can be written as:
formula_41
or equivalently:
formula_42
This can also be written in tensor notation as:
formula_43
If the 3D space is right-handed and "θ" > 0, this rotation will be counterclockwise when u points towards the observer (Right-hand rule). Explicitly, with formula_44 a right-handed orthonormal basis,
formula_45
Note the striking "merely apparent differences" to the "equivalent" Lie-algebraic formulation below.
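As a minimal sketch of the axis–angle formula above, assuming Python with NumPy (the axis and angle in the example are arbitrary):

```python
import numpy as np

def rotation_matrix(axis, theta):
    """Rotation matrix for angle theta about the unit vector axis, from
    the matrix form of Rodrigues' formula:
    R = cos(t) I + sin(t) [u]_x + (1 - cos(t)) u (outer) u."""
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)
    ux = np.array([[0, -u[2], u[1]],
                   [u[2], 0, -u[0]],
                   [-u[1], u[0], 0]])   # cross-product matrix [u]_x
    return (np.cos(theta) * np.eye(3)
            + np.sin(theta) * ux
            + (1.0 - np.cos(theta)) * np.outer(u, u))

R = rotation_matrix([0, 0, 1], np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # -> [0, 1, 0]
```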
Properties.
For any n-dimensional rotation matrix R acting on formula_46
formula_47 (The rotation is an orthogonal matrix)
It follows that:
formula_48
A rotation is termed proper if det "R" = 1, and improper (or a roto-reflection) if det "R" = –1. For even dimensions "n" = 2"k", the "n" eigenvalues "λ" of a proper rotation occur as pairs of complex conjugates which are roots of unity: "λ" = "e"±"iθj" for "j" = 1, ..., "k", which is real only for "λ" = ±1. Therefore, there may be no vectors fixed by the rotation ("λ" = 1), and thus no axis of rotation. Any fixed eigenvectors occur in pairs, and the axis of rotation is an even-dimensional subspace.
For odd dimensions "n" = 2"k" + 1, a proper rotation "R" will have an odd number of eigenvalues, with at least one "λ" = 1, and the axis of rotation will be an odd-dimensional subspace. Proof:
formula_49
Here "I" is the identity matrix, and we use det("R"T) = det("R") = 1, as well as (−1)"n" = −1 since "n" is odd. Therefore, det("R" – "I") = 0, meaning there is a nonzero vector v with ("R" – "I")v = 0, that is "R"v = v, a fixed eigenvector. There may also be pairs of fixed eigenvectors in the even-dimensional subspace orthogonal to v, so the total dimension of fixed eigenvectors is odd.
For example, in 2-space "n" = 2, a rotation by angle "θ" has eigenvalues "λ" = "eiθ" and "λ" = "e"−"iθ", so there is no axis of rotation except when "θ" = 0, the case of the null rotation. In 3-space "n" = 3, the axis of a non-null proper rotation is always a unique line, and a rotation around this axis by angle "θ" has eigenvalues "λ" = 1, "eiθ", "e"−"iθ". In 4-space "n" = 4, the four eigenvalues are of the form "e"±"iθ", "e"±"iφ". The null rotation has "θ" = "φ" = 0. The case of "θ" = 0, "φ" ≠ 0 is called a "simple rotation", with two unit eigenvalues forming an "axis plane", and a two-dimensional rotation orthogonal to the axis plane. Otherwise, there is no axis plane. The case of "θ" = "φ" is called an "isoclinic rotation", having eigenvalues "e"±"iθ" repeated twice, so every vector is rotated through an angle "θ".
The trace of a rotation matrix is equal to the sum of its eigenvalues. For "n" = 2, a rotation by angle "θ" has trace 2 cos "θ". For "n" = 3, a rotation around any axis by angle "θ" has trace 1 + 2 cos "θ". For "n" = 4, the trace is 2(cos "θ" + cos "φ"), which becomes 4 cos "θ" for an isoclinic rotation.
Geometry.
In Euclidean geometry, a rotation is an example of an isometry, a transformation that moves points without changing the distances between them. Rotations are distinguished from other isometries by two additional properties: they leave (at least) one point fixed, and they leave "handedness" unchanged. In contrast, a translation moves every point, a reflection exchanges left- and right-handed ordering, a glide reflection does both, and an improper rotation combines a change in handedness with a normal rotation.
If a fixed point is taken as the origin of a Cartesian coordinate system, then every point can be given coordinates as a displacement from the origin. Thus one may work with the vector space of displacements instead of the points themselves. Now suppose ("p"1, ..., "pn") are the coordinates of the vector p from the origin O to point P. Choose an orthonormal basis for our coordinates; then the squared distance to P, by Pythagoras, is
formula_50
which can be computed using the matrix multiplication
formula_51
A geometric rotation transforms lines to lines, and preserves ratios of distances between points. From these properties it can be shown that a rotation is a linear transformation of the vectors, and thus can be written in matrix form, "Q"p. The fact that a rotation preserves, not just ratios, but distances themselves, is stated as
formula_52
or
formula_53
Because this equation holds for all vectors, p, one concludes that every rotation matrix, "Q", satisfies the orthogonality condition,
formula_54
Rotations preserve handedness because they cannot change the ordering of the axes, which implies the special matrix condition,
formula_55
Equally important, it can be shown that any matrix satisfying these two conditions acts as a rotation.
Multiplication.
The inverse of a rotation matrix is its transpose, which is also a rotation matrix:
formula_56
The product of two rotation matrices is a rotation matrix:
formula_57
For "n" > 2, multiplication of "n" × "n" rotation matrices is generally not commutative.
formula_58
Noting that any identity matrix is a rotation matrix, and that matrix multiplication is associative, we may summarize all these properties by saying that the "n" × "n" rotation matrices form a group, which for "n" > 2 is non-abelian, called a special orthogonal group, and denoted by SO("n"), SO("n",R), SO"n", or SO"n"(R). The group of "n" × "n" rotation matrices is isomorphic to the group of rotations in an "n"-dimensional space. This means that multiplication of rotation matrices corresponds to composition of rotations, applied in left-to-right order of their corresponding matrices.
Ambiguities.
The interpretation of a rotation matrix can be subject to many ambiguities.
In most cases the effect of the ambiguity is equivalent to the effect of a rotation matrix inversion (which, for these orthogonal matrices, is equivalently the matrix transpose).
The coordinates of a point "P" may change due to either a rotation of the coordinate system "CS" (alias), or a rotation of the point "P" (alibi). In the latter case, the rotation of "P" also produces a rotation of the vector v representing "P". In other words, either "P" and v are fixed while "CS" rotates (alias), or "CS" is fixed while "P" and v rotate (alibi). Any given rotation can be legitimately described both ways, as vectors and coordinate systems actually rotate with respect to each other, about the same axis but in opposite directions. Throughout this article, we chose the alibi approach to describe rotations. For instance,
formula_59
represents a counterclockwise rotation of a vector v by an angle "θ", or a rotation of "CS" by the same angle but in the opposite direction (i.e. clockwise). Alibi and alias transformations are also known as active and passive transformations, respectively.
The same point "P" can be represented either by a column vector v or a row vector w. Rotation matrices can either pre-multiply column vectors ("Rv), or post-multiply row vectors (wR"). However, "Rv produces a rotation in the opposite direction with respect to wR". Throughout this article, rotations produced on column vectors are described by means of a pre-multiplication. To obtain exactly the same rotation (i.e. the same final coordinates of point "P"), the equivalent row vector must be post-multiplied by the transpose of R (i.e. w"R"T).
The matrix and the vector can be represented with respect to a right-handed or left-handed coordinate system. Throughout the article, we assumed a right-handed orientation, unless otherwise specified.
The vector space has a dual space of linear forms, and the matrix can act on either vectors or forms.
Decompositions.
Independent planes.
Consider the 3 × 3 rotation matrix
formula_60
If "Q" acts in a certain direction, v, purely as a scaling by a factor λ, then we have
formula_61
so that
formula_62
Thus λ is a root of the characteristic polynomial for Q,
formula_63
Two features are noteworthy. First, one of the roots (or eigenvalues) is 1, which tells us that some direction is unaffected by the matrix. For rotations in three dimensions, this is the "axis" of the rotation (a concept that has no meaning in any other dimension). Second, the other two roots are a pair of complex conjugates, whose product is 1 (the constant term of the quadratic), and whose sum is 2 cos "θ" (the negated linear term). This factorization is of interest for 3 × 3 rotation matrices because the same thing occurs for all of them. (As special cases, for a null rotation the "complex conjugates" are both 1, and for a 180° rotation they are both −1.) Furthermore, a similar factorization holds for any "n" × "n" rotation matrix. If the dimension, n, is odd, there will be a "dangling" eigenvalue of 1; and for any dimension the rest of the polynomial factors into quadratic terms like the one here (with the two special cases noted). We are guaranteed that the characteristic polynomial will have degree n and thus n eigenvalues. And since a rotation matrix commutes with its transpose, it is a normal matrix, so can be diagonalized. We conclude that every rotation matrix, when expressed in a suitable coordinate system, partitions into independent rotations of two-dimensional subspaces, at most ⌊"n"/2⌋ of them.
The sum of the entries on the main diagonal of a matrix is called the trace; it does not change if we reorient the coordinate system, and always equals the sum of the eigenvalues. This has the convenient implication for 2 × 2 and 3 × 3 rotation matrices that the trace reveals the angle of rotation, θ, in the two-dimensional space (or subspace). For a 2 × 2 matrix the trace is 2 cos "θ", and for a 3 × 3 matrix it is 1 + 2 cos "θ". In the three-dimensional case, the subspace consists of all vectors perpendicular to the rotation axis (the invariant direction, with eigenvalue 1). Thus we can extract from any 3 × 3 rotation matrix a rotation axis and an angle, and these completely determine the rotation.
Sequential angles.
The constraints on a 2 × 2 rotation matrix imply that it must have the form
formula_64
with "a"2 + "b"2
1. Therefore, we may set "a"
cos "θ" and "b"
sin "θ", for some angle θ. To solve for θ it is not enough to look at a alone or b alone; we must consider both together to place the angle in the correct quadrant, using a two-argument arctangent function.
Now consider the first column of a 3 × 3 rotation matrix,
formula_65
Although "a"2 + "b"2 will probably not equal 1, but some value "r"2 < 1, we can use a slight variation of the previous computation to find a so-called Givens rotation that transforms the column to
formula_66
zeroing b. This acts on the subspace spanned by the x- and y-axes. We can then repeat the process for the xz-subspace to zero c. Acting on the full matrix, these two rotations produce the schematic form
formula_67
Shifting attention to the second column, a Givens rotation of the yz-subspace can now zero the z value. This brings the full matrix to the form
formula_68
which is an identity matrix. Thus we have decomposed Q as
formula_69
An "n" × "n" rotation matrix will have ("n" − 1) + ("n" − 2) + ⋯ + 2 + 1, or
formula_70
entries below the diagonal to zero. We can zero them by extending the same idea of stepping through the columns with a series of rotations in a fixed sequence of planes. We conclude that the set of "n" × "n" rotation matrices, each of which has "n"2 entries, can be parameterized by "n"("n" − 1)/2 angles.
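A minimal Python sketch of this triangularization (illustrative only; the function name and the use of NumPy are assumptions, not from the article) collects one Givens angle per subdiagonal entry:

```python
import numpy as np

def givens_angles(Q):
    """Reduce a rotation matrix to the identity with Givens rotations,
    stepping through the columns and zeroing each subdiagonal entry;
    returns the resulting n(n-1)/2 angles."""
    Q = np.array(Q, dtype=float)
    n = Q.shape[0]
    angles = []
    for j in range(n - 1):                 # column to clear
        for i in range(j + 1, n):          # subdiagonal entry to zero
            theta = np.arctan2(Q[i, j], Q[j, j])
            G = np.eye(n)                  # plane rotation in rows j, i
            G[j, j] = G[i, i] = np.cos(theta)
            G[j, i] = np.sin(theta)
            G[i, j] = -np.sin(theta)
            Q = G @ Q                      # new Q[i, j] is exactly zero
            angles.append(theta)
    return angles
```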
In three dimensions this restates in matrix form an observation made by Euler, so mathematicians call the ordered sequence of three angles Euler angles. However, the situation is somewhat more complicated than we have so far indicated. Despite the small dimension, we actually have considerable freedom in the sequence of axis pairs we use; and we also have some freedom in the choice of angles. Thus we find many different conventions employed when three-dimensional rotations are parameterized for physics, or medicine, or chemistry, or other disciplines. When we include the option of world axes or body axes, 24 different sequences are possible. And while some disciplines call any sequence Euler angles, others give different names (Cardano, Tait–Bryan, roll-pitch-yaw) to different sequences.
One reason for the large number of options is that, as noted previously, rotations in three dimensions (and higher) do not commute. If we reverse a given sequence of rotations, we get a different outcome. This also implies that we cannot compose two rotations by adding their corresponding angles. Thus Euler angles are not vectors, despite a similarity in appearance as a triplet of numbers.
Nested dimensions.
A 3 × 3 rotation matrix such as
formula_71
suggests a 2 × 2 rotation matrix,
formula_72
is embedded in the upper left corner:
formula_73
This is no illusion; not just one, but many, copies of n-dimensional rotations are found within ("n" + 1)-dimensional rotations, as subgroups. Each embedding leaves one direction fixed, which in the case of 3 × 3 matrices is the rotation axis. For example, we have
formula_74
fixing the x-axis, the y-axis, and the z-axis, respectively. The rotation axis need not be a coordinate axis; if u = ("x","y","z") is a unit vector in the desired direction, then
formula_75
where "cθ"
cos "θ", "sθ"
sin "θ", is a rotation by angle θ leaving axis u fixed.
A direction in ("n" + 1)-dimensional space will be a unit magnitude vector, which we may consider a point on a generalized sphere, "S""n". Thus it is natural to describe the rotation group SO("n" + 1) as combining SO("n") and "S""n". A suitable formalism is the fiber bundle,
formula_76
where for every direction in the base space, "S""n", the fiber over it in the total space, SO("n" + 1), is a copy of the fiber space, SO("n"), namely the rotations that keep that direction fixed.
Thus we can build an "n" × "n" rotation matrix by starting with a 2 × 2 matrix, aiming its fixed axis on "S"2 (the ordinary sphere in three-dimensional space), aiming the resulting rotation on "S"3, and so on up through "S""n"−1. A point on "S""n" can be selected using n numbers, so we again have "n"("n" − 1)/2 numbers to describe any "n" × "n" rotation matrix.
In fact, we can view the sequential angle decomposition, discussed previously, as reversing this process. The composition of "n" − 1 Givens rotations brings the first column (and row) to (1, 0, ..., 0), so that the remainder of the matrix is a rotation matrix of dimension one less, embedded so as to leave (1, 0, ..., 0) fixed.
Skew parameters via Cayley's formula.
When an "n" × "n" rotation matrix Q, does not include a −1 eigenvalue, thus none of the planar rotations which it comprises are 180° rotations, then "Q" + "I" is an invertible matrix. Most rotation matrices fit this description, and for them it can be shown that ("Q" − "I")("Q" + "I")−1 is a skew-symmetric matrix, A. Thus "A"T
−"A"; and since the diagonal is necessarily zero, and since the upper triangle determines the lower one, A contains "n"("n" − 1) independent numbers.
Conveniently, "I" − "A" is invertible whenever A is skew-symmetric; thus we can recover the original matrix using the "Cayley transform",
formula_77
which maps any skew-symmetric matrix A to a rotation matrix. In fact, aside from the noted exceptions, we can produce any rotation matrix in this way. Although in practical applications we can hardly afford to ignore 180° rotations, the Cayley transform is still a potentially useful tool, giving a parameterization of most rotation matrices without trigonometric functions.
In three dimensions, for example, we have
formula_78
If we condense the skew entries into a vector, ("x","y","z"), then we produce a 90° rotation around the x-axis for (1, 0, 0), around the y-axis for (0, 1, 0), and around the z-axis for (0, 0, 1). The 180° rotations are just out of reach; for, in the limit as "x" → ∞, ("x", 0, 0) does approach a 180° rotation around the x axis, and similarly for other directions.
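A short sketch of the transform in Python (illustrative, assuming NumPy; not from the article):

```python
import numpy as np

def cayley(A):
    """Cayley transform (I + A)(I - A)^{-1}, mapping a skew-symmetric
    matrix A to a rotation matrix; I - A is invertible for every
    skew-symmetric A."""
    I = np.eye(A.shape[0])
    return (I + A) @ np.linalg.inv(I - A)

# The skew vector (1, 0, 0) gives a 90-degree rotation about the
# x-axis, as noted in the text.
A = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
print(cayley(A))
```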
Decomposition into shears.
For the 2D case, a rotation matrix can be decomposed into three shear matrices:
formula_79
This is useful, for instance, in computer graphics, since shears can be implemented with fewer multiplication instructions than rotating a bitmap directly. On modern computers, this may not matter, but it can be relevant for very old or low-end microprocessors.
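A quick numerical check of the three-shear identity (an illustration, not from the article, assuming NumPy):

```python
import numpy as np

theta = np.pi / 6
shear_x = np.array([[1., -np.tan(theta / 2)],
                    [0., 1.]])
shear_y = np.array([[1., 0.],
                    [np.sin(theta), 1.]])

R_shears = shear_x @ shear_y @ shear_x
R_direct = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R_shears, R_direct))  # True
```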
A rotation can also be written as two shears and a scaling:
formula_80
Group theory.
Below follow some basic facts about the role of the collection of "all" rotation matrices of a fixed dimension (here mostly 3) in mathematics and particularly in physics where rotational symmetry is a "requirement" of every truly fundamental law (due to the assumption of isotropy of space), and where the same symmetry, when present, is a "simplifying property" of many problems of less fundamental nature. Examples abound in classical mechanics and quantum mechanics. Knowledge of the part of the solutions pertaining to this symmetry applies (with qualifications) to "all" such problems and it can be factored out of a specific problem at hand, thus reducing its complexity. A prime example – in mathematics and physics – would be the theory of spherical harmonics. Their role in the group theory of the rotation groups is that of being a representation space for the entire set of finite-dimensional irreducible representations of the rotation group SO(3). For this topic, see Rotation group SO(3) § Spherical harmonics.
The main articles listed in each subsection are referred to for more detail.
Lie group.
The "n" × "n" rotation matrices for each n form a group, the special orthogonal group, SO("n"). This algebraic structure is coupled with a topological structure inherited from formula_81 in such a way that the operations of multiplication and taking the inverse are analytic functions of the matrix entries. Thus SO("n") is for each n a Lie group. It is compact and connected, but not simply connected. It is also a semi-simple group, in fact a simple group with the exception SO(4). The relevance of this is that all theorems and all machinery from the theory of analytic manifolds (analytic manifolds are in particular smooth manifolds) apply and the well-developed representation theory of compact semi-simple groups is ready for use.
Lie algebra.
The Lie algebra so("n") of SO("n") is given by
formula_82
and is the space of skew-symmetric matrices of dimension "n", see classical group, where o("n") is the Lie algebra of O("n"), the orthogonal group. For reference, the most common basis for so(3) is
formula_83
Exponential map.
Connecting the Lie algebra to the Lie group is the exponential map, which is defined using the standard matrix exponential series for "e""A". For any skew-symmetric matrix A, exp("A") is always a rotation matrix.
An important practical example is the 3 × 3 case. In rotation group SO(3), it is shown that one can identify every "A" ∈ so(3) with an Euler vector ω = "θ"u, where u = ("x", "y", "z") is a unit magnitude vector.
By the properties of the identification formula_84, u is in the null space of A. Thus, u is left invariant by exp("A") and is hence a rotation axis.
According to Rodrigues' rotation formula on matrix form, one obtains,
formula_85
where
formula_86
This is the matrix for a rotation around axis u by the angle θ. For full detail, see exponential map SO(3).
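A sketch of this formula in Python (the function name and NumPy usage are illustrative assumptions):

```python
import numpy as np

def exp_rotation(u, theta):
    """Rodrigues' formula in matrix form: exp(theta [u]_x) =
    I + sin(theta) K + (1 - cos(theta)) K^2, assuming u is a unit vector."""
    x, y, z = u
    K = np.array([[0., -z,  y],
                  [ z,  0., -x],
                  [-y,  x,  0.]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# Rotating (1, 0, 0) by 90 degrees about the z-axis gives (0, 1, 0).
print(exp_rotation((0., 0., 1.), np.pi / 2) @ np.array([1., 0., 0.]))
```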
Baker–Campbell–Hausdorff formula.
The BCH formula provides an explicit expression for "Z" = log("e""X""e""Y") in terms of a series expansion of nested commutators of X and Y. This general expansion unfolds as
formula_87
In the 3 × 3 case, the general infinite expansion has a compact form,
formula_88
for suitable trigonometric function coefficients, detailed in the Baker–Campbell–Hausdorff formula for SO(3).
As a group identity, the above holds for "all faithful representations", including the doublet (spinor representation), which is simpler. The same explicit formula thus follows straightforwardly through Pauli matrices; see the 2 × 2 derivation for SU(2). For the general "n" × "n" case, see the cited literature.
Spin group.
The Lie group of "n" × "n" rotation matrices, SO("n"), is not simply connected, so Lie theory tells us it is a homomorphic image of a universal covering group. Often the covering group, which in this case is called the spin group denoted by Spin("n"), is simpler and more natural to work with.
In the case of planar rotations, SO(2) is topologically a circle, "S"1. Its universal covering group, Spin(2), is isomorphic to the real line, R, under addition. Whenever angles of arbitrary magnitude are used one is taking advantage of the convenience of the universal cover. Every 2 × 2 rotation matrix is produced by a countable infinity of angles, separated by integer multiples of 2π. Correspondingly, the fundamental group of SO(2) is isomorphic to the integers, Z.
In the case of spatial rotations, SO(3) is topologically equivalent to three-dimensional real projective space, RP3. Its universal covering group, Spin(3), is isomorphic to the 3-sphere, "S"3. Every 3 × 3 rotation matrix is produced by two opposite points on the sphere. Correspondingly, the fundamental group of SO(3) is isomorphic to the two-element group, Z2.
We can also describe Spin(3) as isomorphic to quaternions of unit norm under multiplication, or to certain 4 × 4 real matrices, or to 2 × 2 complex special unitary matrices, namely SU(2). The covering maps for the first and the last case are given by
formula_89
and
formula_90
For a detailed account of the SU(2)-covering and the quaternionic covering, see spin group SO(3).
Many features of these cases are the same for higher dimensions. The coverings are all two-to-one, with SO("n"), "n" > 2, having fundamental group Z2. The natural setting for these groups is within a Clifford algebra. One type of action of the rotations is produced by a kind of "sandwich", denoted by "qvq"∗. More importantly in applications to physics, the corresponding spin representation of the Lie algebra sits inside the Clifford algebra. It can be exponentiated in the usual way to give rise to a 2-valued representation, also known as projective representation of the rotation group. This is the case with SO(3) and SU(2), where the 2-valued representation can be viewed as an "inverse" of the covering map. By properties of covering maps, the inverse can be chosen one-to-one as a local section, but not globally.
Infinitesimal rotations.
The matrices in the Lie algebra are not themselves rotations; the skew-symmetric matrices are derivatives, proportional differences of rotations. An actual "differential rotation", or "infinitesimal rotation matrix" has the form
formula_91
where "dθ" is vanishingly small and "A" ∈ so(n), for instance with "A" = "L""x",
formula_92
The computation rules are as usual except that infinitesimals of second order are routinely dropped. With these rules, these matrices do not satisfy all the same properties as ordinary finite rotation matrices under the usual treatment of infinitesimals. It turns out that "the order in which infinitesimal rotations are applied is irrelevant". To see this exemplified, consult infinitesimal rotations SO(3).
Conversions.
We have seen the existence of several decompositions that apply in any dimension, namely independent planes, sequential angles, and nested dimensions. In all these cases we can either decompose a matrix or construct one. We have also given special attention to 3 × 3 rotation matrices, and these warrant further attention, in both directions.
Quaternion.
Given the unit quaternion q = "w" + "x"i + "y"j + "z"k, the equivalent pre-multiplied (to be used with column vectors) 3 × 3 rotation matrix is
formula_93
Now every quaternion component appears multiplied by two in a term of degree two, and if all such terms are zero what is left is an identity matrix. This leads to an efficient, robust conversion from any quaternion – whether unit or non-unit – to a 3 × 3 rotation matrix. Given:
formula_94
we can calculate
formula_95
Freed from the demand for a unit quaternion, we find that nonzero quaternions act as homogeneous coordinates for 3 × 3 rotation matrices. The Cayley transform, discussed earlier, is obtained by scaling the quaternion so that its w component is 1. For a 180° rotation around any axis, w will be zero, which explains the Cayley limitation.
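A direct transcription of this conversion into Python (a sketch; the function name is ours, not the article's):

```python
import numpy as np

def quat_to_matrix(w, x, y, z):
    """Convert any nonzero quaternion, unit or not, to a 3x3 rotation
    matrix using the n and s quantities defined above."""
    n = w*w + x*x + y*y + z*z
    s = 0.0 if n == 0.0 else 2.0 / n
    return np.array([
        [1 - s*(y*y + z*z),     s*(x*y - w*z),     s*(x*z + w*y)],
        [    s*(x*y + w*z), 1 - s*(x*x + z*z),     s*(y*z - w*x)],
        [    s*(x*z - w*y),     s*(y*z + w*x), 1 - s*(x*x + y*y)],
    ])

# A non-unit quaternion gives the same rotation as its normalization:
print(quat_to_matrix(2., 0., 0., 2.))  # 90 degrees about the z-axis
```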
The sum of the entries along the main diagonal (the trace), plus one, equals 4 − 4("x"2 + "y"2 + "z"2), which is 4"w"2. Thus we can write the trace itself as 2"w"2 + 2"w"2 − 1; and from the previous version of the matrix we see that the diagonal entries themselves have the same form: 2"x"2 + 2"w"2 − 1, 2"y"2 + 2"w"2 − 1, and 2"z"2 + 2"w"2 − 1. So we can easily compare the magnitudes of all four quaternion components using the matrix diagonal. We can, in fact, obtain all four magnitudes using sums and square roots, and choose consistent signs using the skew-symmetric part of the off-diagonal entries:
formula_96
Alternatively, use a single square root and division
formula_97
This is numerically stable so long as the trace, t, is not negative; otherwise, we risk dividing by (nearly) zero. In that case, suppose Qxx is the largest diagonal entry, so x will have the largest magnitude (the other cases are derived by cyclic permutation); then the following is safe.
formula_98
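A sketch combining the two branches in Python (illustrative; only the trace branch and the Qxx branch are shown, the Qyy and Qzz branches following by cyclic permutation):

```python
import math

def matrix_to_quat(Q):
    """Extract (w, x, y, z) from a 3x3 rotation matrix Q, given as
    nested lists, using the trace when it is positive and the largest
    diagonal entry otherwise."""
    t = Q[0][0] + Q[1][1] + Q[2][2]
    if t > 0:
        r = math.sqrt(1 + t)
        s = 0.5 / r
        return (0.5 * r,
                (Q[2][1] - Q[1][2]) * s,
                (Q[0][2] - Q[2][0]) * s,
                (Q[1][0] - Q[0][1]) * s)
    # Here we assume Q[0][0] is the largest diagonal entry.
    r = math.sqrt(1 + Q[0][0] - Q[1][1] - Q[2][2])
    s = 0.5 / r
    return ((Q[2][1] - Q[1][2]) * s,
            0.5 * r,
            (Q[0][1] + Q[1][0]) * s,
            (Q[2][0] + Q[0][2]) * s)
```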
If the matrix contains significant error, such as accumulated numerical error, we may construct a symmetric 4 × 4 matrix,
formula_99
and find the eigenvector, ("x", "y", "z", "w"), of its largest magnitude eigenvalue. (If Q is truly a rotation matrix, that value will be 1.) The quaternion so obtained will correspond to the rotation matrix closest to the given matrix. (Note: the formulation of the cited article is post-multiplied and works with row vectors.)
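A sketch of this eigenvector method (assuming NumPy, whose np.linalg.eigh returns eigenvalues in ascending order):

```python
import numpy as np

def nearest_quat(Q):
    """Return the quaternion (x, y, z, w) of the rotation matrix closest
    to a possibly noisy Q, as the eigenvector of K's largest eigenvalue."""
    K = np.array([
        [Q[0,0]-Q[1,1]-Q[2,2], Q[1,0]+Q[0,1],        Q[2,0]+Q[0,2],        Q[2,1]-Q[1,2]],
        [Q[1,0]+Q[0,1],        Q[1,1]-Q[0,0]-Q[2,2], Q[2,1]+Q[1,2],        Q[0,2]-Q[2,0]],
        [Q[2,0]+Q[0,2],        Q[2,1]+Q[1,2],        Q[2,2]-Q[0,0]-Q[1,1], Q[1,0]-Q[0,1]],
        [Q[2,1]-Q[1,2],        Q[0,2]-Q[2,0],        Q[1,0]-Q[0,1],        Q[0,0]+Q[1,1]+Q[2,2]],
    ]) / 3.0
    eigenvalues, eigenvectors = np.linalg.eigh(K)
    return eigenvectors[:, -1]  # eigenvector of the largest eigenvalue
```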
Polar decomposition.
If the "n" × "n" matrix M is nonsingular, its columns are linearly independent vectors; thus the Gram–Schmidt process can adjust them to be an orthonormal basis. Stated in terms of numerical linear algebra, we convert M to an orthogonal matrix, Q, using QR decomposition. However, we often prefer a Q closest to M, which this method does not accomplish. For that, the tool we want is the polar decomposition (; ).
To measure closeness, we may use any matrix norm invariant under orthogonal transformations. A convenient choice is the Frobenius norm, ‖"Q" − "M"‖F, squared, which is the sum of the squares of the element differences. Writing this in terms of the trace, Tr, our goal is,
Find Q minimizing Tr( ("Q" − "M")T("Q" − "M") ), subject to "Q"T"Q" = "I".
Though written in matrix terms, the objective function is just a quadratic polynomial. We can minimize it in the usual way, by finding where its derivative is zero. For a 3 × 3 matrix, the orthogonality constraint implies six scalar equalities that the entries of Q must satisfy. To incorporate the constraint(s), we may employ a standard technique, Lagrange multipliers, assembled as a symmetric matrix, Y. Thus our method is:
Differentiate Tr( ("Q" − "M")T("Q" − "M") + ("Q"T"Q" − "I")"Y" ) with respect to (the entries of) Q, and equate to zero.
Consider a 2 × 2 example. Including constraints, we seek to minimize
formula_100
Taking the derivative with respect to Qxx, Qxy, Qyx, Qyy in turn, we assemble a matrix.
formula_101
In general, we obtain the equation
formula_102
so that
formula_103
where Q is orthogonal and S is symmetric. To ensure a minimum, the Y matrix (and hence S) must be positive definite. Linear algebra calls QS the polar decomposition of M, with S the positive square root of "S"2 = "M"T"M".
formula_104
When M is non-singular, the Q and S factors of the polar decomposition are uniquely determined. However, the determinant of S is positive because S is positive definite, so Q inherits the sign of the determinant of M. That is, Q is only guaranteed to be orthogonal, not a rotation matrix. This is unavoidable; an M with negative determinant has no uniquely defined closest rotation matrix.
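In practice the polar factors are most easily computed from a singular value decomposition rather than the Lagrange-multiplier derivation above; a sketch assuming NumPy:

```python
import numpy as np

def polar_factors(M):
    """Polar decomposition M = Q S via the SVD M = U diag(sigma) V^T:
    Q = U V^T is the closest orthogonal matrix to M, and
    S = V diag(sigma) V^T is symmetric positive semidefinite."""
    U, sigma, Vt = np.linalg.svd(M)
    Q = U @ Vt
    S = Vt.T @ np.diag(sigma) @ Vt
    return Q, S

# Q inherits the sign of det(M); forcing a proper rotation requires
# negating the last singular direction when det(Q) is negative.
```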
Axis and angle.
To efficiently construct a rotation matrix Q from an angle θ and a unit axis u, we can take advantage of symmetry and skew-symmetry within the entries. If x, y, and z are the components of the unit vector representing the axis, and
formula_105
then
formula_106
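Written out as a sketch in Python (the function name is illustrative):

```python
import math

def axis_angle_matrix(u, theta):
    """Build Q from a unit axis u = (x, y, z) and angle theta using the
    shorthand c, s, C above; assumes u has unit length."""
    x, y, z = u
    c = math.cos(theta)
    s = math.sin(theta)
    C = 1 - c
    return [[x*x*C + c,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, y*y*C + c,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, z*z*C + c  ]]
```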
Determining an axis and angle, like determining a quaternion, is only possible up to the sign; that is, (u, "θ") and (−u, −"θ") correspond to the same rotation matrix, just like "q" and −"q". Axis–angle extraction presents further difficulties. The angle can be restricted to be from 0° to 180°, but angles are formally ambiguous by multiples of 360°. When the angle is zero, the axis is undefined. When the angle is 180°, the matrix becomes symmetric, which has implications in extracting the axis. Near multiples of 180°, care is needed to avoid numerical problems: in extracting the angle, a two-argument arctangent with atan2(sin "θ", cos "θ") equal to θ avoids the insensitivity of arccos; and in computing the axis magnitude in order to force unit magnitude, a brute-force approach can lose accuracy through underflow.
A partial approach is as follows:
formula_107
The x-, y-, and z-components of the axis would then be divided by r. A fully robust approach will use a different algorithm when t, the trace of the matrix Q, is negative, as with quaternion extraction. When r is zero because the angle is zero, an axis must be provided from some source other than the matrix.
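The partial approach, transcribed as a Python sketch (illustrative; the degenerate case r = 0 is only flagged, not resolved):

```python
import math

def axis_angle_from_matrix(Q):
    """Extract an axis and angle from a 3x3 rotation matrix Q (nested
    lists) via its skew-symmetric part and atan2, as described above."""
    x = Q[2][1] - Q[1][2]
    y = Q[0][2] - Q[2][0]
    z = Q[1][0] - Q[0][1]
    r = math.sqrt(x*x + y*y + z*z)
    t = Q[0][0] + Q[1][1] + Q[2][2]
    theta = math.atan2(r, t - 1)
    if r == 0.0:
        return None, theta  # angle 0 or 180 degrees: axis not determined here
    return (x / r, y / r, z / r), theta
```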
Euler angles.
Complexity of conversion escalates with Euler angles (used here in the broad sense). The first difficulty is to establish which of the twenty-four variations of Cartesian axis order we will use. Suppose the three angles are "θ"1, "θ"2, "θ"3; physics and chemistry may interpret these as
formula_108
while aircraft dynamics may use
formula_109
One systematic approach begins with choosing the rightmost axis. Among all permutations of ("x","y","z"), only two place that axis first; one is an even permutation and the other odd. Choosing parity thus establishes the middle axis. That leaves two choices for the left-most axis, either duplicating the first or not. These three choices give us 3 × 2 × 2 = 12 variations; we double that to 24 by choosing static or rotating axes.
This is enough to construct a matrix from angles, but triples differing in many ways can give the same rotation matrix. For example, suppose we use the zyz convention above; then we have the following equivalent pairs:
Angles for any order can be found using a concise common routine.
The problem of singular alignment, the mathematical analog of physical gimbal lock, occurs when the middle rotation aligns the axes of the first and last rotations. It afflicts every axis order at either even or odd multiples of 90°. These singularities are not characteristic of the rotation matrix as such, and only occur with the usage of Euler angles.
The singularities are avoided when considering and manipulating the rotation matrix as orthonormal row vectors (in 3D applications often named the right-vector, up-vector and out-vector) instead of as angles. The singularities are also avoided when working with quaternions.
Vector to vector formulation.
In some instances it is interesting to describe a rotation by specifying how a vector is mapped into another through the shortest path (smallest angle). In formula_40 this completely describes the associated rotation matrix. In general, given "x", "y" ∈ formula_110"n", the matrix
formula_111
belongs to SO("n" + 1) and maps x to y.
Uniform random rotation matrices.
We sometimes need to generate a uniformly distributed random rotation matrix. It seems intuitively clear in two dimensions that this means the rotation angle is uniformly distributed between 0 and 2π. That intuition is correct, but does not carry over to higher dimensions. For example, if we decompose 3 × 3 rotation matrices in axis–angle form, the angle should "not" be uniformly distributed; the probability that (the magnitude of) the angle is at most θ should be ("θ" − sin "θ")/π, for 0 ≤ "θ" ≤ π.
Since SO("n") is a connected and locally compact Lie group, we have a simple standard criterion for uniformity, namely that the distribution be unchanged when composed with any arbitrary rotation (a Lie group "translation"). This definition corresponds to what is called "Haar measure". show how to use the Cayley transform to generate and test matrices according to this criterion.
We can also generate a uniform distribution in any dimension using the "subgroup algorithm". This recursively exploits the nested dimensions group structure of SO("n"), as follows. Generate a uniform angle and construct a 2 × 2 rotation matrix. To step from "n" to "n" + 1, generate a vector v uniformly distributed on the n-sphere "S""n", embed the "n" × "n" matrix in the next larger size with last column (0, ..., 0, 1), and rotate the larger matrix so the last column becomes v.
As usual, we have special alternatives for the 3 × 3 case. Each of these methods begins with three independent random scalars uniformly distributed on the unit interval. One method takes advantage of the odd dimension to change a Householder reflection to a rotation by negation, and uses that to aim the axis of a uniform planar rotation.
Another method uses unit quaternions. Multiplication of rotation matrices is homomorphic to multiplication of quaternions, and multiplication by a unit quaternion rotates the unit sphere. Since the homomorphism is a local isometry, we immediately conclude that to produce a uniform distribution on SO(3) we may use a uniform distribution on "S"3. In practice: create a four-element vector where each element is a sampling of a normal distribution. Normalize its length and you have a uniformly sampled random unit quaternion which represents a uniformly sampled random rotation. Note that the aforementioned only applies to rotations in dimension 3. For a generalised idea of quaternions, one should look into Rotors.
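In Python this recipe is a few lines (a sketch reusing the quaternion-to-matrix formula from the Quaternion section; NumPy is assumed):

```python
import numpy as np

def random_rotation():
    """Uniformly random 3x3 rotation: four standard normal samples give
    a uniformly random point on S^3, i.e. a random unit quaternion."""
    w, x, y, z = np.random.normal(size=4)
    n = w*w + x*x + y*y + z*z  # zero with probability zero
    s = 2.0 / n
    return np.array([
        [1 - s*(y*y + z*z),     s*(x*y - w*z),     s*(x*z + w*y)],
        [    s*(x*y + w*z), 1 - s*(x*x + z*z),     s*(y*z - w*x)],
        [    s*(x*z - w*y),     s*(y*z + w*x), 1 - s*(x*x + y*y)],
    ])
```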
Euler angles can also be used, though not with each angle uniformly distributed.
For the axis–angle form, the axis is uniformly distributed over the unit sphere of directions, "S"2, while the angle has the nonuniform distribution over [0, "π"] noted previously.
See also.
<templatestyles src="Div col/styles.css"/>
Remarks.
<templatestyles src="Reflist/styles.css" />
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R = \\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta \\\\ \n\\sin \\theta & \\cos \\theta \n\\end{bmatrix} \n"
},
{
"math_id": 1,
"text": "\nR\\mathbf{v} = \\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta \\\\ \n\\sin \\theta & \\cos \\theta \n\\end{bmatrix} \n\\begin{bmatrix} x \\\\ y \\end{bmatrix}\n=\n\\begin{bmatrix} \nx\\cos\\theta-y\\sin\\theta \\\\\nx\\sin\\theta+y\\cos\\theta\n\\end{bmatrix}.\n"
},
{
"math_id": 2,
"text": "R(\\theta) = \\begin{bmatrix}\n \\cos\\theta & -\\sin\\theta \\\\\n \\sin\\theta & \\cos\\theta \\\\\n\\end{bmatrix}."
},
{
"math_id": 3,
"text": "\n\\begin{bmatrix}\n x' \\\\\n y' \\\\\n\\end{bmatrix} = \\begin{bmatrix}\n \\cos\\theta & -\\sin\\theta \\\\\n \\sin\\theta & \\cos\\theta \\\\\n\\end{bmatrix}\\begin{bmatrix}\n x \\\\\n y \\\\\n\\end{bmatrix}."
},
{
"math_id": 4,
"text": "\\begin{align}\n x' &= x \\cos\\theta - y \\sin\\theta\\, \\\\\n y' &= x \\sin\\theta + y \\cos\\theta\\,\n\\end{align}."
},
{
"math_id": 5,
"text": "\n \\mathbf{\\hat{x}} = \\begin{bmatrix}\n 1 \\\\\n 0 \\\\\n \\end{bmatrix}\n"
},
{
"math_id": 6,
"text": "\n\\begin{bmatrix}\n \\cos\\theta \\\\\n \\sin\\theta \\\\\n\\end{bmatrix},\n"
},
{
"math_id": 7,
"text": "\n \\mathbf{\\hat{y}} = \\begin{bmatrix}\n 0 \\\\\n 1 \\\\\n \\end{bmatrix}\n"
},
{
"math_id": 8,
"text": "\n \\begin{bmatrix}\n -\\sin\\theta \\\\\n \\cos\\theta \\\\\n \\end{bmatrix}.\n"
},
{
"math_id": 9,
"text": "\n R(\\theta)"
},
{
"math_id": 10,
"text": "\n R(-\\theta) = \\begin{bmatrix}\n \\cos\\theta & \\sin\\theta \\\\\n -\\sin\\theta & \\cos\\theta \\\\\n \\end{bmatrix}."
},
{
"math_id": 11,
"text": "\\begin{bmatrix}\n 0 & -1 \\\\[3pt]\n 1 & 0 \\\\\n\\end{bmatrix}, \\quad\n\\begin{bmatrix}\n -1 & 0 \\\\[3pt]\n 0 & -1 \\\\\n\\end{bmatrix}, \\quad\n\\begin{bmatrix}\n 0 & 1 \\\\[3pt]\n -1 & 0 \\\\\n\\end{bmatrix}"
},
{
"math_id": 12,
"text": "\\begin{bmatrix} 0 & -1 \\\\ 1 & 0 \\end{bmatrix}^2 \\ =\\ \\begin{bmatrix} -1 & 0 \\\\ 0 & -1 \\end{bmatrix} \\ = -I,"
},
{
"math_id": 13,
"text": "\\begin{bmatrix} x & -y \\\\ y & x \\end{bmatrix}"
},
{
"math_id": 14,
"text": "\\mathbb R^2"
},
{
"math_id": 15,
"text": "\\mathbb C"
},
{
"math_id": 16,
"text": "(a,b)\\mapsto a+ib,"
},
{
"math_id": 17,
"text": "\\begin{pmatrix}\\cos t&-\\sin t\\\\ \\sin t&\\cos t\\end{pmatrix},"
},
{
"math_id": 18,
"text": "\\cos t + i\\sin t=e^{it}"
},
{
"math_id": 19,
"text": "R \\cdot \\vec{x}"
},
{
"math_id": 20,
"text": "\n\\begin{alignat}{1}\nR_x(\\theta) &= \\begin{bmatrix}\n1 & 0 & 0 \\\\\n0 & \\cos \\theta & -\\sin \\theta \\\\[3pt]\n0 & \\sin \\theta & \\cos \\theta \\\\[3pt]\n\\end{bmatrix} \\\\[6pt]\nR_y(\\theta) &= \\begin{bmatrix}\n\\cos \\theta & 0 & \\sin \\theta \\\\[3pt]\n0 & 1 & 0 \\\\[3pt]\n-\\sin \\theta & 0 & \\cos \\theta \\\\\n\\end{bmatrix} \\\\[6pt]\nR_z(\\theta) &= \\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta & 0 \\\\[3pt]\n\\sin \\theta & \\cos \\theta & 0 \\\\[3pt]\n0 & 0 & 1 \\\\\n\\end{bmatrix}\n\\end{alignat}\n"
},
{
"math_id": 21,
"text": " R_z(90^\\circ) \\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ \\end{bmatrix} =\n\\begin{bmatrix} \\cos 90^\\circ & -\\sin 90^\\circ & 0 \\\\ \\sin 90^\\circ & \\quad\\cos 90^\\circ & 0\\\\ 0 & 0 & 1\\\\ \\end{bmatrix} \n\\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ \\end{bmatrix} =\n\\begin{bmatrix} 0 & -1 & 0 \\\\ 1 & 0 & 0\\\\ 0 & 0 & 1\\\\ \\end{bmatrix} \n\\begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ \\end{bmatrix} = \\begin{bmatrix} 0 \\\\ 1 \\\\ 0 \\\\ \\end{bmatrix}\n"
},
{
"math_id": 22,
"text": "\\begin{align}\n R = R_z(\\alpha) \\, R_y(\\beta) \\, R_x(\\gamma) &=\n \\overset\\text{yaw}{\\begin{bmatrix}\n \\cos \\alpha & -\\sin \\alpha & 0 \\\\\n \\sin \\alpha & \\cos \\alpha & 0 \\\\\n 0 & 0 & 1 \\\\\n \\end{bmatrix}}\\overset\\text{pitch}{\\begin{bmatrix}\n \\cos \\beta & 0 & \\sin \\beta \\\\\n 0 & 1 & 0 \\\\\n -\\sin \\beta & 0 & \\cos \\beta \\\\\n \\end{bmatrix}}\\overset\\text{roll}{\\begin{bmatrix}\n 1 & 0 & 0 \\\\\n 0 & \\cos \\gamma & -\\sin \\gamma \\\\\n 0 & \\sin \\gamma & \\cos \\gamma \\\\\n \\end{bmatrix}} \\\\\n &= \\begin{bmatrix}\n \\cos\\alpha\\cos\\beta &\n \\cos\\alpha\\sin\\beta\\sin\\gamma - \\sin\\alpha\\cos\\gamma &\n \\cos\\alpha\\sin\\beta\\cos\\gamma + \\sin\\alpha\\sin\\gamma \\\\\n \\sin\\alpha\\cos\\beta &\n \\sin\\alpha\\sin\\beta\\sin\\gamma + \\cos\\alpha\\cos\\gamma &\n \\sin\\alpha\\sin\\beta\\cos\\gamma - \\cos\\alpha\\sin\\gamma \\\\\n -\\sin\\beta & \\cos\\beta\\sin\\gamma & \\cos\\beta\\cos\\gamma \\\\\n \\end{bmatrix}\n\\end{align}"
},
{
"math_id": 23,
"text": "\\begin{align} \\\\\n R = R_z(\\gamma) \\, R_y(\\beta) \\, R_x(\\alpha) &= \\begin{bmatrix}\n \\cos \\gamma & -\\sin \\gamma & 0 \\\\\n \\sin \\gamma & \\cos \\gamma & 0 \\\\\n 0 & 0 & 1 \\\\\n \\end{bmatrix}\\begin{bmatrix}\n \\cos \\beta & 0 & \\sin \\beta \\\\\n 0 & 1 & 0 \\\\\n -\\sin \\beta & 0 & \\cos \\beta \\\\\n \\end{bmatrix}\\begin{bmatrix}\n 1 & 0 & 0 \\\\\n 0 & \\cos \\alpha & -\\sin \\alpha \\\\\n 0 & \\sin \\alpha & \\cos \\alpha \\\\\n \\end{bmatrix} \\\\\n &= \\begin{bmatrix}\n \\cos\\beta\\cos\\gamma & \\sin\\alpha\\sin\\beta\\cos\\gamma - \\cos\\alpha\\sin\\gamma & \\cos\\alpha\\sin\\beta\\cos\\gamma + \\sin\\alpha\\sin\\gamma \\\\\n \\cos\\beta\\sin\\gamma & \\sin\\alpha\\sin\\beta\\sin\\gamma + \\cos\\alpha\\cos\\gamma & \\cos\\alpha\\sin\\beta\\sin\\gamma - \\sin\\alpha\\cos\\gamma \\\\\n -\\sin\\beta & \\sin\\alpha\\cos\\beta & \\cos\\alpha\\cos\\beta \\\\\n \\end{bmatrix} \n\\end{align}"
},
{
"math_id": 24,
"text": "R\\mathbf{u} = \\mathbf{u},"
},
{
"math_id": 25,
"text": "R\\mathbf{u} = I \\mathbf{u} \\implies \\left(R - I\\right) \\mathbf{u} = 0,"
},
{
"math_id": 26,
"text": "\\begin{align}\n0 &= R^\\mathsf{T} 0 + 0 \\\\\n&= R^\\mathsf{T}\\left(R - I\\right) \\mathbf{u} + \\left(R - I\\right) \\mathbf{u} \\\\\n&= \\left(R^\\mathsf{T}R - R^\\mathsf{T} + R - I\\right) \\mathbf{u} \\\\\n&= \\left(I - R^\\mathsf{T} + R - I\\right) \\mathbf{u} \\\\\n&= \\left(R - R^\\mathsf{T}\\right) \\mathbf{u}\n\\end{align}"
},
{
"math_id": 27,
"text": "[\\mathbf u]_{\\times} = \\left(R - R^\\mathsf{T}\\right)."
},
{
"math_id": 28,
"text": "\\left(R - R^\\mathsf{T}\\right) \\mathbf{u} = [\\mathbf u]_{\\times} \\mathbf{u} = \\mathbf{u} \\times \\mathbf{u} = 0\\,"
},
{
"math_id": 29,
"text": "R = \\begin{bmatrix} a & b & c \\\\ d & e & f \\\\ g & h & i \\\\ \\end{bmatrix},"
},
{
"math_id": 30,
"text": "\\mathbf{u} = \\begin{bmatrix} h-f \\\\ c-g \\\\ d-b \\\\ \\end{bmatrix}."
},
{
"math_id": 31,
"text": "\\operatorname{tr} (R) = 1 + 2\\cos\\theta ,"
},
{
"math_id": 32,
"text": "|\\theta| = \\arccos\\left(\\frac{\\operatorname{tr}(R) - 1}{2}\\right)."
},
{
"math_id": 33,
"text": "\\mathbf{n}=(n_1,n_2,n_3)"
},
{
"math_id": 34,
"text": "\\left\\{\\begin{matrix}\n\\cos \\theta&=&\\dfrac{\\operatorname{tr}(R) - 1}{2}\\\\\n\\sin \\theta&=&-\\dfrac{\\operatorname{tr}(K_n R)}{2}\n\\end{matrix}\\right.\n"
},
{
"math_id": 35,
"text": "K_n=\\begin{bmatrix}\n0 & -n_3 & n_2\\\\\nn_3 & 0 & -n_1\\\\\n-n_2 & n_1 & 0\\\\\n\\end{bmatrix}\n"
},
{
"math_id": 36,
"text": "R = \\begin{bmatrix} \nu_x^2 \\left(1-\\cos \\theta\\right) + \\cos \\theta & u_x u_y \\left(1-\\cos \\theta\\right) - u_z \\sin \\theta & u_x u_z \\left(1-\\cos \\theta\\right) + u_y \\sin \\theta \\\\ \nu_x u_y \\left(1-\\cos \\theta\\right) + u_z \\sin \\theta & u_y^2\\left(1-\\cos \\theta\\right) + \\cos \\theta & u_y u_z \\left(1-\\cos \\theta\\right) - u_x \\sin \\theta \\\\ \nu_x u_z \\left(1-\\cos \\theta\\right) - u_y \\sin \\theta & u_y u_z \\left(1-\\cos \\theta\\right) + u_x \\sin \\theta & u_z^2\\left(1-\\cos \\theta\\right) + \\cos \\theta\n\\end{bmatrix}."
},
{
"math_id": 37,
"text": "R = (\\cos\\theta)\\,I + (\\sin\\theta)\\,[\\mathbf u]_{\\times} + (1-\\cos\\theta)\\,(\\mathbf{u}\\otimes\\mathbf{u}),"
},
{
"math_id": 38,
"text": "R_{jk}=\\begin{cases}\n\\cos^2\\frac{\\theta}{2}+\\sin^2\\frac{\\theta}{2}\\left(2u_j^2-1\\right), \\quad &\\text{if }j=k\\\\\n2u_ju_k\\sin^2\\frac{\\theta}{2}-\\varepsilon_{jkl}u_l\\sin\\theta, \\quad &\\text{if }j\\neq k\n\\end{cases}"
},
{
"math_id": 39,
"text": " \\mathbf{u}\\otimes\\mathbf{u} = \\mathbf{u}\\mathbf{u}^\\mathsf{T} = \\begin{bmatrix}\nu_x^2 & u_x u_y & u_x u_z \\\\[3pt]\nu_x u_y & u_y^2 & u_y u_z \\\\[3pt]\nu_x u_z & u_y u_z & u_z^2\n\\end{bmatrix},\\qquad [\\mathbf u]_{\\times} = \\begin{bmatrix}\n0 & -u_z & u_y \\\\[3pt]\nu_z & 0 & -u_x \\\\[3pt]\n-u_y & u_x & 0\n\\end{bmatrix}."
},
{
"math_id": 40,
"text": "\\mathbb{R}^3"
},
{
"math_id": 41,
"text": "R_{\\mathbf{u}}(\\theta)\\mathbf{x}=\\mathbf{u}(\\mathbf{u}\\cdot\\mathbf{x})+\\cos\\left(\\theta\\right)(\\mathbf{u}\\times\\mathbf{x})\\times\\mathbf{u}+\\sin\\left(\\theta\\right)(\\mathbf{u}\\times\\mathbf{x})"
},
{
"math_id": 42,
"text": "R_{\\mathbf{u}}(\\theta)\\mathbf{x}= \\mathbf{x} \\cos(\\theta) + \\mathbf{u}(\\mathbf{x} \\cdot \\mathbf{u})(1- \\cos(\\theta)) - \\mathbf{x} \\times \\mathbf{u} \\sin{\\theta}"
},
{
"math_id": 43,
"text": "(R_{\\mathbf{u}}(\\theta)\\mathbf{x})_i = (R_{\\mathbf{u}}(\\theta))_{ij} {\\mathbf{x}}_{j} \\quad \\text{with} \\quad (R_{\\mathbf{u}}(\\theta))_{ij} = \\delta_{ij}\\cos(\\theta) + \\mathbf{u}_i\\mathbf{u}_j (1- \\cos(\\theta)) - \\sin{\\theta} \\varepsilon_{ijk} \\mathbf{u}_{k} "
},
{
"math_id": 44,
"text": "(\\boldsymbol{\\alpha}, \\boldsymbol{\\beta},\\mathbf u)"
},
{
"math_id": 45,
"text": "\nR_{\\mathbf{u}}(\\theta)\\boldsymbol{\\alpha}= \\cos\\left(\\theta\\right) \\boldsymbol{\\alpha} + \\sin\\left(\\theta\\right) \\boldsymbol{\\beta}, \\quad\nR_{\\mathbf{u}}(\\theta)\\boldsymbol{\\beta}= - \\sin\\left(\\theta\\right) \\boldsymbol{\\alpha} + \\cos\\left(\\theta\\right) \\boldsymbol{\\beta}, \\quad\nR_{\\mathbf{u}}(\\theta)\\mathbf{u}=\\mathbf{u}.\n"
},
{
"math_id": 46,
"text": "\\mathbb{R}^n,"
},
{
"math_id": 47,
"text": " R^\\mathsf{T} = R^{-1}"
},
{
"math_id": 48,
"text": " \\det R = \\pm 1"
},
{
"math_id": 49,
"text": "\\begin{align}\n \\det\\left(R - I\\right)\n &= \\det\\left(R^\\mathsf{T}\\right) \\det\\left(R - I\\right)\n = \\det\\left(R^\\mathsf{T}R - R^\\mathsf{T}\\right)\n = \\det\\left(I - R^\\mathsf{T}\\right) \\\\\n &= \\det(I - R)\n = \\left(-1\\right)^n \\det\\left(R - I\\right)\n = -\\det\\left(R - I\\right).\n\\end{align}"
},
{
"math_id": 50,
"text": " d^2(O,P) = \\| \\mathbf{p} \\|^2 = \\sum_{r=1}^n p_r^2 "
},
{
"math_id": 51,
"text": " \\| \\mathbf{p} \\|^2 = \\begin{bmatrix}p_1 \\cdots p_n\\end{bmatrix} \\begin{bmatrix}p_1 \\\\ \\vdots \\\\ p_n \\end{bmatrix} = \\mathbf{p}^\\mathsf{T} \\mathbf{p} . "
},
{
"math_id": 52,
"text": " \\mathbf{p}^\\mathsf{T} \\mathbf{p} = (Q \\mathbf{p})^\\mathsf{T} (Q \\mathbf{p}) , "
},
{
"math_id": 53,
"text": "\\begin{align}\n \\mathbf{p}^\\mathsf{T} I \\mathbf{p}&{}= \\left(\\mathbf{p}^\\mathsf{T} Q^\\mathsf{T}\\right) (Q \\mathbf{p}) \\\\\n &{}= \\mathbf{p}^\\mathsf{T} \\left(Q^\\mathsf{T} Q\\right) \\mathbf{p} .\n\\end{align}"
},
{
"math_id": 54,
"text": " Q^\\mathsf{T} Q = I . "
},
{
"math_id": 55,
"text": " \\det Q = +1 . "
},
{
"math_id": 56,
"text": "\\begin{align} \\left(Q^\\mathsf{T}\\right)^\\mathsf{T} \\left(Q^\\mathsf{T}\\right) &= Q Q^\\mathsf{T} = I\\\\ \\det Q^\\mathsf{T} &= \\det Q = +1. \\end{align}"
},
{
"math_id": 57,
"text": "\\begin{align}\n \\left(Q_1 Q_2\\right)^\\mathsf{T} \\left(Q_1 Q_2\\right) &= Q_2^\\mathsf{T} \\left(Q_1^\\mathsf{T} Q_1\\right) Q_2 = I \\\\\n \\det \\left(Q_1 Q_2\\right) &= \\left(\\det Q_1\\right) \\left(\\det Q_2\\right) = +1.\n\\end{align}"
},
{
"math_id": 58,
"text": "\\begin{align}\nQ_1 &= \\begin{bmatrix}0 & -1 & 0 \\\\ 1 & 0 & 0 \\\\ 0 & 0 & 1\\end{bmatrix} &\nQ_2 &= \\begin{bmatrix}0 & 0 & 1 \\\\ 0 & 1 & 0 \\\\ -1 & 0 & 0\\end{bmatrix} \\\\\nQ_1 Q_2 &= \\begin{bmatrix}0 & -1 & 0 \\\\ 0 & 0 & 1 \\\\ -1 & 0 & 0\\end{bmatrix} &\nQ_2 Q_1 &= \\begin{bmatrix}0 & 0 & 1 \\\\ 1 & 0 & 0 \\\\ 0 & 1 & 0\\end{bmatrix}.\n\\end{align}"
},
{
"math_id": 59,
"text": "\nR(\\theta) = \\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta \\\\\n\\sin \\theta & \\cos \\theta \\\\\n\\end{bmatrix}"
},
{
"math_id": 60,
"text": " Q = \\begin{bmatrix} 0.36 & 0.48 & -0.80 \\\\ -0.80 & 0.60 & 0.00 \\\\ 0.48 & 0.64 & 0.60 \\end{bmatrix} . "
},
{
"math_id": 61,
"text": " Q \\mathbf{v} = \\lambda \\mathbf{v}, "
},
{
"math_id": 62,
"text": " \\mathbf{0} = (\\lambda I - Q) \\mathbf{v} . "
},
{
"math_id": 63,
"text": "\\begin{align}\n 0 &{}= \\det (\\lambda I - Q) \\\\\n &{}= \\lambda^3 - \\tfrac{39}{25} \\lambda^2 + \\tfrac{39}{25} \\lambda - 1 \\\\\n &{}= (\\lambda-1) \\left(\\lambda^2 - \\tfrac{14}{25} \\lambda + 1\\right).\n\\end{align}"
},
{
"math_id": 64,
"text": "Q = \\begin{bmatrix} a & -b \\\\ b & a \\end{bmatrix}"
},
{
"math_id": 65,
"text": "\\begin{bmatrix}a\\\\b\\\\c\\end{bmatrix} . "
},
{
"math_id": 66,
"text": "\\begin{bmatrix}r\\\\0\\\\c\\end{bmatrix} , "
},
{
"math_id": 67,
"text": "Q_{xz}Q_{xy}Q = \\begin{bmatrix}1&0&0\\\\0&\\ast&\\ast\\\\0&\\ast&\\ast\\end{bmatrix} . "
},
{
"math_id": 68,
"text": "Q_{yz}Q_{xz}Q_{xy}Q = \\begin{bmatrix}1&0&0\\\\0&1&0\\\\0&0&1\\end{bmatrix} , "
},
{
"math_id": 69,
"text": "Q = Q_{xy}^{-1}Q_{xz}^{-1}Q_{yz}^{-1} . "
},
{
"math_id": 70,
"text": "\\sum_{k=1}^{n-1} k = \\frac{1}{2}n(n - 1) "
},
{
"math_id": 71,
"text": "Q_{3 \\times 3} = \\begin{bmatrix}\n \\cos \\theta & -\\sin \\theta & {\\color{CadetBlue}0} \\\\\n \\sin \\theta & \\cos \\theta & {\\color{CadetBlue}0} \\\\\n {\\color{CadetBlue}0} & {\\color{CadetBlue}0} & {\\color{CadetBlue}1}\n\\end{bmatrix}"
},
{
"math_id": 72,
"text": "Q_{2 \\times 2} =\n \\begin{bmatrix}\n \\cos \\theta & -\\sin \\theta \\\\ \n \\sin \\theta & \\cos \\theta\n \\end{bmatrix},\n"
},
{
"math_id": 73,
"text": "Q_{3 \\times 3} = \\left[ \\begin{matrix} Q_{2 \\times 2} & \\mathbf{0} \\\\ \\mathbf{0}^\\mathsf{T} & 1 \\end{matrix} \\right]."
},
{
"math_id": 74,
"text": "\\begin{align}\n Q_{\\mathbf{x}}(\\theta) &= \\begin{bmatrix}\n {\\color{CadetBlue}1} & {\\color{CadetBlue}0} & {\\color{CadetBlue}0} \\\\\n {\\color{CadetBlue}0} & \\cos \\theta & -\\sin \\theta \\\\\n {\\color{CadetBlue}0} & \\sin \\theta & \\cos \\theta\n \\end{bmatrix}, \\\\[8px]\n Q_{\\mathbf{y}}(\\theta) &= \\begin{bmatrix}\n \\cos \\theta & {\\color{CadetBlue}0} & \\sin \\theta \\\\\n {\\color{CadetBlue}0} & {\\color{CadetBlue}1} & {\\color{CadetBlue}0} \\\\\n -\\sin \\theta & {\\color{CadetBlue}0} & \\cos \\theta\n \\end{bmatrix}, \\\\[8px]\n Q_{\\mathbf{z}}(\\theta) &= \\begin{bmatrix}\n \\cos \\theta & -\\sin \\theta & {\\color{CadetBlue}0} \\\\\n \\sin \\theta & \\cos \\theta & {\\color{CadetBlue}0} \\\\\n {\\color{CadetBlue}0} & {\\color{CadetBlue}0} & {\\color{CadetBlue}1}\n \\end{bmatrix},\n\\end{align}"
},
{
"math_id": 75,
"text": "\\begin{align}\n Q_\\mathbf{u}(\\theta)\n &= \\begin{bmatrix}\n 0 & -z & y \\\\\n z & 0 & -x \\\\\n -y & x & 0\n \\end{bmatrix} \\sin\\theta + \\left(I - \\mathbf{u}\\mathbf{u}^\\mathsf{T}\\right) \\cos\\theta + \\mathbf{u}\\mathbf{u}^\\mathsf{T} \\\\[8px]\n &= \\begin{bmatrix}\n \\left(1 - x^2\\right) c_\\theta + x^2 &\n -z s_\\theta - x y c_\\theta + x y &\n y s_\\theta - x z c_\\theta + x z \\\\\n z s_\\theta - x y c_\\theta + x y &\n \\left(1 - y^2\\right) c_\\theta + y^2 &\n -x s_\\theta - y z c_\\theta + y z \\\\\n -y s_\\theta - x z c_\\theta + x z &\n x s_\\theta - y z c_\\theta + y z &\n \\left(1 - z^2\\right) c_\\theta + z^2\n \\end{bmatrix} \\\\[8px]\n &= \\begin{bmatrix}\n x^2 (1 - c_\\theta) + c_\\theta &\n x y (1 - c_\\theta) - z s_\\theta &\n x z (1 - c_\\theta) + y s_\\theta \\\\\n x y (1 - c_\\theta) + z s_\\theta &\n y^2 (1 - c_\\theta) + c_\\theta &\n y z (1 - c_\\theta) - x s_\\theta \\\\\n x z (1 - c_\\theta) - y s_\\theta &\n y z (1 - c_\\theta) + x s_\\theta &\n z^2 (1 - c_\\theta) + c_\\theta\n \\end{bmatrix}, \n\\end{align}"
},
{
"math_id": 76,
"text": "SO(n) \\hookrightarrow SO(n + 1) \\to S^n ,"
},
{
"math_id": 77,
"text": " A \\mapsto (I+A)(I-A)^{-1} , "
},
{
"math_id": 78,
"text": "\\begin{align}\n &\\begin{bmatrix}\n 0 & -z & y \\\\\n z & 0 & -x \\\\\n -y & x & 0\n \\end{bmatrix} \\mapsto \\\\[3pt]\n \\quad \\frac{1}{1 + x^2 + y^2 + z^2}\n &\\begin{bmatrix}\n 1 + x^2 - y^2 - z^2 & 2xy - 2z & 2y + 2xz \\\\\n 2xy + 2z & 1 - x^2 + y^2 - z^2 & 2yz - 2x \\\\\n 2xz - 2y & 2x + 2yz & 1 - x^2 - y^2 + z^2\n \\end{bmatrix} .\n\\end{align}"
},
{
"math_id": 79,
"text": "\\begin{align}\n R(\\theta)\n &{}= \n \\begin{bmatrix}\n 1 & -\\tan \\frac{\\theta}{2}\\\\\n 0 & 1\n \\end{bmatrix}\n \\begin{bmatrix}\n 1 & 0\\\\\n \\sin \\theta & 1 \n \\end{bmatrix}\n \\begin{bmatrix}\n 1 & -\\tan \\frac{\\theta}{2}\\\\\n 0 & 1\n \\end{bmatrix}\n\\end{align}\n"
},
{
"math_id": 80,
"text": "\\begin{align}\n R(\\theta)\n &{}= \n \\begin{bmatrix}\n 1 & 0\\\\\n \\tan\\theta & 1\n \\end{bmatrix}\n \\begin{bmatrix}\n 1 & -\\sin\\theta\\cos\\theta\\\\\n 0 & 1\n \\end{bmatrix}\n \\begin{bmatrix}\n \\cos\\theta & 0\\\\\n 0 & \\frac{1}{\\cos\\theta}\n \\end{bmatrix}\n\\end{align}\n"
},
{
"math_id": 81,
"text": "\\operatorname{GL}_n(\\R)"
},
{
"math_id": 82,
"text": "\\mathfrak{so}(n) = \\mathfrak{o}(n) = \\left\\{X \\in M_n(\\mathbb{R}) \\mid X = -X^\\mathsf{T} \\right\\},"
},
{
"math_id": 83,
"text": "\n L_{\\mathbf{x}} = \\begin{bmatrix}0&0&0\\\\0&0&-1\\\\0&1&0\\end{bmatrix} , \\quad\n L_{\\mathbf{y}} = \\begin{bmatrix}0&0&1\\\\0&0&0\\\\-1&0&0\\end{bmatrix} , \\quad\n L_{\\mathbf{z}} = \\begin{bmatrix}0&-1&0\\\\1&0&0\\\\0&0&0\\end{bmatrix}.\n"
},
{
"math_id": 84,
"text": "\\mathbf{su}(2) \\cong \\mathbb{R}^3"
},
{
"math_id": 85,
"text": "\\begin{align}\n \\exp( A ) &= \\exp\\bigl(\\theta(\\mathbf{u}\\cdot\\mathbf{L})\\bigr) \\\\\n &= \\exp \\left( \\begin{bmatrix} 0 & -z \\theta & y \\theta \\\\ z \\theta & 0&-x \\theta \\\\ -y \\theta & x \\theta & 0 \\end{bmatrix} \\right) \\\\\n &= I + \\sin \\theta \\ \\mathbf{u}\\cdot\\mathbf{L} + (1-\\cos \\theta)(\\mathbf{u}\\cdot\\mathbf{L} )^2 ,\n\\end{align}"
},
{
"math_id": 86,
"text": " \\mathbf{u}\\cdot\\mathbf{L} = \\begin{bmatrix} 0 & -z & y \\\\ z & 0&-x \\\\ -y & x & 0 \\end{bmatrix} ."
},
{
"math_id": 87,
"text": " Z = C(X, Y) = X + Y + \\tfrac{1}{2} [X, Y] + \\tfrac{1}{12} \\bigl[X,[X,Y]\\bigr] - \\tfrac{1}{12} \\bigl[Y,[X,Y]\\bigr] + \\cdots ."
},
{
"math_id": 88,
"text": "Z = \\alpha X + \\beta Y + \\gamma[X, Y],"
},
{
"math_id": 89,
"text": " \\mathbb{H} \\supset \\{q \\in \\mathbb{H}: \\|q\\| = 1\\} \\ni w + \\mathbf{i}x + \\mathbf{j}y + \\mathbf{k}z \\mapsto\n \\begin{bmatrix}\n 1 - 2 y^2 - 2 z^2 & 2 x y - 2 z w & 2 x z + 2 y w \\\\\n 2 x y + 2 z w & 1 - 2 x^2 - 2 z^2 & 2 y z - 2 x w \\\\\n 2 x z - 2 y w & 2 y z + 2 x w & 1 - 2 x^2 - 2 y^2\n \\end{bmatrix} \\in \\mathrm{SO}(3),\n"
},
{
"math_id": 90,
"text": "\\mathrm{SU}(2) \\ni \\begin{bmatrix}\n \\alpha & \\beta \\\\\n -\\overline{\\beta} & \\overline{\\alpha}\n \\end{bmatrix} \\mapsto \n \\begin{bmatrix}\n \\frac{1}{2}\\left(\\alpha^2 - \\beta^2 + \\overline{\\alpha^2} - \\overline{\\beta^2}\\right) &\n \\frac{i}{2}\\left(-\\alpha^2 - \\beta^2 + \\overline{\\alpha^2} + \\overline{\\beta^2}\\right) &\n -\\alpha\\beta - \\overline{\\alpha}\\overline{\\beta} \\\\\n \\frac{i}{2}\\left(\\alpha^2 - \\beta^2 - \\overline{\\alpha^2} + \\overline{\\beta^2}\\right) &\n \\frac{i}{2}\\left(\\alpha^2 + \\beta^2 + \\overline{\\alpha^2} + \\overline{\\beta^2}\\right) &\n -i\\left(+\\alpha\\beta - \\overline{\\alpha}\\overline{\\beta}\\right) \\\\\n \\alpha\\overline{\\beta} + \\overline{\\alpha}\\beta &\n i\\left(-\\alpha\\overline{\\beta} + \\overline{\\alpha}\\beta\\right) &\n \\alpha\\overline{\\alpha} - \\beta\\overline{\\beta}\n \\end{bmatrix} \\in \\mathrm{SO}(3).\n"
},
{
"math_id": 91,
"text": " I + A \\, d\\theta ,"
},
{
"math_id": 92,
"text": " dL_{x} = \\begin{bmatrix} 1 & 0 & 0 \\\\ 0 & 1 & -d\\theta \\\\ 0 & d\\theta & 1 \\end{bmatrix}. "
},
{
"math_id": 93,
"text": " Q = \\begin{bmatrix}\n 1 - 2 y^2 - 2 z^2 & 2 x y - 2 z w & 2 x z + 2 y w \\\\\n 2 x y + 2 z w & 1 - 2 x^2 - 2 z^2 & 2 y z - 2 x w \\\\\n 2 x z - 2 y w & 2 y z + 2 x w & 1 - 2 x^2 - 2 y^2\n \\end{bmatrix}\n."
},
{
"math_id": 94,
"text": "\\begin{align}\n n &= w \\times w + x \\times x + y \\times y + z \\times z \\\\\n s &= \\begin{cases}\n 0 &\\text{if } n = 0 \\\\\n \\frac{2}{n} &\\text{otherwise}\n \\end{cases} \\\\\n\\end{align}"
},
{
"math_id": 95,
"text": "Q = \\begin{bmatrix}\n 1 - s(yy + zz) & s(xy - wz) & s(xz + wy) \\\\\n s(xy + wz) & 1 - s(xx + zz) & s(yz - wx) \\\\\n s(xz - wy) & s(yz + wx) & 1 - s(xx + yy)\n\\end{bmatrix}"
},
{
"math_id": 96,
"text": "\\begin{align}\n t &= \\operatorname{tr} Q = Q_{xx} + Q_{yy} + Q_{zz} \\quad (\\text{the trace of }Q) \\\\\n r &= \\sqrt{1 + t} \\\\\n w &= \\tfrac{1}{2} r \\\\\n x &= \\operatorname{sgn}\\left(Q_{zy} - Q_{yz}\\right)\\left|\\tfrac12 \\sqrt{1 + Q_{xx} - Q_{yy} - Q_{zz}}\\right| \\\\\n y &= \\operatorname{sgn}\\left(Q_{xz} - Q_{zx}\\right)\\left|\\tfrac12 \\sqrt{1 - Q_{xx} + Q_{yy} - Q_{zz}}\\right| \\\\\n z &= \\operatorname{sgn}\\left(Q_{yx} - Q_{xy}\\right)\\left|\\tfrac12 \\sqrt{1 - Q_{xx} - Q_{yy} + Q_{zz}}\\right|\n\\end{align}"
},
{
"math_id": 97,
"text": "\\begin{align}\n t &= \\operatorname{tr} Q = Q_{xx} + Q_{yy} + Q_{zz} \\\\\n r &= \\sqrt{1 + t} \\\\\n s &= \\tfrac{1}{2r} \\\\\n w &= \\tfrac{1}{2} r \\\\\n x &= \\left(Q_{zy} - Q_{yz}\\right)s \\\\\n y &= \\left(Q_{xz} - Q_{zx}\\right)s \\\\\n z &= \\left(Q_{yx} - Q_{xy}\\right)s\n\\end{align}"
},
{
"math_id": 98,
"text": "\\begin{align}\n r &= \\sqrt{1 + Q_{xx} - Q_{yy} - Q_{zz}} \\\\\n s &= \\tfrac{1}{2r} \\\\\n w &= \\left(Q_{zy} - Q_{yz}\\right)s \\\\\n x &= \\tfrac12 r \\\\\n y &= \\left(Q_{xy} + Q_{yx}\\right)s \\\\\n z &= \\left(Q_{zx} + Q_{xz}\\right)s\n\\end{align}"
},
{
"math_id": 99,
"text": " K = \\frac13 \\begin{bmatrix}\n Q_{xx}-Q_{yy}-Q_{zz} & Q_{yx}+Q_{xy} & Q_{zx}+Q_{xz} & Q_{zy}-Q_{yz} \\\\\n Q_{yx}+Q_{xy} & Q_{yy}-Q_{xx}-Q_{zz} & Q_{zy}+Q_{yz} & Q_{xz}-Q_{zx} \\\\\n Q_{zx}+Q_{xz} & Q_{zy}+Q_{yz} & Q_{zz}-Q_{xx}-Q_{yy} & Q_{yx}-Q_{xy} \\\\\n Q_{zy}-Q_{yz} & Q_{xz}-Q_{zx} & Q_{yx}-Q_{xy} & Q_{xx}+Q_{yy}+Q_{zz} \n \\end{bmatrix} ,"
},
{
"math_id": 100,
"text": "\\begin{align}\n &\\left(Q_{xx} - M_{xx}\\right)^2 + \\left(Q_{xy} - M_{xy}\\right)^2 + \\left(Q_{yx} - M_{yx}\\right)^2 + \\left(Q_{yy} - M_{yy}\\right)^2 \\\\\n &\\quad {}+ \\left(Q_{xx}^2 + Q_{yx}^2 - 1\\right)Y_{xx} + \\left(Q_{xy}^2 + Q_{yy}^2 - 1\\right)Y_{yy} + 2\\left(Q_{xx} Q_{xy} + Q_{yx} Q_{yy}\\right)Y_{xy} .\n\\end{align}"
},
{
"math_id": 101,
"text": "2\\begin{bmatrix}\n Q_{xx} - M_{xx} + Q_{xx} Y_{xx} + Q_{xy} Y_{xy} & Q_{xy} - M_{xy} + Q_{xx} Y_{xy} + Q_{xy} Y_{yy} \\\\\n Q_{yx} - M_{yx} + Q_{yx} Y_{xx} + Q_{yy} Y_{xy} & Q_{yy} - M_{yy} + Q_{yx} Y_{xy} + Q_{yy} Y_{yy}\n\\end{bmatrix}"
},
{
"math_id": 102,
"text": " 0 = 2(Q - M) + 2QY , "
},
{
"math_id": 103,
"text": " M = Q(I + Y) = QS , "
},
{
"math_id": 104,
"text": " S^2 = \\left(Q^\\mathsf{T} M\\right)^\\mathsf{T} \\left(Q^\\mathsf{T} M\\right) = M^\\mathsf{T} Q Q^\\mathsf{T} M = M^\\mathsf{T} M "
},
{
"math_id": 105,
"text": "\\begin{align}\nc &= \\cos \\theta\\\\\ns &= \\sin \\theta\\\\\nC &= 1-c\n\\end{align}"
},
{
"math_id": 106,
"text": "Q(\\theta) = \\begin{bmatrix}\nxxC+c & xyC-zs & xzC+ys\\\\\nyxC+zs & yyC+c & yzC-xs\\\\\nzxC-ys & zyC+xs & zzC+c\n\\end{bmatrix}"
},
{
"math_id": 107,
"text": "\\begin{align}\n x &= Q_{zy} - Q_{yz}\\\\\n y &= Q_{xz} - Q_{zx}\\\\\n z &= Q_{yx} - Q_{xy}\\\\\n r &= \\sqrt{x^2 + y^2 + z^2}\\\\\n t &= Q_{xx} + Q_{yy} + Q_{zz}\\\\\n \\theta &= \\operatorname{atan2}(r,t-1)\\end{align}"
},
{
"math_id": 108,
"text": " Q(\\theta_1,\\theta_2,\\theta_3)= Q_{\\mathbf{z}}(\\theta_1) Q_{\\mathbf{y}}(\\theta_2) Q_{\\mathbf{z}}(\\theta_3) , "
},
{
"math_id": 109,
"text": " Q(\\theta_1,\\theta_2,\\theta_3)= Q_{\\mathbf{z}}(\\theta_3) Q_{\\mathbf{y}}(\\theta_2) Q_{\\mathbf{x}}(\\theta_1) . "
},
{
"math_id": 110,
"text": "\\mathbb{S}"
},
{
"math_id": 111,
"text": "R:=I+y x^\\mathsf{T}-x y^\\mathsf{T}+\\frac{1}{1+\\langle x,y\\rangle}\\left(yx^\\mathsf{T}-xy^\\mathsf{T}\\right)^2"
}
] | https://en.wikipedia.org/wiki?curid=856005 |
8562 | Differential topology | Branch of mathematics
In mathematics, differential topology is the field dealing with the topological properties and smooth properties of smooth manifolds. In this sense differential topology is distinct from the closely related field of differential geometry, which concerns the "geometric" properties of smooth manifolds, including notions of size, distance, and rigid shape. By comparison differential topology is concerned with coarser properties, such as the number of holes in a manifold, its homotopy type, or the structure of its diffeomorphism group. Because many of these coarser properties may be captured algebraically, differential topology has strong links to algebraic topology.
The central goal of the field of differential topology is the classification of all smooth manifolds up to diffeomorphism. Since dimension is an invariant of smooth manifolds up to diffeomorphism type, this classification is often studied by classifying the (connected) manifolds in each dimension separately:
Beginning in dimension 4, the classification becomes much more difficult for two reasons. Firstly, every finitely presented group appears as the fundamental group of some 4-manifold, and since the fundamental group is a diffeomorphism invariant, this makes the classification of 4-manifolds at least as difficult as the classification of finitely presented groups. By the word problem for groups, which is equivalent to the halting problem, it is impossible to classify such groups, so a full topological classification is impossible. Secondly, beginning in dimension four it is possible to have smooth manifolds that are homeomorphic, but with distinct, non-diffeomorphic smooth structures. This is true even for the Euclidean space formula_2, which admits many exotic formula_2 structures. This means that the study of differential topology in dimensions 4 and higher must use tools genuinely outside the realm of the regular continuous topology of topological manifolds. One of the central open problems in differential topology is the four-dimensional smooth Poincaré conjecture, which asks if every smooth 4-manifold that is homeomorphic to the 4-sphere, is also diffeomorphic to it. That is, does the 4-sphere admit only one smooth structure? This conjecture is true in dimensions 1, 2, and 3, by the above classification results, but is known to be false in dimension 7 due to the Milnor spheres.
Important tools in studying the differential topology of smooth manifolds include the construction of smooth topological invariants of such manifolds, such as de Rham cohomology or the intersection form, as well as smoothable topological constructions, such as smooth surgery theory or the construction of cobordisms. Morse theory is an important tool which studies smooth manifolds by considering the critical points of differentiable functions on the manifold, demonstrating how the smooth structure of the manifold enters into the set of tools available. Oftentimes more geometric or analytical techniques may be used, by equipping a smooth manifold with a Riemannian metric or by studying a differential equation on it. Care must be taken to ensure that the resulting information is insensitive to this choice of extra structure, and so genuinely reflects only the topological properties of the underlying smooth manifold. For example, the Hodge theorem provides a geometric and analytical interpretation of the de Rham cohomology, and gauge theory was used by Simon Donaldson to prove facts about the intersection form of simply connected 4-manifolds. In some cases techniques from contemporary physics may appear, such as topological quantum field theory, which can be used to compute topological invariants of smooth spaces.
Famous theorems in differential topology include the Whitney embedding theorem, the hairy ball theorem, the Hopf theorem, the Poincaré–Hopf theorem, Donaldson's theorem, and the Poincaré conjecture.
Description.
Differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are 'softer' than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold—that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume.
On the other hand, smooth manifolds are more rigid than the topological manifolds. John Milnor discovered that some spheres have more than one smooth structure—see Exotic sphere and Donaldson's theorem. Michel Kervaire exhibited topological manifolds with no smooth structure at all. Some constructions of smooth manifold theory, such as the existence of tangent bundles, can be done in the topological setting with much more work, and others cannot.
One of the main topics in differential topology is the study of special kinds of smooth mappings between manifolds, namely immersions and submersions, and the intersections of submanifolds via transversality. More generally one is interested in properties and invariants of smooth manifolds that are carried over by diffeomorphisms, another special kind of smooth mapping. Morse theory is another branch of differential topology, in which topological information about a manifold is deduced from changes in the rank of the Jacobian of a function.
For a list of differential topology topics, see the following reference: List of differential geometry topics.
Differential topology versus differential geometry.
Differential topology and differential geometry are first characterized by their "similarity". They both study primarily the properties of differentiable manifolds, sometimes with a variety of structures imposed on them.
One major difference lies in the nature of the problems that each subject tries to address. In one view, differential topology distinguishes itself from differential geometry by studying primarily those problems that are "inherently global". Consider the example of a coffee cup and a donut. From the point of view of differential topology, the donut and the coffee cup are "the same" (in a sense). This is an inherently global view, though, because there is no way for the differential topologist to tell whether the two objects are the same (in this sense) by looking at just a tiny ("local") piece of either of them. They must have access to each entire ("global") object.
From the point of view of differential geometry, the coffee cup and the donut are "different" because it is impossible to rotate the coffee cup in such a way that its configuration matches that of the donut. This is also a global way of thinking about the problem. But an important distinction is that the geometer does not need the entire object to decide this. By looking, for instance, at just a tiny piece of the handle, they can decide that the coffee cup is different from the donut because the handle is thinner (or more curved) than any piece of the donut.
To put it succinctly, differential topology studies structures on manifolds that, in a sense, have no interesting local structure. Differential geometry studies structures on manifolds that do have an interesting local (or sometimes even infinitesimal) structure.
More mathematically, for example, the problem of constructing a diffeomorphism between two manifolds of the same dimension is inherently global since "locally" two such manifolds are always diffeomorphic. Likewise, the problem of computing a quantity on a manifold that is invariant under differentiable mappings is inherently global, since any local invariant will be "trivial" in the sense that it is already exhibited in the topology of formula_3. Moreover, differential topology does not restrict itself necessarily to the study of diffeomorphism. For example, symplectic topology—a subbranch of differential topology—studies global properties of symplectic manifolds. Differential geometry concerns itself with problems—which may be local "or" global—that always have some non-trivial local properties. Thus differential geometry may study differentiable manifolds equipped with a "connection", a "metric" (which may be Riemannian, pseudo-Riemannian, or Finsler), a special sort of "distribution" (such as a CR structure), and so on.
This distinction between differential geometry and differential topology is blurred, however, in questions specifically pertaining to local diffeomorphism invariants such as the tangent space at a point. Differential topology also deals with questions like these, which specifically pertain to the properties of differentiable mappings on formula_3 (for example the tangent bundle, jet bundles, the Whitney extension theorem, and so forth).
The distinction is concise in abstract terms: differential topology studies structures on manifolds that have no non-trivial local moduli, whereas differential geometry studies structures on manifolds that possess one or more non-trivial local moduli.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[0,1)"
},
{
"math_id": 1,
"text": "[0,1]"
},
{
"math_id": 2,
"text": "\\mathbb{R}^4"
},
{
"math_id": 3,
"text": "\\R^n"
}
] | https://en.wikipedia.org/wiki?curid=8562 |
856347 | Tor functor | Construction in homological algebra
In mathematics, the Tor functors are the derived functors of the tensor product of modules over a ring. Along with the Ext functor, Tor is one of the central concepts of homological algebra, in which ideas from algebraic topology are used to construct invariants of algebraic structures. The homology of groups, Lie algebras, and associative algebras can all be defined in terms of Tor. The name comes from a relation between the first Tor group Tor1 and the torsion subgroup of an abelian group.
In the special case of abelian groups, Tor was introduced by Eduard Čech (1935) and named by Samuel Eilenberg around 1950. It was first applied to the Künneth theorem and universal coefficient theorem in topology. For modules over any ring, Tor was defined by Henri Cartan and Eilenberg in their 1956 book "Homological Algebra".
Definition.
Let "R" be a ring. Write "R"-Mod for the category of left "R"-modules and Mod-"R" for the category of right "R"-modules. (If "R" is commutative, the two categories can be identified.) For a fixed left "R"-module "B", let formula_0 for "A" in Mod-"R". This is a right exact functor from Mod-"R" to the category of abelian groups Ab, and so it has left derived functors formula_1. The Tor groups are the abelian groups defined by
formula_2
for an integer "i". By definition, this means: take any projective resolution
formula_3
and remove "A", and form the chain complex:
formula_4
For each integer "i", the group formula_5 is the homology of this complex at position "i". It is zero for "i" negative. Moreover, formula_6 is the cokernel of the map formula_7, which is isomorphic to formula_8.
Alternatively, one can define Tor by fixing "A" and taking the left derived functors of the right exact functor "G"("B") = "A" ⊗"R" "B". That is, tensor "A" with a projective resolution of "B" and take homology. Cartan and Eilenberg showed that these constructions are independent of the choice of projective resolution, and that both constructions yield the same Tor groups. Moreover, for a fixed ring "R", Tor is a functor in each variable (from "R"-modules to abelian groups).
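As a quick illustration of this recipe, and of the connection with torsion mentioned above, one can compute Tor over the integers with "A" = Z/"n" and "B" arbitrary (a standard computation; the displayed LaTeX is ours, added for illustration):

```latex
% Projective resolution of Z/n as a Z-module:
%     0 --> Z --(multiplication by n)--> Z --> Z/n --> 0.
% Remove Z/n, tensor with B (using Z \otimes_Z B = B), and take homology of
%     0 --> B --(multiplication by n)--> B --> 0:
\operatorname{Tor}_0^{\mathbb{Z}}(\mathbb{Z}/n, B) \cong B/nB, \qquad
\operatorname{Tor}_1^{\mathbb{Z}}(\mathbb{Z}/n, B) \cong B[n] = \{ b \in B : nb = 0 \},
```

and Tor"i" vanishes for "i" ≥ 2, since the resolution has length 1. In particular, Tor1 of Z/"n" with "B" is exactly the "n"-torsion of "B", which explains the name.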
For a commutative ring "R" and "R"-modules "A" and "B", Tor"i"("A", "B") is an "R"-module (using that "A" ⊗"R" "B" is an "R"-module in this case). For a non-commutative ring "R", Tor"i"("A", "B") is only an abelian group, in general. If "R" is an algebra over a ring "S" (which means in particular that "S" is commutative), then Tor"i"("A", "B") is at least an "S"-module.
Properties.
Here are some of the basic properties and computations of Tor groups.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T(A) = A\\otimes_R B"
},
{
"math_id": 1,
"text": "L_i T"
},
{
"math_id": 2,
"text": "\\operatorname{Tor}_i^R(A,B) = (L_iT)(A),"
},
{
"math_id": 3,
"text": "\\cdots\\to P_2 \\to P_1 \\to P_0 \\to A\\to 0,"
},
{
"math_id": 4,
"text": "\\cdots \\to P_2\\otimes_R B \\to P_1\\otimes_R B \\to P_0\\otimes_R B \\to 0"
},
{
"math_id": 5,
"text": "\\operatorname{Tor}_i^R(A,B)"
},
{
"math_id": 6,
"text": "\\operatorname{Tor}_0^R(A,B)"
},
{
"math_id": 7,
"text": "P_1\\otimes_R B \\to P_0\\otimes_R B"
},
{
"math_id": 8,
"text": "A \\otimes_R B"
},
{
"math_id": 9,
"text": "\\cdots \\to \\operatorname{Tor}_2^R(M,B) \\to \\operatorname{Tor}_1^R(K,B) \\to \\operatorname{Tor}_1^R(L,B) \\to \\operatorname{Tor}_1^R (M,B) \\to K\\otimes_R B\\to L\\otimes_R B\\to M\\otimes_R B\\to 0,"
},
{
"math_id": 10,
"text": "\\operatorname{Tor}^R_i(R/(u),B)\\cong\\begin{cases} B/uB & i=0\\\\ B[u] & i=1\\\\ 0 &\\text{otherwise}\\end{cases}"
},
{
"math_id": 11,
"text": "B[u] = \\{x \\in B : ux =0 \\}"
},
{
"math_id": 12,
"text": "\\Z"
},
{
"math_id": 13,
"text": "\\operatorname{Tor}^{\\Z}_1(A,B)"
},
{
"math_id": 14,
"text": "\\operatorname{Tor}_*^R(k,k)"
},
{
"math_id": 15,
"text": "\\operatorname{Tor}^{\\Z}_i(A,B)=0"
},
{
"math_id": 16,
"text": "\\operatorname{Tor}^{R}_i(A,B)=0"
},
{
"math_id": 17,
"text": "\\begin{align}\n\\operatorname{Tor}_i^R \\left (\\bigoplus_{\\alpha} M_{\\alpha}, N \\right ) &\\cong \\bigoplus_{\\alpha} \\operatorname{Tor}_i^R(M_{\\alpha},N) \\\\\n\\operatorname{Tor}_i^R \\left (\\varinjlim_{\\alpha} M_{\\alpha}, N \\right ) &\\cong \\varinjlim_{\\alpha} \\operatorname{Tor}_i^R(M_{\\alpha},N)\n\\end{align}"
},
{
"math_id": 18,
"text": "\\mathrm{Tor}_i^R(A,B)\\otimes_R T \\cong \\mathrm{Tor}_i^T(A\\otimes_R T,B\\otimes_R T)."
},
{
"math_id": 19,
"text": "S^{-1} \\operatorname{Tor}_i^R(A, B) \\cong \\operatorname{Tor}_i^{S^{-1} R} \\left (S^{-1} A, S^{-1} B \\right )."
},
{
"math_id": 20,
"text": "H_*(G,M)=\\operatorname{Tor}^{\\Z[G]}_*(\\Z, M),"
},
{
"math_id": 21,
"text": "\\Z[G]"
},
{
"math_id": 22,
"text": "HH_*(A,M)=\\operatorname{Tor}_*^{A\\otimes_k A^{\\text{op}}}(A, M)."
},
{
"math_id": 23,
"text": "H_*(\\mathfrak g,M)=\\operatorname{Tor}_*^{U\\mathfrak g}(R,M)"
},
{
"math_id": 24,
"text": "\\mathfrak g"
},
{
"math_id": 25,
"text": "U\\mathfrak g"
}
] | https://en.wikipedia.org/wiki?curid=856347 |
856356 | Addition-chain exponentiation | Method of exponentiation by positive integers requiring a minimal number of multiplications
In mathematics and computer science, optimal addition-chain exponentiation is a method of exponentiation by a positive integer power that requires a minimal number of multiplications. Using "the form of" the shortest addition chain, with multiplication instead of addition, computes the desired power (instead of multiple) of the base. (This corresponds to OEIS sequence A003313 (Length of shortest addition chain for n).) Each exponentiation in the chain can be evaluated by multiplying two of the earlier exponentiation results. More generally, "addition-chain exponentiation" may also refer to exponentiation by non-minimal addition chains constructed by a variety of algorithms (since a shortest addition chain is very difficult to find).
The shortest addition-chain algorithm requires no more multiplications than binary exponentiation and usually fewer. The first example where it does better is for "a"15, where the binary method needs six multiplications but the shortest addition chain requires only five:
formula_0 (binary, 6 multiplications)
formula_1 (shortest addition chain, 5 multiplications).
formula_2 (also shortest addition chain, 5 multiplications).
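As a concrete check of these counts, here is a small Python sketch (the function names are ours, not from any library) that evaluates "a"15 both ways and counts the multiplications:

```python
def binary_pow(a, n):
    """Left-to-right binary exponentiation; returns (a**n, multiplication count)."""
    result, mults = a, 0
    for bit in bin(n)[3:]:            # skip '0b' and the leading 1-bit
        result, mults = result * result, mults + 1   # square
        if bit == '1':
            result, mults = result * a, mults + 1    # multiply by the base
    return result, mults

def chain_pow_15(a):
    """Evaluate a**15 along the addition chain 1, 2, 3, 6, 12, 15."""
    a2 = a * a            # 1: a^2
    a3 = a2 * a           # 2: a^3
    a6 = a3 * a3          # 3: a^6
    a12 = a6 * a6         # 4: a^12
    return a12 * a3, 5    # 5: a^15

print(binary_pow(3, 15))   # (14348907, 6)
print(chain_pow_15(3))     # (14348907, 5)
```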
On the other hand, the determination of a shortest addition chain is hard: no efficient optimal methods are currently known for arbitrary exponents, and the related problem of finding a shortest addition chain for a given set of exponents has been proven NP-complete. Even given a shortest chain, addition-chain exponentiation requires more memory than the binary method, because it must potentially store many previous exponents from the chain. So in practice, shortest addition-chain exponentiation is primarily used for small fixed exponents for which a shortest chain can be pre-computed and is not too large.
There are also several methods to "approximate" a shortest addition chain, which often require fewer multiplications than binary exponentiation; binary exponentiation itself is a suboptimal addition-chain algorithm. The optimal algorithm choice depends on the context (such as the relative cost of the multiplication and the number of times a given exponent is re-used).
The problem of finding the shortest addition chain cannot be solved by dynamic programming, because it does not satisfy the assumption of optimal substructure. That is, it is not sufficient to decompose the power into smaller powers, each of which is computed minimally, since the addition chains for the smaller powers may be related (to share computations). For example, in the shortest addition chain for "a"15 above, the subproblem for "a"6 must be computed as ("a"3)2 since "a"3 is re-used (as opposed to, say, "a"6 = "a"2("a"2)2, which also requires three multiplies).
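For small fixed exponents, a shortest chain can be found by exhaustive search. The following Python sketch (ours; exponential time, so suitable only for small exponents) performs iterative deepening over strictly increasing chains, relying on the standard fact that some shortest addition chain is strictly increasing:

```python
from itertools import combinations_with_replacement

def shortest_chain(n):
    """Iterative-deepening search for a shortest addition chain ending at n."""
    def search(chain, adds_left):
        top = chain[-1]
        if top == n:
            return chain
        # Prune: even doubling at every remaining step cannot reach n.
        if adds_left == 0 or top << adds_left < n:
            return None
        sums = {a + b for a, b in combinations_with_replacement(chain, 2)
                if top < a + b <= n}
        for c in sorted(sums, reverse=True):
            found = search(chain + [c], adds_left - 1)
            if found:
                return found
        return None

    adds = 0
    while True:
        found = search([1], adds)
        if found:
            return found
        adds += 1

chain = shortest_chain(15)
print(chain, len(chain) - 1)   # one shortest chain, e.g. [1, 2, 4, 5, 10, 15], 5
```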
Addition-subtraction–chain exponentiation.
If both multiplication and division are allowed, then an addition-subtraction chain may be used to obtain even fewer total multiplications+divisions (where subtraction corresponds to division). However, the slowness of division compared to multiplication makes this technique unattractive in general. For exponentiation to negative integer powers, on the other hand, since one division is required anyway, an addition-subtraction chain is often beneficial. One such example is "a"−31, where computing 1/"a"31 by a shortest addition chain for "a"31 requires 7 multiplications and one division, whereas the shortest addition-subtraction chain requires 5 multiplications and one division:
formula_3 (addition-subtraction chain, 5 mults + 1 div).
For exponentiation on elliptic curves, the inverse of a point ("x", "y") is available at no cost, since it is simply ("x", −"y"), and therefore addition-subtraction chains are optimal in this context even for positive integer exponents.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a^{15} = a \\times (a \\times [a \\times a^2]^2)^2 \\!"
},
{
"math_id": 1,
"text": "a^{15} = ([a^2]^2 \\times a)^3 \\!"
},
{
"math_id": 2,
"text": "a^{15} = a^3 \\times ([a^3]^2)^2 \\!"
},
{
"math_id": 3,
"text": "a^{-31} = a / ((((a^2)^2)^2)^2)^2 \\!"
}
] | https://en.wikipedia.org/wiki?curid=856356 |
8564 | Diffeomorphism | Isomorphism of differentiable manifolds
In mathematics, a diffeomorphism is an isomorphism of differentiable manifolds. It is an invertible function that maps one differentiable manifold to another such that both the function and its inverse are continuously differentiable.
Definition.
Given two differentiable manifolds formula_0 and formula_1, a differentiable map formula_2 is a diffeomorphism if it is a bijection and its inverse formula_3 is differentiable as well. If these functions are formula_4 times continuously differentiable, formula_5 is called a formula_6-diffeomorphism.
Two manifolds formula_0 and formula_1 are diffeomorphic (usually denoted formula_7) if there is a diffeomorphism formula_5 from formula_0 to formula_1. Two formula_6-differentiable manifolds are formula_6-diffeomorphic if there is an formula_8 times continuously differentiable bijective map between them whose inverse is also formula_4 times continuously differentiable.
Diffeomorphisms of subsets of manifolds.
Given a subset formula_9 of a manifold formula_0 and a subset formula_10 of a manifold formula_1, a function formula_11 is said to be smooth if for all formula_12 in formula_9 there is a neighborhood formula_13 of formula_12 and a smooth function formula_14 such that the restrictions agree: formula_15 (note that formula_16 is an extension of formula_5). The function formula_5 is said to be a diffeomorphism if it is bijective, smooth and its inverse is smooth.
Local description.
Testing whether a differentiable map is a diffeomorphism can be made locally under some mild restrictions. This is the Hadamard-Caccioppoli theorem:
If formula_17, formula_18 are connected open subsets of formula_19 such that formula_18 is simply connected, a differentiable map formula_20 is a diffeomorphism if it is proper and if the differential formula_21 is bijective (and hence a linear isomorphism) at each point formula_22 in formula_17.
Some remarks:
It is essential for formula_18 to be simply connected for the function formula_5 to be globally invertible (under the sole condition that its derivative be a bijective map at each point). For example, consider the "realification" of the complex square function
formula_23
Then formula_5 is surjective and it satisfies
formula_24
Thus, though formula_25 is bijective at each point, formula_5 is not invertible because it fails to be injective (e.g. formula_26).
Since the differential at a point (for a differentiable function)
formula_27
is a linear map, it has a well-defined inverse if and only if formula_25 is a bijection. The matrix representation of formula_25 is the formula_28 matrix of first-order partial derivatives whose entry in the formula_29-th row and formula_30-th column is formula_31. This so-called Jacobian matrix is often used for explicit computations.
Diffeomorphisms are necessarily between manifolds of the same dimension. Imagine formula_5 going from dimension formula_32 to dimension formula_33. If formula_34 then formula_25 could never be surjective, and if formula_35 then formula_25 could never be injective. In both cases, therefore, formula_25 fails to be a bijection.
If formula_25 is a bijection at formula_22 then formula_5 is said to be a local diffeomorphism (since, by continuity, formula_36 will also be bijective for all formula_37 sufficiently close to formula_22).
Given a smooth map from dimension formula_32 to dimension formula_33, if formula_38 (or, locally, formula_25) is surjective, formula_5 is said to be a submersion (or, locally, a "local submersion"); and if formula_38 (or, locally, formula_25) is injective, formula_5 is said to be an immersion (or, locally, a "local immersion").
A differentiable bijection is "not" necessarily a diffeomorphism. formula_39, for example, is not a diffeomorphism from formula_40 to itself because its derivative vanishes at 0 (and hence its inverse is not differentiable at 0). This is an example of a homeomorphism that is not a diffeomorphism.
When formula_5 is a map between differentiable manifolds, a diffeomorphic formula_5 is a stronger condition than a homeomorphic formula_5. For a diffeomorphism, formula_5 and its inverse need to be differentiable; for a homeomorphism, formula_5 and its inverse need only be continuous. Every diffeomorphism is a homeomorphism, but not every homeomorphism is a diffeomorphism.
formula_41 is a diffeomorphism if, in coordinate charts, it satisfies the definition above. More precisely: Pick any cover of formula_0 by compatible coordinate charts and do the same for formula_1. Let formula_42 and formula_43 be charts on, respectively, formula_0 and formula_1, with formula_17 and formula_18 as, respectively, the images of formula_42 and formula_43. The map formula_44 is then a diffeomorphism as in the definition above, whenever formula_45.
Examples.
Since any manifold can be locally parametrised, we can consider some explicit maps from formula_46 into formula_46.
formula_47
We can calculate the Jacobian matrix:
formula_48
The Jacobian matrix has zero determinant if and only if formula_49. We see that formula_5 could only be a diffeomorphism away from the formula_22-axis and the formula_37-axis. However, formula_5 is not bijective since formula_50, and thus it cannot be a diffeomorphism.
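This determinant computation is easy to verify symbolically; the following short sketch assumes the sympy library is available:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([x**2 + y**3, x**2 - y**3])
J = f.jacobian(sp.Matrix([x, y]))
print(J)                    # Matrix([[2*x, 3*y**2], [2*x, -3*y**2]])
print(sp.factor(J.det()))   # -12*x*y**2, which vanishes exactly when x*y = 0
```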
formula_51
where the formula_52 and formula_53 are arbitrary real numbers, and the omitted terms are of degree at least two in "x" and "y". We can calculate the Jacobian matrix at 0:
formula_54
We see that "g" is a local diffeomorphism at 0 if, and only if,
formula_55
i.e. the linear terms in the components of "g" are linearly independent as polynomials.
formula_56
We can calculate the Jacobian matrix:
formula_57
The Jacobian matrix has zero determinant everywhere! In fact we see that the image of "h" is the unit circle.
Surface deformations.
In mechanics, a stress-induced transformation is called a deformation and may be described by a diffeomorphism.
A diffeomorphism formula_20 between two surfaces formula_17 and formula_18 has a Jacobian matrix formula_38 that is an invertible matrix. In fact, it is required that for formula_12 in formula_17, there is a neighborhood of formula_12 in which the Jacobian formula_38 stays non-singular. Suppose that in a chart of the surface, formula_58
The total differential of "u" is
formula_59, and similarly for "v".
Then the image formula_60 is a linear transformation, fixing the origin, and expressible as the action of a complex number of a particular type. When ("dx", "dy") is also interpreted as that type of complex number, the action is that of complex multiplication in the appropriate complex number plane. As such, there is a type of angle (Euclidean, hyperbolic, or slope) that is preserved in such a multiplication. Due to "Df" being invertible, the type of complex number is uniform over the surface. Consequently, a surface deformation or diffeomorphism of surfaces has the conformal property of preserving (the appropriate type of) angles.
Diffeomorphism group.
Let formula_0 be a differentiable manifold that is second-countable and Hausdorff. The diffeomorphism group of formula_0 is the group of all formula_6 diffeomorphisms of formula_0 to itself, denoted by formula_61 or, when formula_4 is understood, formula_62. This is a "large" group, in the sense that—provided formula_0 is not zero-dimensional—it is not locally compact.
Topology.
The diffeomorphism group has two natural topologies: "weak" and "strong". When the manifold is compact, these two topologies agree. The weak topology is always metrizable. When the manifold is not compact, the strong topology captures the behavior of functions "at infinity" and is not metrizable. It is, however, still Baire.
Fixing a Riemannian metric on formula_0, the weak topology is the topology induced by the family of metrics
formula_63
as formula_64 varies over compact subsets of formula_0. Indeed, since formula_0 is formula_65-compact, there is a sequence of compact subsets formula_66 whose union is formula_0. Then:
formula_67
The diffeomorphism group equipped with its weak topology is locally homeomorphic to the space of formula_6 vector fields. Over a compact subset of formula_0, this follows by fixing a Riemannian metric on formula_0 and using the exponential map for that metric. If formula_4 is finite and the manifold is compact, the space of vector fields is a Banach space. Moreover, the transition maps from one chart of this atlas to another are smooth, making the diffeomorphism group into a Banach manifold with smooth right translations; left translations and inversion are only continuous. If formula_68, the space of vector fields is a Fréchet space. Moreover, the transition maps are smooth, making the diffeomorphism group into a Fréchet manifold and even into a regular Fréchet Lie group. If the manifold is formula_65-compact and not compact the full diffeomorphism group is not locally contractible for either of the two topologies. One has to restrict the group by controlling the deviation from the identity near infinity to obtain a diffeomorphism group which is a manifold.
Lie algebra.
The Lie algebra of the diffeomorphism group of formula_0 consists of all vector fields on formula_0 equipped with the Lie bracket of vector fields. Somewhat formally, this is seen by making a small change to the coordinate formula_22 at each point in space:
formula_69
so the infinitesimal generators are the vector fields
formula_70
Transitivity.
For a connected manifold formula_0, the diffeomorphism group acts transitively on formula_0. More generally, the diffeomorphism group acts transitively on the configuration space formula_83. If formula_0 is at least two-dimensional, the diffeomorphism group acts transitively on the configuration space formula_84 and the action on formula_0 is multiply transitive.
Extensions of diffeomorphisms.
In 1926, Tibor Radó asked whether the harmonic extension of any homeomorphism or diffeomorphism of the unit circle to the unit disc yields a diffeomorphism on the open disc. An elegant proof was provided shortly afterwards by Hellmuth Kneser. In 1945, Gustave Choquet, apparently unaware of this result, produced a completely different proof.
The (orientation-preserving) diffeomorphism group of the circle is pathwise connected. This can be seen by noting that any such diffeomorphism can be lifted to a diffeomorphism formula_5 of the reals satisfying formula_85; this space is convex and hence path-connected. A smooth, eventually constant path to the identity gives a second more elementary way of extending a diffeomorphism from the circle to the open unit disc (a special case of the Alexander trick). Moreover, the diffeomorphism group of the circle has the homotopy-type of the orthogonal group formula_86.
The corresponding extension problem for diffeomorphisms of higher-dimensional spheres formula_87 was much studied in the 1950s and 1960s, with notable contributions from René Thom, John Milnor and Stephen Smale. An obstruction to such extensions is given by the finite abelian group formula_88, the "group of twisted spheres", defined as the quotient of the abelian component group of the diffeomorphism group by the subgroup of classes extending to diffeomorphisms of the ball formula_89.
Connectedness.
For manifolds, the diffeomorphism group is usually not connected. Its component group is called the mapping class group. In dimension 2 (i.e. surfaces), the mapping class group is a finitely presented group generated by Dehn twists; this has been proved by Max Dehn, W. B. R. Lickorish, and Allen Hatcher. Max Dehn and Jakob Nielsen showed that it can be identified with the outer automorphism group of the fundamental group of the surface.
William Thurston refined this analysis by classifying elements of the mapping class group into three types: those equivalent to a periodic diffeomorphism; those equivalent to a diffeomorphism leaving a simple closed curve invariant; and those equivalent to pseudo-Anosov diffeomorphisms. In the case of the torus formula_90, the mapping class group is simply the modular group formula_91 and the classification becomes classical in terms of elliptic, parabolic and hyperbolic matrices. Thurston accomplished his classification by observing that the mapping class group acted naturally on a compactification of Teichmüller space; as this enlarged space was homeomorphic to a closed ball, the Brouwer fixed-point theorem became applicable. Smale conjectured that if formula_0 is an oriented smooth closed manifold, the identity component of the group of orientation-preserving diffeomorphisms is simple. This had first been proved for a product of circles by Michel Herman; it was proved in full generality by Thurston.
Homeomorphism and diffeomorphism.
Since every diffeomorphism is a homeomorphism, given a pair of manifolds which are diffeomorphic to each other they are in particular homeomorphic to each other. The converse is not true in general.
While it is easy to find homeomorphisms that are not diffeomorphisms, it is more difficult to find a pair of homeomorphic manifolds that are not diffeomorphic. In dimensions 1, 2 and 3, any pair of homeomorphic smooth manifolds are diffeomorphic. In dimension 4 or greater, examples of homeomorphic but not diffeomorphic pairs exist. The first such example was constructed by John Milnor in dimension 7. He constructed a smooth 7-dimensional manifold (now called Milnor's sphere) that is homeomorphic to the standard 7-sphere but not diffeomorphic to it. There are, in fact, 28 oriented diffeomorphism classes of manifolds homeomorphic to the 7-sphere (each of them is the total space of a fiber bundle over the 4-sphere with the 3-sphere as the fiber).
More unusual phenomena occur for 4-manifolds. In the early 1980s, a combination of results due to Simon Donaldson and Michael Freedman led to the discovery of exotic formula_100: there are uncountably many pairwise non-diffeomorphic open subsets of formula_100 each of which is homeomorphic to formula_100, and also there are uncountably many pairwise non-diffeomorphic differentiable manifolds homeomorphic to formula_100 that do not embed smoothly in formula_100.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "f \\colon M \\rightarrow N "
},
{
"math_id": 3,
"text": "f^{-1} \\colon N \\rightarrow M"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "C^r"
},
{
"math_id": 7,
"text": "M \\simeq N"
},
{
"math_id": 8,
"text": " r "
},
{
"math_id": 9,
"text": "X"
},
{
"math_id": 10,
"text": "Y"
},
{
"math_id": 11,
"text": "f:X\\to Y"
},
{
"math_id": 12,
"text": "p"
},
{
"math_id": 13,
"text": "U\\subset M"
},
{
"math_id": 14,
"text": "g:U\\to N"
},
{
"math_id": 15,
"text": "g_{|U \\cap X} = f_{|U \\cap X}"
},
{
"math_id": 16,
"text": "g"
},
{
"math_id": 17,
"text": "U"
},
{
"math_id": 18,
"text": "V"
},
{
"math_id": 19,
"text": "\\R^n"
},
{
"math_id": 20,
"text": "f:U\\to V"
},
{
"math_id": 21,
"text": "Df_x:\\R^n\\to\\R^n"
},
{
"math_id": 22,
"text": "x"
},
{
"math_id": 23,
"text": "\\begin{cases}\nf : \\R^2 \\setminus \\{(0,0)\\} \\to \\R^2 \\setminus \\{(0,0)\\} \\\\\n(x,y)\\mapsto(x^2-y^2,2xy).\n\\end{cases}"
},
{
"math_id": 24,
"text": "\\det Df_x = 4(x^2+y^2) \\neq 0."
},
{
"math_id": 25,
"text": "Df_x"
},
{
"math_id": 26,
"text": "f(1,0)=(1,0)=f(-1,0)"
},
{
"math_id": 27,
"text": "Df_x : T_xU \\to T_{f(x)}V"
},
{
"math_id": 28,
"text": "n\\times n"
},
{
"math_id": 29,
"text": "i"
},
{
"math_id": 30,
"text": "j"
},
{
"math_id": 31,
"text": "\\partial f_i / \\partial x_j"
},
{
"math_id": 32,
"text": "n"
},
{
"math_id": 33,
"text": "k"
},
{
"math_id": 34,
"text": "n<k"
},
{
"math_id": 35,
"text": "n>k"
},
{
"math_id": 36,
"text": "Df_y"
},
{
"math_id": 37,
"text": "y"
},
{
"math_id": 38,
"text": "Df"
},
{
"math_id": 39,
"text": "f(x)=x^3"
},
{
"math_id": 40,
"text": "\\R"
},
{
"math_id": 41,
"text": "f:M\\to N"
},
{
"math_id": 42,
"text": "\\phi"
},
{
"math_id": 43,
"text": "\\psi"
},
{
"math_id": 44,
"text": "\\psi f\\phi^{-1}:U\\to V"
},
{
"math_id": 45,
"text": "f(\\phi^{-1}(U))\\subseteq\\psi^{-1}(V)"
},
{
"math_id": 46,
"text": "\\R^2"
},
{
"math_id": 47,
"text": "f(x,y) = \\left (x^2 + y^3, x^2 - y^3 \\right )."
},
{
"math_id": 48,
"text": " J_f = \\begin{pmatrix} 2x & 3y^2 \\\\ 2x & -3y^2 \\end{pmatrix} . "
},
{
"math_id": 49,
"text": "xy=0"
},
{
"math_id": 50,
"text": "f(x,y)=f(-x,y)"
},
{
"math_id": 51,
"text": "g(x,y) = \\left (a_0 + a_{1,0}x + a_{0,1}y + \\cdots, \\ b_0 + b_{1,0}x + b_{0,1}y + \\cdots \\right )"
},
{
"math_id": 52,
"text": "a_{i,j}"
},
{
"math_id": 53,
"text": "b_{i,j}"
},
{
"math_id": 54,
"text": " J_g(0,0) = \\begin{pmatrix} a_{1,0} & a_{0,1} \\\\ b_{1,0} & b_{0,1} \\end{pmatrix}. "
},
{
"math_id": 55,
"text": "a_{1,0}b_{0,1} - a_{0,1}b_{1,0} \\neq 0,"
},
{
"math_id": 56,
"text": "h(x,y) = \\left (\\sin(x^2 + y^2), \\cos(x^2 + y^2) \\right )."
},
{
"math_id": 57,
"text": " J_h = \\begin{pmatrix} 2x\\cos(x^2 + y^2) & 2y\\cos(x^2 + y^2) \\\\ -2x\\sin(x^2+y^2) & -2y\\sin(x^2 + y^2) \\end{pmatrix} . "
},
{
"math_id": 58,
"text": "f(x,y) = (u,v)."
},
{
"math_id": 59,
"text": "du = \\frac{\\partial u}{\\partial x} dx + \\frac{\\partial u}{\\partial y} dy"
},
{
"math_id": 60,
"text": " (du, dv) = (dx, dy) Df "
},
{
"math_id": 61,
"text": "\\text{Diff}^r(M)"
},
{
"math_id": 62,
"text": "\\text{Diff}(M)"
},
{
"math_id": 63,
"text": "d_K(f,g) = \\sup\\nolimits_{x\\in K} d(f(x),g(x)) + \\sum\\nolimits_{1\\le p\\le r} \\sup\\nolimits_{x\\in K} \\left \\|D^pf(x) - D^pg(x) \\right \\|"
},
{
"math_id": 64,
"text": "K"
},
{
"math_id": 65,
"text": "\\sigma"
},
{
"math_id": 66,
"text": "K_n"
},
{
"math_id": 67,
"text": "d(f,g) = \\sum\\nolimits_n 2^{-n}\\frac{d_{K_n}(f,g)}{1+d_{K_n}(f,g)}."
},
{
"math_id": 68,
"text": "r=\\infty"
},
{
"math_id": 69,
"text": "x^{\\mu} \\mapsto x^{\\mu} + \\varepsilon h^{\\mu}(x)"
},
{
"math_id": 70,
"text": " L_{h} = h^{\\mu}(x)\\frac{\\partial}{\\partial x^\\mu}."
},
{
"math_id": 71,
"text": "M=G"
},
{
"math_id": 72,
"text": "G"
},
{
"math_id": 73,
"text": "\\text{Diff}(G)"
},
{
"math_id": 74,
"text": "\\text{Diff}(G)\\simeq G\\times\\text{Diff}(G,e)"
},
{
"math_id": 75,
"text": "\\text{Diff}(G,e)"
},
{
"math_id": 76,
"text": "\\text{Diff}(\\R^n,0)"
},
{
"math_id": 77,
"text": "f(x)\\to f(tx)/t, t\\in(0,1]"
},
{
"math_id": 78,
"text": "0\\to\\text{Diff}_0(M)\\to\\text{Diff}(M)\\to\\Sigma(\\pi_0(M))"
},
{
"math_id": 79,
"text": "\\text{Diff}_0(M)"
},
{
"math_id": 80,
"text": "\\Sigma(\\pi_0(M))"
},
{
"math_id": 81,
"text": "\\pi_0(M)"
},
{
"math_id": 82,
"text": "\\text{Diff}(M)\\to\\Sigma(\\pi_0(M))"
},
{
"math_id": 83,
"text": "C_k M"
},
{
"math_id": 84,
"text": "F_k M"
},
{
"math_id": 85,
"text": "[f(x+1)=f(x)+1]"
},
{
"math_id": 86,
"text": "O(2)"
},
{
"math_id": 87,
"text": "S^{n-1}"
},
{
"math_id": 88,
"text": "\\Gamma_n"
},
{
"math_id": 89,
"text": "B^n"
},
{
"math_id": 90,
"text": "S^1\\times S^1=\\R^2/\\Z^2"
},
{
"math_id": 91,
"text": "\\text{SL}(2,\\Z)"
},
{
"math_id": 92,
"text": "S^2"
},
{
"math_id": 93,
"text": "O(3)"
},
{
"math_id": 94,
"text": "S^1\\times S^1\\times\\text{GL}(2,\\Z)"
},
{
"math_id": 95,
"text": "g>1"
},
{
"math_id": 96,
"text": "n>3"
},
{
"math_id": 97,
"text": "\\text{Diff}(S^4)"
},
{
"math_id": 98,
"text": "n>6"
},
{
"math_id": 99,
"text": "\\text{Diff}(S^n)"
},
{
"math_id": 100,
"text": "\\R^4"
}
] | https://en.wikipedia.org/wiki?curid=8564 |
8564483 | Bogdanov–Takens bifurcation | In bifurcation theory, a field within mathematics, a Bogdanov–Takens bifurcation is a well-studied example of a bifurcation with co-dimension two, meaning that two parameters must be varied for the bifurcation to occur. It is named after Rifkat Bogdanov and Floris Takens, who independently and simultaneously described this bifurcation.
A system "y"' = "f"("y") undergoes a Bogdanov–Takens bifurcation if it has a fixed point and the linearization of "f" around that point has a double eigenvalue at zero (assuming that some technical nondegeneracy conditions are satisfied).
Three codimension-one bifurcations occur nearby: a saddle-node bifurcation, an Andronov–Hopf bifurcation and a homoclinic bifurcation. All associated bifurcation curves meet at the Bogdanov–Takens bifurcation.
The normal form of the Bogdanov–Takens bifurcation is
formula_0
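The normal form is straightforward to explore numerically. The following Python sketch (scipy assumed available; the '+' sign choice and the parameter values are illustrative, not special) integrates the system for one parameter pair near the bifurcation point (β1, β2) = (0, 0):

```python
from scipy.integrate import solve_ivp

def bogdanov_takens(t, y, beta1, beta2):
    y1, y2 = y
    return [y2, beta1 + beta2 * y1 + y1**2 + y1 * y2]

beta1, beta2 = -0.1, 0.3   # illustrative values near (0, 0)
sol = solve_ivp(bogdanov_takens, (0.0, 60.0), [0.0, 0.0],
                args=(beta1, beta2), max_step=0.1)
print(sol.y[:, -1])   # for these values the trajectory settles near (-0.5, 0)
```

Sweeping (β1, β2) around the origin in such a script makes the nearby saddle-node, Andronov–Hopf and homoclinic behavior visible.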
There exist two codimension-three degenerate Takens–Bogdanov bifurcations, also known as Dumortier–Roussarie–Sotomayor bifurcations. | [
{
"math_id": 0,
"text": " \\begin{align}\ny_1' &= y_2, \\\\\ny_2' &= \\beta_1 + \\beta_2 y_1 + y_1^2 \\pm y_1 y_2.\n\\end{align} "
}
] | https://en.wikipedia.org/wiki?curid=8564483 |
8564970 | Landau–Kolmogorov inequality | In mathematics, the Landau–Kolmogorov inequality, named after Edmund Landau and Andrey Kolmogorov, is the following family of interpolation inequalities between different derivatives of a function "f" defined on a subset "T" of the real numbers:
formula_0
On the real line.
For "k" = 1, "n" = 2 and "T" = ["c",∞) or "T" = R, the inequality was first proved by Edmund Landau with the sharp constants "C"(2, 1, ["c",∞)) = 2 and "C"(2, 1, R) = √2. Following contributions by Jacques Hadamard and Georgiy Shilov, Andrey Kolmogorov found the sharp constants and arbitrary "n", "k":
formula_1
where "a""n" are the Favard constants.
On the half-line.
Following work by Matorin and others, the extremising functions were found by Isaac Jacob Schoenberg; explicit forms for the sharp constants are, however, still unknown.
Generalisations.
There are many generalisations, which are of the form
formula_2
Here all three norms can be different from each other (from "L1" to "L∞", with "p"="q"="r"=∞ in the classical case) and "T" may be the real axis, semiaxis or a closed segment.
The Kallman–Rota inequality generalizes the Landau–Kolmogorov inequalities from the derivative operator to more general contractions on Banach spaces.
Notes.
<templatestyles src="Reflist/styles.css" />
| [
{
"math_id": 0,
"text": " \\|f^{(k)}\\|_{L_\\infty(T)} \\le C(n, k, T) {\\|f\\|_{L_\\infty(T)}}^{1-k/n} {\\|f^{(n)}\\|_{L_\\infty(T)}}^{k/n} \\text{ for } 1\\le k < n."
},
{
"math_id": 1,
"text": " C(n, k, \\mathbb R) = a_{n-k} a_n^{-1+k/n}~, "
},
{
"math_id": 2,
"text": "\\|f^{(k)}\\|_{L_q(T)} \\le K \\cdot {\\|f\\|^\\alpha_{L_p(T)}} \\cdot {\\|f^{(n)}\\|^{1-\\alpha}_{L_r(T)}}\\text{ for }1\\le k < n."
}
] | https://en.wikipedia.org/wiki?curid=8564970 |
8565423 | No-teleportation theorem | Theorem stating the impossibility of converting qubits into bits
In quantum information theory, the no-teleportation theorem states that an arbitrary quantum state cannot be converted into a sequence of classical bits (or even an infinite number of such bits); nor can such bits be used to reconstruct the original state, thus "teleporting" it by merely moving classical bits around. Put another way, it states that the unit of quantum information, the qubit, cannot be exactly, precisely converted into classical information bits. This should not be confused with quantum teleportation, which does allow a quantum state to be destroyed in one location, and an exact replica to be created at a different location.
In crude terms, the no-teleportation theorem stems from the Heisenberg uncertainty principle and the EPR paradox: although a qubit formula_0 can be imagined to be a specific direction on the Bloch sphere, that direction cannot be measured precisely, for the general case formula_0; if it could, the results of that measurement would be describable with words, i.e. classical information.
The no-teleportation theorem is implied by the no-cloning theorem: if it were possible to convert a qubit into classical bits, then a qubit would be easy to copy (since classical bits are trivially copyable).
Formulation.
The term "quantum information" refers to information stored in the state of a quantum system. Two quantum states "ρ"1 and "ρ"2 are identical if the measurement results of any physical observable have the same expectation value for "ρ"1 and "ρ"2. Thus measurement can be viewed as an information channel with quantum input and classical output, that is, performing measurement on a quantum system transforms quantum information into classical information. On the other hand, preparing a quantum state takes classical information to quantum information.
In general, a quantum state is described by a density matrix. Suppose one has a quantum system in some mixed state "ρ". Prepare an ensemble of the same system as follows: first, perform a measurement on the system in state "ρ"; then, depending on the measurement outcome, prepare a system in some corresponding state.
The no-teleportation theorem states that the result will be different from "ρ", irrespective of how the preparation procedure is related to measurement outcome. A quantum state cannot be determined via a single measurement. In other words, if a quantum channel measurement is followed by preparation, it cannot be the identity channel. Once converted to classical information, quantum information cannot be recovered.
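The single-qubit case can be illustrated with elementary density-matrix arithmetic. In the following Python sketch (numpy assumed; the choice of input state is ours), measuring in the computational basis and then preparing the observed basis state sends the pure state |+⟩⟨+| to the maximally mixed state, so measurement followed by preparation is indeed not the identity channel:

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2.0)   # the |+> state
rho = np.outer(plus, plus)                   # pure input state |+><+|

P0 = np.diag([1.0, 0.0])                     # projector onto |0>
P1 = np.diag([0.0, 1.0])                     # projector onto |1>

# Measure in the {|0>, |1>} basis, then prepare the outcome state.
rho_out = np.trace(P0 @ rho) * P0 + np.trace(P1 @ rho) * P1

print(np.allclose(rho_out, rho))   # False
print(rho_out)                     # [[0.5 0. ] [0.  0.5]], i.e. I/2
```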
In contrast, perfect transmission is possible if one wishes to convert classical information to quantum information then back to classical information. For classical bits, this can be done by encoding them in orthogonal quantum states, which can always be distinguished.
See also.
Among other no-go theorems in quantum information are the no-cloning theorem and the no-deleting theorem.
With the aid of shared entanglement, quantum states can be teleported; see quantum teleportation. | [
{
"math_id": 0,
"text": "|\\psi\\rangle"
}
] | https://en.wikipedia.org/wiki?curid=8565423 |
8566056 | Chain rule for Kolmogorov complexity | Lower bound for size of software program
The chain rule for Kolmogorov complexity is an analogue of the chain rule for information entropy, which states:
formula_0
That is, the combined randomness of two sequences "X" and "Y" is the sum of the randomness of "X" plus whatever randomness is left in "Y" once we know "X".
This follows immediately from the definitions of conditional and joint entropy, and the fact from probability theory that the joint probability is the product of the marginal and conditional probability:
formula_1
formula_2
The equivalent statement for Kolmogorov complexity does not hold exactly; it is true only up to a logarithmic term:
formula_3
(An exact version, "KP"("x", "y") = "KP"("x") + "KP"("y"|"x"∗) + "O"(1),
holds for the prefix complexity "KP", where "x"∗ is a shortest program for "x".)
It states that the shortest program printing "X" and "Y" is obtained by concatenating a shortest program printing "X" with a program printing "Y" given "X", plus at most a logarithmic factor. This result implies that algorithmic mutual information, an analogue of mutual information for Kolmogorov complexity, is symmetric: "I"("x":"y") = "I"("y":"x") + "O"(log "K"("x","y")) for all "x", "y".
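Kolmogorov complexity is uncomputable, so the chain rule cannot be tested directly; however, compressed length is a crude computable stand-in, and it exhibits the same flavor of sub-additivity. The following Python sketch (ours; zlib-compressed length is only a rough heuristic proxy for "K", not a substitute) illustrates the ≤ direction:

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length in bytes -- a crude stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

x = b"abracadabra" * 100
y = x[::-1]                 # y is a simple transform of x, so "K(y|x)" is small

# c(x + y) stays close to c(x), mirroring K(x, y) <= K(x) + K(y|x) + O(log ...)
print(c(x), c(y), c(x + y))   # c(x + y) is noticeably below c(x) + c(y)
```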
Proof.
The ≤ direction is obvious: we can write a program to produce "x" and "y" by concatenating a program to produce "x", a program to produce "y" given
access to "x", and (whence the log term) the length of one of the programs, so
that we know where to separate the two programs for "x" and "y"|"x" (log("K"("x", "y")) upper-bounds this length).
For the ≥ direction, it suffices to show that for all "k", "l" such that "k" + "l" = "K"("x", "y") we have that either
formula_4
or
formula_5.
Consider the list ("a"1,"b"1), ("a"2,"b"2), ..., ("ae,be") of all pairs ("a", "b") produced by programs of length exactly "K"("x", "y") [hence "e" ≤ 2"K"("x","y")]. Note that this list contains the pair ("x", "y") and can be enumerated given "k" and "l" (by running all programs of length "k" + "l" in parallel).
First, suppose that "x" appears less than 2"l" times as first element. We can specify "y" given "x","k","l" by enumerating ("a"1,"b"1), ("a"2,"b"2), ... and then selecting ("x", "y") in the sub-list of pairs ("x", "b"). By assumption, the index of ("x", "y") in this sub-list is less than 2"l" and hence, there is a program for "y" given "x","k","l" of length "l" + "O"(1).
Now, suppose that "x" appears at least 2"l" times as first element. This can happen for at most 2"K"("x,y")−l = 2"k" different strings. These strings can be enumerated given "k","l" and hence "x" can be specified by its index in this enumeration. The corresponding program for "x" has size "k" + "O"(1). Theorem proved. | [
{
"math_id": 0,
"text": "\nH(X,Y) = H(X) + H(Y|X)\n"
},
{
"math_id": 1,
"text": "\nP(X,Y) = P(X) P(Y|X)\n"
},
{
"math_id": 2,
"text": "\n\\Rightarrow \\log P(X,Y) = \\log P(X) + \\log P(Y|X)\n"
},
{
"math_id": 3,
"text": "\nK(x,y) = K(x) + K(y|x) + O(\\log(K(x,y)))\n"
},
{
"math_id": 4,
"text": "K(x|k,l) \\le k + O(1)"
},
{
"math_id": 5,
"text": "K(y|x,k,l) \\le l + O(1)"
}
] | https://en.wikipedia.org/wiki?curid=8566056 |
856614 | Čech cohomology | In mathematics, specifically algebraic topology, Čech cohomology is a cohomology theory based on the intersection properties of open covers of a topological space. It is named for the mathematician Eduard Čech.
Motivation.
Let "X" be a topological space, and let formula_0 be an open cover of "X". Let formula_1 denote the nerve of the covering. The idea of Čech cohomology is that, for an open cover formula_0 consisting of sufficiently small open sets, the resulting simplicial complex formula_1 should be a good combinatorial model for the space "X". For such a cover, the Čech cohomology of "X" is defined to be the simplicial cohomology of the nerve. This idea can be formalized by the notion of a good cover. However, a more general approach is to take the direct limit of the cohomology groups of the nerve over the system of all possible open covers of "X", ordered by refinement. This is the approach adopted below.
Construction.
Let "X" be a topological space, and let formula_2 be a presheaf of abelian groups on "X". Let formula_0 be an open cover of "X".
Simplex.
A "q"-simplex σ of formula_0 is an ordered collection of "q"+1 sets chosen from formula_0, such that the intersection of all these sets is non-empty. This intersection is called the "support" of σ and is denoted |σ|.
Now let formula_3 be such a "q"-simplex. The "j-th partial boundary" of σ is defined to be the ("q"−1)-simplex obtained by removing the "j"-th set from σ, that is:
formula_4
The "boundary" of σ is defined as the alternating sum of the partial boundaries:
formula_5
viewed as an element of the free abelian group spanned by the simplices of formula_0.
Cochain.
A "q"-cochain of formula_0 with coefficients in formula_2 is a map which associates with each "q"-simplex σ an element of formula_6, and we denote the set of all "q"-cochains of formula_0 with coefficients in formula_2 by formula_7. formula_7 is an abelian group by pointwise addition.
Differential.
The cochain groups can be made into a cochain complex formula_8 by defining the coboundary operator formula_9 by:
formula_10
where formula_11 is the restriction morphism from formula_12 to formula_13 (Notice that ∂jσ ⊆ σ, but |σ| ⊆ |∂jσ|.)
A calculation shows that formula_14
The coboundary operator is analogous to the exterior derivative of de Rham cohomology, so it is sometimes called the differential of the cochain complex.
Cocycle.
A "q"-cochain is called a "q"-cocycle if it is in the kernel of formula_15, hence formula_16 is the set of all "q"-cocycles.
Thus a ("q"−1)-cochain formula_17 is a cocycle if for all "q"-simplices formula_18 the cocycle condition
formula_19
holds.
A 0-cocycle formula_17 is a collection of local sections of formula_2 satisfying a compatibility relation on every intersecting formula_20
formula_21
A 1-cocycle formula_17 satisfies for every non-empty formula_22 with formula_23
formula_24
Coboundary.
A "q"-cochain is called a "q"-coboundary if it is in the image of formula_15 and formula_25 is the set of all "q"-coboundaries.
For example, a 1-cochain formula_17 is a 1-coboundary if there exists a 0-cochain formula_26 such that for every intersecting formula_20
formula_27
Cohomology.
The Čech cohomology of formula_0 with values in formula_2 is defined to be the cohomology of the cochain complex formula_28. Thus the "q"th Čech cohomology is given by
formula_29.
The Čech cohomology of "X" is defined by considering refinements of open covers. If formula_30 is a refinement of formula_0 then there is a map in cohomology formula_31 The open covers of "X" form a directed set under refinement, so the above map leads to a direct system of abelian groups. The Čech cohomology of "X" with values in "formula_2" is defined as the direct limit formula_32 of this system.
The Čech cohomology of "X" with coefficients in a fixed abelian group "A", denoted formula_33, is defined as formula_34 where formula_35 is the constant sheaf on "X" determined by "A".
A variant of Čech cohomology, called numerable Čech cohomology, is defined as above, except that all open covers considered are required to be "numerable": that is, there is a partition of unity {ρ"i"} such that each support formula_36 is contained in some element of the cover. If "X" is paracompact and Hausdorff, then numerable Čech cohomology agrees with the usual Čech cohomology.
Relation to other cohomology theories.
If "X" is homotopy equivalent to a CW complex, then the Čech cohomology formula_37 is naturally isomorphic to the singular cohomology formula_38. If "X" is a differentiable manifold, then formula_39 is also naturally isomorphic to the de Rham cohomology; the article on de Rham cohomology provides a brief review of this isomorphism. For less well-behaved spaces, Čech cohomology differs from singular cohomology. For example if "X" is the closed topologist's sine curve, then formula_40 whereas formula_41
If "X" is a differentiable manifold and the cover formula_0 of "X" is a "good cover" ("i.e." all the sets "U"α are contractible to a point, and all finite intersections of sets in formula_0 are either empty or contractible to a point), then formula_42 is isomorphic to the de Rham cohomology.
If "X" is compact Hausdorff, then Čech cohomology (with coefficients in a discrete group) is isomorphic to Alexander-Spanier cohomology.
For a presheaf formula_2 on "X", let formula_43 denote its sheafification. Then we have a natural comparison map
formula_44
from Čech cohomology to sheaf cohomology. If "X" is paracompact Hausdorff, then formula_45 is an isomorphism. More generally, formula_45 is an isomorphism whenever the Čech cohomology of all presheaves on "X" with zero sheafification vanishes.
In algebraic geometry.
Čech cohomology can be defined more generally for objects in a site C endowed with a topology. This applies, for example, to the Zariski site or the etale site of a scheme "X". The Čech cohomology with values in some sheaf formula_2 is defined as
formula_46
where the colimit runs over all coverings (with respect to the chosen topology) of "X". Here formula_47 is defined as above, except that the "r"-fold intersections of open subsets inside the ambient topological space are replaced by the "r"-fold fiber product
formula_48
As in the classical situation of topological spaces, there is always a map
formula_49
from Čech cohomology to sheaf cohomology. It is always an isomorphism in degrees "n" = 0 and 1, but may fail to be so in general. For the Zariski topology on a Noetherian separated scheme, Čech and sheaf cohomology agree for any quasi-coherent sheaf. For the étale topology, the two cohomologies agree for any étale sheaf on "X", provided that any finite set of points of "X" are contained in some open affine subscheme. This is satisfied, for example, if "X" is quasi-projective over an affine scheme.
The possible difference between Čech cohomology and sheaf cohomology is a motivation for the use of hypercoverings: these are more general objects than the Čech nerve
formula_50
A hypercovering "K"∗ of "X" is a certain simplicial object in C, i.e., a collection of objects "K""n" together with boundary and degeneracy maps. Applying a sheaf formula_2 to "K"∗ yields a simplicial abelian group formula_51 whose "n"-th cohomology group is denoted formula_52. (This group is the same as formula_47 in case "K"∗ equals formula_53.) Then, it can be shown that there is a canonical isomorphism
formula_54
where the colimit now runs over all hypercoverings.
Examples.
The most basic example of Čech cohomology is given by the case where the presheaf formula_2 is a constant sheaf, e.g. formula_55. In such cases, each formula_56-cochain formula_17 is simply a function which maps every formula_56-simplex to formula_57. For example, we calculate the first Čech cohomology with values in formula_57 of the unit circle formula_58. Dividing formula_59 into three arcs and choosing sufficiently small open neighborhoods, we obtain an open cover formula_60 where formula_61 but formula_62.
Given any 1-cocycle formula_17, formula_63 is a 2-cochain which takes inputs of the form formula_64 where formula_65 (since formula_62 and hence formula_66 is not a 2-simplex for any permutation formula_67). The first three inputs give formula_68; the fourth gives
formula_69
Such a function is fully determined by the values of formula_70. Thus,
formula_71
On the other hand, given any 1-coboundary formula_72, we have
formula_73
However, upon closer inspection we see that formula_74 and hence each 1-coboundary formula_17 is uniquely determined by formula_75 and formula_76. This gives the set of 1-coboundaries:
formula_77
Therefore, formula_78. Since formula_0 is a good cover of formula_59, we have formula_79 by Leray's theorem.
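This dimension count can be reproduced with elementary linear algebra. The following Python sketch (numpy assumed) uses the equivalent alternating-cochain bookkeeping, in which "C"0 ≅ R3 (values on "U"0, "U"1, "U"2), "C"1 ≅ R3 (values on the pairs (0,1), (0,2), (1,2)), every 1-cochain is a cocycle because there are no 2-simplices, and hence dim "H"1 = 3 − rank(δ0):

```python
import numpy as np

# (delta_0 h)(U_i, U_j) = h(U_j) - h(U_i) for the constant presheaf R
delta0 = np.array([[-1,  1,  0],    # pair (0, 1)
                   [-1,  0,  1],    # pair (0, 2)
                   [ 0, -1,  1]])   # pair (1, 2)

rank = np.linalg.matrix_rank(delta0)
print(3 - rank)   # 1, so H^1 is one-dimensional, matching the computation above
```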
We may also compute the coherent sheaf cohomology of formula_80 on the projective line formula_81 using the Čech complex. Using the cover
formula_82
we have the following modules from the cotangent sheaf
formula_83
If we take the conventions that formula_84 then we get the Čech complex
formula_85
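The image of this map can be checked exponent-by-exponent with a short script (ours; plain Python, bookkeeping over a truncated range of Laurent exponents "k" of "y""k" "dy"): the first summand contributes the exponents "k" ≥ 0, while the second contributes −"y"−2 times polynomials in "y"−1, i.e. the exponents "k" ≤ −2.

```python
N = 5   # truncation order for the Laurent exponents
image_exponents = set(range(0, N + 1)) | set(range(-N, -1))  # k >= 0 or k <= -2
missing = [k for k in range(-N, N + 1) if k not in image_exponents]
print(missing)   # [-1]: only y^(-1) dy is absent from the image
```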
Since formula_86 is injective and the only element not in the image of formula_86 is formula_87 we get that
formula_88 | [
{
"math_id": 0,
"text": "\\mathcal{U}"
},
{
"math_id": 1,
"text": "N(\\mathcal{U})"
},
{
"math_id": 2,
"text": "\\mathcal{F}"
},
{
"math_id": 3,
"text": "\\sigma = (U_i)_{i \\in \\{ 0 , \\ldots , q \\}}"
},
{
"math_id": 4,
"text": "\\partial_j \\sigma := (U_i)_{i \\in \\{ 0 , \\ldots , q \\} \\setminus \\{j\\}}."
},
{
"math_id": 5,
"text": "\\partial \\sigma := \\sum_{j=0}^q (-1)^{j+1} \\partial_j \\sigma"
},
{
"math_id": 6,
"text": "\\mathcal{F}(|\\sigma|)"
},
{
"math_id": 7,
"text": "C^q(\\mathcal U, \\mathcal F)"
},
{
"math_id": 8,
"text": "(C^{\\bullet}(\\mathcal U, \\mathcal F), \\delta)"
},
{
"math_id": 9,
"text": "\\delta_q : C^q(\\mathcal U, \\mathcal F) \\to C^{q+1}(\\mathcal{U}, \\mathcal{F}) "
},
{
"math_id": 10,
"text": " \\quad (\\delta_q f)(\\sigma) := \\sum_{j=0}^{q+1} (-1)^j \\mathrm{res}^{|\\partial_j \\sigma|}_{|\\sigma|} f (\\partial_j \\sigma),"
},
{
"math_id": 11,
"text": "\\mathrm{res}^{|\\partial_j \\sigma|}_{|\\sigma|}"
},
{
"math_id": 12,
"text": "\\mathcal F(|\\partial_j \\sigma|)"
},
{
"math_id": 13,
"text": "\\mathcal F(|\\sigma|)."
},
{
"math_id": 14,
"text": "\\delta_{q+1} \\circ \\delta_q = 0."
},
{
"math_id": 15,
"text": "\\delta"
},
{
"math_id": 16,
"text": "Z^q(\\mathcal{U}, \\mathcal{F}) := \\ker ( \\delta_q) \\subseteq C^q(\\mathcal U, \\mathcal F)"
},
{
"math_id": 17,
"text": "f"
},
{
"math_id": 18,
"text": "\\sigma"
},
{
"math_id": 19,
"text": "\\sum_{j=0}^{q} (-1)^j \\mathrm{res}^{|\\partial_j \\sigma|}_{|\\sigma|} f (\\partial_j \\sigma) = 0"
},
{
"math_id": 20,
"text": "A,B\\in \\mathcal{U}"
},
{
"math_id": 21,
"text": "f(A)|_{A \\cap B} = f(B)|_{A \\cap B}"
},
{
"math_id": 22,
"text": "U = A\\cap B \\cap C"
},
{
"math_id": 23,
"text": "A,B,C \\in \\mathcal{U}"
},
{
"math_id": 24,
"text": "f(B \\cap C)|_U - f(A \\cap C)|_U + f(A \\cap B)|_U = 0"
},
{
"math_id": 25,
"text": "B^q(\\mathcal{U}, \\mathcal{F}) := \\mathrm{Im} ( \\delta_{q-1}) \\subseteq C^{q}(\\mathcal{U}, \\mathcal{F})"
},
{
"math_id": 26,
"text": "h"
},
{
"math_id": 27,
"text": "f(A \\cap B) = h(A)|_{A \\cap B} - h(B)|_{A \\cap B}"
},
{
"math_id": 28,
"text": "(C^{\\bullet}(\\mathcal{U}, \\mathcal{F}), \\delta)"
},
{
"math_id": 29,
"text": "\\check{H}^q(\\mathcal{U}, \\mathcal{F}) := H^q((C^{\\bullet}(\\mathcal U, \\mathcal F), \\delta)) = Z^q(\\mathcal{U}, \\mathcal{F}) / B^q(\\mathcal{U}, \\mathcal{F})"
},
{
"math_id": 30,
"text": "\\mathcal{V}"
},
{
"math_id": 31,
"text": "\\check{H}^*(\\mathcal U,\\mathcal F) \\to \\check{H}^*(\\mathcal V,\\mathcal F)."
},
{
"math_id": 32,
"text": "\\check{H}(X,\\mathcal F) := \\varinjlim_{\\mathcal U} \\check{H}(\\mathcal U,\\mathcal F)"
},
{
"math_id": 33,
"text": "\\check{H}(X;A)"
},
{
"math_id": 34,
"text": "\\check{H}(X,\\mathcal{F}_A)"
},
{
"math_id": 35,
"text": "\\mathcal{F}_A"
},
{
"math_id": 36,
"text": "\\{x\\mid\\rho_i(x)>0\\}"
},
{
"math_id": 37,
"text": "\\check{H}^{*}(X;A)"
},
{
"math_id": 38,
"text": " H^*(X;A) \\,"
},
{
"math_id": 39,
"text": "\\check{H}^*(X;\\R)"
},
{
"math_id": 40,
"text": "\\check{H}^1(X;\\Z)=\\Z,"
},
{
"math_id": 41,
"text": "H^1(X;\\Z)=0."
},
{
"math_id": 42,
"text": "\\check{H}^{*}(\\mathcal U;\\R)"
},
{
"math_id": 43,
"text": "\\mathcal{F}^+"
},
{
"math_id": 44,
"text": "\\chi: \\check{H}^*(X,\\mathcal{F}) \\to H^*(X,\\mathcal{F}^+)"
},
{
"math_id": 45,
"text": "\\chi"
},
{
"math_id": 46,
"text": "\\check H^n (X, \\mathcal{F}) := \\varinjlim_{\\mathcal U} \\check H^n(\\mathcal U, \\mathcal{F})."
},
{
"math_id": 47,
"text": "\\check H^n(\\mathcal U, \\mathcal F)"
},
{
"math_id": 48,
"text": "\\mathcal U^{\\times^r_X} := \\mathcal U \\times_X \\dots \\times_X \\mathcal U."
},
{
"math_id": 49,
"text": "\\check H^n(X, \\mathcal F) \\rightarrow H^n(X, \\mathcal F)"
},
{
"math_id": 50,
"text": "N_X \\mathcal U : \\dots \\to \\mathcal U \\times_X \\mathcal U \\times_X \\mathcal U \\to \\mathcal U \\times_X \\mathcal U \\to \\mathcal U."
},
{
"math_id": 51,
"text": "\\mathcal{F}(K_\\ast)"
},
{
"math_id": 52,
"text": "H^n(\\mathcal F (K_\\ast))"
},
{
"math_id": 53,
"text": "N_X \\mathcal U "
},
{
"math_id": 54,
"text": "H^n (X, \\mathcal F) \\cong \\varinjlim_{K_*} H^n(\\mathcal F(K_*)),"
},
{
"math_id": 55,
"text": "\\mathcal{F}=\\mathbb{R}"
},
{
"math_id": 56,
"text": "q"
},
{
"math_id": 57,
"text": "\\mathbb{R}"
},
{
"math_id": 58,
"text": "X=S^1"
},
{
"math_id": 59,
"text": "X"
},
{
"math_id": 60,
"text": "\\mathcal{U}=\\{U_0,U_1,U_2\\}"
},
{
"math_id": 61,
"text": "U_i \\cap U_j \\ne \\phi"
},
{
"math_id": 62,
"text": "U_0 \\cap U_1 \\cap U_2 = \\phi"
},
{
"math_id": 63,
"text": "\\delta f"
},
{
"math_id": 64,
"text": "(U_i,U_i,U_i),(U_i,U_i,U_j),(U_j,U_i,U_i),(U_i,U_j,U_i)"
},
{
"math_id": 65,
"text": "i \\ne j"
},
{
"math_id": 66,
"text": "(U_i,U_j,U_k)"
},
{
"math_id": 67,
"text": "\\{i,j,k\\}=\\{1,2,3\\}"
},
{
"math_id": 68,
"text": "f(U_i,U_i)=0"
},
{
"math_id": 69,
"text": "\\delta f(U_i,U_j,U_i)=f(U_j,U_i)-f(U_i,U_i)+f(U_i,U_j)=0 \\implies f(U_j,U_i)=-f(U_i,U_j)."
},
{
"math_id": 70,
"text": "f(U_0,U_1),f(U_0,U_2),f(U_1,U_2)"
},
{
"math_id": 71,
"text": "Z^1(\\mathcal{U},\\mathbb{R})=\\{f \\in C^1(\\mathcal{U},\\mathbb{R}) : f(U_i,U_i)=0, f(U_j,U_i)=-f(U_i,U_j)\\} \\cong \\mathbb{R}^3."
},
{
"math_id": 72,
"text": "f = \\delta g"
},
{
"math_id": 73,
"text": "\\begin{cases}\nf(U_i,U_i)=g(U_i)-g(U_i)=0 & (i=0,1,2); \\\\\nf(U_i,U_j)=g(U_j)-g(U_i)=-f(U_j,U_i) & (i \\ne j)\n\\end{cases}"
},
{
"math_id": 74,
"text": "f(U_0,U_1)+f(U_1,U_2)=f(U_0,U_2)"
},
{
"math_id": 75,
"text": "f(U_0,U_1)"
},
{
"math_id": 76,
"text": "f(U_1,U_2)"
},
{
"math_id": 77,
"text": "\\begin{align}\nB^1(\\mathcal{U},\\mathbb{R})=\\{f \\in C^1(\\mathcal{U},\\mathbb{R}) : \\ & f(U_i,U_i)=0, f(U_j,U_i)=-f(U_i,U_j), \\\\\n&f(U_0,U_2)=f(U_0,U_1)+f(U_1,U_2)\\} \\cong \\mathbb{R}^2.\n\\end{align}"
},
{
"math_id": 78,
"text": "\\check{H}^1(\\mathcal{U},\\mathbb{R})=Z^1(\\mathcal{U},\\mathbb{R})/B^1(\\mathcal{U},\\mathbb{R}) \\cong \\mathbb{R}"
},
{
"math_id": 79,
"text": "\\check{H}^1(X,\\mathbb{R}) \\cong \\mathbb{R}"
},
{
"math_id": 80,
"text": "\\Omega^1"
},
{
"math_id": 81,
"text": "\\mathbb{P}^1_\\mathbb{C}"
},
{
"math_id": 82,
"text": "\\mathcal{U} = \\{ U_1 = \\text{Spec}(\\Complex[y]), U_2 = \\text{Spec}(\\Complex[y^{-1}]) \\}"
},
{
"math_id": 83,
"text": "\\begin{align}\n&\\Omega^1(U_1) = \\Complex[y]dy \\\\\n&\\Omega^1(U_2) = \\Complex \\left [y^{-1} \\right ]dy^{-1}\n\\end{align}"
},
{
"math_id": 84,
"text": "dy^{-1} = -(1/y^2)dy"
},
{
"math_id": 85,
"text": "0 \\to \\Complex[y]dy \\oplus \\Complex \\left [y^{-1} \\right ]dy^{-1} \\xrightarrow{d^0} \\Complex \\left [y,y^{-1} \\right ]dy \\to 0"
},
{
"math_id": 86,
"text": "d^0"
},
{
"math_id": 87,
"text": "y^{-1}dy"
},
{
"math_id": 88,
"text": "\\begin{align}\n&H^1(\\mathbb{P}_{\\Complex}^1,\\Omega^1) \\cong \\Complex \\\\\n&H^k(\\mathbb{P}_{\\Complex}^1,\\Omega^1) \\cong 0 \\text{ for } k \\neq 1\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=856614 |
856626 | 155 (number) | Natural number
155 (one hundred [and] fifty-five) is the natural number following 154 and preceding 156.
In mathematics.
155 is:
There are 155 primitive permutation groups of degree 81. (sequence in the OEIS)
If one adds up all the primes from the least through the greatest prime factors of 155, that is, 5 and 31, the result is 155. (sequence in the OEIS) Only three other "small" semiprimes (10, 39, and 371) share this attribute.
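This property can be verified with a few lines of Python (a check added for illustration):

# Trial-division primality test; sum every prime from the least
# prime factor of 155 (5) through its greatest prime factor (31).
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

print(sum(p for p in range(5, 32) if is_prime(p)))  # 155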
In other fields.
155 is also:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "10011011_2"
}
] | https://en.wikipedia.org/wiki?curid=856626 |
8568030 | Performance prediction | In computer science, performance prediction means to estimate the execution time or other performance factors (such as cache misses) of a program on a given computer. It is being widely used for computer architects to evaluate new computer designs, for compiler writers to explore new optimizations, and also for advanced developers to tune their programs.
There are many approaches to predicting a program's performance on computers. They can be roughly divided into three major categories:
Simulation-based prediction.
Performance data can be directly obtained from computer simulators, within which each instruction of the target program is actually dynamically executed given a particular input data set. Simulators can predict a program's performance very accurately, but take considerable time to handle large programs. Examples include the PACE and Wisconsin Wind Tunnel simulators as well as the more recent WARPP simulation toolkit, which attempts to significantly reduce the time required for parallel system simulation.
Another approach, based on trace-based simulation, does not run every instruction, but runs a trace file which stores only important program events. This approach loses some flexibility and accuracy compared to the cycle-accurate simulation mentioned above but can be much faster. The generation of traces often consumes considerable amounts of storage space and can severely impact the runtime of applications if large amounts of data are recorded during execution.
Profile-based prediction.
The classic approach of performance prediction treats a program as a set of basic blocks connected by execution paths. Thus the execution time of the whole program is the sum of the execution time of each basic block multiplied by its execution frequency, as shown in the following formula:
formula_0
The execution frequencies of basic blocks are generated from a profiler, which is why this method is called profile-based prediction. The execution time of a basic block is usually obtained from a simple instruction scheduler.
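As a minimal Python sketch of this weighted sum (the block times and frequencies below are invented for illustration):

# Hypothetical per-basic-block cycle counts from an instruction scheduler
# and execution frequencies from a profiler.
block_time = {"BB1": 12, "BB2": 40, "BB3": 7}        # cycles per execution
block_freq = {"BB1": 1000, "BB2": 250, "BB3": 9000}  # executions per run

# T_program = sum over blocks of (time * frequency)
t_program = sum(block_time[b] * block_freq[b] for b in block_time)
print(t_program)  # 85000 cycles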
Classic profile-based prediction worked well for early single-issue, in-order execution processors, but it fails to accurately predict the performance of modern processors. The major reason is that modern processors can issue and execute several instructions at the same time, sometimes out of the original order and across the boundaries of basic blocks.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nT_{program} = \\sum_{i=1}^{n}{(T_{BB_i}*F_{BB_i})}\n"
}
] | https://en.wikipedia.org/wiki?curid=8568030 |
856808 | Mean arterial pressure | Average blood pressure in an individual during a single cardiac cycle
In medicine, the mean arterial pressure (MAP) is an average calculated blood pressure in an individual during a single cardiac cycle. Although methods of estimating MAP vary, a common calculation is to take one-third of the pulse pressure (the difference between the systolic and diastolic pressures), and add that amount to the diastolic pressure. A normal MAP is about 90 mmHg.
Mean arterial pressure ≈ diastolic blood pressure + 1/3 × pulse pressure
MAP is altered by cardiac output and systemic vascular resistance. It is used clinically to estimate the risk of cardiovascular diseases, where a MAP of 90 mmHg or less is low risk, and a MAP of greater than 96 mmHg represents "stage one hypertension" with increased risk.
Testing.
Mean arterial pressure can be measured directly or estimated from systolic and diastolic blood pressure by using a formula. The least invasive method is the use of a blood pressure cuff, which gives the values needed to calculate an estimate of the mean pressure. A similar method is to use an oscillometric blood pressure device that works by a cuff-only method, where a microprocessor determines the systolic and diastolic blood pressure. Invasively, an arterial catheter with a transducer is placed and the mean pressure is determined from the resulting waveform.
Estimating MAP.
While MAP can only be measured directly by invasive monitoring, it can be estimated by using a formula in which the lower (diastolic) blood pressure is doubled and added to the higher (systolic) blood pressure and that composite sum then is divided by 3 to estimate MAP.
Thus, a common way to estimate mean arterial pressure is to take one-third of the pulse pressure added to the diastolic pressure:
formula_0
where "DP" is the diastolic pressure and "SP" is the systolic pressure. Systolic pressure minus diastolic pressure equals the pulse pressure, which may be substituted in.
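As a minimal Python sketch of this estimate (the function name and example reading are illustrative):

def mean_arterial_pressure(systolic, diastolic):
    # Estimate MAP (mmHg) as DP + (SP - DP) / 3.
    return diastolic + (systolic - diastolic) / 3

print(round(mean_arterial_pressure(120, 80), 1))  # 93.3 mmHg for a 120/80 reading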
Another way to find the MAP is to use the systemic vascular resistance (formula_1), which is represented mathematically by the formula
formula_2
where formula_3 is the change in pressure across the systemic circulation from its beginning to its end and formula_4 is the flow through the vasculature (equal to cardiac output).
In other words:
formula_5
Therefore, MAP can be determined by rearranging the equation to:
formula_6
where formula_7 is the cardiac output, formula_8 is the systemic vascular resistance, and formula_9 is the central venous pressure.
This is only valid at normal resting heart rates, during which formula_10 can be approximated from the measured systolic (formula_11) and diastolic (formula_12) blood pressures using the estimate above.
Elevated heart rate.
At high heart rates formula_10 is more closely approximated by the arithmetic mean of systolic and diastolic pressures because of the change in shape of the arterial pressure pulse.
For a more accurate formula of formula_10 for elevated heart rates use:
formula_13
where "HR" is the heart rate and "PP" is the pulse pressure.
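A minimal Python sketch of this heart-rate-corrected estimate (the sample reading and heart rate are illustrative):

import math

def map_elevated_hr(systolic, diastolic, heart_rate):
    # MAP ~ DP + 0.01 * exp(4.14 - 40.74 / HR) * PP
    pp = systolic - diastolic
    return diastolic + 0.01 * math.exp(4.14 - 40.74 / heart_rate) * pp

print(round(map_elevated_hr(120, 80, 120), 1))  # ~97.9 mmHg, nearer the arithmetic mean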
Most accurate.
The version of the MAP equation multiplying 0.412 by the pulse pressure and adding the diastolic blood pressure has been indicated to correlate better than other versions of the equation with left ventricular hypertrophy, carotid wall thickness, and aortic stiffness. It is expressed:
formula_14
where "DBP" is the diastolic blood pressure and "PP" is the pulse pressure.
Young patients.
For young patients with congenital heart disease, a slight alteration of the factor was found to be more precise. This was written as:
formula_15
where "DBP" and "PP" are as defined above.
This added precision means cerebral blood flow can be more accurately maintained in uncontrolled hypertension.
Neonates.
For neonates, because of their altered physiology, a different formula has been proposed for a more precise reading:
formula_16
where "DBP" and "PP" are as defined above.
It has also been suggested that, when getting readings from a neonate's radial arterial line, mean arterial pressure can be approximated by averaging the systolic and diastolic pressure.
Other formula versions.
Other formulas used to estimate mean arterial pressure are:
formula_17
or
formula_18
or
formula_19
or
formula_20
Clinical significance.
Mean arterial pressure is a major determinant of the perfusion pressure seen by organs in the body. MAP levels greater than 90 mmHg are associated with a stepwise increase in the risk of cardiovascular diseases, such as stroke, and of mortality.
Hypotension.
When assessing hypotension, the context of the baseline blood pressure needs to be considered. Acute decreases in mean arterial pressure of around 25% put people at increased risk for organ damage and potential mortality. Even one minute at a MAP of 50 mmHg, or cumulative effects over short periods, increases the risk of mortality by 5%, and can result in organ failure or complications.
In people hospitalized with shock, a MAP of 65 mmHg lasting for more than two hours was associated with higher mortality. In people with sepsis, the vasopressor dosage may be titrated on the basis of estimated MAP.
MAP may be used like systolic blood pressure in monitoring and treating target blood pressure. Both are used as targets for assessing sepsis, major trauma, stroke, and intracranial bleeding.
Hypertension.
In younger people, elevated MAP is used more commonly than pulse pressure in the prediction of stroke. However, in older people, MAP is less predictive of stroke and a better predictor of cardiovascular disease.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "MAP \\approx DP+1/3(SP-DP)"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "R = \\Delta P/Q"
},
{
"math_id": 3,
"text": "\\Delta P"
},
{
"math_id": 4,
"text": "Q"
},
{
"math_id": 5,
"text": "SVR = (MAP - CVP) / CO"
},
{
"math_id": 6,
"text": "MAP = (CO \\cdot SVR) + CVP"
},
{
"math_id": 7,
"text": "CO"
},
{
"math_id": 8,
"text": "SVR"
},
{
"math_id": 9,
"text": "CVP"
},
{
"math_id": 10,
"text": "MAP"
},
{
"math_id": 11,
"text": "SP"
},
{
"math_id": 12,
"text": "DP"
},
{
"math_id": 13,
"text": "MAP \\simeq DP + 0.01 \\times \\exp(4.14 - 40.74 / HR) \\times PP"
},
{
"math_id": 14,
"text": "MAP=DBP +(0.412\\times PP)"
},
{
"math_id": 15,
"text": "MAP=DBP +(0.475\\times PP)"
},
{
"math_id": 16,
"text": "MAP=DBP +(0.466\\times PP)"
},
{
"math_id": 17,
"text": "MAP=DBP+ (0.33 PP) +5 "
},
{
"math_id": 18,
"text": "MAP=DBP+[0.33+(0.0012 \\times HR)]\\times PP"
},
{
"math_id": 19,
"text": "MAP=DAP + PP/3"
},
{
"math_id": 20,
"text": "MAP = DAP+(PP/3)+5mmHg "
}
] | https://en.wikipedia.org/wiki?curid=856808 |
8568920 | Dupuit–Forchheimer assumption | The Dupuit–Forchheimer assumption holds that groundwater flows horizontally in an unconfined aquifer and that the groundwater discharge is proportional to the saturated aquifer thickness. It was formulated by Jules Dupuit and Philipp Forchheimer in the late 1800s to simplify groundwater flow equations for analytical solutions.
The Dupuit–Forchheimer assumption requires that the water table be relatively flat and that the groundwater be hydrostatic (that is, that the equipotential lines are vertical):
formula_0
where formula_1 is the vertical pressure gradient, formula_2 is the specific weight, formula_3 is the density of water, formula_4 is the standard gravity, and formula_5 is the vertical hydraulic gradient. | [
{
"math_id": 0,
"text": "\\begin{align}\n\\frac{\\partial P}{\\partial z} &= -\\gamma = -\\rho \\, g \\\\[0.5em]\n\\frac{\\partial h}{\\partial z} &= 0\n\\end{align}"
},
{
"math_id": 1,
"text": "\\partial P/\\partial z"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "\\rho"
},
{
"math_id": 4,
"text": "g"
},
{
"math_id": 5,
"text": "\\partial h/\\partial z"
}
] | https://en.wikipedia.org/wiki?curid=8568920 |
8569325 | Mishnat ha-Middot | Hebrew work on geometry
The Mishnat ha-Middot (, lit. 'Treatise of Measures') is the earliest known Hebrew treatise on geometry, composed of 49 "mishnayot" in six chapters. Scholars have dated the work to either the Mishnaic period or the early Islamic era.
History.
Date of composition.
Moritz Steinschneider dated the "Mishnat ha-Middot" to between 800 and 1200 CE. Sarfatti and Langermann have advanced Steinschneider's claim of Arabic influence on the work's terminology, and date the text to the early ninth century.
On the other hand, Hermann Schapira argued that the treatise dates from an earlier era, most likely the Mishnaic period, as its mathematical terminology differs from that of the Hebrew mathematicians of the Arab period. Solomon Gandz conjectured that the text was compiled no later than CE (possibly by Rabbi Nehemiah) and intended to be a part of the Mishnah, but was excluded from its final canonical edition because the work was regarded as too secular. The content resembles both the work of Hero of Alexandria (c. CE) and that of al-Khwārizmī (c. CE) and the proponents of the earlier dating therefore see the "Mishnat ha-Middot" linking Greek and Islamic mathematics.
Modern history.
The "Mishnat ha-Middot" was discovered in MS 36 of the Munich Library by Moritz Steinschneider in 1862. The manuscript, copied in Constantinople in 1480, goes as far as the end of Chapter V. According to the colophon, the copyist believed the text to be complete. Steinschneider published the work in 1864, in honour of the seventieth birthday of Leopold Zunz. The text was edited and published again by mathematician Hermann Schapira in 1880.
After the discovery by Otto Neugebauer of a genizah-fragment in the Bodleian Library containing Chapter VI, Solomon Gandz published a complete version of the "Mishnat ha-Middot" in 1932, accompanied by a thorough philological analysis. A third manuscript of the work was found among uncatalogued material in the Archives of the Jewish Museum of Prague in 1965.
Contents.
Although primarily a practical work, the "Mishnat ha-Middot" attempts to define terms and explain both geometric application and theory. The book begins with a discussion that defines "aspects" for the different kinds of plane figures (quadrilateral, triangle, circle, and segment of a circle) in Chapter I (§1–5), and with the basic principles of measurement of areas (§6–9). In Chapter II, the work introduces concise rules for the measurement of plane figures (§1–4), as well as a few problems in the calculation of volume (§5–12). In Chapters III–V, the "Mishnat ha-Middot" explains again in detail the measurement of the four types of plane figures, with reference to numerical examples. The text concludes with a discussion of the proportions of the Tabernacle in Chapter VI.
The treatise argues against the common belief that the Tanakh defines the geometric ratio π as being exactly equal to 3, and defines it as 22/7 instead. The book arrives at this approximation by calculating the area of a circle according to the formulae
formula_0 and formula_1 (II §3, V §3).
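To see how these rules imply π ≈ 22/7 (a check added here for illustration, not part of the treatise): the first rule gives A = d²(1 − 1/7 − 1/14) = (11/14)d², and equating this with the modern formula A = (π/4)d² yields π = 44/14 = 22/7. A short Python verification:

from fractions import Fraction

# Area coefficient implied by the treatise's rule A = d^2 - d^2/7 - d^2/14
coeff = 1 - Fraction(1, 7) - Fraction(1, 14)  # 11/14
# The modern formula is A = (pi/4) d^2, so the implied pi is 4 * coeff
print(4 * coeff)  # 22/7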
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A=d^2-\\tfrac{d^2}{7}-\\tfrac{d^2}{14}"
},
{
"math_id": 1,
"text": "A=\\tfrac{c}{2}\\cdot\\tfrac{d}{2}"
}
] | https://en.wikipedia.org/wiki?curid=8569325 |
8569383 | Groundwater discharge | Volumetric flow rate of groundwater through an aquifer
Groundwater discharge is the volumetric flow rate of groundwater through an aquifer.
Total groundwater discharge through a specified area is expressed as:
formula_0
where
"Q" is the total groundwater discharge ([L3·T−1]; m3/s),
"K" is the hydraulic conductivity of the aquifer ([L·T−1]; m/s),
"dh/dl" is the hydraulic gradient ([L·L−1]; unitless), and
"A" is the area which the groundwater is flowing through ([L2]; m2)
For example, this can be used to determine the flow rate of water flowing along a plane with known geometry.
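As a minimal Python sketch of this calculation (the aquifer parameters below are invented for illustration):

def groundwater_discharge(K, dh_dl, A):
    # Total discharge Q = K * (dh/dl) * A, in m^3/s for SI inputs.
    return K * dh_dl * A

# A sandy aquifer with K = 1e-4 m/s, hydraulic gradient 0.01,
# and a flow cross-section of 500 m^2.
print(groundwater_discharge(1e-4, 0.01, 500))  # 5e-4 m^3/s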
The discharge potential.
The discharge potential is a potential used in groundwater mechanics that links a physical property, the hydraulic head, to a mathematical formulation of the energy as a function of position. The discharge potential, formula_1 [L3·T−1], is defined in such a way that its gradient equals the discharge vector.
formula_2
formula_3
Thus the hydraulic head may be calculated in terms of the discharge potential, for confined flow as
formula_4
and for unconfined shallow flow as
formula_5
where
formula_6 is the thickness of the aquifer [L],
formula_7 is the hydraulic head [L], and
formula_8 is an arbitrary constant [L3·T−1] given by the boundary conditions.
As mentioned, the discharge potential may also be written in terms of position. The discharge potential satisfies Laplace's equation
formula_9
which is a linear differential equation. Because the equation is linear, the superposition principle holds for its solutions, and a given solution may be combined with other solutions for the discharge potential, e.g. uniform flow, multiple wells, or analytic elements (analytic element method).
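As a minimal Python sketch of such a superposition (the parameter values are invented, and the sign convention for the well term varies between texts):

import math

# Uniform flow in the x-direction plus a single well at the origin;
# both terms are standard solutions of Laplace's equation.
Qx0 = 0.02  # uniform-flow discharge per unit width [m^2/s]
Qw = 0.005  # well discharge [m^3/s]

def phi_total(x, y):
    phi_uniform = -Qx0 * x
    phi_well = (Qw / (2 * math.pi)) * math.log(math.hypot(x, y))
    return phi_uniform + phi_well  # superposition is valid by linearity

print(phi_total(10.0, 5.0))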
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q = \\frac{dh}{dl}KA"
},
{
"math_id": 1,
"text": "\\Phi"
},
{
"math_id": 2,
"text": "Q_x = -\\frac{\\partial \\Phi}{\\partial x}"
},
{
"math_id": 3,
"text": "Q_y = -\\frac{\\partial \\Phi}{\\partial y}"
},
{
"math_id": 4,
"text": "\\Phi = KH\\phi"
},
{
"math_id": 5,
"text": "\\Phi = \\frac{1}{2}K\\phi^2+C"
},
{
"math_id": 6,
"text": "H"
},
{
"math_id": 7,
"text": "\\phi"
},
{
"math_id": 8,
"text": "C"
},
{
"math_id": 9,
"text": "\\frac{\\partial^2 \\Phi}{\\partial x^2} + \\frac{\\partial^2 \\Phi}{\\partial y^2} = 0"
}
] | https://en.wikipedia.org/wiki?curid=8569383 |
8570238 | Bracketing (linguistics) | In linguistics, particularly linguistic morphology, bracketing is a term of art that refers to how an utterance can be represented as a hierarchical tree of constituent parts. Analysis techniques based on bracketing are used at different levels of grammar, but are particularly associated with morphologically complex words.
To give an example of bracketing in English, consider the word "uneventful". This word is made of three parts, the prefix "un-", the root "event", and the suffix "-ful". An English speaker should have no trouble parsing this word as "lacking in significant events". However, imagine a foreign linguist with access to a dictionary of English roots and affixes, but only a superficial understanding of English grammar. Conceivably, he or she could understand "uneventful" as one of: "un-" attached to "eventful", meaning "not eventful"; or "unevent" followed by "-ful", meaning "full of unevents", whatever an "unevent" might be.
We can represent these two understandings of "uneventful" with the "bracketings" formula_0 and formula_1, respectively. Here, bracketing gives the linguist a convenient technique for representing the different ways to parse the word, and for forming hypotheses about why the word is parsed the way it is by speakers of the language.
Since bracketing represents a hierarchical tree, it is associated to some extent with generative grammar. Some theories in cognitive linguistics rely on the idea that bracketing represents to some degree of accuracy how listeners parse complex utterances (e.g. level ordering). In computational linguistics, rules for how a program should parse a word can be represented in terms of possible bracketings.
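As a toy Python illustration (not a real morphological parser), the two bracketings can be represented as nested tuples:

# The two candidate bracketings of "uneventful" as nested tuples.
reading_1 = ("un-", ("event", "-ful"))  # [un- [event -ful]] : "lacking in events"
reading_2 = (("un-", "event"), "-ful")  # [[un- event] -ful] : "full of unevents"

def flatten(tree):
    # Recover the surface string from a bracketing.
    if isinstance(tree, str):
        return tree.replace("-", "")
    return "".join(flatten(part) for part in tree)

print(flatten(reading_1), flatten(reading_2))  # uneventful uneventful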
It is not completely clear that bracketing accurately represents the structure of utterances. In particular, there are bracketing paradoxes that challenge this idea. However, there is some evidence for bracketing, such as the creation of new words via "rebracketing".
Rebracketing.
"Rebracketing" is a type of folk etymology that can result in the creation of new words. An often cited example in English is certain common nicknames that begin with "N", where the given name does not begin with "N" (e.g. "Ned" for "Edward", "Nelly" for "Ellen"). In Old English, the first person possessive pronoun was "mīn". Old English speakers commonly addressed family and close friends with "min <Name>", for example, "min Ed". Over time, the pronoun shifted from "min" to "mi" and children learning the language rebracketed the utterance /mined/ from the original "min Ed" (formula_2) to "mi Ned" (formula_3). A similar process is responsible for the word "nickname".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left[ [ \\mbox{un-} ] \\left[ [ \\mbox{event} ] [ \\mbox{-ful} ] \\right] \\right]"
},
{
"math_id": 1,
"text": "\\left[ \\left[ [ \\mbox{un-} ] [ \\mbox{event} ] \\right] [ \\mbox{-ful} ] \\right]"
},
{
"math_id": 2,
"text": "\\left[ [ \\mbox{min} ] [ \\mbox{ed} ] \\right]"
},
{
"math_id": 3,
"text": "\\left[ [ \\mbox{mi} ] [ \\mbox{ned} ] \\right]"
}
] | https://en.wikipedia.org/wiki?curid=8570238 |
857110 | Centered triangular number | Centered figurate number that represents a triangle with a dot in the center
A centered (or centred) triangular number is a centered figurate number that represents an equilateral triangle with a dot in the center and all its other dots surrounding the center in successive equilateral triangular layers.
This is also the number of points of a hexagonal lattice with nearest-neighbor coupling whose distance from a given point
is less than or equal to formula_0.
The following image shows the building of the centered triangular numbers by using the associated figures: at each step, the previous triangle (shown in red) is surrounded by a triangular layer of new dots (in blue).
formula_1
formula_2
Properties.
Relationship with centered square numbers.
The centered triangular numbers can be expressed in terms of the centered square numbers:
formula_3
where
formula_4
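A short Python check of these formulas (added for illustration):

def c3(n):  # centered triangular number: (3n^2 + 3n + 2) / 2
    return (3 * n * n + 3 * n + 2) // 2

def c4(n):  # centered square number: n^2 + (n+1)^2
    return n * n + (n + 1) ** 2

# Verify C3(n) = (3*C4(n) + 1) / 4 for the first thousand terms.
assert all(c3(n) == (3 * c4(n) + 1) // 4 for n in range(1000))
print([c3(n) for n in range(6)])  # [1, 4, 10, 19, 31, 46]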
Lists of centered triangular numbers.
The first centered triangular numbers ("C"3,"n" < 3000) are:
1, 4, 10, 19, 31, 46, 64, 85, 109, 136, 166, 199, 235, 274, 316, 361, 409, 460, 514, 571, 631, 694, 760, 829, 901, 976, 1054, 1135, 1219, 1306, 1396, 1489, 1585, 1684, 1786, 1891, 1999, 2110, 2224, 2341, 2461, 2584, 2710, 2839, 2971, … (sequence in the OEIS).
The first simultaneously triangular and centered triangular numbers ("C"3,"n" = "T""N" < 10^9) are:
1, 10, 136, 1 891, 26 335, 366 796, 5 108 806, 71 156 485, 991 081 981, … (sequence in the OEIS).
The generating function.
If the centered triangular numbers are treated as the coefficients of the Maclaurin series of a function, that function converges for all formula_5, in which case it can be expressed as the meromorphic generating function
formula_6 | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "C_{3,n+1} - C_{3,n} = 3(n+1)."
},
{
"math_id": 2,
"text": "C_{3,n} = 1 + 3 \\frac{n(n+1)}{2} = \\frac{3n^2 + 3n + 2}{2}."
},
{
"math_id": 3,
"text": "C_{3,n} = \\frac{3C_{4,n} + 1}{4},"
},
{
"math_id": 4,
"text": "C_{4,n} = n^{2} + (n+1)^{2}."
},
{
"math_id": 5,
"text": " |x| < 1"
},
{
"math_id": 6,
"text": " 1 + 4x + 10x^2 + 19x^3 + 31x^4 +~... = \\frac{1-x^3}{(1-x)^4} = \\frac{x^2+x+1}{(1-x)^3} ~."
}
] | https://en.wikipedia.org/wiki?curid=857110 |
857187 | Representation theory of Hopf algebras | In abstract algebra, a representation of a Hopf algebra is a representation of its underlying associative algebra. That is, a representation of a Hopf algebra "H" over a field "K" is a "K"-vector space "V" with an action "H" × "V" → "V" usually denoted by juxtaposition ( that is, the image of ("h","v") is written "hv" ). The vector space "V" is called an "H"-module.
Properties.
The module structure of a representation of a Hopf algebra "H" is simply its structure as a module for the underlying associative algebra. The main use of considering the additional structure of a Hopf algebra is when considering all "H"-modules as a category. The additional structure is also used to define invariant elements of an "H"-module "V". An element "v" in "V" is invariant under "H" if for all "h" in "H", "hv" = ε("h")"v", where ε is the counit of "H". The subset of all invariant elements of "V" forms a submodule of "V".
Categories of representations as a motivation for Hopf algebras.
For an associative algebra "H", the tensor product "V"1 ⊗ "V"2 of two "H"-modules "V"1 and "V"2 is a vector space, but not necessarily an "H"-module. For the tensor product to be a functorial product operation on "H"-modules, there must be a linear map Δ : "H" → "H" ⊗ "H" such that for any "v" in "V"1 ⊗ "V"2 and any "h" in "H",
formula_0
and for any "v" in "V"1 ⊗ "V"2 and "a" and "b" in "H",
formula_1
using sumless Sweedler's notation, which is somewhat like an index free form of Einstein's summation convention. This is satisfied if there is a Δ such that Δ("ab") = Δ("a")Δ("b") for all "a", "b" in "H".
For the category of "H"-modules to be a strict monoidal category with respect to ⊗, formula_2 and formula_3 must be equivalent and there must be a unit object ε"H", called the trivial module, such that ε"H" ⊗ "V", "V" and "V" ⊗ ε"H" are equivalent.
This means that for any "v" in
formula_4
and for "h" in "H",
formula_5
This will hold for any three "H"-modules if Δ satisfies
formula_6
The trivial module must be one-dimensional, and so an algebra homomorphism ε : "H" → "F" may be defined such that "hv" = ε("h")"v" for all "v" in ε"H". The trivial module may be identified with "F", with 1 being the element such that 1 ⊗ "v" = "v" = "v" ⊗ 1 for all "v". It follows that for any "v" in any "H"-module "V", any "c" in ε"H" and any "h" in "H",
formula_7
The existence of an algebra homomorphism ε satisfying
formula_8
is a sufficient condition for the existence of the trivial module.
It follows that in order for the category of "H"-modules to be a monoidal category with respect to the tensor product, it is sufficient for "H" to have maps Δ and ε satisfying these conditions. This is the motivation for the definition of a bialgebra, where Δ is called the comultiplication and ε is called the counit.
In order for each "H"-module "V" to have a dual representation "V"* such that the underlying vector spaces are dual and the operation * is functorial over the monoidal category of "H"-modules, there must be a linear map "S" : "H" → "H" such that for any "h" in "H", "x" in "V" and "y" in "V*",
formula_9
where formula_10 is the usual pairing of dual vector spaces. If the map formula_11 induced by the pairing is to be an "H"-homomorphism, then for any "h" in "H", "x" in "V" and "y" in "V*",
formula_12
which is satisfied if
formula_13
for all "h" in "H".
If there is such a map "S", then it is called an "antipode", and "H" is a Hopf algebra. The desire for a monoidal category of modules with functorial tensor products and dual representations is therefore one motivation for the concept of a Hopf algebra.
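For concreteness, a standard example (added here for illustration) is the group algebra "H" = "K"["G"] of a group "G", which is a Hopf algebra with

\Delta(g) = g \otimes g, \qquad \varepsilon(g) = 1, \qquad S(g) = g^{-1} \qquad (g \in G).

With these maps, the action on a tensor product of modules is g(v \otimes w) = gv \otimes gw, the trivial module is "K" with gc = \varepsilon(g)c = c, and the dual representation satisfies (gy)(x) = y(S(g)x) = y(g^{-1}x), recovering the familiar constructions of group representation theory. The antipode condition holds because S(g_{(1)})g_{(2)} = g^{-1}g = \varepsilon(g)1.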
Representations on an algebra.
A Hopf algebra also has representations which carry additional structure, namely they are algebras.
Let "H" be a Hopf algebra. If "A" is an algebra with the product operation μ : "A" ⊗ "A" → "A", and ρ : "H" ⊗ "A" → "A" is a representation of "H" on "A", then ρ is said to be a representation of "H" on an algebra if μ is "H"-equivariant. As special cases, Lie algebras, Lie superalgebras and groups can also have representations on an algebra. | [
{
"math_id": 0,
"text": "hv=\\Delta h(v_{(1)}\\otimes v_{(2)})=h_{(1)}v_{(1)}\\otimes h_{(2)}v_{(2)},"
},
{
"math_id": 1,
"text": "\\Delta(ab)(v_{(1)}\\otimes v_{(2)})=(ab)v=a[b[v]]=\\Delta a[\\Delta b(v_{(1)}\\otimes v_{(2)})]=(\\Delta a )(\\Delta b)(v_{(1)}\\otimes v_{(2)})."
},
{
"math_id": 2,
"text": "V_1\\otimes(V_2\\otimes V_3)"
},
{
"math_id": 3,
"text": "(V_1\\otimes V_2)\\otimes V_3"
},
{
"math_id": 4,
"text": "V_1\\otimes(V_2\\otimes V_3)=(V_1\\otimes V_2)\\otimes V_3"
},
{
"math_id": 5,
"text": "((\\operatorname{id}\\otimes \\Delta)\\Delta h)(v_{(1)}\\otimes v_{(2)}\\otimes v_{(3)})=h_{(1)}v_{(1)}\\otimes h_{(2)(1)}v_{(2)}\\otimes h_{(2)(2)}v_{(3)}=hv=((\\Delta\\otimes \\operatorname{id}) \\Delta h) (v_{(1)}\\otimes v_{(2)}\\otimes v_{(3)})."
},
{
"math_id": 6,
"text": "(\\operatorname{id}\\otimes \\Delta)\\Delta A=(\\Delta \\otimes \\operatorname{id})\\Delta A."
},
{
"math_id": 7,
"text": "(\\varepsilon(h_{(1)})h_{(2)})cv=h_{(1)}c\\otimes h_{(2)}v=h(c\\otimes v)=h(cv)=(h_{(1)}\\varepsilon(h_{(2)}))cv."
},
{
"math_id": 8,
"text": "\\varepsilon(h_{(1)})h_{(2)} = h = h_{(1)}\\varepsilon(h_{(2)})"
},
{
"math_id": 9,
"text": "\\langle y, S(h)x\\rangle = \\langle hy, x \\rangle."
},
{
"math_id": 10,
"text": "\\langle\\cdot,\\cdot\\rangle"
},
{
"math_id": 11,
"text": "\\varphi:V\\otimes V^*\\rightarrow \\varepsilon_H"
},
{
"math_id": 12,
"text": "\\varphi\\left(h(x\\otimes y)\\right)=\\varphi\\left(x\\otimes S(h_{(1)})h_{(2)}y\\right)=\\varphi\\left(S(h_{(2)})h_{(1)}x\\otimes y\\right)=h\\varphi(x\\otimes y)=\\varepsilon(h)\\varphi(x\\otimes y),"
},
{
"math_id": 13,
"text": "S(h_{(1)})h_{(2)}=\\varepsilon(h)=h_{(1)}S(h_{(2)})"
}
] | https://en.wikipedia.org/wiki?curid=857187 |
857235 | Equivalence principle | The hypothesis that inertial and gravitational masses are equivalent
The equivalence principle is the hypothesis that the observed equivalence of gravitational and inertial mass is a consequence of nature. The weak form, known for centuries, relates to masses of any composition in free fall taking the same trajectories and landing at identical times. The extended form by Albert Einstein requires special relativity to also hold in free fall and requires the weak equivalence to be valid everywhere. This form was a critical input for the development of the theory of general relativity. The strong form requires Einstein's form to work for stellar objects. Highly precise experimental tests of the principle limit possible deviations from equivalence to be very small.
Concept.
In classical mechanics, Newton's equation of motion in a gravitational field, written out in full, is:
inertial mass × acceleration = gravitational mass × intensity of the gravitational field
Very careful experiments have shown that the inertial mass on the left side and gravitational mass on the right side are numerically equal and independent of the material composing the masses. The equivalence principle is the hypothesis that this numerical equality of inertial and gravitational mass is a consequence of their fundamental identity.
The equivalence principle can be considered an extension of the principle of relativity, the principle that the laws of physics are invariant under uniform motion. An observer in a windowless room cannot distinguish between being on the surface of the Earth and being in a spaceship in deep space accelerating at 1"g" and the laws of physics are unable to distinguish these cases.
History.
Galileo compared different materials experimentally to determine that the acceleration due to gravitation is independent of the amount of mass being accelerated.
Newton, just 50 years after Galileo, developed the idea that gravitational and inertial mass were different concepts and compared the periods of pendulums composed of different materials to verify that these masses are the same. This form of the equivalence principle became known as "weak equivalence".
A version of the equivalence principle consistent with special relativity was introduced by Albert Einstein in 1907, when he observed that identical physical laws are observed in two systems, one subject to a constant gravitational field causing acceleration and the other subject to constant acceleration like a rocket far from any gravitational field. Since the physical laws are the same, Einstein assumed the gravitational field and the acceleration were "physically equivalent". Einstein stated this hypothesis as:
<templatestyles src="Template:Blockquote/styles.css" />we ... assume the complete physical equivalence of a gravitational field and a corresponding acceleration of the reference system.
In 1911 Einstein demonstrated the power of the equivalence principle by using it to predict that clocks run at different rates in a gravitational potential, and light rays bend in a gravitational field. He connected the equivalence principle to his earlier principle of special relativity:
<templatestyles src="Template:Blockquote/styles.css" />This assumption of exact physical equivalence makes it impossible for us to speak of the absolute acceleration of the system of reference, just as the usual theory of relativity forbids us to talk of the absolute velocity of a system; and it makes the equal falling of all bodies in a gravitational field seem a matter of course.
Immediately after completing his work on a theory of gravity (known as general relativity) and in later years Einstein recalled the role of the equivalence principle:
<templatestyles src="Template:Blockquote/styles.css" />The breakthrough came suddenly one day. I was sitting on a chair in my patent office in Bern. Suddenly a thought struck me: If a man falls
freely, he would not feel his weight. I was taken aback. This simple thought experiment made a deep impression on me. This led me to the theory of gravity.
Since Einstein developed general relativity, there was a need to develop a framework to test the theory against other possible theories of gravity compatible with special relativity. This was developed by Robert Dicke as part of his program to test general relativity. Two new principles were suggested, the so-called Einstein equivalence principle and the strong equivalence principle, each of which assumes the weak equivalence principle as a starting point. These are discussed below.
Definitions.
Three main forms of the equivalence principle are in current use: weak (Galilean), Einsteinian, and strong. Some studies also make finer divisions or propose slight alternatives.
Weak equivalence principle.
The weak equivalence principle, also known as the universality of free fall or the Galilean equivalence principle, can be stated in many ways. The strong equivalence principle, a generalization of the weak equivalence principle, includes astronomic bodies with gravitational self-binding energy. Instead, the weak equivalence principle assumes falling bodies are self-bound by non-gravitational forces only (e.g. a stone). Either way, the principle states that all such bodies at the same point in a uniform gravitational field fall with the same acceleration, regardless of their composition.
Uniformity of the gravitational field eliminates measurable tidal forces originating from a radial divergent gravitational field (e.g., the Earth) upon finite sized physical bodies.
Einstein equivalence principle.
What is now called the "Einstein equivalence principle" states that the weak equivalence principle holds, and that:
<templatestyles src="Block indent/styles.css"/>"the outcome of any local, non-gravitational test experiment is independent of the experimental apparatus' velocity relative to the gravitational field and is independent of where and when in the gravitational field the experiment is performed."
Here "local" means that experimental setup must be small compared to variations in the gravitational field, called tidal forces. The "test" experiment must be small enough so that its gravitational potential does not alter the result.
The two additional constraints added to the weak principle to get the Einstein form − (1) the independence of the outcome from relative velocity (local Lorentz invariance) and (2) independence of "where", known as local positional invariance − have far-reaching consequences. With these constraints alone Einstein was able to predict the gravitational redshift. Theories of gravity that obey the Einstein equivalence principle must be "metric theories", meaning that trajectories of freely falling bodies are geodesics of a symmetric metric.
Around 1960 Leonard I. Schiff conjectured that any complete and consistent theory of gravity that embodies the weak equivalence principle implies the Einstein equivalence principle; the conjecture can't be proven but has several plausibility arguments in its favor. Nonetheless, the two principles are tested with very different kinds of experiments.
The Einstein equivalence principle has been criticized as imprecise, because there is no universally accepted way to distinguish gravitational from non-gravitational experiments (see for instance Hadley and Durand).
Strong equivalence principle.
The strong equivalence principle applies the same constraints as the Einstein equivalence principle, but allows the freely falling bodies to be massive gravitating objects as well as test particles.
Thus this is a version of the equivalence principle that applies to objects that exert a gravitational force on themselves, such as stars, planets, black holes or Cavendish experiments. It requires that the gravitational constant be the same everywhere in the universe and is incompatible with a fifth force. It is much more restrictive than the Einstein equivalence principle.
Like the Einstein equivalence principle, the strong equivalence principle requires gravity is geometrical by nature, but in addition it forbids any extra fields, so the metric alone determines all of the effects of gravity. If an observer measures a patch of space to be flat, then the strong equivalence principle suggests that it is absolutely equivalent to any other patch of flat space elsewhere in the universe. Einstein's theory of general relativity (including the cosmological constant) is thought to be the only theory of gravity that satisfies the strong equivalence principle. A number of alternative theories, such as Brans–Dicke theory and the Einstein-aether theory add additional fields.
Active, passive, and inertial masses.
Some of the tests of the equivalence principle use names for the different ways mass appears in physical formulae. In nonrelativistic physics three kinds of mass can be distinguished: inertial mass, passive gravitational mass, and active gravitational mass.
By definition of active and passive gravitational mass, the force on formula_0 due to the gravitational field of formula_1 is:
formula_2
Likewise the force on a second object of arbitrary mass "M"2 due to the gravitational field of "M"0 is:
formula_3
By definition of inertial mass: formula_4. If formula_5 and formula_6 are at the same distance formula_7 from formula_8 then, by the weak equivalence principle, they fall at the same rate (i.e. their accelerations are the same).
formula_9
Hence:
formula_10
Therefore:
formula_11
In other words, passive gravitational mass must be proportional to inertial mass for objects, independent of their material composition if the weak equivalence principle is obeyed.
The dimensionless "Eötvös-parameter" or "Eötvös ratio" formula_12 is the difference of the ratios of gravitational and inertial masses divided by their average for the two sets of test masses "A" and "B".
formula_13
Values of this parameter are used to compare tests of the equivalence principle.
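For illustration, a minimal Python computation of this ratio, using invented mass ratios very close to 1 (as experiments find):

# (m_pass / m_inert) for two hypothetical test materials A and B.
ratio_A = 1.0000000000001
ratio_B = 0.9999999999999

# Eotvos ratio: difference of the ratios divided by their average.
eta = 2 * (ratio_A - ratio_B) / (ratio_A + ratio_B)
print(f"{eta:.1e}")  # ~2.0e-13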
A similar parameter can be used to compare passive and active mass.
By Newton's third law of motion:
formula_2
must be equal and opposite to
formula_14
It follows that:
formula_15
In words, passive gravitational mass must be proportional to active gravitational mass for all objects. The difference,
formula_16
is used to quantify differences between passive and active mass.
Experimental tests.
Tests of the weak equivalence principle.
Tests of the weak equivalence principle are those that verify the equivalence of gravitational mass and inertial mass. An obvious test is dropping different objects and verifying that they land at the same time. Historically this was the first approach, though probably not by Galileo's Leaning Tower of Pisa experiment but earlier by
Simon Stevin, who dropped lead balls of different masses off the Delft church tower and listened for the sound they made on a wooden plank.
Isaac Newton measured the period of pendulums made with different materials as an alternative test giving the first precision measurements.
Loránd Eötvös's approach in 1908 used a very sensitive torsion balance to give precision approaching 1 in a billion. Modern experiments have improved this by another factor of a million.
A popular exposition of this measurement was done on the Moon by David Scott in 1971. He dropped a falcon feather and a hammer at the same time, showing on video that they landed at the same time.
Experiments are still being performed at the University of Washington which have placed limits on the differential acceleration of objects towards the Earth, the Sun and towards dark matter in the Galactic Center. Future satellite experiments – Satellite Test of the Equivalence Principle and Galileo Galilei – will test the weak equivalence principle in space, to much higher accuracy.
With the first successful production of antimatter, in particular anti-hydrogen, a new approach to test the weak equivalence principle has been proposed. Experiments to compare the gravitational behavior of matter and antimatter are currently being developed.
Proposals that may lead to a quantum theory of gravity such as string theory and loop quantum gravity predict violations of the weak equivalence principle because they contain many light scalar fields with long Compton wavelengths, which should generate fifth forces and variation of the fundamental constants. Heuristic arguments suggest that the magnitude of these equivalence principle violations could be in the 10^−13 to 10^−18 range.
Currently envisioned tests of the weak equivalence principle are approaching a degree of sensitivity such that "non-discovery" of a violation would be just as profound a result as discovery of a violation. Non-discovery of equivalence principle violation in this range would suggest that gravity is so fundamentally different from other forces as to require a major reevaluation of current attempts to unify gravity with the other forces of nature. A positive detection, on the other hand, would provide a major guidepost towards unification.
Tests of the Einstein equivalence principle.
In addition to the tests of the weak equivalence principle, the Einstein equivalence principle requires testing the local Lorentz invariance and local positional invariance conditions.
Testing local Lorentz invariance amounts to testing special relativity, a theory with a vast number of existing tests. Nevertheless, attempts to look for quantum gravity require even more precise tests. The modern tests include looking for directional variations in the speed of light (called "clock anisotropy tests") and new forms of the Michelson-Morley experiment. The anisotropy measures less than one part in 10^−20.
Testing local positional invariance divides in to tests in space and in time. Space-based tests use measurements of the gravitational redshift, the classic is the Pound–Rebka experiment in the 1960s. The most precise measurement was done in 1976 by flying a hydrogen maser and comparing it to one on the ground. The Global positioning system requires compensation for this redshift to give accurate position values.
Time-based tests search for variation of dimensionless constants and mass ratios. For example, Webb et al. reported detection of variation (at the 10^−5 level) of the fine-structure constant from measurements of distant quasars. Other researchers dispute these findings.
The present best limits on the variation of the fundamental constants have mainly been set by studying the naturally occurring Oklo natural nuclear fission reactor, where nuclear reactions similar to ones we observe today have been shown to have occurred underground approximately two billion years ago. These reactions are extremely sensitive to the values of the fundamental constants.
Tests of the strong equivalence principle.
The strong equivalence principle can be tested by 1) finding orbital variations in massive bodies (Sun-Earth-Moon), 2) variations in the gravitational constant ("G") depending on nearby sources of gravity or on motion, or 3) searching for a variation of Newton's gravitational constant over the life of the universe
Orbital variations due to gravitational self-energy should cause a "polarization" of solar system orbits called the Nordtvedt effect. This effect has been sensitively tested by the Lunar Laser Ranging Experiment. Up to the limit of one part in 10^13, there is no Nordtvedt effect.
A tight bound on the effect of nearby gravitational fields on the strong equivalence principle comes from modeling the orbits of binary stars and comparing the results to pulsar timing data. In 2014, astronomers discovered a stellar triple system containing a millisecond pulsar PSR J0337+1715 and two white dwarfs orbiting it. The system provided them a chance to test the strong equivalence principle in a strong gravitational field with high accuracy.
Most alternative theories of gravity predict a change in the gravity constant over time. Studies of Big Bang nucleosynthesis, analysis of pulsars, and the lunar laser ranging data have shown that "G" cannot have varied by more than 10% since the creation of the universe. The best data comes from studies of the ephemeris of Mars, based on three successive NASA missions, Mars Global Surveyor, Mars Odyssey, and Mars Reconnaissance Orbiter.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "M_1"
},
{
"math_id": 1,
"text": "M_0"
},
{
"math_id": 2,
"text": "F_1 = \\frac{M_0^\\mathrm{act} M_1^\\mathrm{pass}}{r^2}"
},
{
"math_id": 3,
"text": "F_2 = \\frac{M_0^\\mathrm{act} M_2^\\mathrm{pass}}{r^2}"
},
{
"math_id": 4,
"text": "F = m^\\mathrm{inert} a"
},
{
"math_id": 5,
"text": "m_1"
},
{
"math_id": 6,
"text": "m_2"
},
{
"math_id": 7,
"text": "r"
},
{
"math_id": 8,
"text": "m_0"
},
{
"math_id": 9,
"text": "a_1 = \\frac{F_1}{m_1^\\mathrm{inert}} = a_2 = \\frac{F_2}{m_2^\\mathrm{inert}}"
},
{
"math_id": 10,
"text": "\\frac{M_0^\\mathrm{act} M_1^\\mathrm{pass}}{r^2 m_1^\\mathrm{inert}} = \\frac{M_0^\\mathrm{act} M_2^\\mathrm{pass}}{r^2 m_2^\\mathrm{inert}}"
},
{
"math_id": 11,
"text": "\\frac{M_1^\\mathrm{pass}}{m_1^\\mathrm{inert}} = \\frac{M_2^\\mathrm{pass}}{m_2^\\mathrm{inert}}"
},
{
"math_id": 12,
"text": "\\eta(A,B)"
},
{
"math_id": 13,
"text": "\\eta(A,B)=2\\frac{ \\left(\\frac{m_{\\textrm pass}}{m_{\\textrm inert}}\\right)_A-\\left(\\frac{m_{\\textrm pass}}{m_{\\textrm inert}}\\right)_B }{\\left(\\frac{m_{\\textrm pass}}{m_{\\textrm inert}}\\right)_A+\\left(\\frac{m_{\\textrm pass}}{m_{\\textrm inert}}\\right)_B}."
},
{
"math_id": 14,
"text": "F_0 = \\frac{M_1^\\mathrm{act} M_0^\\mathrm{pass}}{r^2}"
},
{
"math_id": 15,
"text": "\\frac{M_0^\\mathrm{act}}{M_0^\\mathrm{pass}} = \\frac{M_1^\\mathrm{act}}{M_1^\\mathrm{pass}}"
},
{
"math_id": 16,
"text": "S_{0,1} = \\frac{M_0^\\mathrm{act}}{M_0^\\mathrm{pass}} - \\frac{M_1^\\mathrm{act}}{M_1^\\mathrm{pass}}"
}
] | https://en.wikipedia.org/wiki?curid=857235 |
857255 | Workforce | Labor pool in employment
In macroeconomics, the labor force is the sum of those either working (i.e., the employed) or looking for work (i.e., the unemployed):
formula_0
Those neither working in the marketplace nor looking for work are out of the labor force.
The sum of the labor force and those out of the labor force gives the noninstitutional civilian population, that is, the number of people who (1) work (i.e., the employed), (2) can work but don't, although they are looking for a job (i.e., the unemployed), or (3) can work but don't, and are not looking for a job (i.e., out of the labor force). Stated otherwise, the noninstitutional civilian population is the total population minus people who cannot work (children, the elderly, soldiers, and the incarcerated). The noninstitutional civilian population is the number of people potentially available for civilian employment.
formula_1.
The labor force participation rate is defined as the ratio of the labor force to the noninstitutional civilian population.
formula_2.
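For illustration, a minimal Python computation with invented figures (in millions):

# Illustrative numbers, not actual statistics.
employed = 160.0
unemployed = 6.0
out_of_labor_force = 100.0

labor_force = employed + unemployed
noninstitutional = labor_force + out_of_labor_force
participation_rate = labor_force / noninstitutional
print(f"{participation_rate:.1%}")  # 62.4%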
Within a single company, the corresponding value may be labelled its "Workforce in Place".
Formal and informal.
Formal labour is any sort of employment that is structured and paid in a formal way. Workers are paid formally, through mechanisms such as paper payrolls, electronic cards, and the like. Unlike the informal sector of the economy, formal labour within a country contributes to that country's gross national product. Informal labour is labour that falls short of being a formal arrangement in law or in practice. Labour may also be inherited, formally or informally, as when an employee of working age passes work on to his or her children. Informal labour can be paid or unpaid and it is always unstructured and unregulated. Formal employment is more reliable than informal employment. Generally, the former yields higher income and greater benefits and securities for both men and women.
Informal labour.
The contribution of informal labourers is immense. Informal labour is expanding globally, most significantly in developing countries. According to a study done by Jacques Charmes, in the year 2000 informal labour made up 57% of non-agricultural employment, 40% of urban employment, and 83% of the new jobs in Latin America. That same year, informal labour made up 78% of non-agricultural employment, 61% of urban employment, and 93% of the new jobs in Africa. Particularly after an economic crisis, labourers tend to shift from the formal sector to the informal sector. This trend was seen after the Asian economic crisis which began in 1997.
Informal labour and gender.
Gender is frequently associated with informal labour. Women are employed more often informally than they are formally, and informal labour is an overall larger source of employment for females than it is for males. Women frequent the informal sector of the economy through occupations like home-based workers and street vendors. The Penguin Atlas of Women in the World shows that in the 1990s, 81% of women in Benin were street vendors, 55% in Guatemala, 44% in Mexico, 33% in Kenya, and 14% in India. Overall, 60% of women workers in the developing world are employed in the informal sector.
The specific percentages are 84% and 58% for women in Sub-Saharan Africa and Latin America respectively. The percentages for men in both of these areas of the world are lower, amounting to 63% and 48% respectively. In Asia, 65% of women workers and 65% of men workers are employed in the informal sector. Globally, a large percentage of women that are formally employed also work in the informal sector behind the scenes. These women make up the hidden work force.
According to a 2021 FAO study, currently, 85 per cent of economic activity in Africa is conducted in the informal sector where women account for nearly 90 per cent of the informal labour force. According to the ILO's 2016 employment analysis, 64 per cent of informal employment is in agriculture (relative to industry and services) in sub-Saharan Africa. Women have higher rates of informal employment than men with 92 per cent of women workers in informal employment versus 86 per cent of men.
Formal and informal labour can be divided into the subcategories of agricultural work and non-agricultural work. Martha Chen "et al." believe these four categories of labour are closely related to one another. A majority of agricultural work is informal, which the Penguin Atlas for Women in the World defines as unregistered or unstructured. Non-agricultural work can also be informal. According to Martha Chen "et al.", informal labour makes up 48% of non-agricultural work in North Africa, 51% in Latin America, 65% in Asia, and 72% in Sub-Saharan Africa.
Agriculture and informal economic activity are among some of the most important sources of livelihood for women. Women are estimated to account for approximately 70 per cent of informal cross-border traders and are also prevalent among owners of micro, small, or medium-sized enterprises (MSMEs). However, MSMEs are often more vulnerable to market shocks and market disruptions, and for women-owned MSMEs this is often compounded by their lack of access to credit and financial liquidity compared to larger businesses.
Paid and unpaid.
Paid and unpaid work are also closely related to formal and informal labour. Some informal work is unpaid, or paid under the table. Unpaid work can be work that is done at home to sustain a family, like child care work, or actual habitual daily labour that is not monetarily rewarded, like working the fields. Unpaid workers have zero earnings, and although their work is valuable, it is hard to estimate its true value; how to value it remains a matter of debate. Men and women tend to work in different areas of the economy, regardless of whether their work is paid or unpaid. Women focus on the service sector, while men focus on the industrial sector.
Unpaid work and gender.
Women usually work fewer hours in income generating jobs than men do. Often it is housework that is unpaid. Worldwide, women and girls are responsible for a great amount of household work.
The Penguin Atlas of Women in the World, published in 2008, stated that in Madagascar, women spend 20 hours per week on housework, while men spend only two. In Mexico, women spend 33 hours and men spend 5 hours. In Mongolia the housework hours amount to 27 and 12 for women and men respectively. In Spain, women spend 26 hours on housework and men spend 4 hours. Only in the Netherlands do men spend 10% more time than women do on activities within the home or for the household.
The Penguin Atlas of Women in the World also stated that in developing countries, women and girls spend a significant amount of time fetching water for the week, while men do not. For example, in Malawi women spend 6.3 hours per week fetching water, while men spend 43 minutes. Girls in Malawi spend 3.3 hours per week fetching water, and boys spend 1.1 hours. Even if women and men both spend time on household work and other unpaid activities, this work is also gendered.
Sick leave and gender.
In the United Kingdom in 2014, two-thirds of workers on long-term sick leave were women, despite women only constituting half of the workforce, even after excluding maternity leave.
Globalisation of the labour market.
The global supply of labor almost doubled in absolute numbers between the 1980s and early 2000s, with half of that growth coming from Asia. At the same time, the rate at which new workers entered the workforce in the Western world began to decline. The growing pool of global labor is accessed by employers in more advanced economies through various methods, including imports of goods, offshoring of production, and immigration. Global labor arbitrage, the practice of accessing the lowest-cost workers from all parts of the world, is partly a result of this enormous growth in the workforce. While most of the absolute increase in this global labor supply consisted of less-educated workers (those without higher education), the relative supply of workers with higher education increased by about 50 percent during the same period. From 1980 to 2010, the global workforce grew from 1.2 to 2.9 billion people. According to a 2012 report by the McKinsey Global Institute, this was caused mostly by developing nations, where there was a "farm to factory" transition. Non-farming jobs grew from 54 percent in 1980 to almost 73 percent in 2010. This industrialization took an estimated 620 million people out of poverty and contributed to the economic development of China, India and others.
Under the "old" international division of labor, until around 1970, underdeveloped areas were incorporated into the world economy principally as suppliers of minerals and agricultural commodities. However, as developing economies are merged into the world economy, more production takes place in these economies. This has led to a trend of transference, or what is also known as the "global industrial shift ", in which production processes are relocated from developed countries (such as the US, European countries, and Japan) to developing countries in Asia (such as China, Vietnam, and India), Mexico and Central America. This is because companies search for the cheapest locations to manufacture and assemble components, so low-cost labor-intensive parts of the manufacturing process are shifted to the developing world where costs are substantially lower.
Manufacturing processes are not the only ones shifted to the developing world. The growth of offshore outsourcing of IT-enabled services (such as offshore custom software development and business process outsourcing) is linked to the availability of large amounts of reliable and affordable communication infrastructure following the telecommunication and Internet expansion of the late 1990s.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Sources.
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 (license statement/permission). Text taken from "Seizing the opportunities of the African Continental Free Trade Area for the economic empowerment of women in agriculture", FAO, FAO. | [
{
"math_id": 0,
"text": "\\text{Labour force} = \\text{Employed} + \\text{Unemployed}"
},
{
"math_id": 1,
"text": "\n\\begin{align}\n\\text{Noninstitutional civilian population} &= \\text{Labor force} + \\text{Out of the labor force} \\\\\n &= \\text{Employed} + \\text{Unemployed} + \\text{Out of the labor force} \\\\\n &= \\text{Total Population} - \\text{People who can not work}\n\\end{align}\n"
},
{
"math_id": 2,
"text": "\\text{Labor force participation rate} = \\dfrac{\\text{Labor force}}{\\text{Noninstitutional civilian population}}"
}
] | https://en.wikipedia.org/wiki?curid=857255 |
85746 | Stoma | In plants, a variable pore between paired guard cells
In botany, a stoma (pl.: stomata, from Greek "στόμα", "mouth"), also called a stomate (pl.: stomates), is a pore found in the epidermis of leaves, stems, and other organs, that controls the rate of gas exchange between the internal air spaces of the leaf and the atmosphere. The pore is bordered by a pair of specialized parenchyma cells known as guard cells that regulate the size of the stomatal opening.
The term is usually used collectively to refer to the entire stomatal complex, consisting of the paired guard cells and the pore itself, which is referred to as the stomatal aperture. Air, containing oxygen, which is used in respiration, and carbon dioxide, which is used in photosynthesis, passes through stomata by gaseous diffusion. Water vapour diffuses through the stomata into the atmosphere as part of a process called transpiration.
Stomata are present in the sporophyte generation of the vast majority of land plants, with the exception of liverworts, as well as some mosses and hornworts. In vascular plants the number, size and distribution of stomata varies widely. Dicotyledons usually have more stomata on the lower surface of the leaves than the upper surface. Monocotyledons such as onion, oat and maize may have about the same number of stomata on both leaf surfaces. In plants with floating leaves, stomata may be found only on the upper epidermis and submerged leaves may lack stomata entirely. Most tree species have stomata only on the lower leaf surface. Leaves with stomata on both the upper and lower leaf surfaces are called "amphistomatous" leaves; leaves with stomata only on the lower surface are "hypostomatous", and leaves with stomata only on the upper surface are epistomatous or "hyperstomatous". Size varies across species, with end-to-end lengths ranging from 10 to 80 μm and width ranging from a few to 50 μm.
Function.
CO2 gain and water loss.
Carbon dioxide, a key reactant in photosynthesis, is present in the atmosphere at a concentration of about 400 ppm. Most plants require the stomata to be open during daytime. The air spaces in the leaf are saturated with water vapour, which exits the leaf through the stomata in a process known as transpiration. Therefore, plants cannot gain carbon dioxide without simultaneously losing water vapour.
Alternative approaches.
Ordinarily, carbon dioxide is fixed to ribulose 1,5-bisphosphate (RuBP) by the enzyme RuBisCO in mesophyll cells exposed directly to the air spaces inside the leaf. This exacerbates the transpiration problem for two reasons: first, RuBisCO has a relatively low affinity for carbon dioxide, and second, it fixes oxygen to RuBP, wasting energy and carbon in a process called photorespiration. For both of these reasons, RuBisCO needs high carbon dioxide concentrations, which means wide stomatal apertures and, as a consequence, high water loss.
Narrower stomatal apertures can be used in conjunction with an intermediary molecule with a high carbon dioxide affinity, phosphoenolpyruvate carboxylase (PEPCase). Retrieving the products of carbon fixation from PEPCase is an energy-intensive process, however. As a result, the PEPCase alternative is preferable only where water is limiting but light is plentiful, or where high temperatures increase the solubility of oxygen relative to that of carbon dioxide, magnifying RuBisCO's oxygenation problem.
C.A.M. plants.
A group of mostly desert plants called "C.A.M." plants (crassulacean acid metabolism, after the family Crassulaceae, which includes the species in which the CAM process was first discovered) open their stomata at night (when water evaporates more slowly from leaves for a given degree of stomatal opening), use PEPCase to fix carbon dioxide and store the products in large vacuoles. The following day, they close their stomata and release the carbon dioxide fixed the previous night into the presence of RuBisCO. This saturates RuBisCO with carbon dioxide, allowing minimal photorespiration. This approach, however, is severely limited by the capacity to store fixed carbon in the vacuoles, so it is preferable only when water is severely limited.
Opening and closing.
However, most plants do not have CAM and must therefore open and close their stomata during the daytime, in response to changing conditions, such as light intensity, humidity, and carbon dioxide concentration. When conditions are conducive to stomatal opening (e.g., high light intensity and high humidity), a proton pump drives protons (H+) out of the guard cells. This means that the cells' electrical potential becomes increasingly negative. The negative potential opens potassium voltage-gated channels and so an uptake of potassium ions (K+) occurs. To maintain this internal negative voltage so that entry of potassium ions does not stop, negative ions balance the influx of potassium. In some cases, chloride ions enter, while in other plants the organic ion malate is produced in guard cells. This increase in solute concentration lowers the water potential inside the cell, which results in the diffusion of water into the cell through osmosis. This increases the cell's volume and turgor pressure. Rings of cellulose microfibrils prevent the guard cells from swelling in width, so the extra turgor pressure can only elongate them. Because their ends are held firmly in place by surrounding epidermal cells, the two guard cells lengthen by bowing apart from one another, creating an open pore through which gas can diffuse.
When the roots begin to sense a water shortage in the soil, abscisic acid (ABA) is released. ABA binds to receptor proteins in the guard cells' plasma membrane and cytosol, which first raises the pH of the cytosol of the cells and causes the concentration of free Ca2+ to increase in the cytosol due to influx from outside the cell and release of Ca2+ from internal stores such as the endoplasmic reticulum and vacuoles. This causes the chloride (Cl−) and organic ions to exit the cells. Second, this stops the uptake of any further K+ into the cells and, subsequently, causes the loss of K+. The loss of these solutes causes an increase in water potential, which results in the diffusion of water back out of the cell by osmosis. This leaves the cells plasmolysed, which results in the closing of the stomatal pores.
Guard cells contain more chloroplasts than the other epidermal cells from which they are derived. The function of these chloroplasts is controversial.
Inferring stomatal behavior from gas exchange.
The degree of stomatal resistance can be determined by measuring the gas exchange of a leaf. The transpiration rate is dependent on the diffusion resistance provided by the stomatal pores and also on the humidity gradient between the leaf's internal air spaces and the outside air. Stomatal resistance (or its inverse, stomatal conductance) can therefore be calculated from the transpiration rate and humidity gradient. This allows scientists to investigate how stomata respond to changes in environmental conditions, such as light intensity and concentrations of gases such as water vapor, carbon dioxide, and ozone. Evaporation ("E") can be calculated as
formula_0
where "e"i and "e"a are the partial pressures of water in the leaf and in the ambient air respectively, "P" is atmospheric pressure, and "r" is stomatal resistance.
The inverse of "r" is conductance to water vapor ("g"), so the equation can be rearranged to
formula_1
and solved for "g":
formula_2
Photosynthetic CO2 assimilation ("A") can be calculated from
formula_3
where "C"a and "C"i are the atmospheric and sub-stomatal partial pressures of CO2 respectively. The rate of evaporation from a leaf can be determined using a photosynthesis system. These scientific instruments measure the amount of water vapour leaving the leaf and the vapor pressure of the ambient air. Photosynthetic systems may calculate water use efficiency ("A"/"E"), "g", intrinsic water use efficiency ("A"/"g"), and "C"i. These scientific instruments are commonly used by plant physiologists to measure CO2 uptake and thus measure photosynthetic rate.
Evolution.
There is little evidence of the evolution of stomata in the fossil record, but they had appeared in land plants by the middle of the Silurian period. They may have evolved by the modification of conceptacles from plants' alga-like ancestors.
However, the evolution of stomata must have happened at the same time as the waxy cuticle was evolving – these two traits together constituted a major advantage for early terrestrial plants.
Development.
There are three major epidermal cell types which all ultimately derive from the outermost (L1) tissue layer of the shoot apical meristem, called protodermal cells: trichomes, pavement cells and guard cells, all of which are arranged in a non-random fashion.
An asymmetrical cell division occurs in protodermal cells resulting in one large cell that is fated to become a pavement cell and a smaller cell called a meristemoid that will eventually differentiate into the guard cells that surround a stoma. This meristemoid then divides asymmetrically one to three times before differentiating into a guard mother cell. The guard mother cell then makes one symmetrical division, which forms a pair of guard cells. Cell division is inhibited in some cells so there is always at least one cell between stomata.
Stomatal patterning is controlled by the interaction of many signal transduction components such as "EPF" (Epidermal Patterning Factor), "ERL" (ERecta Like) and "YODA" (a putative MAP kinase kinase kinase). Mutations in any one of the genes which encode these factors may alter the development of stomata in the epidermis. For example, a mutation in one gene causes more stomata that are clustered together, hence it is called Too Many Mouths ("TMM"), whereas disruption of the "SPCH" (SPeeCHless) gene prevents stomatal development altogether. Inhibition of stomatal production can occur by the activation of EPF1, which activates TMM/ERL, which together activate YODA. YODA inhibits SPCH, causing SPCH activity to decrease, preventing the asymmetrical cell division that initiates stomata formation. Stomatal development is also coordinated by the cellular peptide signal called stomagen, which signals the activation of SPCH, resulting in an increased number of stomata.
Environmental and hormonal factors can affect stomatal development. Light increases stomatal development in plants, while plants grown in the dark have fewer stomata. Auxin represses stomatal development by affecting their development at the receptor level, as with the ERL and TMM receptors. However, a low concentration of auxin allows for equal division of a guard mother cell and increases the chance of producing guard cells.
Most angiosperm trees have stomata only on their lower leaf surface. Poplars and willows have them on both surfaces. When leaves develop stomata on both leaf surfaces, the stomata on the lower surface tend to be larger and more numerous, but there can be a great degree of variation in size and frequency among species and genotypes. White ash and white birch leaves have fewer but larger stomata, whereas sugar maple and silver maple have smaller stomata that are more numerous.
Types.
Different classifications of stoma types exist. One that is widely used is based on the types that Julien Joseph Vesque introduced in 1889; it was further developed by Metcalfe and Chalk, and later complemented by other authors. It is based on the size, shape and arrangement of the subsidiary cells that surround the two guard cells.
They distinguish for dicots:
In monocots, several different types of stomata occur such as:
In ferns, four different types are distinguished:
Stomatal crypts.
Stomatal crypts are sunken areas of the leaf epidermis which form a chamber-like structure that contains one or more stomata and sometimes trichomes or accumulations of wax. Very pronounced stomatal crypts can be an adaptation to drought and dry climate conditions. However, dry climates are not the only places where they can be found. The following plants are examples of species with stomatal crypts or antechambers: "Nerium oleander", conifers, "Hakea", and "Drimys winteri", a species of plant found in the cloud forest.
Stomata as pathogenic pathways.
Stomata are holes in the leaf through which pathogens can enter unchallenged. However, stomata can sense the presence of some, if not all, pathogens and close in response. Even so, pathogenic bacteria applied to "Arabidopsis" plant leaves can release the chemical coronatine, which induces the stomata to reopen.
Stomata and climate change.
Response of stomata to environmental factors.
Photosynthesis, plant water transport (xylem) and gas exchange are regulated by stomatal function, which is therefore important to the functioning of plants.
Stomata are responsive to light, with blue light being almost 10 times as effective as red light in causing a stomatal response. Research suggests this is because the light response of stomata to blue light is independent of other leaf components like chlorophyll. Guard cell protoplasts swell under blue light provided there is sufficient availability of potassium. Multiple studies have found that increasing potassium concentrations may increase stomatal opening in the morning, before the photosynthesis process starts, but that later in the day sucrose plays a larger role in regulating stomatal opening. Zeaxanthin in guard cells acts as a blue light photoreceptor which mediates the stomatal opening. The effect of blue light on guard cells is reversed by green light, which isomerizes zeaxanthin.
Stomatal density and aperture (length of stomata) vary under a number of environmental factors such as atmospheric CO2 concentration, light intensity, air temperature and photoperiod (daytime duration).
Decreasing stomatal density is one way plants have responded to the increase in concentration of atmospheric CO2 ([CO2]atm). Although the response to changes in [CO2]atm is the least understood mechanistically, this stomatal response has begun to plateau and is soon expected to impact transpiration and photosynthesis processes in plants.
Drought inhibits stomatal opening, but research on soybeans suggests moderate drought does not have a significant effect on stomatal closure of its leaves. There are different mechanisms of stomatal closure. Low humidity stresses guard cells, causing turgor loss, termed hydropassive closure. Hydroactive closure, by contrast, affects the whole leaf under drought stress and is believed to be triggered most likely by abscisic acid.
Future adaptations during climate change.
It is expected that [CO2]atm will reach 500–1000 ppm by 2100. For 96% of the past 400,000 years, CO2 levels were below 280 ppm. From this figure, it is highly probable that genotypes of today's plants have diverged from their pre-industrial relatives.
The gene "HIC" (high carbon dioxide) encodes a negative regulator for the development of stomata in plants. Research into the "HIC" gene using" Arabidopsis thaliana" found no increase of stomatal development in the dominant allele, but in the ‘wild type’ recessive allele showed a large increase, both in response to rising CO2 levels in the atmosphere. These studies imply the plants response to changing CO2 levels is largely controlled by genetics.
Agricultural implications.
The CO2 fertiliser effect has been greatly overestimated during Free-Air Carbon dioxide Enrichment (FACE) experiments, where results show that increased CO2 levels in the atmosphere enhance photosynthesis, reduce transpiration, and increase water use efficiency (WUE). Increased biomass is one of the effects, with simulations from experiments predicting a 5–20% increase in crop yields at 550 ppm of CO2. Rates of leaf photosynthesis were shown to increase by 30–50% in C3 plants, and 10–25% in C4 plants, under doubled CO2 levels. The existence of a feedback mechanism results in a phenotypic plasticity in response to [CO2]atm that may have been an adaptive trait in the evolution of plant respiration and function.
Predicting how stomata perform during adaptation is useful for understanding the productivity of plant systems for both natural and agricultural systems. Plant breeders and farmers are beginning to work together, using evolutionary and participatory plant breeding to find the best-suited species, such as heat- and drought-resistant crop varieties that could naturally adapt to the change, in the face of food security challenges.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E = \\frac{e_\\text{i} - e_\\text{a}}{Pr},"
},
{
"math_id": 1,
"text": "E = (e_\\text{i} - e_\\text{a})g / P"
},
{
"math_id": 2,
"text": "g = \\frac{EP}{e_\\text{i} - e_\\text{a}}."
},
{
"math_id": 3,
"text": "A = \\frac{(C_\\text{a} - C_\\text{i})g}{1.6P},"
}
] | https://en.wikipedia.org/wiki?curid=85746 |
85747 | Reduced mass | Effective inertial mass
In physics, reduced mass is a measure of the effective inertial mass of a system with two or more particles when the particles are interacting with each other. Reduced mass allows the two-body problem to be solved as if it were a one-body problem. Note, however, that the mass determining the gravitational force is "not" reduced. In the computation, one mass "can" be replaced with the reduced mass, if this is compensated by replacing the other mass with the sum of both masses. The reduced mass is frequently denoted by formula_0 (mu), although the standard gravitational parameter is also denoted by formula_0 (as are a number of other physical quantities). It has the dimensions of mass, and SI unit kg.
Reduced mass is particularly useful in classical mechanics.
Equation.
Given two bodies, one with mass "m"1 and the other with mass "m"2, the equivalent one-body problem, with the position of one body with respect to the other as the unknown, is that of a single body of mass
formula_1
where the force on this mass is given by the force between the two bodies.
Properties.
The reduced mass is always less than or equal to the mass of each body:
formula_2
and has the reciprocal additive property:
formula_3
which by rearrangement shows that the reduced mass is equivalent to half of the harmonic mean of the two masses.
In the special case that formula_4:
formula_5
If formula_6, then formula_7.
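A minimal Python sketch of the formula and the properties above; the masses are arbitrary example values.

def reduced_mass(m1, m2):
    # mu = m1 * m2 / (m1 + m2)
    return m1 * m2 / (m1 + m2)

print(reduced_mass(2.0, 2.0))   # equal masses: mu = m/2 = 1.0
print(reduced_mass(1e9, 3.0))   # m1 >> m2: mu approaches the smaller mass
# mu never exceeds either mass, and 1/mu = 1/m1 + 1/m2:
m1, m2 = 5.0, 7.0
mu = reduced_mass(m1, m2)
assert mu <= min(m1, m2)
assert abs(1 / mu - (1 / m1 + 1 / m2)) < 1e-12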
Derivation.
The equation can be derived as follows.
Newtonian mechanics.
Using Newton's second law, the force exerted by a body (particle 2) on another body (particle 1) is:
formula_8
The force exerted by particle 1 on particle 2 is:
formula_9
According to Newton's third law, the force that particle 2 exerts on particle 1 is equal and opposite to the force that particle 1 exerts on particle 2:
formula_10
Therefore:
formula_11
The relative acceleration arel between the two bodies is given by:
formula_12
Note that (since the derivative is a linear operator) the relative acceleration formula_13 is equal to the acceleration of the separation formula_14 between the two particles.
formula_15
This simplifies the description of the system to one force (since formula_10), one coordinate formula_14, and one mass formula_16. Thus we have reduced our problem to a single degree of freedom, and we can conclude that particle 1 moves with respect to the position of particle 2 as a single particle of mass equal to the reduced mass, formula_16.
Lagrangian mechanics.
Alternatively, a Lagrangian description of the two-body problem gives a Lagrangian of
formula_17
where formula_18 is the position vector of mass formula_19 (of particle "formula_20"). The potential energy "V" is a function only of the absolute distance between the particles. If we define
formula_21
and let the centre of mass coincide with our origin in this reference frame, i.e.
formula_22,
then
formula_23
Then substituting above gives a new Lagrangian
formula_24
where
formula_25
is the reduced mass. Thus we have reduced the two-body problem to that of one body.
Applications.
Reduced mass can be used in a multitude of two-body problems, where classical mechanics is applicable.
Moment of inertia of two point masses in a line.
In a system with two point masses formula_26 and formula_27 such that they are co-linear, the two distances formula_28 and formula_29 to the rotation axis may be found with
formula_30
formula_31
where formula_32 is the sum of both distances formula_33.
This holds for a rotation around the center of mass.
The moment of inertia around this axis can be then simplified to
formula_34
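A quick numeric check of this simplification, with arbitrary example masses and separation:

m1, m2, R = 3.0, 5.0, 2.0
r1 = R * m2 / (m1 + m2)   # distance of m1 from the rotation axis
r2 = R * m1 / (m1 + m2)   # distance of m2 from the rotation axis
mu = m1 * m2 / (m1 + m2)
I_direct = m1 * r1**2 + m2 * r2**2
assert abs(I_direct - mu * R**2) < 1e-12   # I = mu * R^2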
Collisions of particles.
In a collision with a coefficient of restitution "e", the change in kinetic energy can be written as
formula_35,
where "v"rel is the relative velocity of the bodies before collision.
For typical applications in nuclear physics, where one particle's mass is much larger than the other, the reduced mass can be approximated as the smaller mass of the system. The limit of the reduced mass formula as one mass goes to infinity is the smaller mass, thus this approximation is used to ease calculations, especially when the larger particle's exact mass is not known.
Motion of two massive bodies under their gravitational attraction.
In the case of the gravitational potential energy
formula_36
we find that the position of the first body with respect to the second is governed by the same differential equation as the position of a body with the reduced mass orbiting a body with a mass equal to the sum of the two masses, because
formula_37
Non-relativistic quantum mechanics.
Consider the electron (mass "m"e) and proton (mass "m"p) in the hydrogen atom. They orbit each other about a common centre of mass, a two-body problem. To analyze the motion of the electron, a one-body problem, the reduced mass replaces the electron mass
formula_38
and the proton mass becomes the sum of the two masses
formula_39
This idea is used to set up the Schrödinger equation for the hydrogen atom.
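As a numeric illustration (with rounded CODATA-style values for the two masses), the electron-proton reduced mass is only about 0.05% below the bare electron mass:

m_e = 9.1093837e-31    # electron mass, kg
m_p = 1.67262192e-27   # proton mass, kg
mu = m_e * m_p / (m_e + m_p)
print(mu / m_e)        # ~0.999456, i.e. a ~0.054% correction to the electron mass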
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mu "
},
{
"math_id": 1,
"text": "\\mu = m_1 \\parallel m_2 = \\cfrac{1}{\\cfrac{1}{m_1}+\\cfrac{1}{m_2}} = \\cfrac{m_1 m_2}{m_1 + m_2},"
},
{
"math_id": 2,
"text": "\\mu \\leq m_1, \\quad \\mu \\leq m_2"
},
{
"math_id": 3,
"text": "\\frac{1}{\\mu} = \\frac{1}{m_1} + \\frac{1}{m_2}"
},
{
"math_id": 4,
"text": "m_1 = m_2"
},
{
"math_id": 5,
"text": "{\\mu} = \\frac{m_1}{2} = \\frac{m_2}{2}"
},
{
"math_id": 6,
"text": "m_1 \\gg m_2"
},
{
"math_id": 7,
"text": "\\mu \\approx m_2"
},
{
"math_id": 8,
"text": "\\mathbf{F}_{12} = m_1 \\mathbf{a}_1"
},
{
"math_id": 9,
"text": "\\mathbf{F}_{21} = m_2 \\mathbf{a}_2"
},
{
"math_id": 10,
"text": "\\mathbf{F}_{12} = - \\mathbf{F}_{21}"
},
{
"math_id": 11,
"text": "m_1 \\mathbf{a}_1 = - m_2 \\mathbf{a}_2 \\;\\; \\Rightarrow \\;\\; \\mathbf{a}_2=-{m_1 \\over m_2} \\mathbf{a}_1"
},
{
"math_id": 12,
"text": "\\mathbf{a}_{\\rm rel} := \\mathbf{a}_1-\\mathbf{a}_2 = \\left(1+\\frac{m_1}{m_2}\\right) \\mathbf{a}_1 = \\frac{m_2+m_1}{m_1 m_2} m_1 \\mathbf{a}_1 = \\frac{\\mathbf{F}_{12}}{\\mu}"
},
{
"math_id": 13,
"text": "\\mathbf{a}_{\\rm rel}"
},
{
"math_id": 14,
"text": "\\mathbf{x}_{\\rm rel}"
},
{
"math_id": 15,
"text": "\\mathbf{a}_{\\rm rel} = \\mathbf{a}_1-\\mathbf{a}_2 = \\frac{d^2\\mathbf{x}_1}{dt^2} - \\frac{d^2\\mathbf{x}_2}{dt^2} = \\frac{d^2}{dt^2}(\\mathbf{x}_1 - \\mathbf{x}_2) = \\frac{d^2\\mathbf{x}_{\\rm rel}}{dt^2}"
},
{
"math_id": 16,
"text": "\\mu"
},
{
"math_id": 17,
"text": " \\mathcal{L} = {1 \\over 2} m_1 \\mathbf{\\dot{r}}_1^2 + {1 \\over 2} m_2 \\mathbf{\\dot{r}}_2^2 - V(| \\mathbf{r}_1 - \\mathbf{r}_2 | ) "
},
{
"math_id": 18,
"text": "{\\mathbf{r}}_{i}"
},
{
"math_id": 19,
"text": "m_{i}"
},
{
"math_id": 20,
"text": "i"
},
{
"math_id": 21,
"text": "\\mathbf{r} = \\mathbf{r}_1 - \\mathbf{r}_2 "
},
{
"math_id": 22,
"text": " m_1 \\mathbf{r}_1 + m_2 \\mathbf{r}_2 = 0 "
},
{
"math_id": 23,
"text": " \\mathbf{r}_1 = \\frac{m_2 \\mathbf{r}}{m_1 + m_2} , \\; \\mathbf{r}_2 = -\\frac{m_1 \\mathbf{r}}{m_1 + m_2}."
},
{
"math_id": 24,
"text": " \\mathcal{L} = {1 \\over 2}\\mu \\mathbf{\\dot{r}}^2 - V(r), "
},
{
"math_id": 25,
"text": "\\mu = \\frac{m_1 m_2}{m_1 + m_2} "
},
{
"math_id": 26,
"text": "m_1"
},
{
"math_id": 27,
"text": "m_2"
},
{
"math_id": 28,
"text": "r_1"
},
{
"math_id": 29,
"text": "r_2"
},
{
"math_id": 30,
"text": "r_1 = R \\frac{m_2 }{m_1+m_2}"
},
{
"math_id": 31,
"text": "r_2 = R \\frac{m_1 }{m_1+m_2}"
},
{
"math_id": 32,
"text": " R"
},
{
"math_id": 33,
"text": "R = r_1 + r_2 "
},
{
"math_id": 34,
"text": " I = m_1 r_1^2 + m_2 r_2^2 = R^2 \\frac{m_1 m_2^2}{(m_1+m_2)^2} + R^2 \\frac{m_1^2 m_2}{(m_1+m_2)^2} = \\mu R^2."
},
{
"math_id": 35,
"text": "\\Delta K = \\frac{1}{2}\\mu v^2_{\\rm rel}(e^2-1)"
},
{
"math_id": 36,
"text": "V(| \\mathbf{r}_1 - \\mathbf{r}_2 | ) = - \\frac{G m_1 m_2}{| \\mathbf{r}_1 - \\mathbf{r}_2 |} \\, ,"
},
{
"math_id": 37,
"text": "m_1 m_2 = (m_1+m_2) \\mu"
},
{
"math_id": 38,
"text": "m_\\text{e} \\rightarrow \\frac{m_\\text{e}m_\\text{p}}{m_\\text{e}+m_\\text{p}} "
},
{
"math_id": 39,
"text": "m_\\text{p} \\rightarrow m_\\text{e} + m_\\text{p} "
}
] | https://en.wikipedia.org/wiki?curid=85747 |
85752 | Lah number | Mathematical sequence
In mathematics, the (signed and unsigned) Lah numbers are coefficients expressing rising factorials in terms of falling factorials and vice versa. They were discovered by Ivo Lah in 1954. Explicitly, the unsigned Lah numbers formula_0 are given by the formula involving the binomial coefficient
formula_1
for formula_2.
Unsigned Lah numbers have an interesting meaning in combinatorics: they count the number of ways a set of "formula_3" elements can be partitioned into "formula_4" nonempty linearly ordered subsets. Lah numbers are related to Stirling numbers.
For formula_5, the Lah number formula_6 is equal to the factorial formula_7. In the interpretation above, the only partition of formula_8 into 1 set can have its set ordered in 6 ways:

formula_9

formula_10 is equal to 6, because there are six partitions of formula_8 into two ordered parts:

formula_11

formula_12 is always 1 because the only way to partition formula_13 into formula_14 non-empty subsets results in subsets of size 1, which can only be permuted in one way.
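For small "n" this combinatorial interpretation can be checked by brute force. The sketch below (function names are ours) cuts every permutation of an "n"-element set into "k" nonempty contiguous blocks, counts the distinct unordered collections of ordered blocks, and compares the count with the closed formula:

from itertools import permutations, combinations
from math import comb, factorial

def lah(n, k):
    # L(n, k) = C(n-1, k-1) * n! / k!
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

def lah_bruteforce(n, k):
    # Each partition into k ordered lists arises from cutting some permutation
    # into k nonempty contiguous blocks; collect each unordered result once.
    collections = set()
    for perm in permutations(range(n)):
        for cuts in combinations(range(1, n), k - 1):
            bounds = (0,) + cuts + (n,)
            blocks = frozenset(perm[bounds[i]:bounds[i + 1]] for i in range(k))
            collections.add(blocks)
    return len(collections)

assert lah_bruteforce(3, 1) == lah(3, 1) == 6
assert lah_bruteforce(3, 2) == lah(3, 2) == 6
assert all(lah_bruteforce(4, k) == lah(4, k) for k in range(1, 5))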
In the more recent literature, Karamata–Knuth style notation has taken over. Lah numbers are now often written as

formula_15
Table of values.
Below is a table of values for the Lah numbers:
The row sums are formula_16 (sequence in the OEIS).
Rising and falling factorials.
Let formula_17 represent the rising factorial formula_18 and let formula_19 represent the falling factorial formula_20. The Lah numbers are the coefficients that express each of these families of polynomials in terms of the other. Explicitly,

formula_21

and

formula_22

For example,

formula_23

and

formula_24
where the coefficients 6, 6, and 1 are exactly the Lah numbers formula_25, formula_26, and formula_27.
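Assuming SymPy is available, these two expansions can be verified symbolically:

import sympy as sp

x = sp.symbols('x')

def falling(m):          # (x)_m = x (x-1) ... (x-m+1)
    expr = sp.Integer(1)
    for j in range(m):
        expr *= (x - j)
    return expr

def rising(m):           # x^(m) = x (x+1) ... (x+m-1)
    expr = sp.Integer(1)
    for j in range(m):
        expr *= (x + j)
    return expr

assert sp.expand(rising(3) - (6*falling(1) + 6*falling(2) + falling(3))) == 0
assert sp.expand(falling(3) - (6*rising(1) - 6*rising(2) + rising(3))) == 0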
Identities and relations.
The Lah numbers satisfy a variety of identities and relations.
In Karamata–Knuth notation for Stirling numbers,

formula_28

where formula_29 are the Stirling numbers of the first kind and formula_30 are the Stirling numbers of the second kind.
formula_31
formula_32
formula_33, for formula_34.
Recurrence relations.
The Lah numbers satisfy the recurrence relations

formula_35

where formula_36, with δ the Kronecker delta, and formula_37 for all formula_38.
formula_39
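A short Python check of the recurrence against the closed formula, which also reproduces the row sums listed above:

from math import comb, factorial

def lah(n, k):
    if k == 0:
        return 1 if n == 0 else 0   # L(n, 0) = delta_n
    if k > n:
        return 0
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

for n in range(8):
    for k in range(1, n + 2):
        assert lah(n + 1, k) == (n + k) * lah(n, k) + lah(n, k - 1)

print([sum(lah(n, k) for k in range(n + 1)) for n in range(6)])
# -> [1, 1, 3, 13, 73, 501]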
Derivative of exp(1/"x").
The "n"-th derivative of the function formula_40 can be expressed with the Lah numbers, as followsformula_41For example,
formula_42
formula_43
formula_44
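Assuming SymPy is available, the formula can be checked against direct differentiation for small "n":

import sympy as sp
from math import comb, factorial

x = sp.symbols('x')

def lah(n, k):
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)

for n in range(1, 5):
    direct = sp.diff(sp.exp(1 / x), x, n)
    via_lah = (-1)**n * sum(lah(n, k) / x**(n + k)
                            for k in range(1, n + 1)) * sp.exp(1 / x)
    assert sp.simplify(direct - via_lah) == 0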
Link to Laguerre polynomials.
Generalized Laguerre polynomials formula_45 are linked to Lah numbers upon setting formula_46:

formula_47

This formula is the default Laguerre polynomial in the Umbral calculus convention.
Practical application.
In recent years, Lah numbers have been used in steganography for hiding data in images. Compared to alternatives such as DCT, DFT and DWT, this approach has a lower computational complexity, formula_48, for the calculation of their integer coefficients.
The Lah and Laguerre transforms naturally arise in the perturbative description of the chromatic dispersion.
In Lah-Laguerre optics, such an approach tremendously speeds up optimization problems. | [
{
"math_id": 0,
"text": "L(n, k)"
},
{
"math_id": 1,
"text": " L(n,k) = {n-1 \\choose k-1} \\frac{n!}{k!}"
},
{
"math_id": 2,
"text": "n \\geq k \\geq 1"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "n \\geq 1"
},
{
"math_id": 6,
"text": "L(n, 1)"
},
{
"math_id": 7,
"text": "n!"
},
{
"math_id": 8,
"text": "\\{1, 2, 3 \\}"
},
{
"math_id": 9,
"text": "\\{(1, 2, 3)\\}, \\{(1, 3, 2)\\}, \\{(2, 1, 3)\\}, \\{(2, 3, 1)\\}, \\{(3, 1, 2)\\}, \\{(3, 2, 1)\\}"
},
{
"math_id": 10,
"text": "L(3, 2)"
},
{
"math_id": 11,
"text": "\\{1, (2, 3) \\}, \\{1, (3, 2) \\}, \\{2, (1, 3) \\}, \\{2, (3, 1) \\}, \\{3, (1, 2) \\}, \\{3, (2, 1) \\}"
},
{
"math_id": 12,
"text": "L(n, n)"
},
{
"math_id": 13,
"text": "\\{1, 2, \\ldots, n\\}"
},
{
"math_id": 14,
"text": "n"
},
{
"math_id": 15,
"text": "L(n,k) = \\left\\lfloor {n \\atop k} \\right\\rfloor"
},
{
"math_id": 16,
"text": "1, 1, 3, 13, 73, 501, 4051, 37633, \\dots"
},
{
"math_id": 17,
"text": "x^{(n)}"
},
{
"math_id": 18,
"text": "x(x+1)(x+2) \\cdots (x+n-1)"
},
{
"math_id": 19,
"text": "(x)_n"
},
{
"math_id": 20,
"text": "x(x-1)(x-2) \\cdots (x-n+1)"
},
{
"math_id": 21,
"text": "x^{(n)} = \\sum_{k=0}^n L(n,k) (x)_k"
},
{
"math_id": 22,
"text": "(x)_n = \\sum_{k=0}^n (-1)^{n-k} L(n,k)x^{(k)}."
},
{
"math_id": 23,
"text": "x(x+1)(x+2) = {\\color{red}6}x + {\\color{red}6}x(x-1) + {\\color{red}1}x(x-1)(x-2)"
},
{
"math_id": 24,
"text": "x(x-1)(x-2) = {\\color{red}6}x - {\\color{red}6}x(x+1) + {\\color{red}1}x(x+1)(x+2),"
},
{
"math_id": 25,
"text": "L(3, 1)"
},
{
"math_id": 26,
"text": "L(3, 2)"
},
{
"math_id": 27,
"text": "L(3, 3)"
},
{
"math_id": 28,
"text": " L(n,k) = \\sum_{j=k}^n \\left[{n\\atop j}\\right] \\left\\{{j\\atop k}\\right\\}"
},
{
"math_id": 29,
"text": "\\left[{n\\atop j}\\right]"
},
{
"math_id": 30,
"text": "\\left\\{{j\\atop k}\\right\\}"
},
{
"math_id": 31,
"text": " L(n,k) = {n-1 \\choose k-1} \\frac{n!}{k!} = {n \\choose k} \\frac{(n-1)!}{(k-1)!} = {n \\choose k} {n-1 \\choose k-1} (n-k)!"
},
{
"math_id": 32,
"text": " L(n,k) = \\frac{n!(n-1)!}{k!(k-1)!}\\cdot\\frac{1}{(n-k)!} = \\left (\\frac{n!}{k!} \\right )^2\\frac{k}{n(n-k)!}"
},
{
"math_id": 33,
"text": " k(k+1) L(n,k+1) = (n-k) L(n,k)"
},
{
"math_id": 34,
"text": "k>0"
},
{
"math_id": 35,
"text": "\n\\begin{align} L(n+1,k) &= (n+k) L(n,k) + L(n,k-1) \\\\\n&= k(k+1) L(n, k+1) + 2k L(n, k) + L(n, k-1)\n\\end{align}\n"
},
{
"math_id": 36,
"text": "L(n,0)=\\delta_n"
},
{
"math_id": 37,
"text": "L(n,k)=0"
},
{
"math_id": 38,
"text": "k > n"
},
{
"math_id": 39,
"text": "\\sum_{n\\geq k} L(n,k)\\frac{x^n}{n!} = \\frac{1}{k!}\\left( \\frac{x}{1-x} \\right)^k"
},
{
"math_id": 40,
"text": "e^\\frac1{x}"
},
{
"math_id": 41,
"text": " \\frac{\\textrm d^n}{\\textrm dx^n} e^\\frac1x = (-1)^n \\sum_{k=1}^n \\frac{L(n,k)}{x^{n+k}} \\cdot e^\\frac1x."
},
{
"math_id": 42,
"text": " \\frac{\\textrm d}{\\textrm dx} e^\\frac1x = - \\frac{1}{x^2} \\cdot e^{\\frac1x}"
},
{
"math_id": 43,
"text": " \\frac{\\textrm d^2}{\\textrm dx^2}e^\\frac1{x} = \\frac{\\textrm d}{\\textrm dx} \\left(-\\frac1{x^2} e^{\\frac1x} \\right)= -\\frac{-2}{x^3} \\cdot e^{\\frac1x} - \\frac1{x^2} \\cdot \\frac{-1}{x^2} \\cdot e^{\\frac1x}= \\left(\\frac2{x^3} + \\frac1{x^4}\\right) \\cdot e^{\\frac1x}"
},
{
"math_id": 44,
"text": " \\frac{\\textrm d^3}{\\textrm dx^3} e^\\frac1{x} = \\frac{\\textrm d}{\\textrm dx} \\left( \\left(\\frac2{x^3} + \\frac1{x^4}\\right) \\cdot e^{\\frac1x} \\right) = \\left(\\frac{-6}{x^4} + \\frac{-4}{x^5}\\right) \\cdot e^{\\frac1x} + \\left(\\frac2{x^3} + \\frac1{x^4}\\right) \\cdot \\frac{-1}{x^2} \\cdot e^{\\frac1x} =-\\left(\\frac6{x^4} + \\frac6{x^5} + \\frac1{x^6}\\right) \\cdot e^{\\frac{1}{x}}"
},
{
"math_id": 45,
"text": "L^{(\\alpha)}_n(x)"
},
{
"math_id": 46,
"text": "\\alpha = -1"
},
{
"math_id": 47,
"text": " n! L_n^{(-1)}(x) =\\sum_{k=0}^n L(n,k) (-x)^k"
},
{
"math_id": 48,
"text": "O(n \\log n)"
}
] | https://en.wikipedia.org/wiki?curid=85752 |
8575327 | Action learning | Type of approach to problem solving
Action Learning is an approach to problem solving that involves taking action and reflecting upon the results. This method is purported to help improve the problem-solving process and simplify the solutions developed as a result. The theory of Action Learning and its epistemological position were originally developed by Reg Revans, who applied the method to support organizational and business development initiatives and improve on problem solving efforts.
Action Learning is effective in developing a number of individual leadership and team problem-solving skills, and has become a component in many corporate and organizational leadership development programs. The strategy is advertised as being different from the "one size fits all" curricula that are characteristic of many training and development programs.
Overview.
Action Learning is ideologically a cycle of "doing" and "reflecting" stages. In most forms of action learning, a coach is included and responsible for promoting and facilitating learning, as well as encouraging the team to be self-managing.
The Action Learning process includes:
History and Development.
The action learning approach was originated by Reg Revans. Formative influences for Revans included his time working as a physicist at the University of Cambridge, wherein he noted the importance of each scientist describing their own ignorance, sharing experiences, and communally reflecting in order to learn. Revans used these experiences to further develop the method in the 1940s while working for the United Kingdom's National Coal Board, where he encouraged managers to meet together in small groups to share their experiences and ask each other questions about what they saw and heard. From these experiences Revans felt that conventional instructional methods were largely ineffective, and that individuals needed to be aware of their lack of relevant knowledge and be prepared to explore that ignorance with suitable questions and help from other people in similar positions.
Formula.
Revans makes the pedagogical approach of Action Learning more precise in the opening chapter of his book, which describes "learning" as the result of combining "programmed knowledge" and "questioning", frequently abbreviated by the formula: formula_0
In this paradigm, "questioning" is intended to create insight into what people see, hear or feel, and may be divided into multiple categories of question, including open and closed questions. Although "questioning" is considered the cornerstone of the method, more relaxed formulations have enabled Action Learning to gain use in many countries all over the world, including the United States, Canada, Latin America, the Middle East, Africa, and Asia-Pacific.
The International Management Centres Association and Michael Marquardt have both proposed an extension to this formula with the addition of "R" for "reflection": formula_1.
This additional element emphasizes the point that "great questions" should evoke thoughtful reflections while considering the current problem, the desired goal, designing strategies, developing action or implementation plans, or executing action steps that are components of the implementation plan.
"Questioning" in Action Learning.
Action Learning purports that one of the keys to effective problem solving is asking the 'right question'. When asked to the right people at the right time, these questions help obtain the necessary information. The Action Learning process, which primarily uses a questioning approach, can be more helpful than offering advice because it assumes that each person has the capacity to find their own answers.
Action-based learning questions are questions that are based on the approach of action learning where one solves real-life problems that involve taking action and reflecting upon the results. As opposed to asking a question to gain information, in Action Learning the purpose of questioning is to help someone else explore new options and perspectives, and reflect in order to make better decisions.
Types of questions.
Closed questions.
Closed questions do not allow the respondents to develop their response, generally by limiting respondents to a restricted set of possible answers. Answers to closed questions are often monosyllabic words or short phrases, such as "yes" and "no".
While closed questions typically have simple answers, they should not be interpreted as simple questions. Closed questions can range widely in complexity, and may force the respondent to think significantly before answering. The purposes of closed questions include obtaining facts, initiating the conversation, and maintaining conversational control for the questioner.

Examples of closed questions:
Open questions.
Open questions allow the respondent to expand or explore in their response, and do not have a single correct response. In the framework of Action Learning, this gives the respondent the freedom to discover new ideas, consider different possibilities, and decide on the course of action which is right for them.
Open-ended questions are not always long, and shorter questions often have equal or greater impact than longer ones. When using the Action Learning approach, it is important to be aware of one's tone and language. The goal is usually to ask challenging questions, or to challenge the respondent's perspective. The purposes of open questions include encouraging discussion and reflection, expanding upon a closed question, and giving control of the conversation to the respondent.

Examples of open questions:
Use in organizations.
Action Learning is applied in organizations by using its questioning method to support organizational development. Action Learning is practiced by a wide community of businesses, governments, non-profits, and educational institutions. Organizations may also use Action Learning in the virtual environment. This is a cost-effective solution that enables the widespread use of Action Learning at all levels of an organization. Action e-Learning provides a viable alternative for organizations interested in adapting the action learning process for online delivery with groups where the members are not co-located.
Robert Kramer pioneered the use of Action Learning for officials in the United States government, and at the European Commission in Brussels and Luxembourg. He also introduced Action Learning to scientists at the European Environment Agency in Copenhagen, to officials of the Estonian government at the State Chancellery in Tallinn, Estonia, and to students of communication and media studies at Corvinus University of Budapest.
Models of Action Learning.
The influence of Revans's Action Learning Formula can be seen today in many leadership and organization development initiatives in corporate training and executive education institutes. Since the 1940s, several developments of Revans' original training model have been created. As with other pedagogical approaches, practitioners have built on Revans' original work and adapted its tenets to accommodate their specific needs.
Action Reflection Learning and the MiL model.
One such branch of Action Learning is Action Reflection Learning (ARL), which originated in Sweden among educators and consultants under the guidance of Lennart Rohlin of the MiL Institute in the 1970s. Using the "MiL model," ARL gained momentum in the field of Leadership in International Management.
The main differences between Revans' approach to action learning and the 'MiL Model' in the 1980s are:
The MiL model and ARL evolved as practitioners responded to diverse needs and restrictions—MiL practitioners varied the number and duration of the sessions, the type of project selected, the role of the Learning Coach and the style of their interventions. In 2004, Isabel Rimanoczy researched and codified the ARL methodology, identifying 16 elements and 10 underlying principles.
The World Institute for Action Learning model.
The World Institute for Action Learning (WIAL) model was developed by Michael Marquardt, Skipton Leonard, Bea Carson and Arthur Freedman. The model starts with two simple "ground rules" that ensure that statements are related to questions, and grant authority to the coach in order to promote learning. Team members may develop additional ground rules, norms, and roles as they deem necessary or advantageous. Addressing Revans' concern that a coach's over-involvement in the problem-solving process will engender dependency, WIAL coaches only ask questions that encourage team members to reflect on the team's behavior (what is working, can be improved, or done differently) in efforts to improve learning and, ultimately, performance.
Executive Action Learning (EAL) Model.
The action learning model has evolved from an organizational development tool led by learning and development (L&D) managers to an organizational alignment and performance tool led by executives, where CEOs and their executive teams facilitate action-learning sessions to align the organizational objectives at various organizational levels and departments. One such example is the Executive Action-Learning (EAL) Model, which originated in the United States in 2005.
The EAL model differs from the traditional organizational training methods by shifting the focus from professor-led, general knowledge memorization and presentations to executive-led and project-based experiential reflection and problem-solving as the major learning tool.
EAL makes the following executive education paradigm focus shifts:
"Unlearning" as a prerequisite for "learning".
The process of learning more creative ways of thinking, feeling, and being is achieved in Action Learning by reflecting on what is working now and on actions that can be improved. Action Learning is consistent with the principles of positive psychology and appreciative inquiry by encouraging team members to build on strengths and learn from challenges. In Action Learning, reflecting on what has and has not worked helps team members unlearn what doesn't work and develop new and improved ways to increase productivity moving forward.
Robert Kramer applies the theory of art, creativity and "unlearning" of the psychologist Otto Rank to his practice of Action Learning. In Kramer's work, Action Learning questions allow group members to "step out of the frame of the prevailing ideology" (Rank, "Art and Artist: Creative Urge and Personality Development", 1932/1989), reflect on their assumptions and beliefs, and re-frame their choices. Through the lens of Otto Rank's work on understanding art and artists, Action Learning can be seen as the never-completed process of learning how to "step out of the frame" of the ruling mindset, and learning how to unlearn.
Role of Facilitator in Action Learning.
An ongoing challenge of Action Learning has been to take productive action as well as to take the time necessary to capture the learning that result from reflecting on the results of taking action. Usually, the urgency of the problem or task decreases or eliminates the reflective time necessary for learning. As a consequence, more and more organizations have recognized the critical importance of an Action Learning coach or facilitator in the process, someone who has the authority and responsibility of creating time and space for the group to learn at the individual, group and organizational level.
There is controversy, however, about the need for an Action Learning coach. Revans was skeptical about the use of learning coaches and, in general, of interventionist approaches. He believed the Action Learning set could manage the Action Learning process on its own. He also had a major concern that too much process facilitation would lead a group to become dependent on a coach or facilitator. Nevertheless, later in his development of the Action Learning method, Revans experimented with including a role that he described as a "supernumerary" that had many similarities to that of a facilitator or coach. Pedler distills Revans' thinking about the key role of the action learning facilitator as follows:

(i) The initiator or "accoucheur": "No organisation is likely to embrace action learning unless there is some person within it ready to fight on its behalf. ...This useful intermediary we may call the accoucheur—the managerial midwife who sees that their organisation gives birth to a new idea...".
(ii) The set facilitator or "combiner":
"there may be a need when it (the set) is first formed for some supernumerary
brought into speed the integration of the set ..." but "Such a combiner ...must contrive that it (the set) achieves independence of them at the earliest possible moment...".
(iii) The facilitator of organizational learning or the "learning community" organiser:
"The most precious asset of any organization is the one most readily overlooked: its capacity to build upon its lived experience, to learn from its challenges and to turn in a better performance by inviting all and sundry to work out for themselves what that performance ought to be."Hale suggested that the facilitator role developed by Revans be incorporated into any standards for Action Learning facilitation accreditation. Hale also suggests the Action Learning facilitator role includes the functions of mobilizer, learning set adviser, and learning catalyst. To increase the reflective, learning aspect of Action Learning, many groups now adopt the practice or norm of focusing on questions rather than statements while working on the problem and developing strategies and actions.
Self-managed action learning is a variant of Action Learning that dispenses with the need for a facilitator of the action learning set, including in virtual and hybrid settings. There are a number of problems, however, with purely self-managed teams (i.e., with no coach). It has been noted that self-managing teams (such as task forces) seldom take the time to reflect on what they are doing or make efforts to identify key lessons learned from the process. Without reflection, team members are likely to import organizational or sub-unit cultural norms and familiar problem solving practices into the problem-solving process without explicitly testing their validity and utility. Team members employ assumptions, mental models, and beliefs about methods or processes that are seldom openly challenged, much less tested. As a result, teams often apply traditional problem solving methods to non-traditional, urgent, critical, and discontinuous problems. In addition, team members often "leap" from the initial problem statement to some form of brainstorming that they assume will produce a viable solution. These suggested solutions typically provoke objections, doubts, concerns, or reservations from other team members who advocate their own preferred solutions. The conflicts that ensue are generally both unproductive and time-consuming. As a result, self-managed teams tend to split or fragment rather than develop into a cohesive, high-performing team.
Because of these typical characteristics of self-managing teams, many theorists and practitioners have argued that real and effective self-management in action learning requires coaches with the authority to intervene whenever they perceive an opportunity to promote learning or improve team performance. Without this facilitator role, there is no assurance that the team will make the time needed for the periodic, systemic, and strategic inquiry and reflection that is necessary for effective individual, team, and organizational learning.
Organizations and Community.
A number of organizations sponsor events focusing on the implementation and improvement of Action Learning, including "The Journal of Action Learning: Research & Practice", the World Institute of Action Learning Global Forum, the Global Forum on Executive Development and Business Driven Action Learning, and the Action Learning, Action Research Association World Congress. There are also LinkedIn interest groups devoted to Action Learning, including WIAL Network, Action Learning Forum, International Foundation for Action Learning, Global Forum on Business Driven Action Learning and Executive Development, Learning Thru Action, and Action Research and Learning in Organizations.
Notes.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "L = P + Q"
},
{
"math_id": 1,
"text": "L = P + Q + R"
}
] | https://en.wikipedia.org/wiki?curid=8575327 |
85754 | Phonon | Quasiparticle of mechanical vibrations
A phonon is a collective excitation in a periodic, elastic arrangement of atoms or molecules in condensed matter, specifically in solids and some liquids. A type of quasiparticle in physics, a phonon is an excited state in the quantum mechanical quantization of the modes of vibrations for elastic structures of interacting particles. Phonons can be thought of as quantized sound waves, similar to photons as quantized light waves.
The study of phonons is an important part of condensed matter physics. They play a major role in many of the physical properties of condensed matter systems, such as thermal conductivity and electrical conductivity, as well as in models of neutron scattering and related effects.
The concept of phonons was introduced in 1930 by Soviet physicist Igor Tamm. The name "phonon" was suggested by Yakov Frenkel. It comes from the Greek word "φωνή" ("phonē"), which translates to "sound" or "voice", because long-wavelength phonons give rise to sound. The name emphasizes the analogy to the word "photon", in that phonons represent wave-particle duality for sound waves in the same way that photons represent wave-particle duality for light waves. Solids with more than one atom in the smallest unit cell exhibit both acoustic and optical phonons.
Definition.
A phonon is the quantum mechanical description of an elementary vibrational motion in which a lattice of atoms or molecules uniformly oscillates at a single frequency. In classical mechanics this designates a normal mode of vibration. Normal modes are important because any arbitrary lattice vibration can be considered to be a superposition of these "elementary" vibration modes (cf. Fourier analysis). While normal modes are wave-like phenomena in classical mechanics, phonons have particle-like properties too, in a way related to the wave–particle duality of quantum mechanics.
Lattice dynamics.
The equations in this section do not use axioms of quantum mechanics but instead use relations for which there exists a direct correspondence in classical mechanics.
For example: a rigid regular, crystalline (not amorphous) lattice is composed of "N" particles. These particles may be atoms or molecules. "N" is a large number, say of the order of 1023, or on the order of the Avogadro number for a typical sample of a solid. Since the lattice is rigid, the atoms must be exerting forces on one another to keep each atom near its equilibrium position. These forces may be Van der Waals forces, covalent bonds, electrostatic attractions, and others, all of which are ultimately due to the electric force. Magnetic and gravitational forces are generally negligible. The forces between each pair of atoms may be characterized by a potential energy function "V" that depends on the distance of separation of the atoms. The potential energy of the entire lattice is the sum of all pairwise potential energies multiplied by a factor of 1/2 to compensate for double counting:
formula_0
where "ri" is the position of the "i"th atom, and "V" is the potential energy between two atoms.
It is difficult to solve this many-body problem explicitly in either classical or quantum mechanics. In order to simplify the task, two important approximations are usually imposed. First, the sum is only performed over neighboring atoms. Although the electric forces in real solids extend to infinity, this approximation is still valid because the fields produced by distant atoms are effectively screened. Secondly, the potentials "V" are treated as harmonic potentials. This is permissible as long as the atoms remain close to their equilibrium positions. Formally, this is accomplished by Taylor expanding "V" about its equilibrium value to quadratic order, giving "V" proportional to the displacement "x"2 and the elastic force simply proportional to "x". The error in ignoring higher order terms remains small if "x" remains close to the equilibrium position.
The resulting lattice may be visualized as a system of balls connected by springs. The following figure shows a cubic lattice, which is a good model for many types of crystalline solid. Other lattices include a linear chain, which is a very simple lattice which we will shortly use for modeling phonons. (For other common lattices, see crystal structure.)
The potential energy of the lattice may now be written as
formula_1
Here, "ω" is the natural frequency of the harmonic potentials, which are assumed to be the same since the lattice is regular. "Ri" is the position coordinate of the "i"th atom, which we now measure from its equilibrium position. The sum over nearest neighbors is denoted (nn).
It is important to mention that the mathematical treatment given here is highly simplified in order to make it accessible to non-experts. The simplification has been achieved by making two basic assumptions in the expression for the total potential energy of the crystal. These assumptions are that (i) the total potential energy can be written as a sum of pairwise interactions, and (ii) each atom interacts with only its nearest neighbors. These assumptions are used only sparingly in modern lattice dynamics. A more general approach is to express the potential energy in terms of force constants; see, for example, the article on multiscale Green's functions.
Lattice waves.
Due to the connections between atoms, the displacement of one or more atoms from their equilibrium positions gives rise to a set of vibration waves propagating through the lattice. One such wave is shown in the figure to the right. The amplitude of the wave is given by the displacements of the atoms from their equilibrium positions. The wavelength "λ" is marked.
There is a minimum possible wavelength, given by twice the equilibrium separation "a" between atoms. Any wavelength shorter than this can be mapped onto a wavelength longer than 2"a", due to the periodicity of the lattice. This can be thought of as a consequence of the Nyquist–Shannon sampling theorem, with the lattice points viewed as the "sampling points" of a continuous wave.
Not every possible lattice vibration has a well-defined wavelength and frequency. However, the normal modes do possess well-defined wavelengths and frequencies.
One-dimensional lattice.
In order to simplify the analysis needed for a 3-dimensional lattice of atoms, it is convenient to model a 1-dimensional lattice or linear chain. This model is complex enough to display the salient features of phonons.
Classical treatment.
The forces between the atoms are assumed to be linear and nearest-neighbour, and they are represented by an elastic spring. Each atom is assumed to be a point particle and the nucleus and electrons move in step (adiabatic theorem):
"n" − 1 "n" "n" + 1 ← "a" →
···o++++++o++++++o++++++o++++++o++++++o++++++o++++++o++++++o++++++o···
"u""n" − 1 "un" "u""n" + 1
where n labels the nth atom out of a total of N, a is the distance between atoms when the chain is in equilibrium, and "un" the displacement of the nth atom from its equilibrium position.
If "C" is the elastic constant of the spring and m the mass of the atom, then the equation of motion of the nth atom is
formula_2
This is a set of coupled equations.
Since the solutions are expected to be oscillatory, new coordinates are defined by a discrete Fourier transform, in order to decouple them.
Put
formula_3
Here, "na" corresponds and devolves to the continuous variable x of scalar field theory. The "Qk" are known as the "normal coordinates", continuum field modes "φk".
Substitution into the equation of motion produces the following "decoupled equations" (this requires a significant manipulation using the orthonormality and completeness relations of the discrete Fourier transform),
formula_4
These are the equations for decoupled harmonic oscillators which have the solution
formula_5
Each normal coordinate "Qk" represents an independent vibrational mode of the lattice with wavenumber k, which is known as a normal mode.
The second equation, for "ωk", is known as the dispersion relation between the angular frequency and the wavenumber.
In the continuum limit, a→0, N→∞, with "Na" held fixed, "un" → "φ"("x"), a scalar field, and formula_6. This amounts to classical free scalar field theory, an assembly of independent oscillators.
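As a rough numerical illustration of the dispersion relation formula_5 and of the continuum limit (not part of the original article; "C", "m" and "a" are arbitrary unit values), the following Java sketch evaluates "ωk" across the zone and compares "ωk"/"k" with the small-"k" sound speed "a"√("C"/"m"):

public class MonatomicChain {
    // Dispersion relation omega_k = sqrt((2C/m)(1 - cos(ka))) from the decoupled equations
    static double omega(double k, double C, double m, double a) {
        return Math.sqrt(2.0 * C / m * (1.0 - Math.cos(k * a)));
    }

    public static void main(String[] args) {
        double C = 1.0, m = 1.0, a = 1.0;         // spring constant, mass, spacing (arbitrary units)
        double soundSpeed = a * Math.sqrt(C / m); // slope of omega(k) as k -> 0
        for (double k = 0.1; k <= Math.PI / a; k += 0.5) {
            // At small k, omega/k approaches the sound speed; near the zone edge it falls below it
            System.out.printf("k=%.2f  omega=%.4f  omega/k=%.4f  (sound speed %.4f)%n",
                    k, omega(k, C, m, a), omega(k, C, m, a) / k, soundSpeed);
        }
    }
}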
Quantum treatment.
A one-dimensional quantum mechanical harmonic chain consists of "N" identical atoms. This is the simplest quantum mechanical model of a lattice that allows phonons to arise from it. The formalism for this model is readily generalizable to two and three dimensions.
In some contrast to the previous section, the positions of the masses are not denoted by "ui", but, instead, by "x"1, "x"2..., as measured from their equilibrium positions (i.e. "xi" = 0 if particle "i" is at its equilibrium position.) In two or more dimensions, the "xi" are vector quantities. The Hamiltonian for this system is
formula_7
where "m" is the mass of each atom (assuming it is equal for all), and "xi" and "pi" are the position and momentum operators, respectively, for the "i"th atom and the sum is made over the nearest neighbors (nn). However one expects that in a lattice there could also appear waves that behave like particles. It is customary to deal with waves in Fourier space which uses normal modes of the wavevector as variables instead coordinates of particles. The number of normal modes is same as the number of particles. Still, the Fourier space is very useful given the periodicity of the system.
A set of "N" "normal coordinates" "Qk" may be introduced, defined as the discrete Fourier transforms of the "xk" and "N" "conjugate momenta" "Πk" defined as the Fourier transforms of the "pk":
formula_8
The quantity "kn" turns out to be the wavenumber of the phonon, i.e. 2π divided by the wavelength.
This choice retains the desired commutation relations in either real space or wavevector space
formula_9
From the general result
formula_10
The potential energy term is
formula_11
where
formula_12
The Hamiltonian may be written in wavevector space as
formula_13
The couplings between the position variables have been transformed away; if the "Q" and "Π" were Hermitian (which they are not), the transformed Hamiltonian would describe "N" uncoupled harmonic oscillators.
The form of the quantization depends on the choice of boundary conditions; for simplicity, "periodic" boundary conditions are imposed, defining the ("N" + 1)th atom as equivalent to the first atom. Physically, this corresponds to joining the chain at its ends. The resulting quantization is
formula_14
The upper bound to "n" comes from the minimum wavelength, which is twice the lattice spacing "a", as discussed above.
The harmonic oscillator eigenvalues or energy levels for the mode "ωk" are:
formula_15
The levels are evenly spaced at:
formula_16
where "ħω" is the zero-point energy of a quantum harmonic oscillator.
An exact amount of energy "ħω" must be supplied to the harmonic oscillator lattice to push it to the next energy level. By analogy to the photon case when the electromagnetic field is quantized, the quantum of vibrational energy is called a phonon.
All quantum systems show wavelike and particlelike properties simultaneously. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described later.
Three-dimensional lattice.
This may be generalized to a three-dimensional lattice. The wavenumber "k" is replaced by a three-dimensional wavevector k. Furthermore, each k is now associated with three normal coordinates.
The new indices "s" = 1, 2, 3 label the polarization of the phonons. In the one-dimensional model, the atoms were restricted to moving along the line, so the phonons corresponded to longitudinal waves. In three dimensions, vibration is not restricted to the direction of propagation, and can also occur in the perpendicular planes, like transverse waves. This gives rise to the additional normal coordinates, which, as the form of the Hamiltonian indicates, we may view as independent species of phonons.
Dispersion relation.
For a one-dimensional alternating array of two types of ion or atom of mass "m"1, "m"2 repeated periodically at a distance "a", connected by springs of spring constant "K", two modes of vibration result:
formula_17
where "k" is the wavevector of the vibration related to its wavelength by
formula_18.
The connection between frequency and wavevector, "ω" = "ω"("k"), is known as a dispersion relation. The plus sign results in the so-called "optical" mode, and the minus sign to the "acoustic" mode. In the optical mode two adjacent different atoms move against each other, while in the acoustic mode they move together.
The speed of propagation of an acoustic phonon, which is also the speed of sound in the lattice, is given by the slope of the acoustic dispersion relation, (see group velocity.) At low values of "k" (i.e. long wavelengths), the dispersion relation is almost linear, and the speed of sound is approximately "ωa", independent of the phonon frequency. As a result, packets of phonons with different (but long) wavelengths can propagate for large distances across the lattice without breaking apart. This is the reason that sound propagates through solids without significant distortion. This behavior fails at large values of "k", i.e. short wavelengths, due to the microscopic details of the lattice.
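As a hedged numerical sketch of the two branches of formula_17 (not from the original article; "K", "m"1, "m"2 and "a" are arbitrary illustrative values), the following Java program shows the acoustic branch vanishing at "k" = 0 while the optical branch stays finite:

public class DiatomicChain {
    // omega_pm^2 = K(1/m1 + 1/m2) +/- K*sqrt((1/m1 + 1/m2)^2 - 4*sin^2(ka/2)/(m1*m2))
    static double omega(double k, double a, double K, double m1, double m2, int sign) {
        double s = 1.0 / m1 + 1.0 / m2;
        double sin2 = Math.pow(Math.sin(k * a / 2.0), 2);
        double root = Math.sqrt(s * s - 4.0 * sin2 / (m1 * m2));
        return Math.sqrt(K * (s + sign * root));
    }

    public static void main(String[] args) {
        double a = 1.0, K = 1.0, m1 = 1.0, m2 = 2.0; // arbitrary illustrative values
        for (double k = 0.0; k <= Math.PI / a + 1e-9; k += Math.PI / (4.0 * a)) {
            // The acoustic branch (minus sign) vanishes at k = 0; the optical branch does not
            System.out.printf("k=%.3f  acoustic=%.4f  optical=%.4f%n",
                    k, omega(k, a, K, m1, m2, -1), omega(k, a, K, m1, m2, +1));
        }
    }
}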
For a crystal that has at least two atoms in its primitive cell, the dispersion relations exhibit two types of phonons, namely, optical and acoustic modes corresponding to the upper blue and lower red curve in the diagram, respectively. The vertical axis is the energy or frequency of phonon, while the horizontal axis is the wavevector. The boundaries at −"π"/"a" and "π"/"a" are those of the first Brillouin zone. A crystal with "N" ≥ 2 different atoms in the primitive cell exhibits three acoustic modes: one longitudinal acoustic mode and two transverse acoustic modes. The number of optical modes is 3"N" – 3. The lower figure shows the dispersion relations for several phonon modes in GaAs as a function of wavevector k in the principal directions of its Brillouin zone.
The modes are also referred to as the branches of phonon dispersion. In general, if there are p atoms (denoted by N earlier) in the primitive unit cell, there will be 3p branches of phonon dispersion in a 3-dimensional crystal. Out of these, 3 branches correspond to acoustic modes and the remaining 3p-3 branches will correspond to optical modes. In some special directions, some branches coincide due to symmetry. These branches are called degenerate. In acoustic modes, all the p atoms vibrate in phase. So there is no change in the relative displacements of these atoms during the wave propagation.
Study of phonon dispersion is useful for modeling propagation of sound waves in solids, which is characterized by phonons. The energy of each phonon, as given earlier, is "ħω". The velocity of the wave is also given in terms of "ω" and "k". The direction of the wave vector is the direction of the wave propagation and the phonon polarization vector gives the direction in which the atoms vibrate. Actually, in general, the wave velocity in a crystal is different for different directions of k. In other words, most crystals are anisotropic for phonon propagation.
A wave is longitudinal if the atoms vibrate in the same direction as the wave propagation. In a transverse wave, the atoms vibrate perpendicular to the wave propagation. However, except for isotropic crystals, waves in a crystal are not exactly longitudinal or transverse. For general anisotropic crystals, the phonon waves are longitudinal or transverse only in certain special symmetry directions. In other directions, they can be nearly longitudinal or nearly transverse. It is only for labeling convenience, that they are often called longitudinal or transverse but are actually quasi-longitudinal or quasi-transverse. Note that in the three-dimensional case, there are two directions perpendicular to a straight line at each point on the line. Hence, there are always two (quasi) transverse waves for each (quasi) longitudinal wave.
Many phonon dispersion curves have been measured by inelastic neutron scattering.
The physics of sound in fluids differs from the physics of sound in solids, although both are density waves: sound waves in fluids only have longitudinal components, whereas sound waves in solids have longitudinal and transverse components. This is because fluids cannot support shear stresses (viscoelastic fluids are an exception, but only at high frequencies).
Interpretation of phonons using second quantization techniques.
The above-derived Hamiltonian may look like a classical Hamiltonian function, but if it is interpreted as an operator, then it describes a quantum field theory of non-interacting bosons.
The second quantization technique, similar to the ladder operator method used for quantum harmonic oscillators, is a means of extracting energy eigenvalues without directly solving the differential equations. Given the Hamiltonian, formula_19, as well as the conjugate position, formula_20, and conjugate momentum formula_21 defined in the quantum treatment section above, we can define creation and annihilation operators:
formula_22 and formula_23
The following commutators can be easily obtained by substituting in the canonical commutation relation:
formula_24
Using this, the operators "bk"† and "bk" can be inverted to redefine the conjugate position and momentum as:
formula_25 and formula_26
Directly substituting these definitions for formula_20 and formula_27 into the wavevector space Hamiltonian, as it is defined above, and simplifying then results in the Hamiltonian taking the form:
formula_28
This is known as the second quantization technique, also known as the occupation number formulation, where "nk" = "bk"†"bk" is the occupation number. This can be seen to be a sum of N independent oscillator Hamiltonians, each with a unique wave vector, and compatible with the methods used for the quantum harmonic oscillator (note that "nk" is hermitian). When a Hamiltonian can be written as a sum of commuting sub-Hamiltonians, the energy eigenstates will be given by the products of eigenstates of each of the separate sub-Hamiltonians. The corresponding energy spectrum is then given by the sum of the individual eigenvalues of the sub-Hamiltonians.
As with the quantum harmonic oscillator, one can show that "bk"† and "bk" respectively create and destroy a single field excitation, a phonon, with an energy of "ħωk".
Three important properties of phonons may be deduced from this technique. First, phonons are bosons, since any number of identical excitations can be created by repeated application of the creation operator "bk"†. Second, each phonon is a "collective mode" caused by the motion of every atom in the lattice. This may be seen from the fact that the creation and annihilation operators, defined here in momentum space, contain sums over the position and momentum operators of every atom when written in position space (see position and momentum space). Finally, using the "position–position correlation function", it can be shown that phonons act as waves of lattice displacement.
This technique is readily generalized to three dimensions, where the Hamiltonian takes the form:
formula_29
This can be interpreted as the sum of 3"N" independent oscillator Hamiltonians, one for each wave vector and polarization.
Acoustic and optical phonons.
Solids with more than one atom in the smallest unit cell exhibit two types of phonons: acoustic phonons and optical phonons.
Acoustic phonons are coherent movements of atoms of the lattice out of their equilibrium positions. If the displacement is in the direction of propagation, then in some areas the atoms will be closer, in others farther apart, as in a sound wave in air (hence the name acoustic). Displacement perpendicular to the propagation direction is comparable to waves on a string. If the wavelength of acoustic phonons goes to infinity, this corresponds to a simple displacement of the whole crystal, and this costs zero deformation energy. Acoustic phonons exhibit a linear relationship between frequency and phonon wave-vector for long wavelengths. The frequencies of acoustic phonons tend to zero with longer wavelength. Longitudinal and transverse acoustic phonons are often abbreviated as LA and TA phonons, respectively.
Optical phonons are out-of-phase movements of the atoms in the lattice, one atom moving to the left, and its neighbor to the right. This occurs if the lattice basis consists of two or more atoms. They are called "optical" because in ionic crystals, such as sodium chloride, fluctuations in displacement create an electrical polarization that couples to the electromagnetic field. Hence, they can be excited by infrared radiation: the electric field of the light will move every positive sodium ion in the direction of the field, and every negative chloride ion in the other direction, causing the crystal to vibrate.
Optical phonons have a non-zero frequency at the Brillouin zone center and show no dispersion near that long wavelength limit. This is because they correspond to a mode of vibration where positive and negative ions at adjacent lattice sites swing against each other, creating a time-varying electrical dipole moment. Optical phonons that interact in this way with light are called "infrared active". Optical phonons that are "Raman active" can also interact indirectly with light, through Raman scattering. Optical phonons are often abbreviated as LO and TO phonons, for the longitudinal and transverse modes respectively; the splitting between LO and TO frequencies is often described accurately by the Lyddane–Sachs–Teller relation.
When measuring optical phonon energy experimentally, optical phonon frequencies are sometimes given in spectroscopic wavenumber notation, where the symbol "ω" represents ordinary frequency (not angular frequency), and is expressed in units of cm−1. The value is obtained by dividing the frequency by the speed of light in vacuum. In other words, the wave-number in cm−1 units corresponds to the inverse of the wavelength of a photon in vacuum that has the same frequency as the measured phonon.
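The conversion is simple to carry out. The following Java sketch (illustrative only; the 10 THz phonon frequency is an assumed example value) divides an ordinary frequency by the speed of light expressed in cm/s:

public class SpectroscopicWavenumber {
    public static void main(String[] args) {
        double c = 2.99792458e10;  // speed of light in cm/s
        double f = 10.0e12;        // an assumed phonon frequency of 10 THz (ordinary frequency)
        double wavenumber = f / c; // dividing the frequency by c gives the value in cm^-1
        System.out.printf("10 THz corresponds to %.1f cm^-1%n", wavenumber); // about 333.6 cm^-1
    }
}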
Crystal momentum.
By analogy to photons and matter waves, phonons have been treated with wavevector "k" as though it has a momentum "ħk"; however, this is not strictly correct, because "ħk" is not actually a physical momentum; it is called the "crystal momentum" or "pseudomomentum". This is because "k" is only determined up to addition of constant vectors (the reciprocal lattice vectors and integer multiples thereof). For example, in the one-dimensional model, the normal coordinates "Q" and "Π" are defined so that
formula_30
where
formula_31
for any integer "n". A phonon with wavenumber "k" is thus equivalent to an infinite family of phonons with wavenumbers "k" ± , "k" ± , and so forth. Physically, the reciprocal lattice vectors act as additional chunks of momentum which the lattice can impart to the phonon. Bloch electrons obey a similar set of restrictions.
It is usually convenient to consider phonon wavevectors "k" which have the smallest magnitude |"k"| in their "family". The set of all such wavevectors defines the "first Brillouin zone". Additional Brillouin zones may be defined as copies of the first zone, shifted by some reciprocal lattice vector.
Thermodynamics.
The thermodynamic properties of a solid are directly related to its phonon structure. The entire set of all possible phonons that are described by the phonon dispersion relations combine in what is known as the phonon density of states which determines the heat capacity of a crystal. By the nature of this distribution, the heat capacity is dominated by the high-frequency part of the distribution, while thermal conductivity is primarily the result of the low-frequency region.
At absolute zero temperature, a crystal lattice lies in its ground state, and contains no phonons. A lattice at a nonzero temperature has an energy that is not constant, but fluctuates randomly about some mean value. These energy fluctuations are caused by random lattice vibrations, which can be viewed as a gas of phonons. Because these phonons are generated by the temperature of the lattice, they are sometimes designated thermal phonons.
Thermal phonons can be created and destroyed by random energy fluctuations. In the language of statistical mechanics this means that the chemical potential for adding a phonon is zero. This behavior is an extension of the harmonic potential into the anharmonic regime. The behavior of thermal phonons is similar to the photon gas produced by an electromagnetic cavity, wherein photons may be emitted or absorbed by the cavity walls. This similarity is not coincidental, for it turns out that the electromagnetic field behaves like a set of harmonic oscillators, giving rise to black-body radiation. Both gases obey the Bose–Einstein statistics: in thermal equilibrium and within the harmonic regime, the probability of finding phonons or photons in a given state with a given angular frequency is:
formula_32
where "ω""k","s" is the frequency of the phonons (or photons) in the state, "k"B is the Boltzmann constant, and "T" is the temperature.
Phonon tunneling.
Phonons have been shown to exhibit quantum tunneling behavior (or "phonon tunneling") where, across gaps up to a nanometer wide, heat can flow via phonons that "tunnel" between two materials. This type of heat transfer works between distances too large for conduction to occur but too small for radiation to occur and therefore cannot be explained by classical heat transfer models.
Operator formalism.
The phonon Hamiltonian is given by
formula_33
In terms of the creation and annihilation operators, these are given by
formula_34
Here, in expressing the Hamiltonian in operator formalism, we have not taken into account the "ħωq"/2 term as, given a continuum or infinite lattice, the "ħωq"/2 terms will add up yielding an infinite term. Because the difference in energy is what we measure and not the absolute value of it, the constant term "ħωq"/2 can be ignored without changing the equations of motion. Hence, the "ħωq"/2 factor is absent in the operator formalized expression for the Hamiltonian.
The ground state, also called the "vacuum state", is the state composed of no phonons. Hence, the energy of the ground state is 0. When a system is in the state |"n"1"n"2"n"3…⟩, we say there are "nα" phonons of type "α", where "nα" is the occupation number of the phonons. The energy of a single phonon of type "α" is given by "ħωq" and the total energy of a general phonon system is given by "n"1"ħω"1 + "n"2"ħω"2 +... As there are no cross terms (e.g. "n"1"ħω"2), the phonons are said to be non-interacting. The action of the creation and annihilation operators is given by:
formula_35
and,
formula_36
The creation operator, "aα"† creates a phonon of type "α" while "aα" annihilates one. Hence, they are respectively the creation and annihilation operators for phonons. Analogous to the quantum harmonic oscillator case, we can define particle number operator as
formula_37
The number operator commutes with a string of products of the creation and annihilation operators if and only if the number of creation operators is equal to number of annihilation operators.
It can be shown that phonons are symmetric under exchange (i.e. |"α","β"⟩ = |"β","α"⟩), so therefore they are considered bosons.
Nonlinearity.
Like photons, phonons can interact via parametric down conversion and form squeezed coherent states.
Predicted properties.
Recent research has shown that phonons and rotons may have a non-negligible mass and be affected by gravity just as standard particles are. In particular, phonons are predicted to have a kind of negative mass and negative gravity. This can be explained by how phonons are known to travel faster in denser materials. Because the part of a material pointing towards a gravitational source is closer to the object, it becomes denser on that end. From this, it is predicted that phonons would deflect away as they detect the difference in densities, exhibiting the qualities of a negative gravitational field. Although the effect would be too small to measure, it is possible that future equipment could lead to successful results.
Superconductivity.
Superconductivity is a state of electronic matter in which electrical resistance vanishes and magnetic fields are expelled from the material. In a superconductor, electrons are bound together into Cooper pairs by a weak attractive force. In a conventional superconductor, this attraction is caused by an exchange of phonons between the electrons. The evidence that phonons, the vibrations of the ionic lattice, are relevant for superconductivity is provided by the isotope effect, the dependence of the superconducting critical temperature on the mass of the ions.
Other research.
In 2019, researchers were able to isolate individual phonons without destroying them for the first time.
They have also been shown to form "phonon winds", where an electric current in a graphene surface is generated by a liquid flow above it, due to the viscous forces at the liquid–solid interface.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac12\\sum_{i \\neq j} V\\left(r_i - r_j\\right)"
},
{
"math_id": 1,
"text": "\\sum_{\\{ij\\} (\\mathrm{nn})} \\tfrac12 m \\omega^2 \\left(R_i - R_j\\right)^2."
},
{
"math_id": 2,
"text": "-2Cu_n + C\\left(u_{n+1} + u_{n-1}\\right) = m\\frac{d^2u_n}{dt^2} ."
},
{
"math_id": 3,
"text": "u_n = \\sum_{Nak/2\\pi=1}^N Q_k e^{ikna}."
},
{
"math_id": 4,
"text": " 2C(\\cos {ka-1})Q_k = m\\frac{d^2Q_k}{dt^2}."
},
{
"math_id": 5,
"text": "Q_k=A_ke^{i\\omega_kt};\\qquad \\omega_k=\\sqrt{ \\frac{2C}{m}(1-\\cos{ka})}."
},
{
"math_id": 6,
"text": " \\omega(k) \\propto k a"
},
{
"math_id": 7,
"text": "\\mathcal{H} = \\sum_{i=1}^N \\frac{p_i^2}{2m} + \\frac{1}{2} m\\omega^2 \\sum_{\\{ij\\} (\\mathrm{nn})} \\left(x_i - x_j\\right)^2"
},
{
"math_id": 8,
"text": "\\begin{align} Q_k &= \\frac{1}\\sqrt{N} \\sum_{l} e^{ikal} x_l \\\\ \\Pi_{k} &= \\frac{1}\\sqrt{N} \\sum_{l} e^{-ikal} p_l. \\end{align}"
},
{
"math_id": 9,
"text": " \\begin{align} \n\\left[x_l , p_m \\right]&=i\\hbar\\delta_{l,m} \\\\ \n\\left[ Q_k , \\Pi_{k'} \\right] &=\\frac{1}N \\sum_{l,m} e^{ikal} e^{-ik'am} \\left[x_l , p_m \\right] \\\\\n &= \\frac{i \\hbar}N \\sum_{l} e^{ial\\left(k-k'\\right)} = i\\hbar\\delta_{k,k'} \\\\\n\\left[ Q_k , Q_{k'} \\right] &= \\left[ \\Pi_k , \\Pi_{k'} \\right] = 0\n\\end{align}"
},
{
"math_id": 10,
"text": " \\begin{align} \n\\sum_{l}x_l x_{l+m}&=\\frac{1}N\\sum_{kk'}Q_k Q_{k'}\\sum_{l} e^{ial\\left(k+k'\\right)}e^{iamk'}= \\sum_{k}Q_k Q_{-k}e^{iamk} \\\\ \n\\sum_{l}{p_l}^2 &= \\sum_{k}\\Pi_k \\Pi_{-k}\n\\end{align}"
},
{
"math_id": 11,
"text": " \\tfrac12 m \\omega^2 \\sum_{j} \\left(x_j - x_{j+1}\\right)^2= \\tfrac12 m\\omega^2\\sum_{k}Q_k Q_{-k}(2-e^{ika}-e^{-ika})= \\tfrac12 \\sum_{k}m{\\omega_k}^2Q_k Q_{-k}"
},
{
"math_id": 12,
"text": "\\omega_k = \\sqrt{2 \\omega^2 \\left( 1 - \\cos{ka} \\right)} = 2\\omega\\left|\\sin\\frac{ka}2\\right|"
},
{
"math_id": 13,
"text": "\\mathcal{H} = \\frac{1}{2m}\\sum_k \\left( \\Pi_k\\Pi_{-k} + m^2 \\omega_k^2 Q_k Q_{-k} \\right)"
},
{
"math_id": 14,
"text": "k=k_n = \\frac{2\\pi n}{Na} \\quad \\mbox{for } n = 0, \\pm1, \\pm2, \\ldots \\pm \\frac{N}2 .\\ "
},
{
"math_id": 15,
"text": "E_n = \\left(\\tfrac12+n\\right)\\hbar\\omega_k \\qquad n=0,1,2,3 \\ldots"
},
{
"math_id": 16,
"text": "\\tfrac12\\hbar\\omega , \\ \\tfrac32\\hbar\\omega ,\\ \\tfrac52\\hbar\\omega \\ \\cdots"
},
{
"math_id": 17,
"text": "\\omega_\\pm^2 = K\\left(\\frac{1}{m_1} +\\frac{1}{m_2}\\right) \\pm K \\sqrt{\\left(\\frac{1}{m_1} +\\frac{1}{m_2}\\right)^2-\\frac{4\\sin^2\\frac{ka}{2}}{m_1 m_2}} ,"
},
{
"math_id": 18,
"text": "k = \\tfrac{2 \\pi}{\\lambda}"
},
{
"math_id": 19,
"text": "\\mathcal{H}"
},
{
"math_id": 20,
"text": "Q_k"
},
{
"math_id": 21,
"text": "\\Pi_{k}"
},
{
"math_id": 22,
"text": "b_k=\\sqrt\\frac{m\\omega_k}{2\\hbar}\\left(Q_k+\\frac{i}{m\\omega_k}\\Pi_{-k}\\right)"
},
{
"math_id": 23,
"text": "{b_k}^\\dagger=\\sqrt\\frac{m\\omega_k}{2\\hbar}\\left(Q_{-k}-\\frac{i}{m\\omega_k}\\Pi_{k}\\right)"
},
{
"math_id": 24,
"text": "\\left[b_k , {b_{k'}}^\\dagger \\right] = \\delta_{k,k'} ,\\quad \\Big[b_k , b_{k'} \\Big] = \\left[{b_k}^\\dagger , {b_{k'}}^\\dagger \\right] = 0"
},
{
"math_id": 25,
"text": "Q_k=\\sqrt{\\frac{\\hbar}{2m\\omega_k}}\\left({b_k}^\\dagger+b_{-k}\\right)"
},
{
"math_id": 26,
"text": "\\Pi_k=i\\sqrt{\\frac{\\hbar m\\omega_k}{2}}\\left({b_k}^\\dagger-b_{-k}\\right)"
},
{
"math_id": 27,
"text": "\\Pi_k"
},
{
"math_id": 28,
"text": "\\mathcal{H} =\\sum_k \\hbar\\omega_k \\left({b_k}^\\dagger b_k+\\tfrac12\\right)"
},
{
"math_id": 29,
"text": "\\mathcal{H} = \\sum_k \\sum_{s = 1}^3 \\hbar \\, \\omega_{k,s} \\left( {b_{k,s}}^\\dagger b_{k,s} + \\tfrac12 \\right)."
},
{
"math_id": 30,
"text": "Q_k \\stackrel{\\mathrm{def}}{=} Q_{k+K} ;\\quad \\Pi_k \\stackrel{\\mathrm{def}}{=} \\Pi_{k + K}"
},
{
"math_id": 31,
"text": "K = \\frac{2n\\pi}{a}"
},
{
"math_id": 32,
"text": "n\\left(\\omega_{k,s}\\right) = \\frac{1}{\\exp\\left(\\dfrac{\\hbar\\omega_{k,s}}{k_\\mathrm{B}T}\\right) - 1}"
},
{
"math_id": 33,
"text": "\\mathcal{H} = \\tfrac12 \\sum_\\alpha\\left(p_\\alpha^2 + \\omega^2_\\alpha q_\\alpha^2 - \\hbar\\omega_\\alpha\\right)"
},
{
"math_id": 34,
"text": "\\mathcal{H} = \\sum_\\alpha\\hbar\\omega_\\alpha {a_\\alpha}^\\dagger a_\\alpha"
},
{
"math_id": 35,
"text": "{a_\\alpha}^\\dagger\\Big|n_1\\ldots n_{\\alpha -1}n_\\alpha n_{\\alpha +1}\\ldots\\Big\\rangle = \\sqrt{n_\\alpha +1}\\Big|n_1\\ldots,n_{\\alpha -1}, (n_\\alpha+1), n_{\\alpha+1}\\ldots\\Big\\rangle"
},
{
"math_id": 36,
"text": "a_\\alpha\\Big|n_1\\ldots n_{\\alpha -1}n_\\alpha n_{\\alpha +1}\\ldots\\Big\\rangle = \\sqrt{n_\\alpha}\\Big|n_1\\ldots,n_{\\alpha -1},(n_\\alpha-1),n_{\\alpha+1},\\ldots\\Big\\rangle"
},
{
"math_id": 37,
"text": "N = \\sum_\\alpha {a_\\alpha}^\\dagger a_\\alpha."
}
] | https://en.wikipedia.org/wiki?curid=85754 |
857564 | Discrete wavelet transform | Transform in numerical harmonic analysis
In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency "and" location information (location in time).
Examples.
Haar wavelets.
The first DWT was invented by Hungarian mathematician Alfréd Haar. For an input represented by a list of formula_0 numbers, the Haar wavelet transform may be considered to pair up input values, storing the difference and passing the sum. This process is repeated recursively, pairing up the sums to provide the next scale, which leads to formula_1 differences and a final sum.
Daubechies wavelets.
The most commonly used set of discrete wavelet transforms was formulated by the Belgian mathematician Ingrid Daubechies in 1988. This formulation is based on the use of recurrence relations to generate progressively finer discrete samplings of an implicit mother wavelet function; each resolution is twice that of the previous scale. In her seminal paper, Daubechies derives a family of wavelets, the first of which is the Haar wavelet. Interest in this field has exploded since then, and many variations of Daubechies' original wavelets were developed.
The dual-tree complex wavelet transform (DCWT).
The dual-tree complex wavelet transform (formula_2WT) is a relatively recent enhancement to the discrete wavelet transform (DWT), with important additional properties: It is nearly shift invariant and directionally selective in two and higher dimensions. It achieves this with a redundancy factor of only formula_3, substantially lower than the undecimated DWT. The multidimensional (M-D) dual-tree formula_2WT is nonseparable but is based on a computationally efficient, separable filter bank (FB).
Others.
Other forms of discrete wavelet transform include the Le Gall–Tabatabai (LGT) 5/3 wavelet developed by Didier Le Gall and Ali J. Tabatabai in 1988 (used in JPEG 2000 or JPEG XS), the Binomial QMF developed by Ali Naci Akansu in 1990, the set partitioning in hierarchical trees (SPIHT) algorithm developed by Amir Said with William A. Pearlman in 1996, the non- or undecimated wavelet transform (where downsampling is omitted), and the Newland transform (where an orthonormal basis of wavelets is formed from appropriately constructed top-hat filters in frequency space). Wavelet packet transforms are also related to the discrete wavelet transform. Complex wavelet transform is another form.
Properties.
The Haar DWT illustrates the desirable properties of wavelets in general. First, it can be performed in formula_4 operations; second, it captures not only a notion of the frequency content of the input, by examining it at different scales, but also temporal content, i.e. the times at which these frequencies occur. Combined, these two properties make the Fast wavelet transform (FWT) an alternative to the conventional fast Fourier transform (FFT).
Time issues.
Due to the rate-change operators in the filter bank, the discrete WT is not time-invariant but actually very sensitive to the alignment of the signal in time. To address the time-varying problem of wavelet transforms, Mallat and Zhong proposed a new algorithm for wavelet representation of a signal, which is invariant to time shifts. According to this algorithm, which is called a TI-DWT, only the scale parameter is sampled along the dyadic sequence 2^j (j∈Z) and the wavelet transform is calculated for each point in time.
Applications.
The discrete wavelet transform has a huge number of applications in science, engineering, mathematics and computer science. Most notably, it is used for signal coding, to represent a discrete signal in a more redundant form, often as a preconditioning for data compression. Practical applications can also be found in signal processing of accelerations for gait analysis, image processing, in digital communications and many others.
It has been shown that the discrete wavelet transform (discrete in scale and shift, and continuous in time) can be successfully implemented as an analog filter bank in biomedical signal processing for the design of low-power pacemakers and also in ultra-wideband (UWB) wireless communications.
Example in image processing.
Wavelets are often used to denoise two dimensional signals, such as images. The following example provides three steps to remove unwanted white Gaussian noise from the noisy image shown. Matlab was used to import and filter the image.
The first step is to choose a wavelet type, and a level N of decomposition. In this case biorthogonal 3.5 wavelets were chosen with a level N of 10. Biorthogonal wavelets are commonly used in image processing to detect and filter white Gaussian noise, due to their high contrast of neighboring pixel intensity values. Using these wavelets a wavelet transformation is performed on the two dimensional image.
Following the decomposition of the image file, the next step is to determine threshold values for each level from 1 to N. The Birgé-Massart strategy is a fairly common method for selecting these thresholds. Using this process, individual thresholds are made for the N = 10 levels. Applying these thresholds constitutes the majority of the actual filtering of the signal.
The final step is to reconstruct the image from the modified levels. This is accomplished using an inverse wavelet transform. The resulting image, with white Gaussian noise removed, is shown below the original image. When filtering any form of data it is important to quantify the signal-to-noise-ratio of the result. In this case, the SNR of the noisy image in comparison to the original was 30.4958%, and the SNR of the denoised image was 32.5525%. The resulting improvement of the wavelet filtering is an SNR gain of 2.0567%.
Choosing other wavelets, levels, and thresholding strategies can result in different types of filtering. In this example, white Gaussian noise was chosen to be removed. Although, with different thresholding, it could just as easily have been amplified.
To illustrate the differences and similarities between the discrete wavelet transform with the discrete Fourier transform, consider the DWT and DFT of the following sequence: (1,0,0,0), a unit impulse.
The DFT has orthogonal basis (DFT matrix):
formula_5
while the DWT with Haar wavelets for length 4 data has orthogonal basis in the rows of:
formula_6
Preliminary observations include:
formula_7
The DWT demonstrates the localization: the (1,1,1,1) term gives the average signal value, the (1,1,–1,–1) places the signal in the left side of the domain, and the
(1,–1,0,0) places it at the left side of the left side, and truncating at any stage yields a downsampled version of the signal:
formula_8
The DFT, by contrast, expresses the sequence by the interference of waves of various frequencies – thus truncating the series yields a low-pass filtered version of the series:
formula_9
Notably, the middle approximation (2-term) differs. From the frequency domain perspective, this is a better approximation, but from the time domain perspective it has drawbacks – it exhibits undershoot – one of the values is negative, though the original series is non-negative everywhere – and ringing, where the right side is non-zero, unlike in the wavelet transform. On the other hand, the Fourier approximation correctly shows a peak, and all points are within formula_10 of their correct value, though all points have error. The wavelet approximation, by contrast, places a peak on the left half, but has no peak at the first point, and while it is exactly correct for half the values (reflecting location), it has an error of formula_11 for the other values.
This illustrates the kinds of trade-offs between these transforms, and how in some respects the DWT provides preferable behavior, particularly for the modeling of transients.
Definition.
One level of the transform.
The DWT of a signal formula_12 is calculated by passing it through a series of filters. First the samples are passed through a low-pass filter with impulse response formula_13 resulting in a convolution of the two:
formula_14
The signal is also decomposed simultaneously using a high-pass filter formula_15. The outputs give the detail coefficients (from the high-pass filter) and approximation coefficients (from the low-pass). It is important that the two filters are related to each other and they are known as a quadrature mirror filter.
However, since half the frequencies of the signal have now been removed, half the samples can be discarded according to Nyquist's rule. The filter output of the low-pass filter formula_13 in the diagram above is then subsampled by 2 and further processed by passing it again through a new low-pass filter formula_13 and a high-pass filter formula_15 with half the cut-off frequency of the previous one, i.e.:
formula_16
formula_17
This decomposition has halved the time resolution since only half of each filter output characterises the signal. However, each output has half the frequency band of the input, so the frequency resolution has been doubled.
With the subsampling operator formula_18
formula_19
the above summation can be written more concisely.
formula_20
formula_21
However computing a complete convolution formula_22 with subsequent downsampling would waste computation time.
The Lifting scheme is an optimization where these two computations are interleaved.
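As a minimal sketch of one analysis level (not from the original article), the following Java program implements the two subsampled convolutions above verbatim, with zero-padding at the boundaries. It uses the normalized Haar pair "g" = {1/√2, 1/√2}, "h" = {1/√2, −1/√2}; note that practical implementations differ in the downsampling phase and in how they extend the signal at its edges:

public class OneLevelDwt {
    // y[n] = sum_k x[k] f[2n - k]: convolution with the filter f followed by downsampling by 2
    static double[] convolveDown2(double[] x, double[] f) {
        double[] y = new double[(x.length + f.length) / 2];
        for (int n = 0; n < y.length; n++) {
            double acc = 0.0;
            for (int j = 0; j < f.length; j++) {
                int k = 2 * n - j;            // index into x for which f[2n - k] = f[j]
                if (k >= 0 && k < x.length) { // zero-padding outside the signal
                    acc += x[k] * f[j];
                }
            }
            y[n] = acc;
        }
        return y;
    }

    public static void main(String[] args) {
        double s = Math.sqrt(0.5);
        double[] g = {s, s};  // low-pass (scaling) filter of the normalized Haar wavelet
        double[] h = {s, -s}; // high-pass (wavelet) filter
        double[] x = {1, 0, 0, 0};
        // Both outputs are {0.7071..., 0.0, 0.0}, matching the normalized Haar
        // coefficients of the unit impulse up to a trailing boundary zero
        System.out.println(java.util.Arrays.toString(convolveDown2(x, g)));
        System.out.println(java.util.Arrays.toString(convolveDown2(x, h)));
    }
}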
Cascading and filter banks.
This decomposition is repeated to further increase the frequency resolution and the approximation coefficients decomposed with high- and low-pass filters and then down-sampled. This is represented as a binary tree with nodes representing a sub-space with a different time-frequency localisation. The tree is known as a filter bank.
At each level in the above diagram the signal is decomposed into low and high frequencies. Due to the decomposition process the input signal must be a multiple of formula_0 where formula_23 is the number of levels.
For example, for a signal with 32 samples, a frequency range of 0 to formula_24 and 3 levels of decomposition, 4 output scales are produced: the level-1 detail coefficients cover the band formula_24/2 to formula_24, the level-2 detail coefficients cover formula_24/4 to formula_24/2, the level-3 detail coefficients cover formula_24/8 to formula_24/4, and the level-3 approximation coefficients cover the remaining band 0 to formula_24/8.
Relationship to the mother wavelet.
The filterbank implementation of wavelets can be interpreted as computing the wavelet coefficients of a discrete set of child wavelets for a given mother wavelet formula_25. In the case of the discrete wavelet transform, the mother wavelet is shifted and scaled by powers of two
formula_26
where formula_27 is the scale parameter and formula_28 is the shift parameter, both of which are integers.
Recall that the wavelet coefficient formula_29 of a signal formula_30 is the projection of formula_30 onto a wavelet, and let formula_30 be a signal of length formula_31. In the case of a child wavelet in the discrete family above,
formula_32
Now fix formula_27 at a particular scale, so that formula_33 is a function of formula_28 only. In light of the above equation, formula_34 can be viewed as a convolution of formula_30 with a dilated, reflected, and normalized version of the mother wavelet, formula_35, sampled at the points formula_36. But this is precisely what the detail coefficients give at level formula_27 of the discrete wavelet transform. Therefore, for an appropriate choice of formula_37 and formula_38, the detail coefficients of the filter bank correspond exactly to a wavelet coefficient of a discrete set of child wavelets for a given mother wavelet formula_25.
As an example, consider the discrete Haar wavelet, whose mother wavelet is formula_39. Then the dilated, reflected, and normalized version of this wavelet is formula_40, which is, indeed, the highpass decomposition filter for the discrete Haar wavelet transform.
Time complexity.
The filterbank implementation of the Discrete Wavelet Transform takes only O("N") in certain cases, as compared to O("N" log "N") for the fast Fourier transform.
Note that if formula_38 and formula_37 are both a constant length (i.e. their length is independent of N), then formula_41 and formula_42 each take O("N") time. The wavelet filterbank does each of these two O("N") convolutions, then splits the signal into two branches of size N/2. But it only recursively splits the upper branch convolved with formula_38 (as contrasted with the FFT, which recursively splits both the upper branch and the lower branch). This leads to the following recurrence relation
formula_43
which leads to an O("N") time for the entire operation, as can be shown by a geometric series expansion of the above relation.
As an example, the discrete Haar wavelet transform runs in linear time, since in that case formula_37 and formula_38 are of constant length 2.
formula_44
The locality of wavelets, coupled with the O("N") complexity, guarantees that the transform can be computed online (on a streaming basis). This property is in sharp contrast to FFT, which requires access to the entire signal at once. It also applies to the multi-scale transform and also to the multi-dimensional transforms (e.g., 2-D DWT).
Code example.
In its simplest form, the DWT is remarkably easy to compute.
The Haar wavelet in Java:
public static int[] discreteHaarWaveletTransform(int[] input) {
    // This function assumes that input.length = 2^n, n > 1
    int[] output = new int[input.length];
    for (int length = input.length / 2; ; length = length / 2) {
        // length is the current length of the working area of the output array.
        // length starts at half of the array size and every iteration is halved until it is 1.
        for (int i = 0; i < length; ++i) {
            int sum = input[i * 2] + input[i * 2 + 1];
            int difference = input[i * 2] - input[i * 2 + 1];
            output[i] = sum;
            output[length + i] = difference;
        }
        if (length == 1) {
            return output;
        }
        // Copy the sums back into the input to set up the next iteration
        System.arraycopy(output, 0, input, 0, length);
    }
}
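As a usage note (an observation about the code above, not from the original article): calling discreteHaarWaveletTransform(new int[]{1, 0, 0, 0}) returns {1, 1, 1, 0}, which are exactly the unnormalized inner products of the input with the four Haar basis rows (1,1,1,1), (1,1,−1,−1), (1,−1,0,0) and (0,0,1,−1) from the DWT-versus-DFT comparison earlier. Also note that the routine reuses the first half of the caller's input array as scratch space, so the input is modified in place.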
Complete Java code for a 1-D and 2-D DWT using Haar, Daubechies, Coiflet, and Legendre wavelets is available from the open source project: JWave.
Furthermore, a fast lifting implementation of the discrete biorthogonal CDF 9/7 wavelet transform in C, used in the JPEG 2000 image compression standard can be found here (archived 5 March 2012).
Example of above code.
This figure shows an example of applying the above code to compute the Haar wavelet coefficients on a sound waveform. This example highlights two key properties of the wavelet transform:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2^n"
},
{
"math_id": 1,
"text": "2^n-1"
},
{
"math_id": 2,
"text": "\\mathbb{C}"
},
{
"math_id": 3,
"text": "2^d"
},
{
"math_id": 4,
"text": "O(n)"
},
{
"math_id": 5,
"text": "\n\\begin{bmatrix}\n1 & 1 & 1 & 1\\\\\n1 & -i & -1 & i\\\\\n1 & -1 & 1 & -1\\\\\n1 & i & -1 & -i\n\\end{bmatrix}\n"
},
{
"math_id": 6,
"text": "\n\\begin{bmatrix}\n1 & 1 & 1 & 1\\\\\n1 & 1 & -1 & -1\\\\\n1 & -1 & 0 & 0\\\\\n0 & 0 & 1 & -1\n\\end{bmatrix}\n"
},
{
"math_id": 7,
"text": "\\begin{align}\n(1,0,0,0) &= \\frac{1}{4}(1,1,1,1) + \\frac{1}{4}(1,1,-1,-1) + \\frac{1}{2}(1,-1,0,0) \\qquad \\text{Haar DWT}\\\\\n(1,0,0,0) &= \\frac{1}{4}(1,1,1,1) + \\frac{1}{4}(1,i,-1,-i) + \\frac{1}{4}(1,-1,1,-1) + \\frac{1}{4}(1,-i,-1,i) \\qquad \\text{DFT}\n\\end{align}"
},
{
"math_id": 8,
"text": "\\begin{align}\n&\\left(\\frac{1}{4},\\frac{1}{4},\\frac{1}{4},\\frac{1}{4}\\right)\\\\\n&\\left(\\frac{1}{2},\\frac{1}{2},0,0\\right)\\qquad\\text{2-term truncation}\\\\\n&\\left(1,0,0,0\\right)\n\\end{align}"
},
{
"math_id": 9,
"text": "\\begin{align}\n&\\left(\\frac{1}{4},\\frac{1}{4},\\frac{1}{4},\\frac{1}{4}\\right)\\\\\n&\\left(\\frac{3}{4},\\frac{1}{4},-\\frac{1}{4},\\frac{1}{4}\\right)\\qquad\\text{2-term truncation}\\\\\n&\\left(1,0,0,0\\right)\n\\end{align}"
},
{
"math_id": 10,
"text": "1/4"
},
{
"math_id": 11,
"text": "1/2"
},
{
"math_id": 12,
"text": "x"
},
{
"math_id": 13,
"text": "g"
},
{
"math_id": 14,
"text": "y[n] = (x * g)[n] = \\sum\\limits_{k = - \\infty }^\\infty {x[k] g[n - k]} "
},
{
"math_id": 15,
"text": "h"
},
{
"math_id": 16,
"text": "y_{\\mathrm{low}} [n] = \\sum\\limits_{k = - \\infty }^\\infty {x[k] g[2 n - k]} "
},
{
"math_id": 17,
"text": "y_{\\mathrm{high}} [n] = \\sum\\limits_{k = - \\infty }^\\infty {x[k] h[2 n - k]} "
},
{
"math_id": 18,
"text": "\\downarrow"
},
{
"math_id": 19,
"text": "(y \\downarrow k)[n] = y[k n] "
},
{
"math_id": 20,
"text": "y_{\\mathrm{low}} = (x*g)\\downarrow 2 "
},
{
"math_id": 21,
"text": "y_{\\mathrm{high}} = (x*h)\\downarrow 2 "
},
{
"math_id": 22,
"text": "x*g"
},
{
"math_id": 23,
"text": "n"
},
{
"math_id": 24,
"text": "f_n"
},
{
"math_id": 25,
"text": "\\psi(t)"
},
{
"math_id": 26,
"text": " \\psi_{j,k}(t)= \\frac{1}{\\sqrt{2^j}} \\psi \\left( \\frac{t - k 2^j}{2^j} \\right) "
},
{
"math_id": 27,
"text": "j"
},
{
"math_id": 28,
"text": "k"
},
{
"math_id": 29,
"text": "\\gamma"
},
{
"math_id": 30,
"text": "x(t)"
},
{
"math_id": 31,
"text": "2^N"
},
{
"math_id": 32,
"text": " \\gamma_{jk} = \\int_{-\\infty}^{\\infty} x(t) \\frac{1}{\\sqrt{2^j}} \\psi \\left( \\frac{t - k 2^j}{2^j} \\right) dt "
},
{
"math_id": 33,
"text": " \\gamma_{jk} "
},
{
"math_id": 34,
"text": "\\gamma_{jk}"
},
{
"math_id": 35,
"text": "h(t) = \\frac{1}{\\sqrt{2^j}} \\psi \\left( \\frac{-t}{2^j} \\right) "
},
{
"math_id": 36,
"text": "1, 2^j, 2\\cdot{2^j}, ..., 2^{N}"
},
{
"math_id": 37,
"text": "h[n]"
},
{
"math_id": 38,
"text": "g[n]"
},
{
"math_id": 39,
"text": "\\psi = [1, -1]"
},
{
"math_id": 40,
"text": "h[n] = \\frac{1}{\\sqrt{2}} [-1, 1]"
},
{
"math_id": 41,
"text": "x * h"
},
{
"math_id": 42,
"text": "x * g"
},
{
"math_id": 43,
"text": "T(N) = 2N + T\\left( \\frac N 2 \\right)"
},
{
"math_id": 44,
"text": "h[n] = \\left[\\frac{-\\sqrt{2}}{2}, \\frac{\\sqrt{2}}{2}\\right] g[n] = \\left[\\frac{\\sqrt{2}}{2}, \\frac{\\sqrt{2}}{2}\\right]"
},
{
"math_id": 45,
"text": "{\\bf y} = f { {\\bf X} }"
},
{
"math_id": 46,
"text": "f"
},
{
"math_id": 47,
"text": "X"
},
{
"math_id": 48,
"text": "\\mathbb{E} X = 1"
},
{
"math_id": 49,
"text": "{\\cal W}"
},
{
"math_id": 50,
"text": "f { {\\bf X} } = f + {f ({\\bf X} -1)}"
},
{
"math_id": 51,
"text": "{\\cal W^+}"
},
{
"math_id": 52,
"text": "\n{\\cal W^+} {\\bf y} = {\\cal W^+} f + {\\cal W^+} {f ({\\bf X} -1)},\n"
},
{
"math_id": 53,
"text": " {\\cal W^+} {f ({\\bf X} -1)}"
},
{
"math_id": 54,
"text": "\n{\\cal W^\\times} {\\bf y} = \\left({\\cal W^\\times} f\\right) \\times \\left({\\cal W^\\times} { {\\bf X}}\\right).\n"
},
{
"math_id": 55,
"text": "\\alpha"
},
{
"math_id": 56,
"text": "c_{k} = \\alpha(y_{k} + y_{k-1})"
},
{
"math_id": 57,
"text": "d_{k} = \\alpha(y_{k} - y_{k-1})"
},
{
"math_id": 58,
"text": "c_{k}^\\ast = (y_{k} \\times y_{k-1})^\\alpha"
},
{
"math_id": 59,
"text": "d_{k}^\\ast = \\left(\\frac{y_{k}}{y_{k-1}}\\right)^\\alpha"
},
{
"math_id": 60,
"text": "{\\cal W^\\times}"
},
{
"math_id": 61,
"text": "[2^{N-j}, 2^{N-j+1}]"
},
{
"math_id": 62,
"text": " \\left[ \\frac{\\pi}{2^j}, \\frac{\\pi}{2^{j-1}} \\right]"
}
] | https://en.wikipedia.org/wiki?curid=857564 |
857766 | Annual percentage rate | Interest rate for a whole year
The term annual percentage rate of charge (APR), corresponding sometimes to a nominal APR and sometimes to an effective APR (EAPR), is the interest rate for a whole year (annualized), rather than just a monthly fee/rate, as applied on a loan, mortgage loan, credit card, etc. It is a finance charge expressed as an annual rate. Those terms have formal, legal definitions in some countries or legal jurisdictions, but in the United States:
* The "nominal APR" is the simple-interest rate (for a year).
* The "effective APR" is the fee+compound interest rate (calculated across a year).
In some areas, the "annual percentage rate" (APR) is the simplified counterpart to the effective interest rate that the borrower will pay on a loan. In many countries and jurisdictions, lenders (such as banks) are required to disclose the "cost" of borrowing in some standardized way as a form of consumer protection. The (effective) APR has been intended to make it easier to compare lenders and loan options.
Multiple definitions of effective APR.
The nominal APR is calculated as: the rate, for a payment period, multiplied by the number of payment periods in a year. However, the exact legal definition of "effective APR", or EAR, can vary greatly in each jurisdiction, depending on the type of fees included, such as participation fees, loan origination fees, monthly service charges, or late fees. The effective APR has been called the "mathematically-true" interest rate for each year.
The computation for the effective APR, as the fee + compound interest rate, can also vary depending on whether the up-front fees, such as origination or participation fees, are added to the entire amount, or treated as a short-term loan due in the first payment. When start-up fees are paid as first payment(s), the balance due might accrue more interest, as being delayed by the extra payment period(s).
There are at least three ways of computing effective annual percentage rate:
* by compounding the interest rate for each year, without considering fees;
* by adding the origination fees to the balance due, and treating the total as the basis for computing compound interest;
* by amortizing the origination fees as a short-term loan due in the first payment(s), with the remaining balance amortized as a second, longer-term loan.
For example, consider a $100 loan which must be repaid after one month, plus 5%, plus a $10 fee. If the fee is not considered, this loan has an effective APR of approximately 80% (1.05^12 = 1.7959, which is approximately an 80% increase). If the $10 fee were considered, the monthly cost rises by 10 percentage points ($10/$100) to 15%, and the effective APR becomes approximately 435% (1.15^12 = 5.3503, which equals a 435% increase). Hence there are at least two possible "effective APRs": 80% and 435%. Laws vary as to whether fees must be included in APR calculations.
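A short Java sketch (not from the original article) reproduces the two annualized figures by compounding each monthly cost over twelve months:

public class EffectiveAprExample {
    public static void main(String[] args) {
        double monthlyNoFee = 0.05;   // one-month cost without the fee: 5%
        double monthlyWithFee = 0.15; // with the $10 fee on $100: 15% per month
        // Compounding each monthly rate over 12 months gives the effective APR
        double aprNoFee = Math.pow(1.0 + monthlyNoFee, 12) - 1.0;     // ~0.7959, i.e. ~80%
        double aprWithFee = Math.pow(1.0 + monthlyWithFee, 12) - 1.0; // ~4.3503, i.e. ~435%
        System.out.printf("Without fee: %.1f%%   With fee: %.1f%%%n",
                100.0 * aprNoFee, 100.0 * aprWithFee);
    }
}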
United States.
In the U.S., the calculation and disclosure of APR is governed by the Truth in Lending Act (which is implemented by the Consumer Financial Protection Bureau (CFPB) in Regulation Z of the Act). In general, APR in the United States is expressed as the periodic (for instance, monthly) interest rate times the number of compounding periods in a year (also known as the nominal interest rate); since the APR must include certain non-interest charges and fees, it requires more detailed calculation. The APR must be disclosed to the borrower within 3 days of applying for a mortgage. This information is typically mailed to the borrower and the APR is found on the truth in lending disclosure statement, which also includes an amortization schedule.
On July 30, 2009, provisions of the Mortgage Disclosure Improvement Act of 2008 (MDIA) came into effect. A specific clause of this act refers directly to APR disclosure on mortgages. It states that if the final annual percentage rate (APR) is off by more than 0.125% from the initial good faith estimate (GFE) disclosure, then the lender must re-disclose and wait another three business days before closing on the transaction.
The calculation for "close-ended credit" (such as a home mortgage or auto loan) can be found here. For a fixed-rate mortgage, the APR is thus equal to its internal rate of return (or yield) under an assumption of zero prepayment and zero default. For an adjustable-rate mortgage the APR will also depend on the particular assumption regarding the prospective trajectory of the index rate.
The calculation for "open-ended credit" (such as a credit card, home equity loan or other line of credit) can be found here.
European Union.
In the EU, the focus of APR standardization is heavily on transparency and consumer rights: «a comprehensible set of information to be given to consumers in good time before the contract is concluded and also as part of the credit agreement [...] every creditor has to use this form when marketing a consumer credit in any Member State» so marketing different figures is not allowed.
The EU regulations were reinforced with directives 2008/48/EC and 2011/90/EU, fully in force in all member states since 2013.
However, in the UK the EU directive has been interpreted as the Representative APR.
A single method of calculating the APR was introduced in 1998 (directive 98/7/EC) and is required to be published for the major part of loans. Using the improved notation of directive 2008/48/EC, the basic equation for calculation of APR in the EU is:
formula_0
where:
"M" is the total number of drawdowns paid by the lender
"N" is the total number of repayments paid by the borrower
"i" is the sequence number of a drawdown paid by the lender
"j" is the sequence number of a repayment paid by the borrower
"Ci" is the cash flow amount for drawdown number "i"
"Dj" is the cash flow amount for repayment number "j"
"ti" is the interval, expressed in years and fractions of a year, between the date of the first drawdown* and the date of drawdown "i"
"sj" is the interval, expressed in years and fractions of a year, between the date of the first drawdown* and the date of repayment "j".
In this equation the left side is the present value of the drawdowns made by the lender and the right side is the present value of the repayments made by the borrower. In both cases the present value is defined given the APR as the interest rate. So the present value of the drawdowns is equal to the present value of the repayments, given the APR as the interest rate.
Note that neither the amounts nor the periods between transactions are necessarily equal. For the purposes of this calculation, a year is presumed to have 365 days (366 days for leap years), 52 weeks or 12 equal months. As per the standard: "An equal month is presumed to have 30.41666 days (i.e. 365/12) regardless of whether or not it is a leap year." The result is to be expressed to at least one decimal place. This algorithm for APR is required for some but not all forms of consumer debt in the EU. For example, this EU directive is limited to agreements of €50,000 and below and excludes all mortgages.
In the Netherlands the formula above is also used for mortgages. In many cases the mortgage is not always paid back completely at the end of period "N", but for instance when the borrower sells his house or dies. In addition, there is usually only one payment of the lender to the borrower: in the beginning of the loan. In that case the formula becomes:
formula_1
where:
"S" is the borrowed amount or principal amount.
"A" is the prepaid onetime fee
"R" the rest debt, the amount that remains as an interest-only loan after the last cash flow.
If the lengths of the periods are equal (monthly payments) then the summations can be simplified using the formula for a geometric series. Either way, the APR can only be solved for iteratively from the formulas above, apart from trivial cases such as "N" = 1.
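As a minimal sketch of such an iterative solution (not from the original article; the cash-flow values are illustrative assumptions), the following Java program solves the basic equation for a single drawdown "S" at time zero and "N" equal monthly repayments "D" by bisection:

public class EuAprSolver {
    // Present value of N equal monthly repayments D at annual rate x, minus the drawdown S
    static double gap(double x, double S, double D, int N) {
        double pv = 0.0;
        for (int j = 1; j <= N; j++) {
            pv += D * Math.pow(1.0 + x, -j / 12.0); // s_j = j/12 years after the drawdown
        }
        return pv - S;
    }

    public static void main(String[] args) {
        double S = 1000.0, D = 90.0; // illustrative: borrow 1000, repay 90 per month
        int N = 12;
        double lo = 0.0, hi = 10.0;  // bracket between 0% and 1000% per year
        for (int iter = 0; iter < 100; iter++) { // bisection: the present value falls as x rises
            double mid = 0.5 * (lo + hi);
            if (gap(mid, S, D, N) > 0.0) lo = mid; else hi = mid;
        }
        System.out.printf("APR ~= %.2f%%%n", 100.0 * 0.5 * (lo + hi)); // about 15.4% here
    }
}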
Rate format.
An effective annual interest rate of 10% can also be expressed in several ways:
* 0.7974% effective monthly interest rate, because 1.007974^12 = 1.10
* 9.569% annual interest rate compounded monthly, because 12 × 0.7974 = 9.569
* 9.091% annual rate in advance, because (1.10 − 1)/1.10 = 0.0909
These rates are all equivalent, but to a consumer who is not trained in the mathematics of finance, this can be confusing. APR helps to standardize how interest rates are compared, so that a 10% loan is not made to look cheaper by calling it a loan at "9.1% annually in advance".
The APR does not necessarily convey the total amount of interest paid over the course of a year: if one pays part of the interest prior to the end of the year, the total amount of interest paid is less.
In the case of a loan with no fees, the amortization schedule would be worked out by taking the principal left at the end of each month, multiplying by the monthly rate and then subtracting the monthly payment.
This can be expressed mathematically by
formula_2
where:
"p" is the payment made each period
"P0" is the initial principal
"r" is the percentage rate used each payment
"n" is the number of payments
This also explains why a 15-year mortgage and a 30-year mortgage with the same APR would have different monthly payments and a different total amount of interest paid. There are many more periods over which to spread the principal, which makes the payment smaller, but there are just as many periods over which to charge interest at the same rate, which makes the total amount of interest paid much greater. For example, $100,000 mortgaged at a 10% APR (without fees, since they add into the calculation in a different way) over 15 years costs a total of $193,429.80 (interest is 93.430% of principal), but over 30 years, costs a total of $315,925.20 (interest is 215.925% of principal).
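A short Java sketch (not from the original article; the 10% monthly-compounded rate is the one that reproduces the totals above) computes both schedules from the amortization relation formula_2 rearranged for the payment:

public class AmortizationComparison {
    // Level payment p = P0 * r / (1 - (1 + r)^-n) for principal P0, periodic rate r, n payments
    static double payment(double principal, double r, int n) {
        return principal * r / (1.0 - Math.pow(1.0 + r, -n));
    }

    public static void main(String[] args) {
        double principal = 100_000.0;
        double r = 0.10 / 12.0; // 10% APR, compounded monthly
        for (int years : new int[]{15, 30}) {
            int n = 12 * years;
            double p = payment(principal, r, n);
            // 15 years: ~$1,074.61/month, ~$193,430 total; 30 years: ~$877.57/month, ~$315,925
            System.out.printf("%d years: payment $%.2f, total paid $%.2f, interest $%.2f%n",
                    years, p, p * n, p * n - principal);
        }
    }
}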
In addition the APR takes costs into account. Suppose for instance that $100,000 is borrowed with $1000 one-time fees paid in advance. If equal monthly payments are made of $946.01 against 9.569% compounded monthly, then it takes 240 months to pay the loan back. If the $1000 one-time fees are taken into account then the yearly interest rate paid is effectively equal to 10.31%.
The APR concept can also be applied to savings accounts: imagine a savings account with 1% costs at each withdrawal and again 9.569% interest compounded monthly. Suppose that the complete amount including the interest is withdrawn after exactly one year. Then, taking this 1% fee into account, the savings effectively earned 8.9% interest that year.
Money factor.
The APR can also be represented by a money factor (also known as the lease factor, lease rate, or factor). The money factor is usually given as a decimal, for example .0030. To find the equivalent APR, the money factor is multiplied by 2400. A money factor of .0030 is equivalent to a monthly interest rate of 0.6% and an APR of 7.2%.
For a leasing arrangement with an initial capital cost of "C", a residual value at the end of the lease of "F" and a monthly interest rate of "r", monthly interest starts at "Cr" and decreases almost linearly during the term of the lease to a final value of "Fr". The total amount of interest paid over the lease term of "N" months is therefore
formula_3
and the average interest amount per month is
formula_4
This amount is called the "monthly finance fee". The factor "r"/2 is called the "money factor".
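A short sketch of these conversions (the lease parameters "C", "F", and "N" are arbitrary illustrative values):
<syntaxhighlight lang="python">
money_factor = 0.0030
monthly_rate = 2 * money_factor        # r is twice the money factor
apr_percent = money_factor * 2400      # 7.2 for a factor of .0030
print(monthly_rate, apr_percent)       # 0.006 7.2

# Average monthly finance fee (C + F) * r / 2 for an illustrative lease
C, F, N = 30_000, 18_000, 36           # capital cost, residual, months
fee = (C + F) * monthly_rate / 2
print(fee, fee * N)                    # monthly finance fee, total interest
</syntaxhighlight>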
Failings in the United States.
Despite repeated attempts by regulators to establish usable and consistent standards, APR does not represent the total cost of borrowing in some jurisdictions nor does it really create a comparable standard across jurisdictions. Nevertheless, it is considered a reasonable starting point for an "ad hoc" comparison of lenders.
Nominal APR does not reflect the true cost.
Credit card holders should be aware that most U.S. credit cards are quoted in terms of nominal APR compounded monthly, which is not the same as the effective annual rate (EAR). Despite the word "annual" in APR, it is not necessarily a direct reference for the interest rate paid on a stable balance over one year. The more direct reference for the one-year rate of interest is EAR.
The general conversion factor for APR to EAR is
formula_5,
where "n" represents the number of compounding periods of the APR per EAR period.
As an example, for a common credit card quoted at 12.99% APR compounded monthly, the one year EAR is
formula_6,
or 13.7975%.
For 12.99% APR compounded daily, the EAR paid on a stable balance over one year becomes 13.87% (here the 0.0049-percentage-point addition to the 12.99% APR, as in the 12.9949% used above, is possible because the resulting rate still rounds down to the advertised APR). Note that a high U.S. APR of 29.99% compounded monthly carries an effective annual rate of 34.48%.
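These conversions are straightforward to check numerically; a minimal sketch (the 0.129949 input matches the rate used in the worked example above):
<syntaxhighlight lang="python">
def ear(apr, n):
    """Effective annual rate for a nominal APR compounded n times a year."""
    return (1 + apr / n)**n - 1

print(ear(0.129949, 12))   # ~0.137975: 12.99% APR monthly -> 13.7975% EAR
print(ear(0.1299, 365))    # ~0.1387:   the same APR compounded daily
print(ear(0.2999, 12))     # ~0.3448:   29.99% APR monthly -> 34.48% EAR
</syntaxhighlight>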
While the difference between APR and EAR may seem trivial, because of the exponential nature of interest these small differences can have a large effect over the life of a loan. For example, consider a 30-year loan of $200,000 with a stated APR of 10.00%, i.e., 10.0049% APR or the EAR equivalent of 10.4767%. The monthly payments, using APR, would be $1755.87. However, using an EAR of 10.00% the monthly payment would be $1691.78. The difference between the EAR and APR amounts to a difference of $64.09 per month. Over the life of a 30-year loan, this amounts to $23,070.86, which is over 11% of the original loan amount.
Certain fees are not considered.
Some classes of fees are deliberately not included in the calculation of APR. Because these fees are not included, some consumer advocates claim that the APR does not represent the "total" cost of borrowing. Excluded fees may include:
Lenders argue that the real estate attorney's fee, for example, is a pass-through cost, not a cost of the lending. In effect, they are arguing that the attorney's fee is a separate transaction and not a part of the loan. Consumer advocates argue that this would be true if the customer is free to select which attorney is used. If the lender insists, however, on using a specific attorney, the cost should be looked at as a component of the total cost of doing business with that lender. This area is made more complicated by the practice of contingency fees – for example, when the lender receives money from the attorney and other agents in exchange for being the provider the lender designates. Because of this, U.S. regulators require all lenders to produce an affiliated business disclosure form which shows the amounts paid between the lender and the appraisal firms, attorneys, etc.
Lenders argue that including late fees and other conditional charges would require them to make assumptions about the consumer's behavior – assumptions which would bias the resulting calculation and create more confusion than clarity.
Not a comparable standard.
Even beyond the non-included cost components listed above, regulators have been unable to completely define which one-time fees must be included and which excluded from the calculation. This leaves the lender with some discretion to determine which fees will be included (or not) in the calculation.
Consumers can, of course, use the nominal interest rate and any costs on the loan (or savings account) and compute the APR themselves, for instance using one of the calculators on the internet.
In the example of a mortgage loan, the various kinds of fees are each either always included, sometimes included, or never included in the APR calculation.
The discretion that is illustrated in the "sometimes included" column even in the highly regulated U.S. home mortgage environment makes it difficult to simply compare the APRs of two lenders. Note: U.S. regulators generally require a lender to use the same assumptions and definitions in their calculation of APR for each of their products even though they cannot force consistency across lenders.
With respect to items that may be sold with vendor financing, for example, automobile leasing, the notional cost of the good may effectively be hidden and the APR subsequently rendered meaningless. An example is a case where an automobile is leased to a customer based on a "manufacturer's suggested retail price" with a low APR: the vendor may be accepting a lower lease rate as a trade-off against a higher sale price. Had the customer self-financed, a discounted sales price may have been accepted by the vendor; in other words, the customer has received cheap financing in exchange for paying a higher purchase price, and the quoted APR understates the true cost of the financing. In this case, the only meaningful way to establish the "true" APR would involve arranging financing through other sources, determining the lowest-acceptable cash price and comparing the financing terms (which may not be feasible in all circumstances). For leases where the lessee has a purchase option at the end of the lease term, the cost of the APR is further complicated by this option. In effect, the lease includes a put option back to the manufacturer (or, alternatively, a call option for the consumer), and the value (or cost) of this option to the consumer is not transparent.
Dependence on loan period.
APR is dependent on the time period for which the loan is calculated. That is, the APR for a 30-year loan cannot be compared to the APR for a 20-year loan. APR "can" be used to show the relative impact of different payment schedules (such as balloon payments or biweekly payments instead of straight monthly payments), but most standard APR calculators have difficulty with those calculations.
Furthermore, most APR calculators assume that an individual will keep a particular loan until the end of the defined repayment period, resulting in the up-front fixed closing costs being amortized over the full term of the loan. If the consumer pays the loan off early, the effective interest rate achieved will be significantly higher than the APR initially calculated. This is especially problematic for mortgage loans, where typical loan repayment periods are 15 or 30 years but where many borrowers move or refinance before the loan period runs out, which increases the borrower's effective cost for any points or other origination fees.
In theory, this factor should not affect any individual consumer's ability to compare the APR of the same product (same repayment period and origination fees) across vendors. APR may not, however, be particularly helpful when attempting to compare different products, or similar products with different terms.
Interest-only loans.
Since the principal loan balance is not paid down during the interest-only term, assuming there are no set up costs, the APR will be the same as the interest rate.
Three lenders with identical information may still calculate different APRs. The calculations can be quite complex and are poorly understood even by most financial professionals. Most users depend on software packages to calculate APR and are therefore dependent on the assumptions in that particular software package. While differences between software packages will not result in large variations, there are several acceptable methods of calculating APR, each of which returns a slightly different result.
Limitations.
While the APR provides a useful means to compare the cost of borrowing across different loan and credit offers, it has several limitations that may affect its accuracy and relevance for certain types of loans.
Misleading Measures for Short-term Loans.
There are instances where APR may be misleading or an inaccurate measure of borrowing costs. It is argued that the APR can be misleading when applied to small-dollar loans, such as payday loans, because it does not accurately represent the true cost of borrowing for short-term financial products. While effective for comparing costs of longer-term loans, APR exaggerates the expense associated with short-term, small-dollar loans, thus potentially misleading consumers about the actual costs they will incur.
In a paper by Thomas W. Miller Jr. at the Mercatus Center, it is highlighted that while interest rate caps are often proposed as a means to combat "predatory" lending practices associated with high APRs on small-dollar loans, such regulatory measures overlook potential adverse effects. The analysis suggests that a 36 percent interest rate cap could lead to a scarcity of available loans, as the caps may cause demand to surpass supply and prompt lenders to redirect capital away from small-dollar lending markets. This shift could effectively result in an implicit prohibition of products like payday loans by rendering them financially unsustainable.
Inaccuracy in Mortgage Comparisons.
APR may not accurately reflect the cost of borrowing for certain mortgage types, such as those with non-standard repayment structures. APR calculations, which aim to provide a comprehensive cost measure by including interest rates and other fees, might not capture the complexities or the true costs of mortgages that deviate from traditional, fixed-rate, amortizing loans. This discrepancy arises because APR is designed under the assumption of a standard loan structure, potentially misleading consumers about the financial implications of mortgages with variable rates, interest-only periods, or other unique features.
Exclusion of Junk Fees.
APR does not encompass all fees associated with a loan, particularly "junk fees." These excluded fees can include various types of non-interest charges such as certain closing costs, which are not reflected in the APR calculation. This exclusion can mislead consumers about the true cost of borrowing, as the APR presents a narrower scope of expenses than what the borrower may eventually pay.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{i=1}^M C_i (1 + \\mathrm{APR}/100)^{-t_i} = \\sum_{j=1}^N D_j (1 + \\mathrm{APR}/100)^{-s_j}"
},
{
"math_id": 1,
"text": "S -A = R (1 + \\mathrm{APR}/100)^{-t_N} + \\sum_{k=1}^N A_k (1 + \\mathrm{APR}/100)^{-t_k}"
},
{
"math_id": 2,
"text": " p = \\frac{P_0\\cdot r\\cdot (1+r)^n}{(1+r)^n-1} "
},
{
"math_id": 3,
"text": "\\frac{N(Cr+Fr)}{2}\\, ,"
},
{
"math_id": 4,
"text": "\\frac{(C+F)r}{2}\\, ."
},
{
"math_id": 5,
"text": " \\mathrm{EAR} = (1 + \\tfrac{\\mathrm{APR}}{n})^n - 1 "
},
{
"math_id": 6,
"text": " (1+\\tfrac{0.129949}{12})^{12} - 1 "
}
] | https://en.wikipedia.org/wiki?curid=857766 |
857780 | Entropic uncertainty | Concept in information theory
In quantum mechanics, information theory, and Fourier analysis, the entropic uncertainty or Hirschman uncertainty is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is "stronger" than the usual statement of the uncertainty principle in terms of the product of standard deviations.
In 1957, Hirschman considered a function "f" and its Fourier transform "g" such that
formula_0
where the "≈" indicates convergence in L2, and normalized so that (by Plancherel's theorem),
formula_1
He showed that for any such functions the sum of the Shannon entropies is non-negative,
formula_2
A tighter bound,
formula_3
was conjectured by Hirschman and Everett, proven in 1975 by W. Beckner and in the same year interpreted as a generalized quantum mechanical uncertainty principle by Białynicki-Birula and Mycielski.
The equality holds in the case of Gaussian distributions.
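This saturation can be checked numerically. A minimal sketch, using a unit-norm Gaussian and the e^{−2πixy} transform convention above (grid sizes are arbitrary choices):
<syntaxhighlight lang="python">
import numpy as np

N, L = 4096, 40.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
f = np.pi**-0.25 * np.exp(-x**2 / 2)      # unit-norm Gaussian

# Fourier transform with the e^{-2*pi*i*x*y} convention used above
y = np.linspace(-2, 2, 801)
dy = y[1] - y[0]
g = np.array([np.sum(f * np.exp(-2j * np.pi * x * yy)) * dx for yy in y])

def H(p, d):                               # differential Shannon entropy
    p = np.clip(p, 1e-300, None)
    return -np.sum(p * np.log(p)) * d

print(H(np.abs(f)**2, dx) + H(np.abs(g)**2, dy), np.log(np.e / 2))
# both ~0.3069: the Gaussian attains H(|f|^2) + H(|g|^2) = log(e/2)
</syntaxhighlight>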
Note, however, that the above entropic uncertainty function is distinctly "different" from the quantum Von Neumann entropy represented in phase space.
Sketch of proof.
The proof of this tight inequality depends on the so-called ("q", "p")-norm of the Fourier transformation. (Establishing this norm is the most difficult part of the proof.)
From this norm, one is able to establish a lower bound on the sum of the (differential) Rényi entropies, "Hα(|f|²) + Hβ(|g|²)", where "1/α + 1/β" = 2, which generalize the Shannon entropies. For simplicity, we consider this inequality only in one dimension; the extension to multiple dimensions is straightforward and can be found in the literature cited.
Babenko–Beckner inequality.
The ("q", "p")-norm of the Fourier transform is defined to be
formula_4 where formula_5 and formula_6
In 1961, Babenko found this norm for "even" integer values of "q". Finally, in 1975,
using Hermite functions as eigenfunctions of the Fourier transform, Beckner proved that the value of this norm (in one dimension) for all "q" ≥ 2 is
formula_7
Thus we have the Babenko–Beckner inequality that
formula_8
Rényi entropy bound.
From this inequality, an expression of the uncertainty principle in terms of the Rényi entropy can be derived.
Letting formula_9, 2"α" = "p", and 2"β" = "q", so that "1/α + 1/β" = 2 and 1/2 < "α" < 1 < "β", we have
formula_10
Squaring both sides and taking the logarithm, we get
formula_11
Multiplying both sides by
formula_12
reverses the sense of the inequality,
formula_13
Rearranging terms, finally yields an inequality in terms of the sum of the Rényi entropies,
formula_14
formula_15
Note that this inequality is symmetric with respect to "α" and "β": one no longer needs to assume that "α" < "β"; only that they are positive and not both one, and that "1/α + 1/β" = 2. To see this symmetry, simply exchange the roles of "i" and −"i" in the Fourier transform.
Shannon entropy bound.
Taking the limit of this last inequality as "α, β" → 1 yields the less general Shannon entropy inequality,
formula_16
valid for any base of logarithm, as long as we choose an appropriate unit of information, bit, nat, etc.
The constant will be different, though, for a different normalization of the Fourier transform (such as is usually used in physics, with normalizations chosen so that "ħ" = 1), i.e.,
formula_17
In this case, the dilation of the Fourier transform absolute squared by a factor of 2π simply adds log(2π) to its entropy.
Entropy versus variance bounds.
The Gaussian or normal probability distribution plays an important role in the relationship between variance and entropy: it is a problem of the calculus of variations to show that this distribution maximizes entropy for a given variance, and at the same time minimizes the variance for a given entropy. In fact, for any probability density function formula_18 on the real line, Shannon's entropy inequality specifies:
formula_19
where "H" is the Shannon entropy and "V" is the variance, an inequality that is saturated only in the case of a normal distribution.
Moreover, the Fourier transform of a Gaussian probability amplitude function is also Gaussian—and the absolute squares of both of these are Gaussian, too. This can then be used to derive the usual Robertson variance uncertainty inequality from the above entropic inequality, enabling "the latter to be tighter than the former". That is (for "ħ"=1), exponentiating the Hirschman inequality and using Shannon's expression above,
formula_20
Hirschman explained that entropy—his version of entropy was the negative of Shannon's—is a "measure of the concentration of [a probability distribution] in a set of small measure." Thus "a low or large negative Shannon entropy means that a considerable mass of the probability distribution is confined to a set of small measure".
Note that this set of small measure need not be contiguous; a probability distribution can have several concentrations of mass in intervals of small measure, and the entropy may still be low no matter how widely scattered those intervals are. This is not the case with the variance: variance measures the concentration of mass about the mean of the distribution, and a low variance means that a considerable mass of the probability distribution is concentrated in a "contiguous interval" of small measure.
To formalize this distinction, we say that two probability density functions formula_21 and formula_22 are equimeasurable if
formula_23
where μ is the Lebesgue measure. Any two equimeasurable probability density functions have the same Shannon entropy, and in fact the same Rényi entropy of any order. The same is not true of variance, however. Any probability density function has a radially decreasing equimeasurable "rearrangement" whose variance is less (up to translation) than that of any other rearrangement of the function; and there exist rearrangements of arbitrarily high variance (all having the same entropy).
{
"math_id": 0,
"text": "g(y) \\approx \\int_{-\\infty}^\\infty \\exp (-2\\pi ixy) f(x)\\, dx,\\qquad f(x) \\approx \\int_{-\\infty}^\\infty \\exp (2\\pi ixy) g(y)\\, dy ~,"
},
{
"math_id": 1,
"text": " \\int_{-\\infty}^\\infty |f(x)|^2\\, dx = \\int_{-\\infty}^\\infty |g(y)|^2 \\,dy = 1~."
},
{
"math_id": 2,
"text": " H(|f|^2) + H(|g|^2) \\equiv - \\int_{-\\infty}^\\infty |f(x)|^2 \\log |f(x)|^2\\, dx - \\int_{-\\infty}^\\infty |g(y)|^2 \\log |g(y)|^2 \\,dy \\ge 0. "
},
{
"math_id": 3,
"text": " H(|f|^2) + H(|g|^2) \\ge \\log \\frac e 2 ~,"
},
{
"math_id": 4,
"text": "\\|\\mathcal F\\|_{q,p} = \\sup_{f\\in L^p(\\mathbb R)} \\frac{\\|\\mathcal Ff\\|_q}{\\|f\\|_p},"
},
{
"math_id": 5,
"text": "1 < p \\le 2~,"
},
{
"math_id": 6,
"text": "\\frac 1 p + \\frac 1 q = 1."
},
{
"math_id": 7,
"text": "\\|\\mathcal F\\|_{q,p} = \\sqrt{p^{1/p}/q^{1/q}}."
},
{
"math_id": 8,
"text": "\\|\\mathcal Ff\\|_q \\le \\left(p^{1/p}/q^{1/q}\\right)^{1/2} \\|f\\|_p."
},
{
"math_id": 9,
"text": "g=\\mathcal Ff"
},
{
"math_id": 10,
"text": "\\left(\\int_{\\mathbb R} |g(y)|^{2\\beta}\\,dy\\right)^{1/2\\beta}\n \\le \\frac{(2\\alpha)^{1/4\\alpha}}{(2\\beta)^{1/4\\beta}}\n \\left(\\int_{\\mathbb R} |f(x)|^{2\\alpha}\\,dx\\right)^{1/2\\alpha}.\n"
},
{
"math_id": 11,
"text": "\\frac 1\\beta \\log\\left(\\int_{\\mathbb R} |g(y)|^{2\\beta}\\,dy\\right)\n \\le \\frac 1 2 \\log\\frac{(2\\alpha)^{1/\\alpha}}{(2\\beta)^{1/\\beta}}\n + \\frac 1\\alpha \\log \\left(\\int_{\\mathbb R} |f(x)|^{2\\alpha}\\,dx\\right).\n"
},
{
"math_id": 12,
"text": "\\frac{\\beta}{1-\\beta}=-\\frac{\\alpha}{1-\\alpha}"
},
{
"math_id": 13,
"text": "\\frac {1}{1-\\beta} \\log\\left(\\int_{\\mathbb R} |g(y)|^{2\\beta}\\,dy\\right)\n \\ge \\frac\\alpha{2(\\alpha-1)}\\log\\frac{(2\\alpha)^{1/\\alpha}}{(2\\beta)^{1/\\beta}}\n - \\frac{1}{1-\\alpha} \\log \\left(\\int_{\\mathbb R} |f(x)|^{2\\alpha}\\,dx\\right) ~.\n"
},
{
"math_id": 14,
"text": "\\frac{1}{1-\\alpha} \\log \\left(\\int_{\\mathbb R} |f(x)|^{2\\alpha}\\,dx\\right)\n + \\frac {1}{1-\\beta} \\log\\left(\\int_{\\mathbb R} |g(y)|^{2\\beta}\\,dy\\right)\n \\ge \\frac\\alpha{2(\\alpha-1)}\\log\\frac{(2\\alpha)^{1/\\alpha}}{(2\\beta)^{1/\\beta}};\n"
},
{
"math_id": 15,
"text": " H_\\alpha(|f|^2) + H_\\beta(|g|^2) \\ge \\frac 1 2 \\left(\\frac{\\log\\alpha}{\\alpha-1}+\\frac{\\log\\beta}{\\beta-1}\\right) - \\log 2 ~."
},
{
"math_id": 16,
"text": "H(|f|^2) + H(|g|^2) \\ge \\log\\frac e 2,\\quad\\textrm{where}\\quad g(y) \\approx \\int_{\\mathbb R} e^{-2\\pi ixy}f(x)\\,dx~,"
},
{
"math_id": 17,
"text": "H(|f|^2) + H(|g|^2) \\ge \\log(\\pi e)\\quad\\textrm{for}\\quad g(y) \\approx \\frac 1{\\sqrt{2\\pi}}\\int_{\\mathbb R} e^{-ixy}f(x)\\,dx~."
},
{
"math_id": 18,
"text": "\\phi"
},
{
"math_id": 19,
"text": "H(\\phi) \\le \\log \\sqrt {2\\pi eV(\\phi)},"
},
{
"math_id": 20,
"text": "1/2 \\le \\exp (H(|f|^2)+H(|g|^2)) /(2e\\pi) \\le \\sqrt {V(|f|^2)V(|g|^2)}~."
},
{
"math_id": 21,
"text": "\\phi_1"
},
{
"math_id": 22,
"text": "\\phi_2"
},
{
"math_id": 23,
"text": "\\forall \\delta > 0,\\,\\mu\\{x\\in\\mathbb R|\\phi_1(x)\\ge\\delta\\} = \\mu\\{x\\in\\mathbb R|\\phi_2(x)\\ge\\delta\\},"
}
] | https://en.wikipedia.org/wiki?curid=857780 |
8578041 | Ninety-One (solitaire) | Solitaire card game
Ninety-One is a solitaire card game which is played using a deck of playing cards. The object of this game is to move cards so the top cards of the piles total to 91, hence the name.
Rules.
Thirteen piles of four cards each are dealt. Only one card can be moved at a time, and only the top cards of the piles are counted. Cards are transferred without any regard to suit or value. Spot cards (cards from ace to ten) are taken at their face value, while jacks are valued at 11, queens at 12, and kings at 13.
The game is won when the top cards of the thirteen piles have a total value of 91. The easiest combination to obtain is a sequence of thirteen cards from ace to king (formula_0). But there are many other combinations that add up to 91, such as four kings, four aces, three fives, and two tens for instance (formula_1). It is up to the player how to figure out those combinations.
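A small sketch verifies the combinations named above and counts, by dynamic programming, how many multisets of thirteen top-card values (at most four cards of each rank in the deck) total 91:
<syntaxhighlight lang="python">
from functools import lru_cache

assert sum(range(1, 14)) == 91                 # ace through king
assert 4*13 + 4*1 + 3*5 + 2*10 == 91           # 4 kings, 4 aces, 3 fives, 2 tens

@lru_cache(maxsize=None)
def ways(rank, piles, total):
    """Multisets of `piles` top cards with values summing to `total`,
    using ranks >= `rank`, each rank at most 4 times."""
    if rank > 13:
        return int(piles == 0 and total == 0)
    return sum(ways(rank + 1, piles - k, total - k * rank)
               for k in range(5) if k <= piles and k * rank <= total)

print(ways(1, 13, 91))   # number of distinct winning top-card multisets
</syntaxhighlight>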
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "91=1+2+3+4+5+6+7+8+9+10+11+12+13"
},
{
"math_id": 1,
"text": "91=13*4+1*4+3*5+10*2=52+4+15+20"
}
] | https://en.wikipedia.org/wiki?curid=8578041 |
857851 | Ricker wavelet | Wavelet proportional to the second derivative of a Gaussian
In mathematics and numerical analysis, the Ricker wavelet
formula_0
is the negative normalized second derivative of a Gaussian function, i.e., up to scale and normalization, the second Hermite function. It is a special case of the family of continuous wavelets (wavelets used in a continuous wavelet transform) known as Hermitian wavelets. The Ricker wavelet is frequently employed to model seismic data, and as a broad-spectrum source term in computational electrodynamics. It is usually referred to as the Mexican hat wavelet only in the Americas, due to taking the shape of a sombrero when used as a 2D image processing kernel. It is also known as the Marr wavelet after David Marr.
In two dimensions, the Mexican hat wavelet is given by
formula_1
The multidimensional generalization of this wavelet is called the Laplacian of Gaussian function. In practice, this wavelet is sometimes approximated by the difference of Gaussians (DoG) function, because the DoG is separable and can therefore save considerable computation time in two or more dimensions. The scale normalized Laplacian (in formula_2-norm) is frequently used as a blob detector and for automatic scale selection in computer vision applications; see Laplacian of Gaussian and scale space. The relation between this Laplacian of the Gaussian operator and the difference-of-Gaussians operator is explained in appendix A in Lindeberg (2015). The Mexican hat wavelet can also be approximated by derivatives of cardinal B-splines.
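A direct implementation of the one-dimensional wavelet is straightforward; a minimal sketch, checking that the stated normalization yields unit L2 norm:
<syntaxhighlight lang="python">
import numpy as np

def ricker(t, sigma=1.0):
    """Ricker wavelet: negative normalized second derivative of a Gaussian."""
    a = 2.0 / (np.sqrt(3.0 * sigma) * np.pi**0.25)
    return a * (1.0 - (t / sigma)**2) * np.exp(-t**2 / (2.0 * sigma**2))

t = np.linspace(-8, 8, 4001)
print(np.trapz(ricker(t)**2, t))   # ~1.0: unit L2 norm
</syntaxhighlight>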
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\psi(t) = \\frac{2}{\\sqrt{3\\sigma}\\pi^{1/4}} \\left(1 - \\left(\\frac{t}{\\sigma}\\right)^2 \\right) e^{-\\frac{t^2}{2\\sigma^2}}"
},
{
"math_id": 1,
"text": "\n\\psi(x,y) = \\frac{1}{\\pi\\sigma^4}\\left(1-\\frac{1}{2} \\left(\\frac{x^2+y^2}{\\sigma^2}\\right)\\right) e^{-\\frac{x^2+y^2}{2\\sigma^2}}\n"
},
{
"math_id": 2,
"text": "L_1"
}
] | https://en.wikipedia.org/wiki?curid=857851 |
857867 | Hermitian wavelet | Family of continuous wavelets
Hermitian wavelets are a family of discrete and continuous wavelets used in the continuous and discrete Hermite wavelet transforms. The formula_0 Hermitian wavelet is defined as the normalized formula_0 derivative of a Gaussian distribution for each positive formula_1:
formula_2
where formula_3 denotes the formula_0 probabilist's Hermite polynomial.
Each normalization coefficient formula_4 is given by formula_5 The function formula_6 is said to be an admissible Hermite wavelet if it satisfies the admissibility condition:
formula_7
where formula_8 are the terms of the Hermite transform of formula_9.
In computer vision and image processing, Gaussian derivative operators of different orders are frequently used as a basis for expressing various types of visual operations; see scale space and N-jet.
Examples.
The first three derivatives of the Gaussian function with formula_10,
formula_11
are
formula_12
and their formula_13 norms are formula_14.
Normalizing the derivatives yields three Hermitian wavelets:
formula_15
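The stated norms and the normalization step are easy to check numerically; a minimal sketch:
<syntaxhighlight lang="python">
import numpy as np

t = np.linspace(-12, 12, 20001)
e = np.exp(-t**2 / 2) * np.pi**-0.25

# The first three Gaussian derivatives given above, then normalized
derivs = [-t * e, (t**2 - 1) * e, (3*t - t**3) * e]
stated = [np.sqrt(2)/2, np.sqrt(3)/2, np.sqrt(30)/4]

for d, s in zip(derivs, stated):
    norm = np.sqrt(np.trapz(d**2, t))
    psi = d / norm                       # normalized Hermitian wavelet
    print(norm, s, np.trapz(psi**2, t))  # norm matches; psi has unit norm
</syntaxhighlight>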
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n^\\textrm{th}"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\Psi_{n}(x)=(2n)^{-\\frac{n}{2}}c_{n}\\operatorname{He}_{n}\\left(x\\right)e^{-\\frac{1}{2}x^{2}}, "
},
{
"math_id": 3,
"text": "\\operatorname{He}_{n}(x)"
},
{
"math_id": 4,
"text": "c_{n}"
},
{
"math_id": 5,
"text": "c_{n} = \\left(n^{\\frac{1}{2}-n}\\Gamma\\left(n+\\frac{1}{2}\\right)\\right)^{-\\frac{1}{2}} = \\left(n^{\\frac{1}{2}-n}\\sqrt{\\pi}2^{-n}(2n-1)!!\\right)^{-\\frac{1}{2}}\\quad n\\in\\mathbb{N}."
},
{
"math_id": 6,
"text": "\\Psi\\in L_{\\rho, \\mu}(-\\infty, \\infty)"
},
{
"math_id": 7,
"text": "C_\\Psi = \\sum_{n=0}^{\\infty}{\\frac{\\|\\hat\\Psi (n)\\|^2}{\\|n\\|}} < \\infty"
},
{
"math_id": 8,
"text": "\\hat \\Psi (n)"
},
{
"math_id": 9,
"text": "\\Psi"
},
{
"math_id": 10,
"text": "\\mu=0,\\;\\sigma=1"
},
{
"math_id": 11,
"text": "f(t) = \\pi^{-1/4}e^{(-t^2/2)},"
},
{
"math_id": 12,
"text": "\\begin{align}\n f'(t) & = -\\pi^{-1/4}te^{(-t^2/2)}, \\\\\n f''(t) & = \\pi^{-1/4}(t^2 - 1)e^{(-t^2/2)},\\\\\nf^{(3)}(t) & = \\pi^{-1/4}(3t - t^3)e^{(-t^2/2)},\n \\end{align}"
},
{
"math_id": 13,
"text": "L^2"
},
{
"math_id": 14,
"text": "\\lVert f' \\rVert=\\sqrt{2}/2, \\lVert f'' \\rVert=\\sqrt{3}/2, \\lVert f^{(3)} \\rVert= \\sqrt{30}/4"
},
{
"math_id": 15,
"text": "\\begin{align}\n\\Psi_{1}(t) &= \\sqrt{2}\\pi^{-1/4}te^{(-t^2/2)},\\\\\n\\Psi_{2}(t) &=\\frac{2}{3}\\sqrt{3}\\pi^{-1/4}(1-t^2)e^{(-t^2/2)},\\\\\n\\Psi_{3}(t) &= \\frac{2}{15}\\sqrt{30}\\pi^{-1/4}(t^3 - 3t)e^{(-t^2/2)}.\n\\end{align}"
},
{
"math_id": 16,
"text": "n = 2"
}
] | https://en.wikipedia.org/wiki?curid=857867 |
857896 | Complex Mexican hat wavelet | In applied mathematics, the complex Mexican hat wavelet is a low-oscillation, complex-valued, wavelet for the continuous wavelet transform. This wavelet is formulated in terms of its Fourier transform as the Hilbert analytic signal of the conventional Mexican hat wavelet:
formula_0
Temporally, this wavelet can be expressed in terms of the error function as:
formula_1
This wavelet has formula_2 asymptotic temporal decay in formula_3,
dominated by the discontinuity of the second derivative of formula_4 at formula_5.
This wavelet was proposed in 2002 by Addison "et al." for applications requiring high temporal precision time-frequency analysis.
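A minimal numerical sketch constructs ψ(t) from the one-sided spectrum by an inverse FFT; the synthesis convention ψ(t) = ∫Ψ̂(ω)e^{iωt}dω (without a 1/2π factor) and the grid sizes are assumptions, chosen so that the result matches the time-domain expression above:
<syntaxhighlight lang="python">
import numpy as np

N, dt = 2**14, 0.01
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
spec = np.where(omega >= 0,
                2*np.sqrt(2/3) * np.pi**-0.25 * omega**2 * np.exp(-omega**2/2),
                0.0)

# psi(t) = integral of spec * exp(i*omega*t) d(omega) ~ 2*pi*ifft(spec)/dt,
# reordered so that t runs from negative to positive
psi = 2 * np.pi * np.fft.fftshift(np.fft.ifft(spec)) / dt
t = (np.arange(N) - N // 2) * dt

print(psi[N // 2].real, 2 / np.sqrt(3) * np.pi**0.25)  # both ~1.537 at t = 0
</syntaxhighlight>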
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{\\Psi}(\\omega) = \\begin{cases}\n 2\\sqrt{\\frac{2}{3}}\\pi^{-\\frac{1}{4}}\\omega^2 e^{-\\frac{1}{2}\\omega^2} & \\omega\\geq0 \\\\\n 0 & \\omega\\leq 0.\n\\end{cases}"
},
{
"math_id": 1,
"text": "\\Psi(t) = \\frac{2}{\\sqrt{3}}\\pi^{-\\frac{1}{4}}\\left(\\sqrt{\\pi}\\left(1 - t^2\\right)e^{-\\frac{1}{2}t^2} - \\left(\\sqrt{2}it + \\sqrt{\\pi}\\operatorname{erf}\\left[\\frac{i}{\\sqrt{2}}t\\right]\\left(1 - t^2\\right)e^{-\\frac{1}{2}t^2}\\right)\\right)."
},
{
"math_id": 2,
"text": "O\\left(|t|^{-3}\\right)"
},
{
"math_id": 3,
"text": "|\\Psi(t)|"
},
{
"math_id": 4,
"text": "\\hat{\\Psi}(\\omega)"
},
{
"math_id": 5,
"text": "\\omega = 0"
}
] | https://en.wikipedia.org/wiki?curid=857896 |
857897 | Time–frequency analysis | In signal processing, time–frequency analysis comprises those techniques that study a signal in both the time and frequency domains "simultaneously," using various time–frequency representations. Rather than viewing a 1-dimensional signal (a function, real or complex-valued, whose domain is the real line) and some transform (another function whose domain is the real line, obtained from the original via some transform), time–frequency analysis studies a two-dimensional signal – a function whose domain is the two-dimensional real plane, obtained from the signal via a time–frequency transform.
The mathematical motivation for this study is that functions and their transform representation are tightly connected, and they can be understood better by studying them jointly, as a two-dimensional object, rather than separately. A simple example is that the 4-fold periodicity of the Fourier transform – and the fact that two-fold Fourier transform reverses direction – can be interpreted by considering the Fourier transform as a 90° rotation in the associated time–frequency plane: 4 such rotations yield the identity, and 2 such rotations simply reverse direction (reflection through the origin).
The practical motivation for time–frequency analysis is that classical Fourier analysis assumes that signals are infinite in time or periodic, while many signals in practice are of short duration, and change substantially over their duration. For example, traditional musical instruments do not produce infinite duration sinusoids, but instead begin with an attack, then gradually decay. This is poorly represented by traditional methods, which motivates time–frequency analysis.
One of the most basic forms of time–frequency analysis is the short-time Fourier transform (STFT), but more sophisticated techniques have been developed, notably wavelets and least-squares spectral analysis methods for unevenly spaced data.
Motivation.
In signal processing, time–frequency analysis is a body of techniques and methods used for characterizing and manipulating signals whose statistics vary in time, such as transient signals.
It is a generalization and refinement of Fourier analysis, for the case when the signal frequency characteristics are varying with time. Since many signals of interest – such as speech, music, images, and medical signals – have changing frequency characteristics, time–frequency analysis has broad scope of applications.
Whereas the technique of the Fourier transform can be extended to obtain the frequency spectrum of any slowly growing locally integrable signal, this approach requires a complete description of the signal's behavior over all time. Indeed, one can think of points in the (spectral) frequency domain as smearing together information from across the entire time domain. While mathematically elegant, such a technique is not appropriate for analyzing a signal with indeterminate future behavior. For instance, one must presuppose some degree of indeterminate future behavior in any telecommunication system to achieve non-zero entropy (if one already knew what the other person was going to say, one could learn nothing).
To harness the power of a frequency representation without the need of a complete characterization in the time domain, one first obtains a time–frequency distribution of the signal, which represents the signal in both the time and frequency domains simultaneously. In such a representation the frequency domain will only reflect the behavior of a temporally localized version of the signal. This enables one to talk sensibly about signals whose component frequencies vary in time.
For instance, rather than using tempered distributions to globally transform the following function into the frequency domain, one could instead use these methods to describe it as a signal with a time-varying frequency.
formula_0
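A short-time Fourier transform makes the three constant-frequency segments of this signal visible; a minimal sketch (the sampling rate and window length are arbitrary choices):
<syntaxhighlight lang="python">
import numpy as np

fs = 100                                       # sampling rate (Hz)
t = np.arange(0, 30, 1/fs)
x = np.where(t < 10, np.cos(np.pi * t),
    np.where(t < 20, np.cos(3 * np.pi * t), np.cos(2 * np.pi * t)))

# Plain STFT: sliding Hann window, FFT of each frame
win, hop = 400, 100
frames = np.array([x[i:i + win] * np.hanning(win)
                   for i in range(0, len(x) - win + 1, hop)])
S = np.abs(np.fft.rfft(frames, axis=1))
freqs = np.fft.rfftfreq(win, 1/fs)

# Dominant frequency per frame: ~0.5 Hz, then ~1.5 Hz, then ~1.0 Hz
print(freqs[S.argmax(axis=1)])
</syntaxhighlight>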
Once such a representation has been generated other techniques in time–frequency analysis may then be applied to the signal in order to extract information from the signal, to separate the signal from noise or interfering signals, etc.
Time–frequency distribution functions.
Formulations.
There are several different ways to formulate a valid time–frequency distribution function, resulting in several well-known time–frequency distributions, such as the short-time Fourier transform (including the Gabor transform), the Wigner distribution function (WDF), the Gabor–Wigner distribution function, and Cohen's class distribution functions.
More information about the history and the motivation of development of time–frequency distribution can be found in the entry Time–frequency representation.
Ideal TF distribution function.
A time–frequency distribution function ideally has a number of desirable properties, such as high clarity (resolution) and the absence of cross-terms.
Below is a brief comparison of some selected time–frequency distribution functions.
To analyze the signals well, choosing an appropriate time–frequency distribution function is important. Which time–frequency distribution function should be used depends on the application being considered, as shown by reviewing a list of applications. The high clarity of the Wigner distribution function (WDF) obtained for some signals is due to the auto-correlation function inherent in its formulation; however, the latter also causes the cross-term problem. Therefore, if we want to analyze a single-term signal, using the WDF may be the best approach; if the signal is composed of multiple components, some other methods like the Gabor transform, Gabor-Wigner distribution or Modified B-Distribution functions may be better choices.
As an illustration, magnitudes from non-localized Fourier analysis cannot distinguish the signals:
formula_1
formula_2
But time–frequency analysis can.
TF analysis and random processes.
For a random process x(t), we cannot find the explicit value of x(t); the value of x(t) is instead described probabilistically, and the process is characterized by its auto-correlation function formula_3, defined as
formula_4
Usually we suppose that formula_5 for any t, so that
formula_6
formula_7
An alternative definition of the auto-covariance function is
formula_8
The time-varying power spectral density formula_9 is the Fourier transform of the auto-correlation function with respect to the lag variable:
formula_10
The expected value of the Wigner distribution function is then
formula_11
formula_12
formula_13
and similarly for the ambiguity function,
formula_14
formula_15
Stationary random processes.
For a stationary random process, formula_16 for any formula_17. Therefore,
formula_18
formula_19
and the power spectral density becomes
formula_20
White noise: formula_21, where formula_22 is some constant. In this case
formula_23 (invariant with formula_17), and
formula_24
formula_25
formula_26
which is nonzero only when formula_27. Hence, for white noise,
formula_28
formula_29
Additive white noise.
For a signal embedded in additive white noise, let
formula_30: energy of the signal,
formula_31: area of the time–frequency distribution of the signal.
The PSD of the white noise is formula_32. The signal-to-noise ratio (SNR) can then be estimated as
formula_33
formula_34
If the sub-signals formula_39 of a composite process formula_38 are mutually independent with zero mean, then for formula_42 and every lag formula_40,
formula_41
and the expected Wigner distribution function formula_35 and ambiguity function formula_36 are additive:
formula_43
formula_44
Short-time Fourier transform.
formula_45 should be satisfied for the expected value of the short-time Fourier transform to be nonzero. Otherwise,
formula_46
formula_47
so for a zero-mean random process, formula_48.
Applications.
The following applications need not only the time–frequency distribution functions but also some operations on the signal. The linear canonical transform (LCT) is particularly helpful here. By LCTs, the shape and location on the time–frequency plane of a signal can be given any arbitrary form that we want. For example, the LCTs can shift the time–frequency distribution to any location, dilate it in the horizontal and vertical direction without changing its area on the plane, shear (or twist) it, and rotate it (fractional Fourier transform). This powerful operation, the LCT, makes it more flexible to analyze and apply the time–frequency distributions.
Instantaneous frequency estimation.
The definition of instantaneous frequency is the time rate of change of phase, or
formula_49
where formula_50 is the instantaneous phase of a signal. We can read the instantaneous frequency from the time–frequency plane directly if the image is clear enough. Because high clarity is critical, we often use the WDF to analyze it.
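A minimal sketch of this estimate, computing the phase of the analytic signal of an illustrative chirp and differentiating it:
<syntaxhighlight lang="python">
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(0, 2, 1/fs)
x = np.cos(2 * np.pi * (5 * t + 10 * t**2))   # instantaneous freq: 5 + 20*t

phase = np.unwrap(np.angle(hilbert(x)))       # instantaneous phase phi(t)
inst_freq = np.gradient(phase, t) / (2 * np.pi)
print(inst_freq[500], 5 + 20 * t[500])        # both ~15 Hz at t = 0.5 s
</syntaxhighlight>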
TF filtering and signal decomposition.
The goal of filter design is to remove the undesired components of a signal. Conventionally, we can filter in the time domain or in the frequency domain individually.
The filtering methods mentioned above cannot work well for signals whose components overlap in the time domain or in the frequency domain. By using the time–frequency distribution function, we can filter in the Euclidean time–frequency domain or in the fractional domain by employing the fractional Fourier transform.
Filter design in time–frequency analysis always deals with signals composed of multiple components, so one cannot use the WDF due to cross-terms. The Gabor transform, Gabor–Wigner distribution function, or Cohen's class distribution functions may be better choices.
The concept of signal decomposition relates to the need to separate one component from the others in a signal; this can be achieved through a filtering operation, which requires a filter design stage. Such filtering is traditionally done in the time domain or in the frequency domain; however, this may not be possible for non-stationary, multicomponent signals, as such components could overlap in both the time domain and the frequency domain; as a consequence, the only possible way to achieve component separation, and therefore a signal decomposition, is to implement a time–frequency filter.
Sampling theory.
By the Nyquist–Shannon sampling theorem, we can conclude that the minimum number of sampling points without aliasing is equivalent to the area of the time–frequency distribution of a signal. (This is actually just an approximation, because the TF area of any signal is infinite.) Combining the sampling theory with the time–frequency distribution therefore reduces the number of sampling points required.
When we use the WDF, there might be the cross-term problem (also called interference). On the other hand, using Gabor transform causes an improvement in the clarity and readability of the representation, therefore improving its interpretation and application to practical problems.
Consequently, when the signal we intend to sample is composed of a single component, we use the WDF; however, if the signal consists of more than one component, using the Gabor transform, Gabor–Wigner distribution function, or other reduced-interference TFDs may achieve better results.
The Balian–Low theorem formalizes this, and provides a bound on the minimum number of time–frequency samples needed.
Modulation and multiplexing.
Conventionally, the operations of modulation and multiplexing concentrate in time or in frequency, separately. By taking advantage of the time–frequency distribution, we can make modulation and multiplexing more efficient: all we have to do is to fill up the time–frequency plane.
Using the WDF for this purpose is ill-advised, since the serious cross-term problem makes it difficult to multiplex and modulate.
Electromagnetic wave propagation.
We can represent an electromagnetic wave in the form of a 2 by 1 matrix
formula_51
which is similar to the time–frequency plane. When an electromagnetic wave propagates through free space, Fresnel diffraction occurs. We can operate on the 2 by 1 matrix
formula_52
by LCT with parameter matrix
formula_53
where "z" is the propagation distance and formula_54 is the wavelength. When electromagnetic wave pass through a spherical lens or be reflected by a disk, the parameter matrix should be
formula_55
and
formula_56
respectively, where ƒ is the focal length of the lens and "R" is the radius of the disk. These corresponding results can be obtained from
formula_57
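These parameter matrices compose by ordinary matrix multiplication; a minimal sketch (the wavelength, distances, and input vector are illustrative values):
<syntaxhighlight lang="python">
import numpy as np

lam = 633e-9                   # wavelength (m), an illustrative choice
z, f = 0.5, 0.25               # propagation distance and focal length (m)

fresnel = np.array([[1.0, lam * z], [0.0, 1.0]])          # free space
lens = np.array([[1.0, 0.0], [-1.0 / (lam * f), 1.0]])    # spherical lens

# Propagate a distance z, pass through the lens, then propagate z again
system = fresnel @ lens @ fresnel
print(system @ np.array([1e-3, 0.0]))
</syntaxhighlight>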
Optics, acoustics, and biomedicine.
Light is an electromagnetic wave, so time–frequency analysis applies to optics in the same way as for general electromagnetic wave propagation.
Similarly, it is a characteristic of acoustic signals, that their frequency components undergo abrupt variations in time and would hence be not well represented by a single frequency component analysis covering their entire durations.
As acoustic signals are used as speech in communication between a human sender and receiver, their transmission without delay in technical communication systems is crucial, which makes the use of simpler TFDs, such as the Gabor transform, suitable for analyzing these signals in real time with reduced computational complexity.
If frequency analysis speed is not a limitation, a detailed feature comparison with well defined criteria should be made before selecting a particular TFD. Another approach is to define a signal dependent TFD that is adapted to the data.
In biomedicine, one can use time–frequency distribution to analyze the electromyography (EMG), electroencephalography (EEG), electrocardiogram (ECG) or otoacoustic emissions (OAEs).
History.
Early work in time–frequency analysis can be seen in the Haar wavelets (1909) of Alfréd Haar, though these were not significantly applied to signal processing. More substantial work was undertaken by Dennis Gabor, such as Gabor atoms (1947), an early form of wavelets, and the Gabor transform, a modified short-time Fourier transform. The Wigner–Ville distribution (Ville 1948, in a signal processing context) was another foundational step.
Particularly in the 1930s and 1940s, early time–frequency analysis developed in concert with quantum mechanics (Wigner developed the Wigner–Ville distribution in 1932 in quantum mechanics, and Gabor was influenced by quantum mechanics – see Gabor atom); this is reflected in the shared mathematics of the position-momentum plane and the time–frequency plane – as in the Heisenberg uncertainty principle (quantum mechanics) and the Gabor limit (time–frequency analysis), ultimately both reflecting a symplectic structure.
An early practical motivation for time–frequency analysis was the development of radar – see ambiguity function. | [
{
"math_id": 0,
"text": "x(t)=\\begin{cases}\n\\cos( \\pi t); & t <10 \\\\\n\\cos(3 \\pi t); & 10 \\le t < 20 \\\\\n\\cos(2 \\pi t); & t > 20\n\\end{cases}"
},
{
"math_id": 1,
"text": "x_1 (t)=\\begin{cases}\n\\cos( \\pi t); & t <10 \\\\\n\\cos(3 \\pi t); & 10 \\le t < 20 \\\\\n\\cos(2 \\pi t); & t > 20\n\\end{cases}"
},
{
"math_id": 2,
"text": "x_2 (t)=\\begin{cases}\n\\cos( \\pi t); & t <10 \\\\\n\\cos(2 \\pi t); & 10 \\le t < 20 \\\\\n\\cos(3 \\pi t); & t > 20\n\\end{cases}"
},
{
"math_id": 3,
"text": "R_x(t,\\tau)"
},
{
"math_id": 4,
"text": "R_x(t,\\tau) = E[x(t+\\tau/2)x^*(t-\\tau/2)]"
},
{
"math_id": 5,
"text": "E[x(t)] = 0 "
},
{
"math_id": 6,
"text": "E[x(t+\\tau/2)x^*(t-\\tau/2)]"
},
{
"math_id": 7,
"text": "=\\iint x(t+\\tau/2,\\xi_1)x^*(t-\\tau/2,\\xi_2)P(\\xi_1,\\xi_2)d\\xi_1d\\xi_2"
},
{
"math_id": 8,
"text": "\\overset{\\land}{R_x}(t,\\tau)=E[x(t)x(t+\\tau)]"
},
{
"math_id": 9,
"text": "S_x(t,f)"
},
{
"math_id": 10,
"text": "S_x(t,f) = \\int_{-\\infty}^{\\infty} R_x(t,\\tau)e^{-j2\\pi f\\tau}d\\tau"
},
{
"math_id": 11,
"text": "E[W_x(t,f)] = \\int_{-\\infty}^{\\infty} E[x(t+\\tau/2)x^*(t-\\tau/2)]\\cdot e^{-j2\\pi f\\tau}\\cdot d\\tau"
},
{
"math_id": 12,
"text": "= \\int_{-\\infty}^{\\infty} R_x(t,\\tau)\\cdot e^{-j2\\pi f\\tau}\\cdot d\\tau"
},
{
"math_id": 13,
"text": "= S_x(t,f)"
},
{
"math_id": 14,
"text": "E[A_X(\\eta,\\tau)] = \\int_{-\\infty}^{\\infty} E[x(t+\\tau/2)x^*(t-\\tau/2)]e^{-j2\\pi t\\eta}dt"
},
{
"math_id": 15,
"text": "= \\int_{-\\infty}^{\\infty} R_x(t,\\tau)e^{-j2\\pi t\\eta}dt"
},
{
"math_id": 16,
"text": "R_x(t_1,\\tau) = R_x(t_2,\\tau) = R_x(\\tau)"
},
{
"math_id": 17,
"text": "t"
},
{
"math_id": 18,
"text": "R_x(\\tau) = E[x(\\tau/2)x^*(-\\tau/2)]"
},
{
"math_id": 19,
"text": "=\\iint x(\\tau/2,\\xi_1)x^*(-\\tau/2,\\xi_2)P(\\xi_1,\\xi_2)d\\xi_1d\\xi_2"
},
{
"math_id": 20,
"text": "S_x(f) = \\int_{-\\infty}^{\\infty} R_x(\\tau)e^{-j2\\pi f\\tau}d\\tau"
},
{
"math_id": 21,
"text": "S_x(f) = \\sigma"
},
{
"math_id": 22,
"text": "\\sigma"
},
{
"math_id": 23,
"text": "E[W_x(t,f)] = S_x(f)"
},
{
"math_id": 24,
"text": "E[A_x(\\eta,\\tau)] = \\int_{-\\infty}^{\\infty} R_x(\\tau)\\cdot e^{-j2\\pi t\\eta}\\cdot dt"
},
{
"math_id": 25,
"text": "= R_x(\\tau)\\int_{-\\infty}^{\\infty} e^{-j2\\pi t\\eta}\\cdot dt"
},
{
"math_id": 26,
"text": "= R_x(\\tau)\\delta(\\eta)"
},
{
"math_id": 27,
"text": "\\eta = 0"
},
{
"math_id": 28,
"text": "E[W_g(t,f)] = \\sigma"
},
{
"math_id": 29,
"text": "E[A_x(\\eta,\\tau)] = \\sigma\\delta(\\tau)\\delta(\\eta)"
},
{
"math_id": 30,
"text": "E_x"
},
{
"math_id": 31,
"text": "A"
},
{
"math_id": 32,
"text": "S_n(f) = \\sigma"
},
{
"math_id": 33,
"text": "SNR \\approx 10\\log_{10}\\frac{E_x}{\\iint\\limits_{(t,f)\\in\\text{signal part}} S_x(t,f)dtdf}"
},
{
"math_id": 34,
"text": "SNR \\approx 10\\log_{10}\\frac{E_x}{\\sigma\\Alpha}"
},
{
"math_id": 35,
"text": "E[W_x(t,f)]"
},
{
"math_id": 36,
"text": "E[A_x(\\eta,\\tau)]"
},
{
"math_id": 37,
"text": "x(t)"
},
{
"math_id": 38,
"text": "h(t) = x_1(t)+x_2(t)+x_3(t)+......+x_k(t)"
},
{
"math_id": 39,
"text": "x_n(t)"
},
{
"math_id": 40,
"text": "\\tau"
},
{
"math_id": 41,
"text": "E[x_m(t+\\tau/2)x_n^*(t-\\tau/2)] = E[x_m(t+\\tau/2)]E[x_n^*(t-\\tau/2)] = 0"
},
{
"math_id": 42,
"text": "m \\neq n"
},
{
"math_id": 43,
"text": "E[W_h(t,f)] = \\sum_{n=1}^k E[W_{x_n}(t,f)]"
},
{
"math_id": 44,
"text": "E[A_h(\\eta,\\tau)] = \\sum_{n=1}^k E[A_{x_n}(\\eta,\\tau)]"
},
{
"math_id": 45,
"text": "E[x(t)]\\neq 0"
},
{
"math_id": 46,
"text": "E[X(t,f)] = E[\\int_{t-B}^{t+B} x(\\tau)w(t-\\tau)e^{-j2\\pi f\\tau}d\\tau]"
},
{
"math_id": 47,
"text": "=\\int_{t-B}^{t+B} E[x(\\tau)]w(t-\\tau)e^{-j2\\pi f\\tau}d\\tau"
},
{
"math_id": 48,
"text": "E[X(t,f)] = 0"
},
{
"math_id": 49,
"text": "\\frac{1}{2 \\pi} \\frac{d}{dt} \\phi (t), "
},
{
"math_id": 50,
"text": "\\phi (t)"
},
{
"math_id": 51,
"text": "\\begin{bmatrix}\n x \\\\\n y\n\\end{bmatrix},"
},
{
"math_id": 52,
"text": "\\begin{bmatrix}\n x \\\\\n y\n\\end{bmatrix}"
},
{
"math_id": 53,
"text": "\\begin{bmatrix}\n a & b \\\\\n c & d\n \\end{bmatrix} =\n \\begin{bmatrix}\n 1 & \\lambda z \\\\\n 0 & 1\n \\end{bmatrix},\n"
},
{
"math_id": 54,
"text": "\\lambda "
},
{
"math_id": 55,
"text": "\\begin{bmatrix}\n a & b \\\\\n c & d\n \\end{bmatrix} =\n \\begin{bmatrix}\n 1 & 0 \\\\\n -\\frac{1}{\\lambda f} & 1\n \\end{bmatrix}\n"
},
{
"math_id": 56,
"text": "\\begin{bmatrix}\n a & b \\\\\n c & d\n \\end{bmatrix} =\n \\begin{bmatrix}\n 1 & 0 \\\\\n \\frac{1}{\\lambda R} & 1\n \\end{bmatrix}\n"
},
{
"math_id": 57,
"text": "\\begin{bmatrix}\n a & b \\\\\n c & d\n\\end{bmatrix}\n\\begin{bmatrix}\n x \\\\\n y\n\\end{bmatrix}.\n"
}
] | https://en.wikipedia.org/wiki?curid=857897 |
85816 | Complete graph | Graph in which every two vertices are adjacent
In the mathematical field of graph theory, a complete graph is a simple undirected graph in which every pair of distinct vertices is connected by a unique edge. A complete digraph is a directed graph in which every pair of distinct vertices is connected by a pair of unique edges (one in each direction).
Graph theory itself is typically dated as beginning with Leonhard Euler's 1736 work on the Seven Bridges of Königsberg. However, drawings of complete graphs, with their vertices placed on the points of a regular polygon, had already appeared in the 13th century, in the work of Ramon Llull. Such a drawing is sometimes referred to as a mystic rose.
Properties.
The complete graph on n vertices is denoted by Kn. Some sources claim that the letter K in this notation stands for the German word "komplett", but the German name for a complete graph, "vollständiger Graph", does not contain the letter K, and other sources state that the notation honors the contributions of Kazimierz Kuratowski to graph theory.
Kn has "n"("n" – 1)/2 edges (a triangular number), and is a regular graph of degree "n" – 1. All complete graphs are their own maximal cliques. They are maximally connected as the only vertex cut which disconnects the graph is the complete set of vertices. The complement graph of a complete graph is an empty graph.
If the edges of a complete graph are each given an orientation, the resulting directed graph is called a tournament.
Kn can be decomposed into n trees Ti such that Ti has i vertices. Ringel's conjecture asks if the complete graph "K"2"n"+1 can be decomposed into copies of any tree with n edges. This is known to be true for sufficiently large n.
The number of all distinct paths between a specific pair of vertices in "K""n"+2 is given by
formula_0
where e refers to Euler's number, and
formula_1
The number of matchings of the complete graphs are given by the telephone numbers
1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, 35696, 140152, 568504, 2390480, 10349536, 46206736, ... (sequence in the OEIS).
These numbers give the largest possible value of the Hosoya index for an n-vertex graph. The number of perfect matchings of the complete graph Kn (with n even) is given by the double factorial ("n" – 1)!!.
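These sequences are easy to generate; a minimal sketch using the standard recurrence "T"("n") = "T"("n"−1) + ("n"−1)"T"("n"−2) for the telephone numbers:
<syntaxhighlight lang="python">
from math import e, factorial, floor

# Telephone numbers: matchings of K_n
T = [1, 1]
for n in range(2, 17):
    T.append(T[-1] + (n - 1) * T[-2])
print(T)   # 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, ...

# Perfect matchings of K_n for even n: the double factorial (n-1)!!
dfact = lambda m: 1 if m <= 1 else m * dfact(m - 2)
print([dfact(n - 1) for n in (2, 4, 6, 8)])      # 1, 3, 15, 105

# Paths between a fixed vertex pair in K_{n+2}: floor(e * n!) for n >= 1
print([floor(e * factorial(n)) for n in range(1, 6)])  # 2, 5, 16, 65, 326
</syntaxhighlight>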
The crossing numbers up to "K"27 are known, with "K"28 requiring either 7233 or 7234 crossings. Further values are collected by the Rectilinear Crossing Number project. Rectilinear Crossing numbers for Kn are
0, 0, 0, 0, 1, 3, 9, 19, 36, 62, 102, 153, 229, 324, 447, 603, 798, 1029, 1318, 1657, 2055, 2528, 3077, 3699, 4430, 5250, 6180, ... (sequence in the OEIS).
Geometry and topology.
A complete graph with n nodes represents the edges of an ("n" – 1)-simplex. Geometrically "K"3 forms the edge set of a triangle, "K"4 a tetrahedron, etc. The Császár polyhedron, a nonconvex polyhedron with the topology of a torus, has the complete graph "K"7 as its skeleton. Every neighborly polytope in four or more dimensions also has a complete skeleton.
"K"1 through "K"4 are all planar graphs. However, every planar drawing of a complete graph with five or more vertices must contain a crossing, and the nonplanar complete graph "K"5 plays a key role in the characterizations of planar graphs: by Kuratowski's theorem, a graph is planar if and only if it contains neither "K"5 nor the complete bipartite graph "K"3,3 as a subdivision, and by Wagner's theorem the same result holds for graph minors in place of subdivisions. As part of the Petersen family, "K"6 plays a similar role as one of the forbidden minors for linkless embedding. In other words, and as Conway and Gordon proved, every embedding of "K"6 into three-dimensional space is intrinsically linked, with at least one pair of linked triangles. Conway and Gordon also showed that any three-dimensional embedding of "K"7 contains a Hamiltonian cycle that is embedded in space as a nontrivial knot.
Examples.
Complete graphs on formula_2 vertices, for formula_2 between 1 and 12, are shown below along with the numbers of edges:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " w_{n+2} = n! e_n = \\lfloor en!\\rfloor,"
},
{
"math_id": 1,
"text": "e_n = \\sum_{k=0}^n\\frac{1}{k!}."
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "n+1"
}
] | https://en.wikipedia.org/wiki?curid=85816 |
8581934 | Complex polytope | In geometry, a complex polytope is a generalization of a polytope in real space to an analogous structure in a complex Hilbert space, where each real dimension is accompanied by an imaginary one.
A complex polytope may be understood as a collection of complex points, lines, planes, and so on, where every point is the junction of multiple lines, every line of multiple planes, and so on.
Precise definitions exist only for the regular complex polytopes, which are configurations. The regular complex polytopes have been completely characterized, and can be described using a symbolic notation developed by Coxeter.
Some complex polytopes which are not fully regular have also been described.
Definitions and introduction.
The complex line formula_0 has one dimension with real coordinates and another with imaginary coordinates. Applying real coordinates to both dimensions is said to give it two dimensions over the real numbers. A real plane, with the imaginary axis labelled as such, is called an Argand diagram. Because of this it is sometimes called the complex plane. Complex 2-space (also sometimes called the complex plane) is thus a four-dimensional space over the reals, and so on in higher dimensions.
A complex "n"-polytope in complex "n"-space is the analogue of a real "n"-polytope in real "n"-space. However, there is no natural complex analogue of the ordering of points on a real line (or of the associated combinatorial properties). Because of this a complex polytope cannot be seen as a contiguous surface and it does not bound an interior in the way that a real polytope does.
In the case of "regular" polytopes, a precise definition can be made by using the notion of symmetry. For any regular polytope the symmetry group (here a complex reflection group, called a Shephard group) acts transitively on the flags, that is, on the nested sequences of a point contained in a line contained in a plane and so on.
More fully, say that a collection "P" of affine subspaces (or "flats") of a complex unitary space "V" of dimension "n" is a regular complex polytope if it meets the following conditions:
(Here, a flat of dimension −1 is taken to mean the empty set.) Thus, by definition, regular complex polytopes are configurations in complex unitary space.
The regular complex polytopes were discovered by Shephard (1952), and the theory was further developed by Coxeter (1974).
A complex polytope exists in the complex space of equivalent dimension. For example, the vertices of a complex polygon are points in the complex plane formula_1 (a plane in which each point has two complex numbers as its coordinates, not to be confused with the Argand plane of complex numbers), and the edges are complex lines formula_0 existing as (affine) subspaces of the plane and intersecting at the vertices. Thus, as a one-dimensional complex space, an edge can be given its own coordinate system, within which the points of the edge are each represented by a single complex number.
In a regular complex polytope the vertices incident on the edge are arranged symmetrically about their centroid, which is often used as the origin of the edge's coordinate system (in the real case the centroid is just the midpoint of the edge). The symmetry arises from a complex reflection about the centroid; this reflection will leave the magnitude of any vertex unchanged, but change its argument by a fixed amount, moving it to the coordinates of the next vertex in order. So we may assume (after a suitable choice of scale) that the vertices on the edge satisfy the equation formula_2 where "p" is the number of incident vertices. Thus, in the Argand diagram of the edge, the vertex points lie at the vertices of a regular polygon centered on the origin.
Three real projections of regular complex polygon 4{4}2 are illustrated above, with edges "a, b, c, d, e, f, g, h". It has 16 vertices, which for clarity have not been individually marked. Each edge has four vertices and each vertex lies on two edges, hence each edge meets four other edges. In the first diagram, each edge is represented by a square. The sides of the square are "not" parts of the polygon but are drawn purely to help visually relate the four vertices. The edges are laid out symmetrically. (Note that the diagram looks similar to the B4 Coxeter plane projection of the tesseract, but it is structurally different).
The middle diagram abandons octagonal symmetry in favour of clarity. Each edge is shown as a real line, and each meeting point of two lines is a vertex. The connectivity between the various edges is clear to see.
The last diagram gives a flavour of the structure projected into three dimensions: the two cubes of vertices are in fact the same size but are seen in perspective at different distances away in the fourth dimension.
Regular complex one-dimensional polytopes.
A real 1-dimensional polytope exists as a closed segment in the real line formula_3, defined by its two end points or vertices in the line. Its Schläfli symbol is {} .
Analogously, a complex 1-polytope exists as a set of "p" vertex points in the complex line formula_0. These may be represented as a set of points in an Argand diagram ("x","y")="x"+"iy". A regular complex 1-dimensional polytope "p"{} has "p" ("p" ≥ 2) vertex points arranged to form a convex regular polygon {"p"} in the Argand plane.
Unlike points on the real line, points on the complex line have no natural ordering. Thus, unlike real polytopes, no interior can be defined. Despite this, complex 1-polytopes are often drawn, as here, as a bounded regular polygon in the Argand plane.
A regular real 1-dimensional polytope is represented by an empty Schläfli symbol {}, or Coxeter-Dynkin diagram . The dot or node of the Coxeter-Dynkin diagram itself represents a reflection generator while the circle around the node means the generator point is not on the reflection, so its reflective image is a distinct point from itself. By extension, a regular complex 1-dimensional polytope in formula_0 has Coxeter-Dynkin diagram , for any positive integer "p", 2 or greater, containing "p" vertices. "p" can be suppressed if it is 2. It can also be represented by an empty Schläfli symbol "p"{}, }"p"{, {}"p", or "p"{2}1. The 1 is a notational placeholder, representing a nonexistent reflection, or a period 1 identity generator. (A 0-polytope, real or complex is a point, and is represented as } {, or 1{2}1.)
The symmetry is denoted by the Coxeter diagram , and can alternatively be described in Coxeter notation as "p"[], []"p" or ]"p"[, "p"[2]1 or "p"[1]"p". The symmetry is isomorphic to the cyclic group, order "p". The subgroups of "p"[] are any whole divisor "d", "d"[], where "d"≥2.
A unitary operator generator for is seen as a rotation by 2π/"p" radians counterclockwise, and an edge is created by sequential applications of a single unitary reflection. A unitary reflection generator for a 1-polytope with "p" vertices is "e"2π"i"/"p" = cos(2π/"p") + "i" sin(2π/"p"). When "p" = 2, the generator is "e"π"i" = –1, the same as a point reflection in the real plane.
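As an illustrative aside (a minimal plain-Python sketch, not drawn from Coxeter; the helper name edge_vertices is invented), the orbit of the vertex 1 under this generator reproduces the "p" vertices of a "p"-edge as the "p"th roots of unity:

```python
import cmath

def edge_vertices(p):
    """Vertices of a p-edge: the orbit of 1 under the reflection e^(2*pi*i/p)."""
    g = cmath.exp(2j * cmath.pi / p)   # unitary reflection generator
    v, vertices = 1 + 0j, []
    for _ in range(p):
        vertices.append(v)             # record the current vertex
        v *= g                         # move to the next vertex in order
    return vertices

print(edge_vertices(4))                # 1, i, -1, -i: a square about the origin
print(cmath.exp(1j * cmath.pi))        # p = 2: the generator is -1, a point reflection
```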
In higher complex polytopes, 1-polytopes form "p"-edges. A 2-edge is similar to an ordinary real edge, in that it contains two vertices, but need not exist on a real line.
Regular complex polygons.
While 1-polytopes can have unlimited "p", finite regular complex polygons, excluding the double prism polygons "p"{4}2, are limited to 5-edge (pentagonal edges) elements, and infinite regular apeirogons also include 6-edge (hexagonal edges) elements.
Notations.
Shephard's modified Schläfli notation.
Shephard originally devised a modified form of Schläfli's notation for regular polytopes. For a polygon bounded by "p"1-edges, with a "p"2-set as vertex figure and overall symmetry group of order "g", we denote the polygon as "p"1("g")"p"2.
The number of vertices "V" is then "g"/"p"2 and the number of edges "E" is "g"/"p"1.
The complex polygon illustrated above has eight square edges ("p"1=4) and sixteen vertices ("p"2=2). From this we can work out that "g" = 32, giving the modified Schläfli symbol 4(32)2.
Coxeter's revised modified Schläfli notation.
A more modern notation "p"1{"q"}"p"2 is due to Coxeter, and is based on group theory. As a symmetry group, its symbol is "p"1["q"]"p"2.
The symmetry group "p"1["q"]"p"2 is represented by 2 generators R1, R2, where: R1"p"1 = R2"p"2 = I. If "q" is even, (R2R1)"q"/2 = (R1R2)"q"/2. If "q" is odd, (R2R1)("q"−1)/2R2 = (R1R2)("q"−1)/2R1. When "q" is odd, "p"1="p"2.
For example, 4[4]2 has R14 = R22 = I, (R2R1)2 = (R1R2)2.
For example, 3[5]3 has R13 = R23 = I, (R2R1)2R2 = (R1R2)2R1.
Coxeter-Dynkin diagrams.
Coxeter also generalised the use of Coxeter-Dynkin diagrams to complex polytopes, for example the complex polygon "p"{"q"}"r" is represented by and the equivalent symmetry group, "p"["q"]"r", is a ringless diagram . The nodes "p" and "r" represent mirrors producing "p" and "r" images in the plane. Unlabeled nodes in a diagram have implicit 2 labels. For example, a real regular polygon is 2{"q"}2 or {"q"} or .
One limitation: nodes connected by odd branch orders must have identical node orders. If they do not, the group will create "starry" polygons with overlapping elements. So and are ordinary, while is starry.
12 Irreducible Shephard groups.
Coxeter enumerated this list of regular complex polygons in formula_1. A regular complex polygon, "p"{"q"}"r" or , has "p"-edges, and "r"-gonal vertex figures. "p"{"q"}"r" is a finite polytope if ("p"+"r")"q">"pr"("q"-2).
Its symmetry is written as "p"["q"]"r", called a "Shephard group", analogous to a Coxeter group, while also allowing unitary reflections.
For nonstarry groups, the order of the group "p"["q"]"r" can be computed as formula_4.
The Coxeter number for "p"["q"]"r" is formula_5, so the group order can also be computed as formula_6. A regular complex polygon can be drawn in orthogonal projection with "h"-gonal symmetry.
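Both order formulas are easy to check numerically. The sketch below (plain Python; the helper name shephard_order is invented for illustration) evaluates them for 4[4]2 and reproduces the order "g" = 32 derived earlier for 4{4}2, together with its 16 vertices and 8 edges:

```python
def shephard_order(p, q, r):
    """Evaluate both order formulas for the group p[q]r."""
    s = 1 / p + 2 / q + 1 / r - 1      # must be positive for a finite group
    h = 2 / s                          # Coxeter number
    return 8 / q * s**-2, 2 * h**2 / q, h

g1, g2, h = shephard_order(4, 4, 2)    # symmetry group 4[4]2 of 4{4}2
print(g1, g2, h)                       # 32.0 32.0 8.0 -- matching g = 32 above
print(g1 / 2, g1 / 4)                  # 16.0 8.0 -- g/r vertices and g/p edges
```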
The rank 2 solutions that generate complex polygons are:
Excluded solutions with odd "q" and unequal "p" and "r" are: 6[3]2, 6[3]3, 9[3]3, 12[3]3, ..., 5[5]2, 6[5]2, 8[5]2, 9[5]2, 4[7]2, 3[9]2, and 3[11]2.
Other whole "q" with unequal "p" and "r", create starry groups with overlapping fundamental domains: , , , , , and .
The dual polygon of "p"{"q"}"r" is "r"{"q"}"p". A polygon of the form "p"{"q"}"p" is self-dual. Groups of the form "p"[2"q"]2 have a half symmetry "p"["q"]"p", so a regular polygon is the same as quasiregular . As well, regular polygons with the same node orders, , have an alternated construction , allowing adjacent edges to be two different colors.
The group order, "g", is used to compute the total number of vertices and edges. It will have "g"/"r" vertices, and "g"/"p" edges. When "p"="r", the number of vertices and edges are equal. This condition is required when "q" is odd.
Matrix generators.
The group "p"["q"]"r", , can be represented by two matrices:
With
k=formula_7
Enumeration of regular complex polygons.
Coxeter enumerated the complex polygons in Table III of Regular Complex Polytopes.
Visualizations of regular complex polygons.
Polygons of the form "p"{2"r"}"q" can be visualized by "q" color sets of "p"-edges. Each "p"-edge is seen as a regular polygon, while there are no faces.
Polygons of the form 2{4}"q" are called generalized orthoplexes. They share vertices with the 4D "q"-"q" duopyramids, vertices connected by 2-edges.
Polygons of the form "p"{4}2 are called generalized hypercubes (squares for polygons). They share vertices with the 4D "p"-"p" duoprisms, vertices connected by p-edges. Vertices are drawn in green, and "p"-edges are drawn in alternate colors, red and blue. The perspective is distorted slightly for odd dimensions to move overlapping vertices from the center.
Polygons of the form "p"{"r"}"p" have equal number of vertices and edges. They are also self-dual.
Regular complex polytopes.
In general, a regular complex polytope is represented by Coxeter as "p"{"z"1}"q"{z2}"r"{z3}"s"… or Coxeter diagram …, having symmetry "p"["z"1]"q"["z"2]"r"["z"3]"s"… or ….
There are infinite families of regular complex polytopes that occur in all dimensions, generalizing the hypercubes and cross polytopes in real space. Shephard's "generalized orthotope" generalizes the hypercube; it has symbol given by γ = "p"{4}2{3}2…2{3}2 and diagram …. Its symmetry group has diagram "p"[4]2[3]2…2[3]2; in the Shephard–Todd classification, this is the group G("p", 1, "n") generalizing the signed permutation matrices. Its dual regular polytope, the "generalized cross polytope", is represented by the symbol β = 2{3}2{3}2…2{4}"p" and diagram ….
A 1-dimensional "regular complex polytope" in formula_0 is represented as , having "p" vertices, with its real representation a regular polygon, {"p"}. Coxeter also gives it symbol γ or β as 1-dimensional generalized hypercube or cross polytope. Its symmetry is "p"[] or , a cyclic group of order "p". In a higher polytope, "p"{} or represents a "p"-edge element, with a 2-edge, {} or , representing an ordinary real edge between two vertices.
A dual complex polytope is constructed by exchanging "k" and ("n"-1-"k")-elements of an "n"-polytope. For example, a dual complex polygon has vertices centered on each edge, and new edges are centered at the old vertices. A "v"-valence vertex creates a new "v"-edge, and "e"-edges become "e"-valence vertices. The dual of a regular complex polytope has a reversed symbol. Regular complex polytopes with symmetric symbols, i.e. "p"{"q"}"p", "p"{"q"}"r"{"q"}"p", "p"{"q"}"r"{"s"}"r"{"q"}"p", etc. are self dual.
Enumeration of regular complex polyhedra.
Coxeter enumerated this list of nonstarry regular complex polyhedra in formula_9, including the 5 platonic solids in formula_10.
A regular complex polyhedron, "p"{"n"1}"q"{"n"2}"r" or , has faces, edges, and vertex figures.
A complex regular polyhedron "p"{"n"1}"q"{"n"2}"r" requires both "g"1 = order("p"["n"1]"q") and "g"2 = order("q"["n"2]"r") be finite.
Given "g" = order("p"["n"1]"q"["n"2]"r"), the number of vertices is "g"/"g"2, and the number of faces is "g"/"g"1. The number of edges is "g"/"pr".
Visualizations of regular complex polyhedra.
Generalized octahedra have a regular construction as and quasiregular form as . All elements are simplexes.
Generalized cubes have a regular construction as and prismatic construction as , a product of three "p"-gonal 1-polytopes. Elements are lower dimensional generalized cubes.
Enumeration of regular complex 4-polytopes.
Coxeter enumerated this list of nonstarry regular complex 4-polytopes in formula_11, including the 6 convex regular 4-polytopes in formula_8.
Visualizations of regular complex 4-polytopes.
Generalized 4-orthoplexes have a regular construction as and quasiregular form as . All elements are simplexes.
Generalized tesseracts have a regular construction as and prismatic construction as , a product of four "p"-gonal 1-polytopes. Elements are lower dimensional generalized cubes.
Enumeration of regular complex 5-polytopes.
Regular complex 5-polytopes in formula_12 or higher exist in three families: the real simplexes, and the generalized hypercubes and orthoplexes.
Visualizations of regular complex 5-polytopes.
Generalized 5-orthoplexes have a regular construction as and quasiregular form as . All elements are simplexes.
Generalized 5-cubes have a regular construction as and prismatic construction as , a product of five "p"-gonal 1-polytopes. Elements are lower dimensional generalized cubes.
Enumeration of regular complex 6-polytopes.
Visualizations of regular complex 6-polytopes.
Generalized 6-orthoplexes have a regular construction as and quasiregular form as . All elements are simplexes.
Generalized 6-cubes have a regular construction as and prismatic construction as , a product of six "p"-gonal 1-polytopes. Elements are lower dimensional generalized cubes.
Enumeration of regular complex apeirotopes.
Coxeter enumerated this list of nonstarry regular complex apeirotopes or honeycombs.
For each dimension there are 12 apeirotopes, symbolized as δ, which exist in any dimension formula_13, or formula_14 if "p"="q"=2. Coxeter calls these generalized cubic honeycombs for "n">2.
Each has proportional element counts given as:
k-faces = formula_15, where formula_16 and "n"! denotes the factorial of "n".
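A minimal numerical check of this count (plain Python, using math.comb for the binomial coefficient): at "p" = "r" = 2 and "n" = 3 it reproduces the 1:3:3:1 proportions of the real cubic honeycomb.

```python
from math import comb

def k_faces(n, p, r):
    """Proportional k-face counts C(n,k) * p^(n-k) * r^k."""
    return [comb(n, k) * p**(n - k) * r**k for k in range(n + 1)]

print(k_faces(3, 2, 2))   # [8, 24, 24, 8], i.e. the ratio 1:3:3:1
```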
Regular complex 1-polytopes.
The only regular complex 1-polytope is ∞{}, or . Its real representation is an apeirogon, {∞}, or .
Regular complex apeirogons.
Rank 2 complex apeirogons have symmetry "p"["q"]"r", where 1/"p" + 2/"q" + 1/"r" = 1. Coxeter expresses them as δ where "q" is constrained to satisfy "q" = 2/(1 – ("p" + "r")/"pr").
There are 8 solutions:
There are two excluded solutions with odd "q" and unequal "p" and "r": 10[5]2 and 12[3]4, or and .
A regular complex apeirogon "p"{"q"}"r" has "p"-edges and "r"-gonal vertex figures. The dual apeirogon of "p"{"q"}"r" is "r"{"q"}"p". An apeirogon of the form "p"{"q"}"p" is self-dual. Groups of the form "p"[2"q"]2 have a half symmetry "p"["q"]"p", so a regular apeirogon is the same as quasiregular .
Apeirogons represented on the Argand plane share four different vertex arrangements. Apeirogons of the form 2{"q"}"r" have a vertex arrangement as {"q"/2,"p"}. Those of the form "p"{"q"}2 have a vertex arrangement as r{"p","q"/2}. Apeirogons of the form "p"{4}"r" have vertex arrangements {"p","r"}.
Including affine nodes, and formula_1, there are 3 more infinite solutions: ∞[2]∞, ∞[4]2, ∞[3]3, and , , and . The first is an index 2 subgroup of the second. The vertices of these apeirogons exist in formula_0.
Regular complex apeirohedra.
There are 22 regular complex apeirohedra, of the form "p"{"a"}"q"{"b"}"r". 8 are self-dual ("p"="r" and "a"="b"), while 14 exist as dual polytope pairs. Three are entirely real ("p"="q"="r"=2).
Coxeter symbolizes 12 of them as δ, or "p"{4}2{4}"r", the regular form of the product apeirotope δ × δ, or "p"{"q"}"r" × "p"{"q"}"r", where "q" is determined from "p" and "r".
is the same as , as well as , for "p","r"=2,3,4,6. Also = .
Regular complex 3-apeirotopes.
There are 16 regular complex apeirotopes in formula_9. Coxeter expresses 12 of them by δ where "q" is constrained to satisfy "q" = 2/(1 – ("p" + "r")/"pr"). These can also be decomposed as product apeirotopes: = . The first case is the formula_10 cubic honeycomb.
Regular complex 4-apeirotopes.
There are 15 regular complex apeirotopes in formula_11. Coxeter expresses 12 of them by δ where "q" is constrained to satisfy "q" = 2/(1 – ("p" + "r")/"pr"). These can also be decomposed as product apeirotopes: = . The first case is the formula_8 tesseractic honeycomb. The 16-cell honeycomb and 24-cell honeycomb are real solutions. The last solution is generated with Witting polytope elements.
Regular complex 5-apeirotopes and higher.
There are only 12 regular complex apeirotopes in formula_12 or higher, expressed as δ where "q" is constrained to satisfy "q" = 2/(1 – ("p" + "r")/"pr"). These can also be decomposed as a product of "n" apeirogons: ... = ... . The first case is the real formula_14 hypercube honeycomb.
van Oss polygon.
A van Oss polygon is a regular polygon in the plane (real plane formula_17, or unitary plane formula_1) in which both an edge and the centroid of a regular polytope lie, and formed of elements of the polytope. Not all regular polytopes have van Oss polygons.
For example, the van Oss polygons of a real octahedron are the three squares whose planes pass through its center. In contrast a cube does not have a van Oss polygon because the edge-to-center plane cuts diagonally across two square faces and the two edges of the cube which lie in the plane do not form a polygon.
Infinite honeycombs also have van Oss apeirogons. For example, the real square tiling and triangular tiling have {∞} van Oss apeirogons.
If it exists, the van Oss polygon of a regular complex polytope of the form "p"{"q"}"r"{"s"}"t"... has "p"-edges.
Non-regular complex polytopes.
Product complex polytopes.
Some complex polytopes can be represented as Cartesian products. These product polytopes are not strictly regular since they have more than one facet type, but some can represent lower symmetry of regular forms if all the orthogonal polytopes are identical. For example, the product "p"{}×"p"{} or of two 1-dimensional polytopes is the same as the regular "p"{4}2 or . More general products, like "p"{}×"q"{}, have real representations as the 4-dimensional "p"-"q" duoprisms. The dual of a product polytope can be written as a sum "p"{}+"q"{} and has a real representation as the 4-dimensional "p"-"q" duopyramid. The "p"{}+"p"{} can have its symmetry doubled as a regular complex polytope 2{4}"p" or .
Similarly, a formula_9 complex polyhedron can be constructed as a triple product: "p"{}×"p"{}×"p"{} or is the same as the regular "generalized cube", "p"{4}2{3}2 or , as well as product "p"{4}2×"p"{} or .
Quasiregular polygons.
A quasiregular polygon is a truncation of a regular polygon. A quasiregular polygon contains alternate edges of the regular polygons and . The quasiregular polygon has "p" vertices on the p-edges of the regular form.
Quasiregular apeirogons.
There are 7 quasiregular complex apeirogons which alternate edges of a regular apeirogon and its regular dual. The vertex arrangements of these apeirogons have real representations as the regular and uniform tilings of the Euclidean plane. In the last column, the 6{3}6 apeirogon is not only self-dual, but its dual coincides with itself with overlapping hexagonal edges; thus its quasiregular form also has overlapping hexagonal edges, so it cannot be drawn with two alternating colors like the others. The symmetry of the self-dual families can be doubled, creating an identical geometry to the regular forms: =
Quasiregular polyhedra.
Like real polytopes, a complex quasiregular polyhedron can be constructed as a rectification (a complete truncation) of a regular polyhedron. Vertices are created mid-edge of the regular polyhedron and faces of the regular polyhedron and its dual are positioned alternating across common edges.
For example, a p-generalized cube, , has "p"3 vertices, 3"p"2 edges, and 3"p" "p"-generalized square faces, while the "p"-generalized octahedron, , has 3"p" vertices, 3"p"2 edges and "p"3 triangular faces. The middle quasiregular form "p"-generalized cuboctahedron, , has 3"p"2 vertices, 3"p"3 edges, and 3"p"+"p"3 faces.
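These counts can be sanity-checked at "p" = 2, where the generalized figures reduce to the real cube, octahedron and cuboctahedron; the sketch below (plain Python, with an invented helper name) does exactly that:

```python
def generalized_counts(p):
    cube = (p**3, 3 * p**2, 3 * p)                 # vertices, edges, faces
    octahedron = (3 * p, 3 * p**2, p**3)
    cuboctahedron = (3 * p**2, 3 * p**3, 3 * p + p**3)
    return cube, octahedron, cuboctahedron

print(generalized_counts(2))
# ((8, 12, 6), (6, 12, 8), (12, 24, 14)): real cube, octahedron, cuboctahedron
```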
Also the rectification of the Hessian polyhedron , is , a quasiregular form sharing the geometry of the regular complex polyhedron .
Other complex polytopes with unitary reflections of period two.
Other nonregular complex polytopes can be constructed within unitary reflection groups that don't make linear Coxeter graphs. In Coxeter diagrams with loops, Coxeter marks a special period interior, like or symbol (11 1 1)3, and group [1 1 1]3. These complex polytopes have not been systematically explored beyond a few cases.
The group is defined by 3 unitary reflections, R1, R2, R3, all order 2: R12 = R22 = R32 = (R1R2)3 = (R2R3)3 = (R3R1)3 = (R1R2R3R1)"p" = 1. The period "p" can be seen as a double rotation in real formula_8.
As with all Wythoff constructions, polytopes generated by reflections, the number of vertices of a single-ringed Coxeter diagram polytope is equal to the order of the group divided by the order of the subgroup where the ringed node is removed. For example, a real cube has Coxeter diagram , with octahedral symmetry order 48, and subgroup dihedral symmetry order 6, so the number of vertices of a cube is 48/6=8. Facets are constructed by removing one node furthest from the ringed node, for example for the cube. Vertex figures are generated by removing a ringed node and ringing one or more connected nodes, and for the cube.
Coxeter represents these groups by the following symbols. Some groups have the same order, but a different structure, defining the same vertex arrangement in complex polytopes, but different edges and higher elements, like and with "p"≠3.
Coxeter calls some of these complex polyhedra "almost regular" because they have regular facets and vertex figures. The first is a lower symmetry form of the generalized cross-polytope in formula_9. The second is a fractional generalized cube, reducing "p"-edges into single vertices leaving ordinary 2-edges. Three of them are related to the finite regular skew polyhedron in formula_8.
Coxeter defines other groups with anti-unitary constructions, for example these three. The first was discovered and drawn by Peter McMullen in 1966.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{C}^1"
},
{
"math_id": 1,
"text": "\\mathbb{C}^2"
},
{
"math_id": 2,
"text": "x^p - 1 = 0"
},
{
"math_id": 3,
"text": "\\mathbb{R}^1"
},
{
"math_id": 4,
"text": "g = 8/q \\cdot (1/p+2/q+1/r-1)^{-2}"
},
{
"math_id": 5,
"text": "h = 2/(1/p+2/q+1/r-1)"
},
{
"math_id": 6,
"text": "g = 2h^2/q"
},
{
"math_id": 7,
"text": "\\sqrt \\frac{ cos(\\frac{\\pi}{p}-\\frac{\\pi}{r})+cos(\\frac{2\\pi}{q}) }{2\\sin\\frac{\\pi}{p}\\sin\\frac{\\pi}{r} } "
},
{
"math_id": 8,
"text": "\\mathbb{R}^4"
},
{
"math_id": 9,
"text": "\\mathbb{C}^3"
},
{
"math_id": 10,
"text": "\\mathbb{R}^3"
},
{
"math_id": 11,
"text": "\\mathbb{C}^4"
},
{
"math_id": 12,
"text": "\\mathbb{C}^5"
},
{
"math_id": 13,
"text": "\\mathbb{C}^n"
},
{
"math_id": 14,
"text": "\\mathbb{R}^n"
},
{
"math_id": 15,
"text": " {n \\choose k}p^{n-k}r^k "
},
{
"math_id": 16,
"text": "{n \\choose m}=\\frac{n!}{m!\\,(n-m)!}"
},
{
"math_id": 17,
"text": "\\mathbb{R}^2"
}
] | https://en.wikipedia.org/wiki?curid=8581934 |
85821 | Regular graph | Graph where each vertex has the same number of neighbors
In graph theory, a regular graph is a graph where each vertex has the same number of neighbors; i.e. every vertex has the same degree or valency. A regular directed graph must also satisfy the stronger condition that the indegree and outdegree of each internal vertex are equal to each other. A regular graph with vertices of degree "k" is called a "k"-regular graph or regular graph of degree "k".
<templatestyles src="Template:TOC_left/styles.css" />
Special cases.
Regular graphs of degree at most 2 are easy to classify: a 0-regular graph consists of disconnected vertices, a 1-regular graph consists of disconnected edges, and a 2-regular graph consists of a disjoint union of cycles and infinite chains.
A 3-regular graph is known as a cubic graph.
A strongly regular graph is a regular graph where every adjacent pair of vertices has the same number λ of neighbors in common, and every non-adjacent pair of vertices has the same number μ of neighbors in common. The smallest graphs that are regular but not strongly regular are the cycle graph and the circulant graph on 6 vertices.
The complete graph Km is strongly regular for any m.
Existence.
The necessary and sufficient conditions for a formula_0-regular graph of order formula_1 to exist are that formula_2 and that formula_3 is even.
Proof: A complete graph has every pair of distinct vertices connected to each other by a unique edge, so it has the maximum number of edges, namely formula_4, and the degree of each vertex is formula_5. So formula_6. This is the minimum formula_1 for a particular formula_0. Also note that any regular graph of order formula_1 has formula_7 edges, so formula_8 has to be even.
In such cases it is easy to construct regular graphs by considering appropriate parameters for circulant graphs.
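A hedged sketch of one such construction (plain Python; the helper name circulant is an illustrative placeholder): a circulant graph on "n" vertices whose connection set yields "k" distinct offsets (closed under negation modulo "n") is "k"-regular.

```python
def circulant(n, jumps):
    """Adjacency lists of the circulant graph C_n(jumps)."""
    offsets = {j % n for j in jumps} | {-j % n for j in jumps}
    return {v: sorted((v + d) % n for d in offsets) for v in range(n)}

g = circulant(8, [1, 2])                        # a 4-regular graph on 8 vertices
print({v: len(nbrs) for v, nbrs in g.items()})  # every vertex has degree 4
```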
Properties.
From the handshaking lemma, a k-regular graph with odd k has an even number of vertices.
A theorem by Nash-Williams says that every "k"-regular graph on 2"k" + 1 vertices has a Hamiltonian cycle.
Let "A" be the adjacency matrix of a graph. Then the graph is regular if and only if formula_9 is an eigenvector of "A". Its eigenvalue will be the constant degree of the graph. Eigenvectors corresponding to other eigenvalues are orthogonal to formula_10, so for such eigenvectors formula_11, we have formula_12.
A regular graph of degree "k" is connected if and only if the eigenvalue "k" has multiplicity one. The "only if" direction is a consequence of the Perron–Frobenius theorem.
There is also a criterion for regular and connected graphs: a graph is connected and regular if and only if the matrix of ones "J", with formula_13, is in the adjacency algebra of the graph (meaning it is a linear combination of powers of "A").
Let "G" be a "k"-regular graph with diameter "D" and eigenvalues of adjacency matrix formula_14. If "G" is not bipartite, then
formula_15
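This bound can be checked numerically; the sketch below (again assuming NumPy) evaluates it for the non-bipartite cycle "C"5, whose diameter is 2:

```python
import numpy as np

A = np.roll(np.eye(5, dtype=int), 1, axis=1)
A = A + A.T                                   # adjacency matrix of C_5
lam = np.sort(np.linalg.eigvalsh(A))[::-1]    # eigenvalues, descending
print(np.log(5 - 1) / np.log(lam[0] / lam[1]) + 1)   # ~2.18 >= D = 2
```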
Generation.
Fast algorithms exist to generate, up to isomorphism, all regular graphs with a given degree and number of vertices.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": " n \\geq k+1 "
},
{
"math_id": 3,
"text": " nk "
},
{
"math_id": 4,
"text": "\\binom{n}{2} = \\dfrac{n(n-1)}{2}"
},
{
"math_id": 5,
"text": "n-1"
},
{
"math_id": 6,
"text": "k=n-1,n=k+1"
},
{
"math_id": 7,
"text": "\\dfrac{nk}{2}"
},
{
"math_id": 8,
"text": "nk"
},
{
"math_id": 9,
"text": "\\textbf{j}=(1, \\dots ,1)"
},
{
"math_id": 10,
"text": "\\textbf{j}"
},
{
"math_id": 11,
"text": "v=(v_1,\\dots,v_n)"
},
{
"math_id": 12,
"text": "\\sum_{i=1}^n v_i = 0"
},
{
"math_id": 13,
"text": "J_{ij}=1"
},
{
"math_id": 14,
"text": "k=\\lambda_0 >\\lambda_1\\geq \\cdots\\geq\\lambda_{n-1}"
},
{
"math_id": 15,
"text": "D\\leq \\frac{\\log{(n-1)}}{\\log(\\lambda_0/\\lambda_1)}+1. "
}
] | https://en.wikipedia.org/wiki?curid=85821 |
8585531 | T-function | Mathematical function used in cryptography
In cryptography, a T-function is a bijective mapping that updates every bit of the state in a way that can be described as formula_0, or in simple words, an update function in which each bit of the state is updated by a linear combination of the same bit and a function of a subset of its less significant bits. If every single less significant bit is included in the update of every bit in the state, such a T-function is called triangular. Thanks to their bijectivity (no collisions, therefore no entropy loss) regardless of the Boolean functions used and regardless of the selection of inputs (as long as they all come from one side of the output bit), T-functions are now widely used in cryptography to construct block ciphers, stream ciphers, PRNGs and hash functions. T-functions were first proposed in 2002 by A. Klimov and A. Shamir in their paper "A New Class of Invertible Mappings". Ciphers such as TSC-1, TSC-3, TSC-4, ABC, Mir-1 and VEST are built with different types of T-functions.
Because arithmetic operations such as addition, subtraction and multiplication are also T-functions (triangular T-functions), software-efficient word-based T-functions can be constructed by combining bitwise logic with arithmetic operations. Another important property of T-functions based on arithmetic operations is predictability of their period, which is highly attractive to cryptographers. Although triangular T-functions are naturally vulnerable to guess-and-determine attacks, well chosen bitwise transpositions between rounds can neutralize that imbalance. In software-efficient ciphers, it can be done by interleaving arithmetic operations with byte-swapping operations and to a small degree with bitwise rotation operations. However, triangular T-functions remain highly inefficient in hardware.
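A concrete example of such a word-based T-function is the mapping "x" ← "x" + ("x"2 OR 5) mod 2"n" given by Klimov and Shamir, which combines multiplication, addition and bitwise OR and was shown to be an invertible mapping with a single cycle of length 2"n". The sketch below (plain Python, with an illustratively small word size) walks that cycle:

```python
N = 8                                  # word width in bits (small for the demo)
MASK = (1 << N) - 1

def step(x):
    """One application of x -> x + (x*x | 5) mod 2^N."""
    return (x + (x * x | 5)) & MASK

x, seen = 0, set()
while x not in seen:                   # walk the cycle starting from state 0
    seen.add(x)
    x = step(x)
print(len(seen))                       # 256 = 2^8: a single cycle over all states
```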
T-functions do not have any restrictions on the types and the widths of the update functions used for each bit. Subsequent transposition of the output bits and iteration of the T-function also do not affect bijectivity. This freedom allows the designer to choose the update functions or S-boxes that satisfy all other cryptographic criteria and even choose arbitrary or key-dependent update functions (see family keying).
Hardware-efficient lightweight T-functions with identical widths of all the update functions for each bit of the state can thus be easily constructed. The core accumulators of VEST ciphers are a good example of such reasonably lightweight T-functions that are balanced out after 2 rounds by the transposition layer, which makes all the 2-round feedback functions roughly the same width and removes the "T-function" bias of depending only on the less significant bits of the state.
{
"math_id": 0,
"text": "x_i' = x_i + f(x_0, \\cdots, x_{i-1})"
}
] | https://en.wikipedia.org/wiki?curid=8585531 |
8588347 | Arens–Fort space | In mathematics, the Arens–Fort space is a special example in the theory of topological spaces, named for Richard Friederich Arens and M. K. Fort, Jr.
Definition.
The Arens–Fort space is the topological space formula_0 where formula_1 is the set of ordered pairs of non-negative integers formula_2 A subset formula_3 is open, that is, belongs to formula_4 if and only if it does not contain formula_6, or it contains formula_6 and all but a finite number of the columns formula_8, for formula_9, each contain all but a finite number of their points.
In other words, an open set is only "allowed" to contain formula_7 if only a finite number of its columns contain significant gaps, where a gap in a column is significant if it omits an infinite number of points.
Properties.
It is
It is not:
There is no sequence in formula_10 that converges to formula_11 However, there is a sequence formula_12 in formula_10 such that formula_7 is a cluster point of formula_13 | [
{
"math_id": 0,
"text": "(X,\\tau)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "(m, n)."
},
{
"math_id": 3,
"text": "U \\subseteq X"
},
{
"math_id": 4,
"text": "\\tau,"
},
{
"math_id": 5,
"text": "U"
},
{
"math_id": 6,
"text": "(0, 0),"
},
{
"math_id": 7,
"text": "(0, 0)"
},
{
"math_id": 8,
"text": "\\{ (m, n) ~:~ 0 \\leq n \\in \\mathbb{Z} \\}"
},
{
"math_id": 9,
"text": "0 \\leq m \\in \\mathbb{Z}"
},
{
"math_id": 10,
"text": "X \\setminus \\{ (0, 0) \\}"
},
{
"math_id": 11,
"text": "(0, 0)."
},
{
"math_id": 12,
"text": "x_{\\bull} = \\left( x_i \\right)_{i=1}^{\\infty}"
},
{
"math_id": 13,
"text": "x_{\\bull}."
}
] | https://en.wikipedia.org/wiki?curid=8588347 |
858843 | Acetylation | Chemical reaction that attaches an acetyl group to a compound
In chemistry, acetylation is an organic esterification reaction with acetic acid. It introduces an acetyl group into a chemical compound. Such compounds are termed "acetate esters" or simply "acetates". Deacetylation is the opposite reaction, the removal of an acetyl group from a chemical compound.
Acetylation/deacetylation in biology.
Deacylations "play crucial roles in gene transcription and most likely in all eukaryotic biological processes that involve chromatin".
Acetylation is one type of post-translational modification of proteins. The acetylation of the ε-amino group of lysine, which is common, converts a charged side chain to a neutral one. Acetylation/deacetylation of histones also plays a role in gene expression and cancer. These modifications are effected by enzymes called histone acetyltransferases (HATs) and histone deacetylases (HDACs).
Two general mechanisms are known for deacetylation. One mechanism involves zinc binding to the acetyl oxygen. Another family of deacetylases requires NAD+, which transfers a ribosyl group to the acetyl oxygen.
Organic synthesis.
Acetate esters and acetamides are generally prepared by acetylations. Acetylations are often used in making C-acetyl bonds in Friedel-Crafts reactions. Carbanions and their equivalents are susceptible to acetylations.
Acetylation reagents.
Many acetylations are achieved using these three reagents:
formula_0
Acetylation of cellulose.
Cellulose is a polyol and thus susceptible to acetylation, which is achieved using acetic anhydride. Acetylation disrupts hydrogen bonding, which otherwise dominates the properties of cellulose. Consequently, the cellulose esters are soluble in organic solvents and can be cast into fibers and films.
Transacetylation.
Transacetylation uses vinyl acetate as an acetyl donor and lipase as a catalyst. This methodology allows the preparation of enantio-enriched alcohols and acetates.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta H = -63 \\text{ kJ/mol}"
}
] | https://en.wikipedia.org/wiki?curid=858843 |
858914 | Al-Battani | Islamic astronomer and mathematician (died 929)
Abū ʿAbd Allāh Muḥammad ibn Jābir ibn Sinān al-Raqqī al-Ḥarrānī aṣ-Ṣābiʾ al-Battānī (), usually called al-Battānī, a name that was in the past Latinized as Albategnius, (before 858 – 929) was an astronomer, astrologer and mathematician, who lived and worked for most of his life at Raqqa, now in Syria. He is considered to be the greatest and most famous of the astronomers of the medieval Islamic world.
Al-Battānī's writings became instrumental in the development of science and astronomy in the west. His (c. 900), is the earliest extant (astronomical table) made in the Ptolemaic tradition that is hardly influenced by Hindu or Sasanian astronomy. Al-Battānī refined and corrected Ptolemy's "Almagest", but also included new ideas and astronomical tables of his own. A handwritten Latin version by the Italian astronomer Plato Tiburtinus was produced between 1134 and 1138, through which medieval astronomers became familiar with al-Battānī. In 1537, a Latin translation of the was printed in Nuremberg. An annotated version, also in Latin, published in three separate volumes between 1899 and 1907 by the Italian Orientalist Carlo Alfonso Nallino, provided the foundation of the modern study of medieval Islamic astronomy.
Al-Battānī's observations of the Sun led him to understand the nature of annular solar eclipses. He accurately calculated the Earth's obliquity (the angle between the planes of the equator and the ecliptic), the solar year, and the equinoxes (obtaining a value for the precession of the equinoxes of one degree in 66 years). The accuracy of his data encouraged Nicolaus Copernicus to pursue ideas about the heliocentric nature of the cosmos. Al-Battānī's tables were used by the German mathematician Christopher Clavius in reforming the Julian calendar, and the astronomers Tycho Brahe, Johannes Kepler, Galileo Galilei and Edmund Halley all used Al-Battānī's observations.
Al-Battānī introduced the use of sines and tangents in geometrical calculations, replacing the geometrical methods of the Greeks. Using trigonometry, he created an equation for finding the (the direction which Muslims need to face during their prayers). His equation was widely used until superseded by more accurate methods, introduced a century later by the polymath al-Biruni.
Life.
Al-Battānī, whose full name was , and whose Latinized name was , was born before 858 in Harran in Bilād ash-Shām (Islamic Syria), southeast of the modern Turkish city of Urfa. He was the son of Jabir ibn Sinan al-Harrani, a maker of astronomical instruments. The epithet suggests that his family belonged to the pagan Sabian sect of Harran, whose religion featured star worship, and who had inherited the Mesopotamian legacy of an interest in mathematics and astronomy. His contemporary, the polymath Thābit ibn Qurra, was also an adherent of Sabianism, which died out during the 11th century.
Although his ancestors were likely Sabians, al-Battānī was a Muslim, as shown by his first name. Between 877 and 918/19 he lived in Raqqa, now in north central Syria, which was an ancient Roman settlement beside the Euphrates, near Harran. During this period he also lived in Antioch, where he observed a solar and a lunar eclipse in 901. According to the Arab biographer Ibn al-Nadīm, the financial problems encountered by al-Battānī in old age forced him to move from Raqqa to Baghdad.
Al-Battānī died in 929 at Qasr al-Jiss, near Samarra, after returning from Baghdad where he had resolved an unfair taxation grievance on behalf of a clan from Raqqa.
Astronomy.
Al-Battānī is considered to be the greatest and most famous of the known astronomers of the medieval Islamic world. He made more accurate observations of the night sky than any of his contemporaries, and was the first of a generation of new Islamic astronomers that followed the founding of the House of Wisdom in the 8th century. His meticulously described methods allowed others to assess his results, but some of his explanations about the movements of the planets were poorly written and contain mistakes.
Sometimes referred to as the "Ptolemy of the Arabs", al-Battānī's works reveal him to have been a devout believer in Ptolemy's geocentric model of the cosmos. He refined the observations found in Ptolemy's , and compiled new tables of the Sun and the Moon, previously long accepted as authoritative. Al-Battānī established his own observatory at Raqqa. He recommended that the astronomical instruments there were greater than in size. Such instruments, being larger—and so having scales capable of measuring smaller values—were capable of greater precision than had previously been achieved. Some of his measurements were more accurate than those taken by the Polish astronomer and mathematician Nicolaus Copernicus during the Renaissance. One reason for this is thought to be that al-Battānī's location for his observations at Raqqa was closer to the Earth's equator, so that the ecliptic and the Sun, being higher in the sky, were less susceptible to atmospheric refraction. The careful construction and alignment of his astronomical instruments enabled him to achieve an accuracy of observations of equinoxes and solstices that had previously been unknown.
Al-Battānī was one of the first astronomers to observe that the distance between the Earth and the Sun varies during the year, which led him to understand the reason why annular solar eclipses occur. He saw that the position in the sky at which the angular diameter of the Sun appeared smallest was no longer located where Ptolemy had stated it should be, and that since Ptolemy's time, the longitudinal position of the apogee had increased by 16°47'.
Al-Battānī was an excellent observer. He improved Ptolemy's measurement of the obliquity of the ecliptic (the angle between the planes of the equator and the ecliptic), producing a value of 23° 35'; the accepted value is around 23.44°. Al-Battānī obtained the criterion for observation of the lunar crescent—i.e., if the longitude difference between the Moon and the Sun is greater than 13° 66˝ and the Moon's delay after sunset is more than 43.2 minutes, the crescent will be visible. His value for the solar year of 365 days, 5 hours, 46 minutes and 24 seconds, is 2 minutes and 22 seconds from the accepted value.
Al-Battānī observed changes in the direction of the Sun's apogee, as recorded by Ptolemy, and that as a result, the equation of time was subject to a slow cyclical variation. His careful measurements of when the March and September equinoxes took place allowed him to obtain a value for the precession of the equinoxes of 54.5" per year, or 1 degree in 66 years, a phenomenon that he realised was altering the Sun's annual apparent motion through the zodiac constellations.
It was impossible for al-Battānī, who adhered to the ideas of a stationary Earth and geocentricism, to understand the underlying scientific reasons for his observations or the importance of his discoveries.
Mathematics.
One of al-Battānī's greatest contributions was his introduction of the use of sines and tangents in geometrical calculations, especially spherical trigonometric functions, to replace Ptolemy's geometrical methods. Al-Battānī's methods involved some of the most complex mathematics developed up to that time. He was aware of the superiority of trigonometry over geometrical chords, and demonstrated awareness of a relation between the sides and angles of a spherical triangle, now given by the expression:
formula_0
Al-Battānī produced a number of trigonometrical relationships:
formula_1
formula_2, where formula_3.
He also solved the equation
formula_4,
discovering the formula
formula_5
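A quick numerical check of this identity (plain Python, taking "y" = 1 so that "x" = 45°):

```python
from math import atan, sin, sqrt

y = 1.0
x = atan(y)                            # solves sin x = y cos x
print(sin(x), y / sqrt(1 + y**2))      # both ~0.70711
```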
Al-Battānī used the Iranian astronomer Habash al-Hasib al-Marwazi's idea of tangents to develop equations for calculating and compiling tables of both tangents and cotangents. He discovered their reciprocal functions, the secant and cosecant, and produced the first table of cosecants for each degree from 1° to 90°, which he referred to as a "table of shadows", in reference to the shadow produced on a sundial.
Using these trigonometrical relationships, al-Battānī created an equation for finding the , which Muslims face in each of the five prayers they practice every day. The equation he created did not give accurate directions, as it did not take into account the fact that Earth is a sphere. The relationship he used was precise enough only for a person located in (or close to) Mecca, but was still a widely used method at the time. Al-Battānī's equation for formula_6, the angle of the direction of a place towards Mecca is given by:
formula_7
where formula_8 is the difference between the longitude of the place and Mecca, and formula_9 is the difference between the latitude of the place and Mecca.
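For illustration only, the sketch below evaluates this relation in Python, using modern approximate coordinates for Mecca and Raqqa (assumed values, not figures from al-Battānī's work) and atan2 to recover the angle "q":

```python
from math import radians, degrees, atan2, sin

MECCA_LAT, MECCA_LON = 21.42, 39.83    # assumed modern coordinates of Mecca

def battani_qibla(lat, lon):
    """tan q = sin(delta_lambda) / sin(delta_phi), solved with atan2."""
    d_lon = radians(lon - MECCA_LON)   # longitude difference to Mecca
    d_lat = radians(lat - MECCA_LAT)   # latitude difference to Mecca
    return degrees(atan2(sin(d_lon), sin(d_lat)))

print(battani_qibla(35.95, 39.01))     # angle q for (approximately) Raqqa
```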
Al-Battānī's equation was superseded a century after it was first used, when the polymath al-Biruni summarized several other methods to produce results that were more accurate than those that could be obtained using al-Battānī's equation.
A small work on trigonometry, ("Summary of the principles for establishing sines") is known. Once attributed to the Iranian astronomer Kushyar Gilani by the German orientalist Carl Brockelmann, it is a fragment of al-Battānī's . The manuscript is extant in Istanbul as MS Carullah 1499/3. The authenticity of this work has been questioned, as scholars believe al-Battānī would not have included the word for "sines" in the title.
Works.
Al-Battānī's ( or , "Book of Astronomical Tables"), written in around 900, and also known as the (), is the earliest extant made in the Ptolemaic tradition that is hardly influenced by Hindu or Sasanian–Iranian astronomy. It corrected mistakes made by Ptolemy and described instruments such as horizontal and vertical sundials, the triquetrum, the mural instrument, and a quadrant instrument. Ibn al-Nadim wrote that al-Battānī's existed in two different editions, "the second being better than the first". In the west, the work was sometimes called the "Sabean Tables".
The work, consisting of 57 chapters and additional tables, is extant (in the manuscript árabe 908, held in El Escorial), copied in Al-Andalus during the 12th or 13th century. Incomplete copies exist in other western European libraries. Much of the book consists of instructions for using the attached tables. Al-Battānī used an Arabic translation of the "Almagest" made from Syriac, and used few foreign terms. He copied some data directly from Ptolemy's Handy Tables, but also produced his own. His star table of 880 used around half the stars found in the then 743-year-old "Almagest". It was made by increasing Ptolemy's stellar longitudes, to allow for the different positions of the stars, now known to be caused by precession.
Other based on include those written by Kushyar Gilani, Alī ibn Ahmad al-Nasawī, Abū Rashīd Dāneshī, and Ibn al-Kammad.
The first version in Latin from the Arabic was made by the English astronomer Robert of Ketton; this version is now lost. A Latin edition was also produced by the Italian astronomer Plato Tiburtinus between 1134 and 1138. Medieval astronomers became quite familiar with al-Battānī through this translation, renamed ("On stellar motion"). It was also translated from Arabic into Spanish during the 13th century, under the orders of Alphonso X of Castile; a part of the manuscript is extant.
The appears to have been widely used until the early 12th century. One 11th-century , now lost, was compiled by al-Nasawī. That it was based on al-Battānī can be inferred from the matching values for the longitudes of the solar and planetary apogees. Al-Nasawī had as a young man written astronomical tables using data obtained from al-Battānī's , but then discovered the data he used had been superseded by more accurately made calculations.
The invention of movable type in 1436 made it possible for astronomical works to be circulated more widely, and a Latin translation of the was printed in Nuremberg in 1537 by the astronomer Regiomontanus, which enabled Al-Battānī's observations to become accessible at the start of the scientific revolution in astronomy. The was reprinted in Bologna in 1645; the original document is preserved at the Vatican Library in Rome.
The Latin translations, including the printed edition of 1537, made the influential in the development of European astronomy. A chapter of the also appeared as a separate work, ("On the accurate determination of the quantities of conjunctions [according to the latitudes of the planets]").
Al-Battānī's work was published in three volumes, in 1899, 1903, and 1907, by the Italian Orientalist Carlo Alfonso Nallino, who gave it the title . Nallino's edition, although in Latin, is the foundation of the modern study of medieval Islamic astronomy.
(, “The book of the science of the ascensions of the signs of the zodiac in the spaces between the quadrants of the celestial sphere”) may have been about calculations relating to the zodiac. The work is mentioned in a work by Ibn al-Nadim, and is probably identical with chapter 55 of al-Battānī's . It provided methods of calculation needed in the astrological problem of finding (directio).
Legacy.
Medieval period.
The was renowned by medieval Islamic astronomers; the Arab polymath al-Bīrūnī wrote ("Elucidation of genius in al-Battānī's Zīj"), now lost.
Al-Battānī's work was instrumental in the development of science and astronomy in the west. Once it became known, it was used by medieval European astronomers and during the Renaissance. He influenced Jewish rabbis and philosophers such as Abraham ibn Ezra and Gersonides. The 12th-century scholar Moses Maimonides, the intellectual leader of medieval Judaism, closely followed al-Battānī. Hebrew editions of the were produced by the 12th-century Catalan astronomer Abraham bar Hiyya and the 14th-century French mathematician Immanuel Bonfils.
Copernicus referred to "al-Battani the Harranite" when discussing the orbits of Mercury and Venus. He compared his own value for the sidereal year with those obtained by al-Battānī, Ptolemy and a value he attributed to the 9th-century scholar Thabit ibn Qurra. The accuracy of al-Battānī's observations encouraged Copernicus to pursue his ideas about the heliocentric nature of the cosmos, and in the book that initiated the Copernican Revolution, the , al-Battānī is mentioned 23 times.
16th and 17th centuries.
Al-Battānī's tables were used by the German mathematician Christopher Clavius in reforming the Julian calendar, leading to it being replaced by the Gregorian calendar in 1582. The astronomers Tycho Brahe, Giovanni Battista Riccioli, Johannes Kepler and Galileo Galilei cited Al-Battānī or his observations. His almost exactly correct value obtained for the Sun's eccentricity is better than the values determined by both Copernicus and Brahe.
The lunar crater Albategnius was named in his honour during the 17th century. Like many of the craters on the Moon's near side, it was given its name by Riccioli, whose 1651 nomenclature system has become standardized.
In the 1690s, the English physicist and astronomer Edmund Halley, using Plato Tiburtinus's translation of al-Battānī's , discovered that the Moon's speed was possibly increasing. Halley researched the location of Raqqa, where al-Battānī's observatory had been built, using the astronomer's calculations for the solar obliquity, the interval between successive autumnal equinoxes and several solar and lunar eclipses seen from Raqqa and Antioch. From this information, Halley derived the mean motion and position of the Moon for the years 881, 882, 883, 891, and 901. To interpret his results, Halley was dependent upon knowing the location of Raqqa, which he was able to do once he had corrected the accepted value for the latitude of Aleppo.
18th century – present.
Al-Battānī's observations of eclipses were used by the English astronomer Richard Dunthorne to determine a value for the increasing speed of the Moon in its orbit; he calculated that the lunar longitude was changing at a rate of 10 arcseconds per century.
Al-Battānī's data is still used by geophysicists.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\cos a = \\cos b\\cos c + \\sin b\\sin c\\cos A"
},
{
"math_id": 1,
"text": "\\tan \\alpha = \\frac{\\sin \\alpha}{\\cos \\alpha}"
},
{
"math_id": 2,
"text": "\\sec \\alpha = \\sqrt{1 + \\tan^2 \\alpha }"
},
{
"math_id": 3,
"text": "\\sec \\alpha = \\frac{1}{\\cos \\alpha}"
},
{
"math_id": 4,
"text": "\\sin x = y\\cos x"
},
{
"math_id": 5,
"text": "\\sin x = \\frac{y}{\\sqrt{1 + y^2}}"
},
{
"math_id": 6,
"text": "q"
},
{
"math_id": 7,
"text": "\\tan q=\\frac{\\sin\\Delta\\lambda}{\\sin\\Delta\\phi}"
},
{
"math_id": 8,
"text": "\\Delta\\lambda"
},
{
"math_id": 9,
"text": "\\Delta\\phi"
}
] | https://en.wikipedia.org/wiki?curid=858914 |
8590426 | Athermalization | Process of achieving optothermal stability in optomechanical systems
Athermalization, in the field of optics, is the process of achieving optothermal stability in optomechanical systems. This is done by minimizing variations in optical performance over a range of temperatures.
Optomechanical systems are typically made of several materials with different thermal properties. These materials compose the optics (refractive or reflective elements) and the mechanics (optical mounts and system housing). As the temperature of these materials change, the volume and index of refraction will change as well, increasing strain and aberration content (primarily defocus). Compensating for optical variations over a temperature range is known as athermalizing a system in optical engineering.
Material property changes.
Thermal expansion is the driving phenomenon for the extensive and intensive property changes in an optomechanical system.
Extensive properties.
Extensive property changes, such as volume, alter the shape of optical and mechanical components. Systems are geometrically optimized for optical performance and are sensitive to components changing shape and orientation. While volume is a three-dimensional parameter, thermal changes can be modeled in a single dimension with linear expansion, assuming an adequately small temperature range. For example, glass manufacturer Schott provides the coefficient of linear thermal expansion for a temperature range of -30 C to 70 C. The change in length of a material is a function of the change in temperature with respect to the standard measurement temperature, formula_0. This temperature is typically room temperature or 22 degrees Celsius.
formula_1
formula_2
Where formula_3 is the length of a material at temperature formula_4, formula_5 is the length of the material at temperature formula_0, formula_6 is the change in temperature, and formula_7 is the coefficient of thermal expansion. These equations describe how diameter, thickness, radius of curvature, and element spacing change as a function of temperature.
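A minimal sketch of this one-dimensional model (plain Python; the CTE value is a placeholder of a typical magnitude for optical glass, not a catalog number):

```python
ALPHA = 7.1e-6       # assumed coefficient of linear thermal expansion, 1/K
T0 = 22.0            # standard measurement temperature, degrees Celsius

def length_at(T, L0, alpha=ALPHA, T_ref=T0):
    """L(T) = (1 + alpha * (T - T_ref)) * L0."""
    return (1 + alpha * (T - T_ref)) * L0

print(length_at(60.0, 100.0))   # a 100 mm spacer at 60 C -> ~100.027 mm
```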
Intensive properties.
The dominant intensive property change, in terms of optical performance, is the index of refraction. The refractive index of glass is a function of wavelength and temperature. There are multiple formulas that can be used to define the wavelength dependence, or dispersion, of a glass. Following the notation from Schott, the empirical Sellmeier equation is shown below.
formula_8
Where formula_9 is wavelength and formula_10, formula_11, formula_12, formula_13, formula_14, and formula_15 are the Sellmeier coefficients. These coefficients can be found in glass catalogs provided from manufacturers and are usually valid from the near-ultraviolet to the near-infrared. For wavelengths beyond this range, it is necessary to know the material's transmittance with respect to wavelength. From the dispersion formula, the temperature dependence of refractive index can be written:
formula_16
and
formula_17
Where formula_18, formula_19, formula_20, formula_21, formula_22, and formula_23 are glass-dependent constants for an optic in vacuum. The power of an optic as a function of temperature can be written from the equations for extensive and intensive property changes, in addition to the lensmaker's equation.
formula_24
formula_25
Where formula_26 is optical power, formula_27 is the radius of curvature, formula_28 is the thickness of the lens. These equations assume spherical surfaces of curvature. If a system is not in vacuum, the index of refraction for air will vary with temperature and pressure according to the Ciddor equation, a modified version of the Edlén equation.
Athermalization techniques.
To account for optical variations introduced by extensive and intensive property changes in materials, systems can be athermalized through material selection or feedback loops.
Passive athermalization.
Passive athermalization works by choosing materials for a system that will compensate the overall change in system performance. The simplest way to do this is to choose materials for the optics and mechanics which have low CTE and formula_29 values. This technique is not always possible as glass types are primarily chosen based on their refractive index and dispersion characteristics at operating temperature. Alternatively, mechanical materials can be chosen which have CTE values complementary to the change in focus introduced by the optics. A material with the preferred CTE is not always available, so two materials can be used in conjunction to effectively get the desired CTE value. Negative thermal expansion materials have recently increased the range of potential CTEs available, expanding passive athermalization options.
Active athermalization.
When optical designs do not permit the selection of materials based on their thermal characteristics, passive athermalization may not be a viable technique. For example, the use of germanium in mid to long wave infrared systems is common because of its exceptional optical properties (high index of refraction and low dispersion). Unfortunately, germanium is also known for its large formula_29 value, which makes it difficult to passively athermalize.
Because the primary aberration induced by temperature change is defocus, an optical element, group, or focal plane can be mechanically moved to refocus a system and account for thermal changes. Actively athermalized systems are designed with a feedback loop including a motor, for the focusing mechanism, and temperature sensor, to indicate the magnitude of the focus adjustment.
Temperature gradients.
When a system is not in thermal equilibrium, it complicates the process of determining system performance. A common temperature gradient to encounter is an axial gradient. This involves temperatures changing in a lens as a function of the thickness of the lens, or often along the optical axis. In optical lens design it is standard notation for the optical axis to be collinear with the Z-axis in Cartesian coordinates. A difference between the temperature of the first and second surface of a lens will cause the lens to bend. This affects each radius of curvature, thereby changing the optical power of the lens. The radius of curvature change is a function of the temperature gradient in the optic.
formula_30
Where formula_28 is the thickness of the lens. Radial gradients are less predictable as they may cause the shape of curvature to change, making spherical surfaces aspherical. Determining temperature gradients in an optomechanical system can quickly become an arduous task, requiring an intimate understanding of the heat sources and sinks in a system. Temperature gradients are determined by heat flow and can be a result of conduction, convection, or radiation. Whether steady-state or transient solutions are adequate for an analysis is determined by operating requirements, system design, and the environment. It can be beneficial to leverage the computational power of the finite element method to solve the applicable heat flow equations to determine the temperature gradients of optical and mechanical components.
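As a small worked example of the axial-gradient relation above (plain Python; the CTE and geometry are assumed values), with dT/dL approximated by ΔT/L:

```python
ALPHA = 7.1e-6                         # assumed CTE, 1/K

def bent_radius(R0, delta_T, L):
    """R = (1 - alpha * R0 * dT/dL) * R0, with dT/dL taken as delta_T / L."""
    return (1 - ALPHA * R0 * delta_T / L) * R0

print(bent_radius(100.0, 2.0, 5.0))    # 100 mm radius, 2 K across 5 mm thickness
```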
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "T_0"
},
{
"math_id": 1,
"text": "\\Delta T = T - T_{0}\\,"
},
{
"math_id": 2,
"text": "L_{\\left(T\\right)}= L_0 + \\Delta L = \\left(1+\\alpha\\Delta T\\right)L_0\\,"
},
{
"math_id": 3,
"text": "L_{\\left(T\\right)}"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "L_0"
},
{
"math_id": 6,
"text": "\\Delta T"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "n_{\\left(\\lambda,\\ T_0\\right)}=\\sqrt{1+\\frac{B_1\\lambda^2}{\\lambda^2-C_1}+\\frac{B_2\\lambda^2}{\\lambda^2-C_2}+\\frac{B_3\\lambda^2}{\\lambda^2-C_3}}\\,"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "B_1"
},
{
"math_id": 11,
"text": "B_2"
},
{
"math_id": 12,
"text": "B_3"
},
{
"math_id": 13,
"text": "C_1"
},
{
"math_id": 14,
"text": "C_2"
},
{
"math_id": 15,
"text": "C_3"
},
{
"math_id": 16,
"text": "\\frac{dn_{abs\\left(\\lambda,\\ T\\right)}}{dT}=\\frac{n_{\\left(\\lambda,\\ T_0\\right)}^2-1}{2n_{\\left(\\lambda,\\ T_0\\right)}}\\left(D_0+2D_1\\Delta T+3D_2\\Delta T^2+\\frac{E_0+2E_1\\Delta T}{\\lambda^2-\\lambda_{TK}^2}\\right)\\,"
},
{
"math_id": 17,
"text": "n_{\\left(\\lambda,\\ T\\right)}=\\left(1+\\Delta T\\frac{dn_{abs\\left(\\lambda,\\ T\\right)}}{dT}\\right)n_{\\left(\\lambda,\\ T_0\\right)}\\,"
},
{
"math_id": 18,
"text": "D_0"
},
{
"math_id": 19,
"text": "D_1"
},
{
"math_id": 20,
"text": "D_2"
},
{
"math_id": 21,
"text": "E_0"
},
{
"math_id": 22,
"text": "E_1"
},
{
"math_id": 23,
"text": "\\lambda_{TK}"
},
{
"math_id": 24,
"text": "\\Phi_{lens\\left(\\lambda,\\ T\\right)}=\\left(n_{\\left(\\lambda,\\ T\\right)}-1\\right)\\left(\\frac{1}{R_{1\\left(T\\right)}}-\\frac{1}{R_{2\\left(T\\right)}}+\\frac{\\left(n_{\\left(\\lambda,\\ T\\right)}-1\\right)L_{\\left(T\\right)}}{{n_{\\left(T\\right)}R}_{1\\left(T\\right)}R_{2\\left(T\\right)}}\\right)\\,"
},
{
"math_id": 25,
"text": "\\Phi_{mirror\\ \\left(T\\right)}=\\frac{-2}{R_{(T)}}\\,"
},
{
"math_id": 26,
"text": "\\Phi"
},
{
"math_id": 27,
"text": "R"
},
{
"math_id": 28,
"text": "L"
},
{
"math_id": 29,
"text": "\\frac{dn}{dT}"
},
{
"math_id": 30,
"text": " R = (R_0 + \\Delta R) = \\left(1 -\\alpha R_0 \\frac{dT}{dL} \\right) R_0\\,"
}
] | https://en.wikipedia.org/wiki?curid=8590426 |
859234 | Mechanical energy | Sum of potential and kinetic energy
In physical sciences, mechanical energy is the sum of potential energy and kinetic energy. The principle of conservation of mechanical energy states that if an isolated system is subject only to conservative forces, then the mechanical energy is constant. If an object moves in the opposite direction of a conservative net force, the potential energy will increase; and if the speed (not the velocity) of the object changes, the kinetic energy of the object also changes. In all real systems, however, nonconservative forces, such as frictional forces, will be present, but if they are of negligible magnitude, the mechanical energy changes little and its conservation is a useful approximation. In elastic collisions, the kinetic energy is conserved, but in inelastic collisions some mechanical energy may be converted into thermal energy. The equivalence between lost mechanical energy and an increase in temperature was discovered by James Prescott Joule.
Many devices are used to convert mechanical energy to or from other forms of energy, e.g. an electric motor converts electrical energy to mechanical energy, an electric generator converts mechanical energy into electrical energy and a heat engine converts heat to mechanical energy.
General.
Energy is a scalar quantity and the mechanical energy of a system is the sum of the potential energy (which is measured by the position of the parts of the system) and the kinetic energy (which is also called the energy of motion):
formula_0
The potential energy, "U", depends on the position of an object subjected to gravity or some other conservative force. The gravitational potential energy of an object is equal to the weight "W" of the object multiplied by the height "h" of the object's center of gravity relative to an arbitrary datum:
formula_1
The potential energy of an object can be defined as the object's ability to do work and is increased as the object is moved in the opposite direction of the direction of the force. If "F" represents the conservative force and "x" the position, the potential energy of the force between the two positions "x1" and "x2" is defined as the negative integral of "F" from "x1" to "x2":
formula_2
The kinetic energy, "K", depends on the speed of an object and is the ability of a moving object to do work on other objects when it collides with them. It is defined as one half the product of the object's mass with the square of its speed, and the total kinetic energy of a system of objects is the sum of the kinetic energies of the respective objects:
formula_3
The principle of conservation of mechanical energy states that if a body or system is subjected only to conservative forces, the mechanical energy of that body or system remains constant. The difference between a conservative and a non-conservative force is that when a conservative force moves an object from one point to another, the work done by the conservative force is independent of the path. By contrast, when a non-conservative force acts upon an object, the work done by the non-conservative force depends on the path.
Conservation of mechanical energy.
According to the principle of conservation of mechanical energy, the mechanical energy of an isolated system remains constant in time, as long as the system is free of friction and other non-conservative forces. In any real situation, frictional forces and other non-conservative forces are present, but in many cases their effects on the system are so small that the principle of conservation of mechanical energy can be used as a fair approximation. Though energy cannot be created or destroyed, it can be converted to another form of energy.
Swinging pendulum.
In a mechanical system like a swinging pendulum subjected to the conservative gravitational force where frictional forces like air drag and friction at the pivot are negligible, energy passes back and forth between kinetic and potential energy but never leaves the system. The pendulum reaches greatest kinetic energy and least potential energy when in the vertical position, because it will have the greatest speed and be nearest the Earth at this point. On the other hand, it will have its least kinetic energy and greatest potential energy at the extreme positions of its swing, because it has zero speed and is farthest from Earth at these points. However, when taking the frictional forces into account, the system loses mechanical energy with each swing because of the negative work done on the pendulum by these non-conservative forces.
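To make this energy exchange concrete, the following is a minimal numerical sketch in Python of an ideal (frictionless) pendulum, using assumed illustrative values for the mass, rod length, and release angle. It checks that the sum "U" + "K" at the release point equals the sum at the bottom of the swing, where the speed follows from "mgh" = "mv"2/2.
from math import cos, radians, sqrt

# Assumed illustrative values: mass (kg), rod length (m), release angle.
m, L, g = 1.0, 2.0, 9.81
theta0 = radians(30)

def energies(theta, omega):
    """Potential and kinetic energy at angle theta (rad), angular speed omega (rad/s)."""
    h = L * (1 - cos(theta))        # height above the lowest point of the swing
    U = m * g * h                   # potential energy
    K = 0.5 * m * (L * omega) ** 2  # kinetic energy, since v = L * omega
    return U, K

h0 = L * (1 - cos(theta0))
v_bottom = sqrt(2 * g * h0)           # from (1/2) m v^2 = m g h0
U0, K0 = energies(theta0, 0.0)        # at release: all potential
U1, K1 = energies(0.0, v_bottom / L)  # at the bottom: all kinetic
print(U0 + K0, U1 + K1)               # equal: mechanical energy is conserved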
Irreversibilities.
That the loss of mechanical energy in a system always resulted in an increase of the system's temperature has been known for a long time, but it was the amateur physicist James Prescott Joule who first experimentally demonstrated how a certain amount of work done against friction resulted in a definite quantity of heat which should be conceived as the random motions of the particles that comprise matter. This equivalence between mechanical energy and heat is especially important when considering colliding objects. In an elastic collision, mechanical energy is conserved – the sum of the mechanical energies of the colliding objects is the same before and after the collision. After an inelastic collision, however, the mechanical energy of the system will have changed. Usually, the mechanical energy before the collision is greater than the mechanical energy after the collision. In inelastic collisions, some of the mechanical energy of the colliding objects is transformed into kinetic energy of the constituent particles. This increase in kinetic energy of the constituent particles is perceived as an increase in temperature. The collision can be described by saying some of the mechanical energy of the colliding objects has been converted into an equal amount of heat. Thus, the total energy of the system remains unchanged though the mechanical energy of the system has reduced.
Satellite.
A satellite of mass formula_6 at a distance formula_7 from the centre of Earth possesses both kinetic energy, formula_8, (by virtue of its motion) and gravitational potential energy, formula_4, (by virtue of its position within the Earth's gravitational field; Earth's mass is formula_9).
Hence, mechanical energy formula_5 of the satellite-Earth system is given by
formula_10
formula_11
If the satellite is in circular orbit, the energy conservation equation can be further simplified into
formula_12
since in circular motion, Newton's 2nd Law of motion can be taken to be
formula_13
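As a quick numerical check of this simplification, the sketch below substitutes the circular-orbit condition formula_13 into the total energy and compares it with the closed form formula_12; the satellite mass and orbital radius are assumed illustrative values, and G_M is Earth's gravitational parameter.
# Assumed illustrative values; G_M is Earth's gravitational parameter G*M.
G_M = 3.986e14   # m^3/s^2
m = 1000.0       # satellite mass, kg
r = 7.0e6        # orbital radius, m

v_squared = G_M / r                          # from G M m / r^2 = m v^2 / r
E_sum = -G_M * m / r + 0.5 * m * v_squared   # U + K
E_closed = -G_M * m / (2 * r)                # -G M m / (2 r)
print(E_sum, E_closed)                       # identical values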
Conversion.
Today, many technological devices convert mechanical energy into other forms of energy or vice versa; such devices can be grouped into categories according to the other form of energy involved in the conversion.
Distinction from other types.
The classification of energy into different types often follows the boundaries of the fields of study in the natural sciences.
References.
Notes
<templatestyles src="Reflist/styles.css" />
Citations
<templatestyles src="Reflist/styles.css" />
Bibliography | [
{
"math_id": 0,
"text": "E_\\text{mechanical}=U+K"
},
{
"math_id": 1,
"text": "U = W h"
},
{
"math_id": 2,
"text": "U = - \\int_{x_1}^{x_2} \\vec{F}\\cdot d\\vec{x}"
},
{
"math_id": 3,
"text": "K={1 \\over 2}mv^2"
},
{
"math_id": 4,
"text": "U"
},
{
"math_id": 5,
"text": "E_\\text{mechanical}"
},
{
"math_id": 6,
"text": " m"
},
{
"math_id": 7,
"text": " r"
},
{
"math_id": 8,
"text": " K"
},
{
"math_id": 9,
"text": "M"
},
{
"math_id": 10,
"text": "E_\\text{mechanical} = U + K"
},
{
"math_id": 11,
"text": "E_\\text{mechanical} = - G \\frac{M m}{r}\\ + \\frac{1}{2}\\, m v^2"
},
{
"math_id": 12,
"text": "E_\\text{mechanical} = - G \\frac{M m}{2r} "
},
{
"math_id": 13,
"text": "G \\frac{M m}{r^2}\\ = \\frac{m v^2}{r} "
}
] | https://en.wikipedia.org/wiki?curid=859234 |
859275 | Displacement (geometry) | Vector relating the initial and the final positions of a moving point
<templatestyles src="Hlist/styles.css"/>
In geometry and mechanics, a displacement is a vector whose length is the shortest distance from the initial to the final position of a point P undergoing motion. It quantifies both the distance and direction of the net or total motion along a straight line from the initial position to the final position of the point trajectory. A displacement may be identified with the translation that maps the initial position to the final position. Displacement is the shift in location when an object in motion changes from one position to another.
A displacement may also be described as a "relative position" (resulting from the motion), that is, as the final position "x"f of a point relative to its initial position "x"i. The corresponding displacement vector can be defined as the difference between the final and initial positions:
formula_0
In considering motions of objects over time, the instantaneous velocity of the object is the rate of change of the displacement as a function of time. The instantaneous speed, that is, the time rate of change of the distance travelled along a specific path, is then distinct from the velocity. The velocity may be equivalently defined as the time rate of change of the position vector. If one considers a moving initial position, or equivalently a moving origin (e.g. an initial position or origin which is fixed to a train wagon, which in turn moves on its rail track), the velocity of P (e.g. a point representing the position of a passenger walking on the train) may be referred to as a "relative velocity"; this is opposed to an "absolute velocity", which is computed with respect to a point and coordinate axes which are considered to be at rest (an inertial frame of reference such as, for instance, a point fixed on the floor of the train station and the usual vertical and horizontal directions).
For motion over a given interval of time, the displacement divided by the length of the time interval defines the average velocity, which is a vector, and differs thus from the average speed, which is a scalar quantity.
Rigid body.
In dealing with the motion of a rigid body, the term "displacement" may also include the rotations of the body. In this case, the displacement of a particle of the body is called linear displacement (displacement along a line), while the rotation of the body is called "angular displacement".
Derivatives.
For a position vector formula_1 that is a function of time formula_2, the derivatives can be computed with respect to formula_2. The first two derivatives are frequently encountered in physics.
formula_3
formula_4
formula_5
These common names correspond to terminology used in basic kinematics. By extension, the higher order derivatives can be computed in a similar fashion. Study of these higher order derivatives can improve approximations of the original displacement function. Such higher-order terms are required in order to accurately represent the displacement function as a sum of an infinite series, enabling several analytical techniques in engineering and physics. The fourth order derivative is called jounce.
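As an illustration, the following sketch estimates the first three derivatives of an assumed one-dimensional position function by nested central finite differences and compares them with the exact values.
def s(t):
    return 2.0 * t**3 - 3.0 * t**2 + t      # assumed position function, metres

def derivative(f, t, h=1e-3):
    return (f(t + h) - f(t - h)) / (2 * h)  # central difference

t = 1.5
v = derivative(s, t)                                                    # velocity
a = derivative(lambda u: derivative(s, u), t)                           # acceleration
j = derivative(lambda u: derivative(lambda w: derivative(s, w), u), t)  # jerk
print(v, a, j)   # compare with 6t^2 - 6t + 1 = 5.5, 12t - 6 = 12, and 12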
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " s = x_\\textrm{f} - x_\\textrm{i} = \\Delta{x}"
},
{
"math_id": 1,
"text": "\\mathbf{s}"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "\\mathbf{v} = \\frac{d\\mathbf{s}}{dt}"
},
{
"math_id": 4,
"text": "\\mathbf{a} = \\frac{d\\mathbf{v}}{dt} = \\frac{d^2\\mathbf{s}}{dt^2}"
},
{
"math_id": 5,
"text": "\\mathbf{j} = \\frac{d\\mathbf{a}}{dt} = \\frac{d^2\\mathbf{v}}{dt^2}=\\frac{d^3\\mathbf{s}}{dt^3}"
}
] | https://en.wikipedia.org/wiki?curid=859275 |
859283 | Cylinder | Three-dimensional solid
A cylinder (from Ancient Greek "κύλινδρος" ("kúlindros") 'roller, tumbler') has traditionally been a three-dimensional solid, one of the most basic of curvilinear geometric shapes. In elementary geometry, it is considered a prism with a circle as its base.
A cylinder may also be defined as an infinite curvilinear surface in various modern branches of geometry and topology. The shift in the basic meaning—solid versus surface (as in a solid ball versus sphere surface)—has created some ambiguity with terminology. The two concepts may be distinguished by referring to solid cylinders and cylindrical surfaces. In the literature the unadorned term cylinder could refer to either of these or to an even more specialized object, the "right circular cylinder".
Types.
The definitions and results in this section are taken from the 1913 text "Plane and Solid Geometry" by George A. Wentworth and David Eugene Smith.
A "<dfn >cylindrical surface</dfn>" is a surface consisting of all the points on all the lines which are parallel to a given line and which pass through a fixed plane curve in a plane not parallel to the given line. Any line in this family of parallel lines is called an "element" of the cylindrical surface. From a kinematics point of view, given a plane curve, called the "directrix", a cylindrical surface is that surface traced out by a line, called the "generatrix", not in the plane of the directrix, moving parallel to itself and always passing through the directrix. Any particular position of the generatrix is an element of the cylindrical surface.
A solid bounded by a cylindrical surface and two parallel planes is called a (solid) "<dfn >cylinder</dfn>". The line segments determined by an element of the cylindrical surface between the two parallel planes is called an "element of the cylinder". All the elements of a cylinder have equal lengths. The region bounded by the cylindrical surface in either of the parallel planes is called a "<dfn >base</dfn>" of the cylinder. The two bases of a cylinder are congruent figures. If the elements of the cylinder are perpendicular to the planes containing the bases, the cylinder is a "<dfn >right cylinder</dfn>", otherwise it is called an "<dfn >oblique cylinder</dfn>". If the bases are disks (regions whose boundary is a circle) the cylinder is called a "<dfn >circular cylinder</dfn>". In some elementary treatments, a cylinder always means a circular cylinder.
The "<dfn >height</dfn>" (or altitude) of a cylinder is the perpendicular distance between its bases.
The cylinder obtained by rotating a line segment about a fixed line that it is parallel to is a "<dfn >cylinder of revolution</dfn>". A cylinder of revolution is a right circular cylinder. The height of a cylinder of revolution is the length of the generating line segment. The line that the segment is revolved about is called the "<dfn >axis</dfn>" of the cylinder and it passes through the centers of the two bases.
Right circular cylinders.
The bare term "cylinder" often refers to a solid cylinder with circular ends perpendicular to the axis, that is, a right circular cylinder, as shown in the figure. The cylindrical surface without the ends is called an "<dfn >open cylinder</dfn>". The formulae for the surface area and the volume of a right circular cylinder have been known from early antiquity.
A right circular cylinder can also be thought of as the solid of revolution generated by rotating a rectangle about one of its sides. These cylinders are used in an integration technique (the "disk method") for obtaining volumes of solids of revolution.
A tall and thin "needle cylinder" has a height much greater than its diameter, whereas a short and wide "disk cylinder" has a diameter much greater than its height.
Properties.
Cylindric sections.
A cylindric section is the intersection of a cylinder's surface with a plane. They are, in general, curves and are special types of "plane sections". The cylindric section by a plane that contains two elements of a cylinder is a parallelogram. Such a cylindric section of a right cylinder is a rectangle.
A cylindric section in which the intersecting plane intersects and is perpendicular to all the elements of the cylinder is called a "<dfn >right section</dfn>". If a right section of a cylinder is a circle then the cylinder is a circular cylinder. In more generality, if a right section of a cylinder is a conic section (parabola, ellipse, hyperbola) then the solid cylinder is said to be parabolic, elliptic and hyperbolic, respectively.
For a right circular cylinder, there are several ways in which planes can meet a cylinder. First, planes that intersect a base in at most one point. A plane is tangent to the cylinder if it meets the cylinder in a single element. The right sections are circles and all other planes intersect the cylindrical surface in an ellipse. If a plane intersects a base of the cylinder in exactly two points then the line segment joining these points is part of the cylindric section. If such a plane contains two elements, it has a rectangle as a cylindric section, otherwise the sides of the cylindric section are portions of an ellipse. Finally, if a plane contains more than two points of a base, it contains the entire base and the cylindric section is a circle.
In the case of a right circular cylinder with a cylindric section that is an ellipse, the eccentricity "e" of the cylindric section and semi-major axis "a" of the cylindric section depend on the radius of the cylinder "r" and the angle "α" between the secant plane and cylinder axis, in the following way:
formula_0
Volume.
If the base of a circular cylinder has a radius "r" and the cylinder has height h, then its volume is given by
formula_1
This formula holds whether or not the cylinder is a right cylinder.
This formula may be established by using Cavalieri's principle.
In more generality, by the same principle, the volume of any cylinder is the product of the area of a base and the height. For example, an elliptic cylinder with a base having semi-major axis a, semi-minor axis b and height h has a volume "V" = "Ah", where A is the area of the base ellipse (= π"ab"). This result for right elliptic cylinders can also be obtained by integration, where the axis of the cylinder is taken as the positive x-axis and "A"("x") = "A" the area of each elliptic cross-section, thus:
formula_2
Using cylindrical coordinates, the volume of a right circular cylinder can be calculated by integration
formula_3
Surface area.
Having radius "r" and altitude (height) h, the surface area of a right circular cylinder, oriented so that its axis is vertical, consists of three parts:
The area of the top and bottom bases is the same, and is called the "base area", "B". The area of the side is known as the "<dfn >lateral area</dfn>", "L".
An "open cylinder" does not include either top or bottom elements, and therefore has surface area (lateral area)
formula_4
The surface area of the solid right circular cylinder is made up the sum of all three components: top, bottom and side. Its surface area is therefore
formula_5
where "d" = 2"r" is the diameter of the circular top or bottom.
For a given volume, the right circular cylinder with the smallest surface area has "h" = 2"r". Equivalently, for a given surface area, the right circular cylinder with the largest volume has "h" = 2"r", that is, the cylinder fits snugly in a cube of side length = altitude ( = diameter of base circle).
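A short numerical sketch can illustrate this claim: for an assumed fixed volume "V" = 1, scanning over radii shows the surface area is minimized at the radius for which "h" = 2"r".
from math import pi

V = 1.0  # assumed fixed volume

def area(r):
    h = V / (pi * r * r)                   # height forced by the fixed volume
    return 2 * pi * r * h + 2 * pi * r * r

radii = [0.01 * k for k in range(1, 200)]
r_best = min(radii, key=area)            # numerical minimizer
r_star = (V / (2 * pi)) ** (1 / 3)       # closed form from dA/dr = 0
print(r_best, r_star)                    # agree to the scan resolution
print(V / (pi * r_star**2), 2 * r_star)  # the optimal height equals 2 r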
The lateral area, L, of a circular cylinder, which need not be a right cylinder, is more generally given by
formula_6
where e is the length of an element and p is the perimeter of a right section of the cylinder. This produces the previous formula for lateral area when the cylinder is a right circular cylinder.
Right circular hollow cylinder (cylindrical shell).
A "right circular hollow cylinder" (or "<dfn >cylindrical shell</dfn>") is a three-dimensional region bounded by two right circular cylinders having the same axis and two parallel annular bases perpendicular to the cylinders' common axis, as in the diagram.
Let the height be "h", internal radius "r", and external radius "R". The volume is given by
formula_7
Thus, the volume of a cylindrical shell equals average radius × altitude × thickness.
The surface area, including the top and bottom, is given by
formula_8
Cylindrical shells are used in a common integration technique for finding volumes of solids of revolution.
"On the Sphere and Cylinder".
In the treatise by this name, written c. 225 BCE, Archimedes obtained the result of which he was most proud, namely obtaining the formulas for the volume and surface area of a sphere by exploiting the relationship between a sphere and its circumscribed right circular cylinder of the same height and diameter. The sphere has a volume two-thirds that of the circumscribed cylinder and a surface area two-thirds that of the cylinder (including the bases). Since the values for the cylinder were already known, he obtained, for the first time, the corresponding values for the sphere. The volume of a sphere of radius "r" is (4/3)π"r"3 = (2/3)(2π"r"3). The surface area of this sphere is 4π"r"2 = (2/3)(6π"r"2). A sculpted sphere and cylinder were placed on the tomb of Archimedes at his request.
Cylindrical surfaces.
In some areas of geometry and topology the term "cylinder" refers to what has been called a cylindrical surface. A cylinder is defined as a surface consisting of all the points on all the lines which are parallel to a given line and which pass through a fixed plane curve in a plane not parallel to the given line. Such cylinders have, at times, been referred to as "<dfn >generalized cylinders</dfn>". Through each point of a generalized cylinder there passes a unique line that is contained in the cylinder. Thus, this definition may be rephrased to say that a cylinder is any ruled surface spanned by a one-parameter family of parallel lines.
A cylinder having a right section that is an ellipse, parabola, or hyperbola is called an elliptic cylinder, parabolic cylinder and hyperbolic cylinder, respectively. These are degenerate quadric surfaces.
When the principal axes of a quadric are aligned with the reference frame (always possible for a quadric), a general equation of the quadric in three dimensions is given by
formula_9
with the coefficients being real numbers and not all of A, B and C being 0. If at least one variable does not appear in the equation, then the quadric is degenerate. If one variable is missing, we may assume by an appropriate rotation of axes that the variable z does not appear and the general equation of this type of degenerate quadric can be written as
formula_10
where
formula_11
Elliptic cylinder.
If "AB" > 0 this is the equation of an "elliptic cylinder". Further simplification can be obtained by translation of axes and scalar multiplication. If formula_12 has the same sign as the coefficients A and B, then the equation of an elliptic cylinder may be rewritten in Cartesian coordinates as:
formula_13
This equation of an elliptic cylinder is a generalization of the equation of the ordinary, "circular cylinder" ("a" = "b"). Elliptic cylinders are also known as "cylindroids", but that name is ambiguous, as it can also refer to the Plücker conoid.
If formula_12 has a different sign than the coefficients, we obtain the "imaginary elliptic cylinders":
formula_14
which have no real points on them. (formula_15 gives a single real point.)
Hyperbolic cylinder.
If A and B have different signs and formula_16, we obtain the "hyperbolic cylinders", whose equations may be rewritten as:
formula_17
Parabolic cylinder.
Finally, if "AB" = 0 assume, without loss of generality, that "B" = 0 and "A" = 1 to obtain the "parabolic cylinders" with equations that can be written as:
formula_18
Projective geometry.
In projective geometry, a cylinder is simply a cone whose apex (vertex) lies on the plane at infinity. If the cone is a quadratic cone, the plane at infinity (which passes through the vertex) can intersect the cone at two real lines, a single real line (actually a coincident pair of lines), or only at the vertex. These cases give rise to the hyperbolic, parabolic or elliptic cylinders respectively.
This concept is useful when considering degenerate conics, which may include the cylindrical conics.
Prisms.
A "solid circular cylinder" can be seen as the limiting case of a n-gonal prism where "n" approaches infinity. The connection is very strong and many older texts treat prisms and cylinders simultaneously. Formulas for surface area and volume are derived from the corresponding formulas for prisms by using inscribed and circumscribed prisms and then letting the number of sides of the prism increase without bound. One reason for the early emphasis (and sometimes exclusive treatment) on circular cylinders is that a circular base is the only type of geometric figure for which this technique works with the use of only elementary considerations (no appeal to calculus or more advanced mathematics). Terminology about prisms and cylinders is identical. Thus, for example, since a "truncated prism" is a prism whose bases do not lie in parallel planes, a solid cylinder whose bases do not lie in parallel planes would be called a "truncated cylinder".
From a polyhedral viewpoint, a cylinder can also be seen as a dual of a bicone as an infinite-sided bipyramid.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\ne &= \\cos\\alpha, \\\\[1ex]\na &= \\frac{r}{\\sin\\alpha}.\n\\end{align}"
},
{
"math_id": 1,
"text": "V = \\pi r^2h"
},
{
"math_id": 2,
"text": "V = \\int_0^h A(x) dx = \\int_0^h \\pi ab dx = \\pi ab \\int_0^h dx = \\pi a b h."
},
{
"math_id": 3,
"text": "\\begin{align}\nV &= \\int_0^h \\int_0^{2\\pi} \\int_0^r s \\,\\, ds \\, d\\phi \\, dz \\\\[5mu]\n&= \\pi\\,r^2\\,h.\n\\end{align}"
},
{
"math_id": 4,
"text": "L = 2 \\pi r h"
},
{
"math_id": 5,
"text": "A = L + 2B = 2\\pi rh + 2\\pi r^2 = 2 \\pi r (h + r) = \\pi d (r + h)"
},
{
"math_id": 6,
"text": "L = e \\times p,"
},
{
"math_id": 7,
"text": "\nV = \\pi \\left( R ^2 - r ^2 \\right) h\n= 2 \\pi \\left ( \\frac{R + r}{2} \\right) h (R - r).\n"
},
{
"math_id": 8,
"text": " A = 2 \\pi \\left( R + r \\right) h + 2 \\pi \\left( R^2 - r^2 \\right). "
},
{
"math_id": 9,
"text": "f(x,y,z)=Ax^2 + By^2 + C z^2 + Dx + Ey + Gz + H = 0,"
},
{
"math_id": 10,
"text": "A \\left ( x + \\frac{D}{2A} \\right )^2 + B \\left(y + \\frac{E}{2B} \\right)^2 = \\rho,"
},
{
"math_id": 11,
"text": "\\rho = -H + \\frac{D^2}{4A} + \\frac{E^2}{4B}."
},
{
"math_id": 12,
"text": "\\rho"
},
{
"math_id": 13,
"text": "\\left(\\frac{x}{a}\\right)^2+ \\left(\\frac{y}{b}\\right)^2 = 1."
},
{
"math_id": 14,
"text": "\\left(\\frac{x}{a}\\right)^2 + \\left(\\frac{y}{b}\\right)^2 = -1,"
},
{
"math_id": 15,
"text": "\\rho = 0"
},
{
"math_id": 16,
"text": "\\rho \\neq 0"
},
{
"math_id": 17,
"text": "\\left(\\frac{x}{a}\\right)^2 - \\left(\\frac{y}{b}\\right)^2 = 1."
},
{
"math_id": 18,
"text": " x^2 + 2 a y = 0 ."
}
] | https://en.wikipedia.org/wiki?curid=859283 |
859590 | CORDIC | Algorithm for computing trigonometric, hyperbolic, logarithmic and exponential functions
CORDIC (coordinate rotation digital computer), Volder's algorithm, Digit-by-digit method, Circular CORDIC (Jack E. Volder), Linear CORDIC, Hyperbolic CORDIC (John Stephen Walther), and Generalized Hyperbolic CORDIC (GH CORDIC) (Yuanyong Luo et al.), is a simple and efficient algorithm to calculate trigonometric functions, hyperbolic functions, square roots, multiplications, divisions, and exponentials and logarithms with arbitrary base, typically converging with one digit (or bit) per iteration. CORDIC is therefore also an example of digit-by-digit algorithms. CORDIC and closely related methods known as pseudo-multiplication and pseudo-division or factor combining are commonly used when no hardware multiplier is available (e.g. in simple microcontrollers and field-programmable gate arrays or FPGAs), as the only operations they require are additions, subtractions, bitshift and lookup tables. As such, they all belong to the class of shift-and-add algorithms. In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks hardware multiply for cost or space reasons.
History.
Similar mathematical techniques were published by Henry Briggs as early as 1624 and Robert Flower in 1771, but CORDIC is better optimized for low-complexity finite-state CPUs.
CORDIC was conceived in 1956 by Jack E. Volder at the aeroelectronics department of Convair out of necessity to replace the analog resolver in the B-58 bomber's navigation computer with a more accurate and faster real-time digital solution. Therefore, CORDIC is sometimes referred to as a digital resolver.
In his research Volder was inspired by a formula in the 1946 edition of the "CRC Handbook of Chemistry and Physics":
formula_0
where formula_1 is such that formula_2, and formula_3.
His research led to an internal technical report proposing the CORDIC algorithm to solve sine and cosine functions and a prototypical computer implementing it. The report also discussed the possibility to compute hyperbolic coordinate rotation, logarithms and exponential functions with modified CORDIC algorithms. Utilizing CORDIC for multiplication and division was also conceived at this time. Based on the CORDIC principle, Dan H. Daggett, a colleague of Volder at Convair, developed conversion algorithms between binary and binary-coded decimal (BCD).
In 1958, Convair finally started to build a demonstration system to solve radar fix–taking problems named "CORDIC I", completed in 1960 without Volder, who had left the company already. More universal "CORDIC II" models "A" (stationary) and "B" (airborne) were built and tested by Daggett and Harry Schuss in 1962.
Volder's CORDIC algorithm was first described in public in 1959, which caused it to be incorporated into navigation computers by companies including Martin-Orlando, Computer Control, Litton, Kearfott, Lear-Siegler, Sperry, Raytheon, and Collins Radio.
Volder teamed up with Malcolm McMillan to build "Athena", a fixed-point desktop calculator utilizing his binary CORDIC algorithm. The design was introduced to Hewlett-Packard in June 1965, but not accepted. Still, McMillan introduced David S. Cochran (HP) to Volder's algorithm and when Cochran later met Volder he referred him to a similar approach John E. Meggitt (IBM) had proposed as "pseudo-multiplication" and "pseudo-division" in 1961. Meggitt's method also suggested the use of base 10 rather than base 2, as used by Volder's CORDIC so far. These efforts led to the ROMable logic implementation of a decimal CORDIC prototype machine inside of Hewlett-Packard in 1966, built by and conceptually derived from Thomas E. Osborne's prototypical "Green Machine", a four-function, floating-point desktop calculator he had completed in DTL logic in December 1964. This project resulted in the public demonstration of Hewlett-Packard's first desktop calculator with scientific functions, the HP 9100A in March 1968, with series production starting later that year.
When Wang Laboratories found that the HP 9100A used an approach similar to the "factor combining" method in their earlier LOCI-1 (September 1964) and LOCI-2 (January 1965) "Logarithmic Computing Instrument" desktop calculators, they unsuccessfully accused Hewlett-Packard of infringement of one of An Wang's patents in 1968.
John Stephen Walther at Hewlett-Packard generalized the algorithm into the "Unified CORDIC" algorithm in 1971, allowing it to calculate hyperbolic functions, natural exponentials, natural logarithms, multiplications, divisions, and square roots. The CORDIC subroutines for trigonometric and hyperbolic functions could share most of their code. This development resulted in the first scientific handheld calculator, the HP-35 in 1972. Based on hyperbolic CORDIC, Yuanyong Luo et al. further proposed a Generalized Hyperbolic CORDIC (GH CORDIC) to directly compute logarithms and exponentials with an arbitrary fixed base in 2019. Theoretically, Hyperbolic CORDIC is a special case of GH CORDIC.
Originally, CORDIC was implemented only using the binary numeral system and despite Meggitt suggesting the use of the decimal system for his pseudo-multiplication approach, decimal CORDIC continued to remain mostly unheard of for several more years, so that Hermann Schmid and Anthony Bogacki still suggested it as a novelty as late as 1973 and it was found only later that Hewlett-Packard had implemented it in 1966 already.
Decimal CORDIC became widely used in pocket calculators, most of which operate in binary-coded decimal (BCD) rather than binary. This change in the input and output format did not alter CORDIC's core calculation algorithms. CORDIC is particularly well-suited for handheld calculators, in which low cost – and thus low chip gate count – is much more important than speed.
CORDIC has been implemented in the ARM-based STM32G4, Intel 8087, 80287, 80387 up to the 80486 coprocessor series as well as in the Motorola 68881 and 68882 for some kinds of floating-point instructions, mainly as a way to reduce the gate counts (and complexity) of the FPU sub-system.
Applications.
CORDIC uses simple shift-add operations for several computing tasks such as the calculation of trigonometric, hyperbolic and logarithmic functions, real and complex multiplications, division, square-root calculation, solution of linear systems, eigenvalue estimation, singular value decomposition, QR factorization and many others. As a consequence, CORDIC has been used for applications in diverse areas such as signal and image processing, communication systems, robotics and 3D graphics apart from general scientific and technical computation.
Hardware.
The algorithm was used in the navigational system of the Apollo program's Lunar Roving Vehicle to compute bearing and range, or distance from the Lunar module. CORDIC was used to implement the Intel 8087 math coprocessor in 1980, avoiding the need to implement hardware multiplication.
CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., a microcontroller), or when the number of gates required to implement the functions it supports should be minimized (e.g., in an FPGA or ASIC).
In fact, CORDIC is a standard drop-in IP in FPGA development applications such as Vivado for Xilinx, while a power series implementation is not due to the specificity of such an IP, i.e. CORDIC can compute many different functions (general purpose) while a hardware multiplier configured to execute power series implementations can only compute the function it was designed for.
On the other hand, when a hardware multiplier is available ("e.g.", in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC. In recent years, the CORDIC algorithm has been used extensively for various biomedical applications, especially in FPGA implementations.
The STM32G4 series and certain STM32H7 series of MCUs implement a CORDIC module to accelerate computations in various mixed signal applications such as graphics for human-machine interface and field oriented control of motors. While not as fast as a power series approximation, CORDIC is indeed faster than interpolating table based implementations such as the ones provided by the ARM CMSIS and C standard libraries, though the results may be slightly less accurate, as the CORDIC modules provided only achieve 20 bits of precision in the result. Most of the performance difference compared to the ARM implementation is due to the overhead of the interpolation algorithm, which achieves full floating point precision (24 bits) and can likely achieve relative error to that precision. Another benefit is that the CORDIC module is a coprocessor and can be run in parallel with other CPU tasks.
The issue with using Taylor series is that while they do provide small absolute error, they do not exhibit well behaved relative error. Other means of polynomial approximation, such as minimax optimization, may be used to control both kinds of error.
Software.
Many older systems with integer-only CPUs have implemented CORDIC to varying extents as part of their IEEE floating-point libraries. As most modern general-purpose CPUs have floating-point registers with common operations such as add, subtract, multiply, divide, sine, cosine, square root, log10, natural log, the need to implement CORDIC in them with software is nearly non-existent. Only microcontroller or special safety and time-constrained software applications would need to consider using CORDIC.
Modes of operation.
Rotation mode.
CORDIC can be used to calculate a number of different functions. This explanation shows how to use CORDIC in "rotation mode" to calculate the sine and cosine of an angle, assuming that the desired angle is given in radians and represented in a fixed-point format. To determine the sine or cosine for an angle formula_4, the "y" or "x" coordinate of a point on the unit circle corresponding to the desired angle must be found. Using CORDIC, one would start with the vector formula_5:
formula_6
In the first iteration, this vector is rotated 45° counterclockwise to get the vector formula_7. Successive iterations rotate the vector in one or the other direction by size-decreasing steps, until the desired angle has been achieved. Each step angle is formula_8 for formula_9.
More formally, every iteration calculates a rotation, which is performed by multiplying the vector formula_10 with the rotation matrix formula_11:
formula_12
The rotation matrix is given by
formula_13
Using the trigonometric identity:
formula_14
the cosine factor can be taken out to give:
formula_15
The expression for the rotated vector formula_16 then becomes:
formula_17
where formula_18 and formula_19 are the components of formula_10. Setting the angle formula_20 for each iteration such that formula_21 still yields a series that converges to every possible output value. The multiplication with the tangent can therefore be replaced by a division by a power of two, which is efficiently done in digital computer hardware using a bit shift. The expression then becomes:
formula_22
and formula_23 is used to determine the direction of the rotation: if the angle formula_20 is positive, then formula_23 is +1, otherwise it is −1.
The following trigonometric identity can be used to replace the cosine:
formula_24,
giving this multiplier for each iteration:
formula_25
The formula_26 factors can then be taken out of the iterative process and applied all at once afterwards with a scaling factor formula_27:
formula_28
which is calculated in advance and stored in a table or as a single constant, if the number of iterations is fixed. This correction could also be made in advance, by scaling formula_5 and hence saving a multiplication. Additionally, it can be noted that
formula_29
to allow further reduction of the algorithm's complexity. Some applications may avoid correcting for formula_30 altogether, resulting in a processing gain formula_31:
formula_32
After a sufficient number of iterations, the vector's angle will be close to the wanted angle formula_4. For most ordinary purposes, 40 iterations ("n" = 40) are sufficient to obtain the correct result to the 10th decimal place.
The only task left is to determine whether the rotation should be clockwise or counterclockwise at each iteration (choosing the value of formula_33). This is done by keeping track of how much the angle was rotated at each iteration and subtracting that from the wanted angle; then in order to get closer to the wanted angle formula_4, if formula_34 is positive, the rotation is clockwise, otherwise it is negative and the rotation is counterclockwise:
formula_35
formula_36
The values of formula_37 must also be precomputed and stored. For small angles it can be approximated with formula_38 to reduce the table size.
As can be seen in the illustration above, the sine of the angle formula_4 is the "y" coordinate of the final vector formula_39 while the "x" coordinate is the cosine value.
Vectoring mode.
The rotation-mode algorithm described above can rotate any vector (not only a unit vector aligned along the "x" axis) by an angle between −90° and +90°. Decisions on the direction of the rotation depend on formula_40 being positive or negative.
The vectoring-mode of operation requires a slight modification of the algorithm. It starts with a vector whose "x" coordinate is positive whereas the "y" coordinate is arbitrary. Successive rotations have the goal of rotating the vector to the "x" axis (and therefore reducing the "y" coordinate to zero). At each step, the value of "y" determines the direction of the rotation. The final value of formula_40 contains the total angle of rotation. The final value of "x" will be the magnitude of the original vector scaled by "K". So, an obvious use of the vectoring mode is the transformation from rectangular to polar coordinates.
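A minimal Python sketch of vectoring mode, in the style of the rotation-mode implementation below, might look as follows; it drives "y" toward zero while accumulating the total rotation, returning the magnitude and angle of the input vector (here the scaling constant "K" is computed at run time rather than stored).
from math import atan2, sqrt

ITERS = 16
theta_table = [atan2(1, 2**i) for i in range(ITERS)]
K = 1.0
for i in range(ITERS):
    K *= 1 / sqrt(1 + 2 ** (-2 * i))

def cordic_vectoring(x, y):
    """Rotate (x, y), with x > 0, onto the x axis; return (magnitude, angle)."""
    theta = 0.0
    P2i = 1.0
    for arc_tangent in theta_table:
        sigma = -1 if y > 0 else +1         # rotate toward the x axis
        x, y = x - sigma * y * P2i, sigma * P2i * x + y
        theta -= sigma * arc_tangent        # original angle is minus the net rotation
        P2i /= 2
    return x * K, theta                     # final x is magnitude / K

print(cordic_vectoring(3.0, 4.0))           # approximately (5.0, 0.9273)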
Implementation.
In Java the Math class has a codice_0 method to perform such a shift, C has the ldexp function, and the x86 class of processors have the codice_1 floating point operation.
Software Example (Python).
from math import atan2, sqrt, sin, cos, radians

ITERS = 16
theta_table = [atan2(1, 2**i) for i in range(ITERS)]

def compute_K(n):
    """
    Compute K(n) for n = ITERS. This could also be
    stored as an explicit constant if ITERS above is fixed.
    """
    k = 1.0
    for i in range(n):
        k *= 1 / sqrt(1 + 2 ** (-2 * i))
    return k

def CORDIC(alpha, n):
    K_n = compute_K(n)
    theta = 0.0
    x = 1.0
    y = 0.0
    P2i = 1  # This will be 2**(-i) in the loop below
    for arc_tangent in theta_table:
        sigma = +1 if theta < alpha else -1
        theta += sigma * arc_tangent
        x, y = x - sigma * y * P2i, sigma * P2i * x + y
        P2i /= 2
    return x * K_n, y * K_n

if __name__ == "__main__":
    # Print a table of computed sines and cosines, from -90° to +90°, in steps of 15°,
    # comparing against the available math routines.
    print("  x       sin(x)     diff. sine     cos(x)    diff. cosine ")
    for x in range(-90, 91, 15):
        cos_x, sin_x = CORDIC(radians(x), ITERS)
        print(
            f"{x:+05.1f}°  {sin_x:+.8f} ({sin_x-sin(radians(x)):+.8f})  {cos_x:+.8f} ({cos_x-cos(radians(x)):+.8f})"
        )
Output.
$ python cordic.py
x sin(x) diff. sine cos(x) diff. cosine
-90.0° -1.00000000 (+0.00000000) -0.00001759 (-0.00001759)
-75.0° -0.96592181 (+0.00000402) +0.25883404 (+0.00001499)
-60.0° -0.86601812 (+0.00000729) +0.50001262 (+0.00001262)
-45.0° -0.70711776 (-0.00001098) +0.70709580 (-0.00001098)
-30.0° -0.50001262 (-0.00001262) +0.86601812 (-0.00000729)
-15.0° -0.25883404 (-0.00001499) +0.96592181 (-0.00000402)
+00.0° +0.00001759 (+0.00001759) +1.00000000 (-0.00000000)
+15.0° +0.25883404 (+0.00001499) +0.96592181 (-0.00000402)
+30.0° +0.50001262 (+0.00001262) +0.86601812 (-0.00000729)
+45.0° +0.70709580 (-0.00001098) +0.70711776 (+0.00001098)
+60.0° +0.86601812 (-0.00000729) +0.50001262 (+0.00001262)
+75.0° +0.96592181 (-0.00000402) +0.25883404 (+0.00001499)
+90.0° +1.00000000 (-0.00000000) -0.00001759 (-0.00001759)
Hardware example.
The number of logic gates for the implementation of a CORDIC is roughly comparable to the number required for a multiplier as both require combinations of shifts and additions. The choice for a multiplier-based or CORDIC-based implementation will depend on the context. The multiplication of two complex numbers represented by their real and imaginary components (rectangular coordinates), for example, requires 4 multiplications, but could be realized by a single CORDIC operating on complex numbers represented by their polar coordinates, especially if the magnitude of the numbers is not relevant (multiplying a complex vector with a vector on the unit circle actually amounts to a rotation). CORDICs are often used in circuits for telecommunications such as digital down converters.
Double iterations CORDIC.
In two of the publications by Vladimir Baykov, it was proposed to use the double iterations method for the implementation of the functions: arcsine, arccosine, natural logarithm, exponential function, as well as for the calculation of the hyperbolic functions. In the double iterations method, unlike the classical CORDIC method, where the iteration step value changes on every iteration, the step value is repeated twice and changes only on every other iteration. Hence the designation for the degree indicator for double iterations appeared: formula_41, whereas with ordinary iterations: formula_42. The double iteration method guarantees the convergence of the method throughout the valid range of argument changes.
The generalization of the CORDIC convergence problems for the arbitrary positional number system with radix formula_43 showed that for the functions sine, cosine, arctangent, it is enough to perform formula_44 iterations for each value of i (i = 0 or 1 to n, where n is the number of digits), i.e. for each digit of the result. For the natural logarithm, exponential, hyperbolic sine, cosine and arctangent, formula_43 iterations should be performed for each value formula_45. For the functions arcsine and arccosine, two formula_44 iterations should be performed for each number digit, i.e. for each value of formula_45.
For the inverse hyperbolic sine and cosine (arsinh and arcosh) functions, the number of iterations will be formula_46 for each formula_45, that is, for each result digit.
Related algorithms.
CORDIC is part of the class of "shift-and-add" algorithms, as are the logarithm and exponential algorithms derived from Henry Briggs' work. Another shift-and-add algorithm which can be used for computing many elementary functions is the BKM algorithm, which is a generalization of the logarithm and exponential algorithms to the complex plane. For instance, BKM can be used to compute the sine and cosine of a real angle formula_47 (in radians) by computing the exponential of formula_48, which is formula_49. The BKM algorithm is slightly more complex than CORDIC, but has the advantage that it does not need a scaling factor ("K").
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\n K_n R \\sin(\\theta \\pm \\varphi) &= R \\sin(\\theta) \\pm 2^{-n} R \\cos(\\theta), \\\\\n K_n R \\cos(\\theta \\pm \\varphi) &= R \\cos(\\theta) \\mp 2^{-n} R \\sin(\\theta), \\\\\n\\end{align}\n"
},
{
"math_id": 1,
"text": "\\varphi"
},
{
"math_id": 2,
"text": "\\tan(\\varphi) = 2^{-n}"
},
{
"math_id": 3,
"text": "K_n := \\sqrt{1 + 2^{-2n}}"
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "v_0"
},
{
"math_id": 6,
"text": "v_0 = \\begin{bmatrix} 1 \\\\ 0 \\end{bmatrix}."
},
{
"math_id": 7,
"text": "v_1"
},
{
"math_id": 8,
"text": "\\gamma_i = \\arctan{(2^{-i})}"
},
{
"math_id": 9,
"text": "i = 0, 1, 2, \\dots"
},
{
"math_id": 10,
"text": "v_i"
},
{
"math_id": 11,
"text": "R_{i}"
},
{
"math_id": 12,
"text": "v_{i+1} = R_i v_i."
},
{
"math_id": 13,
"text": "R_i = \\begin{bmatrix}\n \\cos(\\gamma_i) & -\\sin(\\gamma_i) \\\\\n \\sin(\\gamma_i) & \\cos(\\gamma_i)\n\\end{bmatrix}."
},
{
"math_id": 14,
"text": "\\begin{align} \n \\tan(\\gamma_i) &\\equiv \\frac{\\sin(\\gamma_i)}{\\cos(\\gamma_i)},\n\\end{align}"
},
{
"math_id": 15,
"text": "R_i = \\cos(\\gamma_i) \\begin{bmatrix}\n 1 & -\\tan(\\gamma_i) \\\\\n \\tan(\\gamma_i) & 1\n\\end{bmatrix}."
},
{
"math_id": 16,
"text": "v_{i+1} = R_i v_i"
},
{
"math_id": 17,
"text": "\\begin{bmatrix}\n x_{i+1} \\\\\n y_{i+1}\n\\end{bmatrix} = \\cos(\\gamma_i) \\begin{bmatrix}\n 1 & -\\tan(\\gamma_i) \\\\\n \\tan(\\gamma_i) & 1\n\\end{bmatrix} \\begin{bmatrix}\n x_i \\\\\n y_i\n\\end{bmatrix},"
},
{
"math_id": 18,
"text": "x_i"
},
{
"math_id": 19,
"text": "y_i"
},
{
"math_id": 20,
"text": "\\gamma_i"
},
{
"math_id": 21,
"text": "\\tan(\\gamma_i) = \\pm 2^{-i}"
},
{
"math_id": 22,
"text": "\\begin{bmatrix}\n x_{i+1} \\\\\n y_{i+1}\n\\end{bmatrix} = \\cos(\\arctan(2^{-i})) \\begin{bmatrix}\n 1 & -\\sigma_i 2^{-i} \\\\\n \\sigma_i 2^{-i} & 1\n\\end{bmatrix} \\begin{bmatrix}\n x_i \\\\\n y_i\n\\end{bmatrix},"
},
{
"math_id": 23,
"text": "\\sigma_i"
},
{
"math_id": 24,
"text": "\\cos(\\gamma_i) \\equiv \\frac{1}{\\sqrt{1 + \\tan^2{\\gamma_i}}}"
},
{
"math_id": 25,
"text": "K_i = \\cos(\\arctan(2^{-i})) = \\frac{1}{\\sqrt{1 + 2^{-2i}}}."
},
{
"math_id": 26,
"text": "K_i"
},
{
"math_id": 27,
"text": "K(n)"
},
{
"math_id": 28,
"text": "K(n) = \\prod_{i=0}^{n-1} K_i = \\prod_{i=0}^{n-1} \\frac{1}{\\sqrt{1 + 2^{-2i}}},"
},
{
"math_id": 29,
"text": "K = \\lim_{n \\to \\infty} K(n) \\approx 0.6072529350088812561694"
},
{
"math_id": 30,
"text": "K"
},
{
"math_id": 31,
"text": "A"
},
{
"math_id": 32,
"text": "A = \\frac{1}{K} = \\lim_{n \\to \\infty} \\prod_{i=0}^{n-1} \\sqrt{1 + 2^{-2i}} \\approx 1.64676025812107."
},
{
"math_id": 33,
"text": "\\sigma"
},
{
"math_id": 34,
"text": "\\beta_{n+1}"
},
{
"math_id": 35,
"text": "\\beta_0 = \\beta "
},
{
"math_id": 36,
"text": "\\beta_{i+1} = \\beta_i - \\sigma_i \\gamma_i, \\quad \\gamma_i = \\arctan(2^{-i})."
},
{
"math_id": 37,
"text": "\\gamma_n"
},
{
"math_id": 38,
"text": "\\arctan(\\gamma_n) \\approx \\gamma_n"
},
{
"math_id": 39,
"text": "v_n,"
},
{
"math_id": 40,
"text": "\\beta_i"
},
{
"math_id": 41,
"text": "i = 0, 0, 1, 1, 2, 2\\dots"
},
{
"math_id": 42,
"text": "i = 0, 1, 2\\dots"
},
{
"math_id": 43,
"text": "R"
},
{
"math_id": 44,
"text": "R - 1"
},
{
"math_id": 45,
"text": "i"
},
{
"math_id": 46,
"text": "2R"
},
{
"math_id": 47,
"text": "x"
},
{
"math_id": 48,
"text": "0+ix"
},
{
"math_id": 49,
"text": "\\operatorname{cis}(x) = \\cos(x) + i \\sin(x)"
}
] | https://en.wikipedia.org/wiki?curid=859590 |
859686 | Supercommutative algebra | Type of associative algebra that "almost commutes"
In mathematics, a supercommutative (associative) algebra is a superalgebra (i.e. a Z2-graded algebra) such that for any two homogeneous elements "x", "y" we have
formula_0
where |"x"| denotes the grade of the element and is 0 or 1 (in Z2) according to whether the grade is even or odd, respectively.
Equivalently, it is a superalgebra where the supercommutator
formula_1
always vanishes. Algebraic structures which supercommute in the above sense are sometimes referred to as skew-commutative associative algebras to emphasize the anti-commutation, or, to emphasize the grading, graded-commutative or, if the supercommutativity is understood, simply commutative.
Any commutative algebra is a supercommutative algebra if given the trivial gradation (i.e. all elements are even). Grassmann algebras (also known as exterior algebras) are the most common examples of nontrivial supercommutative algebras. The supercenter of any superalgebra is the set of elements that supercommute with all elements, and is a supercommutative algebra.
The even subalgebra of a supercommutative algebra is always a commutative algebra. That is, even elements always commute. Odd elements, on the other hand, always anticommute. That is,
formula_2
for odd "x" and "y". In particular, taking "y" = "x" in the anticommutation relation gives 2"x"2 = 0, so the square of any odd element "x" vanishes whenever 2 is invertible:
formula_3
Thus a commutative superalgebra (with 2 invertible and nonzero degree one component) always contains nilpotent elements.
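As a concrete illustration, the following sketch implements the Grassmann algebra on two odd generators "e"1, "e"2 (basis 1, "e"1, "e"2, "e"1"e"2), representing elements as dictionaries from sorted basis words to coefficients, and verifies that the generators anticommute and square to zero.
from itertools import product

def mul(a, b):
    """Multiply two Grassmann-algebra elements given as {basis word: coefficient}."""
    out = {}
    for (u, x), (v, y) in product(a.items(), b.items()):
        if set(u) & set(v):
            continue                      # a repeated generator makes the term vanish
        sign, w = 1, list(u + v)
        for i in range(len(w)):           # bubble sort, flipping the sign per swap
            for j in range(len(w) - 1 - i):
                if w[j] > w[j + 1]:
                    w[j], w[j + 1] = w[j + 1], w[j]
                    sign = -sign
        key = tuple(w)
        out[key] = out.get(key, 0) + sign * x * y
    return {k: c for k, c in out.items() if c}

e1, e2 = {(1,): 1}, {(2,): 1}
print(mul(e1, e2), mul(e2, e1))  # {(1, 2): 1} and {(1, 2): -1}: e1 e2 = -e2 e1
print(mul(e1, e1))               # {}: an odd element squares to zero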
A Z-graded anticommutative algebra with the property that "x"2 = 0 for every element "x" of odd grade (irrespective of whether 2 is invertible) is called an alternating algebra.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "yx = (-1)^{|x| |y|}xy ,"
},
{
"math_id": 1,
"text": "[x,y] = xy - (-1)^{|x| |y|}yx"
},
{
"math_id": 2,
"text": "xy + yx = 0\\,"
},
{
"math_id": 3,
"text": "x^2 = 0 ."
}
] | https://en.wikipedia.org/wiki?curid=859686 |
8597625 | Key encapsulation mechanism | In cryptography, a key encapsulation mechanism, or KEM, is a public-key cryptosystem that allows a sender to generate a short secret key and transmit it to a receiver securely, in spite of eavesdropping and intercepting adversaries.
A KEM allows a sender who knows a public key to simultaneously generate a short random secret key and an encapsulation or ciphertext of the secret key by the KEM's encapsulation algorithm.
The receiver who knows the private key corresponding to the public key can recover the same random secret key from the encapsulation by the KEM's decapsulation algorithm.
The security goal of a KEM is to prevent anyone who "doesn't" know the private key from recovering any information about the encapsulated secret keys, even after eavesdropping or submitting other encapsulations to the receiver to study how the receiver reacts.
Difference from public-key encryption.
The difference between a public-key encryption scheme and a KEM is that a public-key encryption scheme allows a sender to choose an arbitrary message from some space of possible messages, while a KEM chooses a short secret key at random for the sender.
The sender may take the random secret key produced by a KEM and use it as a symmetric key for an authenticated cipher whose ciphertext is sent alongside the encapsulation to the receiver.
This serves to compose a public-key encryption scheme out of a KEM and a symmetric-key authenticated cipher in a hybrid cryptosystem.
Most public-key encryption schemes such as RSAES-PKCS1-v1_5, RSAES-OAEP, and Elgamal encryption are limited to small messages and are almost always used to encrypt a short random secret key in a hybrid cryptosystem anyway.
And although a public-key encryption scheme can conversely be converted to a KEM by choosing a random secret key and encrypting it as a message, it is easier to design and analyze a secure KEM than to design a secure public-key encryption scheme as a basis.
So most modern public-key encryption schemes are based on KEMs rather than the other way around.
Definition.
Syntax.
A KEM consists of three algorithms: a key-generation algorithm, formula_2, which randomly generates a public key formula_3 and a corresponding private key formula_4; an encapsulation algorithm, formula_5, which takes the public key and randomly generates a short secret key formula_0 together with an encapsulation formula_1 of it; and a decapsulation algorithm, formula_6, which takes the private key and a putative encapsulation formula_7 and returns either a secret key formula_8 or a distinguished error value formula_9.
Correctness.
A KEM is correct if, for any key pair formula_10 generated by formula_11, decapsulating an encapsulation formula_1 returned by formula_5 with high probability yields the same key formula_0, that is, formula_12.
Security: IND-CCA.
Security of a KEM is quantified by its indistinguishability against chosen-ciphertext attack, IND-CCA, which is loosely how much better an adversary can do than a coin toss to tell whether, given a random key and an encapsulation, the key is encapsulated by that encapsulation or is an independent random key.
Specifically, in the IND-CCA game: the key-generation algorithm is run to generate a key pair; the adversary is given the public key formula_3 and may query a decapsulation oracle formula_13 on any encapsulations formula_7 of its choice other than the challenge; the challenger computes formula_14, chooses another key formula_15 independently and uniformly at random, flips a fair coin formula_16, and gives the pair formula_17 to the adversary; finally, the adversary returns a guess formula_18, and wins the game if formula_19.
The IND-CCA advantage of the adversary is formula_20, that is, the probability beyond a fair coin toss at correctly distinguishing an encapsulated key from an independently randomly chosen key.
Examples and motivation.
RSA.
Traditional RSA encryption, with formula_21-bit moduli and exponent formula_22, is defined as follows. Key generation, formula_2: choose a formula_21-bit RSA modulus formula_23, i.e., with formula_24, such that formula_25, where formula_26 is the Carmichael function; compute formula_27; return formula_28 as the public key and formula_29 as the private key. Encryption of a formula_30-bit message formula_31 to yield formula_33, given formula_32: encode the bit string formula_31 as an integer formula_34 with formula_35, and compute formula_36. Decryption of a ciphertext formula_7 to yield formula_38, given formula_37: compute formula_39 and decode the integer formula_40 as a bit string formula_41.
This naive approach is totally insecure.
For example, since it is nonrandomized, it cannot be secure against even known-plaintext attack—an adversary can tell whether the sender is sending the message codice_0 versus the message codice_1 simply by encrypting those messages and comparing the ciphertext.
Even if formula_31 is always a random secret key, such as a 256-bit AES key, when formula_22 is chosen to optimize efficiency as formula_42, the message formula_31 can be computed from the ciphertext formula_1 simply by taking real number cube roots, and there are many other attacks against plain RSA.
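The cube-root attack mentioned here is easy to demonstrate; the sketch below uses toy, non-cryptographic stand-ins (a power of two in place of a real modulus) purely to show that when formula_42 and the message is short, the ciphertext equals formula_31 cubed over the integers and the message is recovered by an integer cube root.
m = int.from_bytes(b"secret key bits!", "big")  # a short (128-bit) message
n = 1 << 2048                                   # stand-in for a 2048-bit modulus
c = pow(m, 3) % n                               # equals m**3, since m**3 < n

lo, hi = 0, 1 << (c.bit_length() // 3 + 2)      # integer cube root by bisection
while lo < hi:
    mid = (lo + hi) // 2
    if mid**3 < c:
        lo = mid + 1
    else:
        hi = mid
print(lo.to_bytes(16, "big"))                   # b'secret key bits!'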
Various randomized padding schemes have been devised in attempts—sometimes failed, like RSAES-PKCS1-v1_5—to make it secure for arbitrary short messages formula_31.
Since the message formula_31 is almost always a short secret key for a symmetric-key authenticated cipher used to encrypt an arbitrary bit string message, a simpler approach called RSA-KEM is to choose an element of formula_43 at random and use that to "derive" a secret key using a key derivation function formula_44, roughly as follows. Key generation is as in traditional RSA. Encapsulation, formula_5: choose an integer formula_34 uniformly at random with formula_35, and compute the encapsulation formula_36 and the key formula_45. Decapsulation of formula_7, formula_6: compute formula_39 and return the key formula_46.
This approach is simpler to implement, and provides a tighter reduction to the RSA problem, than padding schemes like RSAES-OAEP.
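A minimal sketch of this RSA-KEM flow in Python is shown below; the SHA-256-based KDF and the tiny toy parameters are illustrative assumptions only (a real deployment would use a full-size modulus and a standardized KDF).
import hashlib
import secrets

def encapsulate(n, e):
    r = secrets.randbelow(n)                    # random element of Z/nZ
    c = pow(r, e, n)                            # c := r^e mod n
    nbytes = (n.bit_length() + 7) // 8
    k = hashlib.sha256(r.to_bytes(nbytes, "big")).digest()  # k := H(r)
    return k, c

def decapsulate(n, d, c):
    r = pow(c, d, n)                            # r' := c^d mod n
    nbytes = (n.bit_length() + 7) // 8
    return hashlib.sha256(r.to_bytes(nbytes, "big")).digest()  # k' := H(r')

# Toy parameters, far too small to be secure: n = 61 * 53, lambda(n) = lcm(60, 52) = 780.
n, e = 61 * 53, 17
d = pow(e, -1, 780)                             # e * d = 1 (mod lambda(n))
k, c = encapsulate(n, e)
assert decapsulate(n, d, c) == k                # receiver recovers the same key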
Elgamal.
Traditional Elgamal encryption is defined over a multiplicative subgroup of the finite field formula_47 with generator formula_48 of order formula_49 as follows. Key generation, formula_50: choose formula_51 uniformly at random, compute formula_52, and return the private key formula_53 and the public key formula_54. Encryption of a message formula_55, given formula_56: choose formula_57 uniformly at random, compute formula_58, and return the ciphertext formula_59. Decryption of a ciphertext formula_60, given formula_61: fail if formula_62 or formula_63, i.e., if formula_64 or formula_65 does not lie in the subgroup generated by formula_48; otherwise compute formula_66 and return formula_67.
This meets the syntax of a public-key encryption scheme, restricted to messages in the space formula_47 (which limits it to messages of a few hundred bytes for typical values of formula_68).
By validating ciphertexts in decryption, it avoids leaking bits of the private key formula_69 through maliciously chosen ciphertexts outside the group generated by formula_48.
However, this fails to achieve indistinguishability against chosen ciphertext attack.
For example, an adversary having a ciphertext formula_70 for an unknown message formula_31 can trivially decrypt it by querying the decryption oracle for the distinct ciphertext formula_71, yielding the related plaintext formula_72, from which formula_31 can be recovered by formula_73.
Traditional Elgamal encryption can be adapted to the elliptic-curve setting, but it requires some way to reversibly encode messages as points on the curve, which is less trivial than encoding messages as integers mod formula_68.
Since the message formula_31 is almost always a short secret key for a symmetric-key authenticated cipher used to encrypt an arbitrary bit string message, a simpler approach is to "derive" the secret key from formula_21 and dispense with formula_31 and formula_74 altogether, as a KEM, using a key derivation function formula_44:
When combined with an authenticated cipher to encrypt arbitrary bit string messages, the combination is essentially the Integrated Encryption Scheme.
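A sketch of this Elgamal-style KEM over an assumed toy subgroup follows; the tiny parameters and SHA-256-based KDF are illustrative assumptions only, and a real system would use a large prime-order group or an elliptic curve, as in ECIES.
import hashlib
import secrets

p, q, g = 23, 11, 4        # toy group: g = 4 generates the order-11 subgroup mod 23

def keygen():
    x = 1 + secrets.randbelow(q - 1)  # private key x in [1, q-1]
    return pow(g, x, p), x            # public key y := g^x mod p

def encapsulate(y):
    r = 1 + secrets.randbelow(q - 1)
    c = pow(g, r, p)                                              # c := g^r mod p
    k = hashlib.sha256(pow(y, r, p).to_bytes(4, "big")).digest()  # k := H(y^r)
    return k, c

def decapsulate(x, c):
    return hashlib.sha256(pow(c, x, p).to_bytes(4, "big")).digest()  # H(c^x)

y, x = keygen()
k, c = encapsulate(y)
assert decapsulate(x, c) == k         # both sides derive H(g^(x*r))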
Since this KEM only requires a one-way key derivation function to hash random elements of the group it is defined over, formula_47 in this case, and not a reversible encoding of messages, it is easy to extend to more compact and efficient elliptic curve groups for the same security, as in the ECIES, Elliptic Curve Integrated Encryption Scheme.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "(\\mathit{pk}, \\mathit{sk}) := \\operatorname{Gen}()"
},
{
"math_id": 3,
"text": "\\mathit{pk}"
},
{
"math_id": 4,
"text": "\\mathit{sk}"
},
{
"math_id": 5,
"text": "(k, c) := \\operatorname{Encap}(\\mathit{pk})"
},
{
"math_id": 6,
"text": "k' := \\operatorname{Decap}(\\mathit{sk}, c')"
},
{
"math_id": 7,
"text": "c'"
},
{
"math_id": 8,
"text": "k'"
},
{
"math_id": 9,
"text": "\\bot"
},
{
"math_id": 10,
"text": "(\\mathit{pk}, \\mathit{sk})"
},
{
"math_id": 11,
"text": "\\operatorname{Gen}"
},
{
"math_id": 12,
"text": "\\operatorname{Decap}(\\mathit{sk}, c) = k"
},
{
"math_id": 13,
"text": "\\operatorname{Decap}(\\mathit{sk}, c')"
},
{
"math_id": 14,
"text": "(k_0, c) := \\operatorname{Encap}(\\mathit{pk})"
},
{
"math_id": 15,
"text": "k_1"
},
{
"math_id": 16,
"text": "b \\in \\{0,1\\}"
},
{
"math_id": 17,
"text": "(k_b, c)"
},
{
"math_id": 18,
"text": "b' \\in \\{0,1\\}"
},
{
"math_id": 19,
"text": "b = b'"
},
{
"math_id": 20,
"text": "\\left|\\Pr[b' = b] - 1/2\\right|"
},
{
"math_id": 21,
"text": "t"
},
{
"math_id": 22,
"text": "e"
},
{
"math_id": 23,
"text": "n"
},
{
"math_id": 24,
"text": "2^{t - 1} < n < 2^t"
},
{
"math_id": 25,
"text": "\\gcd(e, \\lambda(n)) = 1"
},
{
"math_id": 26,
"text": "\\lambda(n)"
},
{
"math_id": 27,
"text": "d := e^{-1} \\bmod \\lambda(n)"
},
{
"math_id": 28,
"text": "\\mathit{pk} := n"
},
{
"math_id": 29,
"text": "\\mathit{sk} := (n, d)"
},
{
"math_id": 30,
"text": "(t - 1)"
},
{
"math_id": 31,
"text": "m"
},
{
"math_id": 32,
"text": "\\mathit{pk} = n"
},
{
"math_id": 33,
"text": "c := \\operatorname{Encrypt}(\\mathit{pk}, m)"
},
{
"math_id": 34,
"text": "r"
},
{
"math_id": 35,
"text": "0 \\leq r < n"
},
{
"math_id": 36,
"text": "c := r^e \\bmod n"
},
{
"math_id": 37,
"text": "\\mathit{sk} = (n, d)"
},
{
"math_id": 38,
"text": "m' := \\operatorname{Decrypt}(\\mathit{sk}, c')"
},
{
"math_id": 39,
"text": "r' := (c')^d \\bmod n"
},
{
"math_id": 40,
"text": "r'"
},
{
"math_id": 41,
"text": "m'"
},
{
"math_id": 42,
"text": "e = 3"
},
{
"math_id": 43,
"text": "\\mathbb Z/n\\mathbb Z"
},
{
"math_id": 44,
"text": "H"
},
{
"math_id": 45,
"text": "k := H(r)"
},
{
"math_id": 46,
"text": "k' := H(r')"
},
{
"math_id": 47,
"text": "\\mathbb Z/p\\mathbb Z"
},
{
"math_id": 48,
"text": "g"
},
{
"math_id": 49,
"text": "q"
},
{
"math_id": 50,
"text": "(pk, sk) := \\operatorname{Gen}()"
},
{
"math_id": 51,
"text": "x \\in \\mathbb Z/q\\mathbb Z"
},
{
"math_id": 52,
"text": "y := g^x \\bmod p"
},
{
"math_id": 53,
"text": "\\mathit{sk} := x"
},
{
"math_id": 54,
"text": "\\mathit{pk} := y"
},
{
"math_id": 55,
"text": "m \\in \\mathbb Z/p\\mathbb Z"
},
{
"math_id": 56,
"text": "\\mathit{pk} = y"
},
{
"math_id": 57,
"text": "r \\in \\mathbb Z/q\\mathbb Z"
},
{
"math_id": 58,
"text": "\\begin{align} t &:= y^r \\bmod p \\\\ c_1 &:= g^r \\bmod p \\\\ c_2 &:= (t \\cdot m) \\bmod p\\end{align}"
},
{
"math_id": 59,
"text": "c := (c_1, c_2)"
},
{
"math_id": 60,
"text": "c' = (c'_1, c'_2)"
},
{
"math_id": 61,
"text": "\\mathit{sk} = x"
},
{
"math_id": 62,
"text": "(c'_1)^{(p - 1)/q} \\not\\equiv 1 \\pmod p"
},
{
"math_id": 63,
"text": "(c'_2)^{(p - 1)/q} \\not\\equiv 1 \\pmod p"
},
{
"math_id": 64,
"text": "c'_1"
},
{
"math_id": 65,
"text": "c'_2"
},
{
"math_id": 66,
"text": "t' := (c'_1)^x \\bmod p"
},
{
"math_id": 67,
"text": "m' := t^{-1} c'_2 \\bmod p"
},
{
"math_id": 68,
"text": "p"
},
{
"math_id": 69,
"text": "x"
},
{
"math_id": 70,
"text": "c = (c_1, c_2)"
},
{
"math_id": 71,
"text": "c' := (c_1, c_2 g)"
},
{
"math_id": 72,
"text": "m' := m g \\bmod p"
},
{
"math_id": 73,
"text": "m = m' g^{-1} \\bmod p"
},
{
"math_id": 74,
"text": "c_2"
},
{
"math_id": 75,
"text": "t := y^r \\bmod p"
},
{
"math_id": 76,
"text": "k := H(t)"
},
{
"math_id": 77,
"text": "c := g^r \\bmod p"
},
{
"math_id": 78,
"text": "(c')^{(p - 1)/q} \\not\\equiv 1 \\pmod p"
},
{
"math_id": 79,
"text": "t' := (c')^x \\bmod p"
},
{
"math_id": 80,
"text": "k' := H(t')"
}
] | https://en.wikipedia.org/wiki?curid=8597625 |
859805 | Ivan M. Niven | Canadian-American number theorist (1915–1999)
Ivan Morton Niven (October 25, 1915 – May 9, 1999) was a Canadian-American number theorist best remembered for his work on Waring's problem. He worked for many years as a professor at the University of Oregon, and was president of the Mathematical Association of America. He wrote several books on mathematics.
Life.
Niven was born in Vancouver. He did his undergraduate studies at the University of British Columbia and was awarded his doctorate in 1938 from the University of Chicago. He was a member of the University of Oregon faculty from 1947 to his retirement in 1981. He was president of the Mathematical Association of America (MAA) from 1983 to 1984.
He died in 1999 in Eugene, Oregon.
Research.
Niven completed the solution of most of Waring's problem in 1944. This problem, based on a 1770 conjecture by Edward Waring, consists of finding the smallest number formula_0 such that every positive integer is the sum of at most formula_0 formula_1-th powers of positive integers. David Hilbert had proved the existence of such a formula_0 in 1909; Niven's work established the value of formula_0 for all but finitely many values of formula_1.
In 1947, Niven gave an elementary proof that formula_2 is irrational.
Niven numbers, Niven's constant, and Niven's theorem are named for Niven.
He has an Erdős number of 1 because he coauthored a paper with Paul Erdős, on partial sums of the harmonic series.
Recognition.
Niven received the University of Oregon's Charles E. Johnson Award in 1981. He received the MAA Distinguished Service Award in 1989.
He won a Lester R. Ford Award in 1970. In 2000, the asteroid 12513 Niven, discovered in 1998, was named after him.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "g(n)"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\pi"
}
] | https://en.wikipedia.org/wiki?curid=859805 |
8599355 | Generalized Pareto distribution | Family of probability distributions often used to model tails or extreme values
In statistics, the generalized Pareto distribution (GPD) is a family of continuous probability distributions. It is often used to model the tails of another distribution. It is specified by three parameters: location formula_2, scale formula_0, and shape formula_1. Sometimes it is specified by only scale and shape and sometimes only by its shape parameter. Some references give the shape parameter as formula_3.
Definition.
The standard cumulative distribution function (cdf) of the GPD is defined by
formula_4
where the support is formula_5 for formula_6 and formula_7 for formula_8. The corresponding probability density function (pdf) is
formula_9
Characterization.
The related location-scale family of distributions is obtained by replacing the argument "z" by formula_10 and adjusting the support accordingly.
The cumulative distribution function of formula_11 (formula_12, formula_13, and formula_14) is
formula_15
where the support of formula_16 is formula_17 when formula_18, and formula_19 when formula_8.
The probability density function (pdf) of formula_11 is
formula_20,
again, for formula_17 when formula_21, and formula_19 when formula_8.
The pdf is a solution of the following differential equation:
formula_22
Generating generalized Pareto random variables.
Generating GPD random variables.
If "U" is uniformly distributed on
(0, 1], then
formula_36
and
formula_37
Both formulas are obtained by inversion of the cdf.
In the MATLAB Statistics Toolbox, the "gprnd" command can be used to generate generalized Pareto random numbers.
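For readers outside MATLAB, the same inversion takes a few lines of Python/NumPy. This is a sketch mirroring formula_36 and formula_37; the function name and the spot-check values are illustrative rather than any library API.

```python
import numpy as np

def gpd_rvs(mu, sigma, xi, size, rng=None):
    """Draw GPD(mu, sigma, xi) samples by inverting the cdf."""
    rng = np.random.default_rng(rng)
    u = 1.0 - rng.random(size)               # U uniform on (0, 1]
    if xi == 0.0:
        return mu - sigma * np.log(u)
    return mu + sigma * (u ** (-xi) - 1.0) / xi

x = gpd_rvs(mu=0.0, sigma=1.0, xi=0.5, size=100_000, rng=1)
# sanity check against the closed-form cdf at one point: F(1) = 1 - 1.5**(-2)
print(np.mean(x <= 1.0), 1.0 - 1.5 ** -2)    # both ~ 0.556
```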
GPD as an Exponential-Gamma Mixture.
A GPD random variable can also be expressed as an exponential random variable, with a Gamma distributed rate parameter.
If formula_38
and
formula_39
then
formula_40
Notice, however, that since the parameters of the Gamma distribution must be greater than zero, we obtain the additional restriction that formula_1 must be positive.
In addition to this mixture (or compound) expression, the generalized Pareto distribution can also be expressed as a simple ratio. Concretely, for formula_41 and formula_42, we have formula_43. This is a consequence of the mixture after setting formula_44 and taking into account that the rate parameters of the exponential and gamma distribution are simply inverse multiplicative constants.
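Both representations are easy to verify by simulation. The sketch below (the parameter values are arbitrary) draws one sample via the Exponential–Gamma mixture and another via the ratio construction, then compares quantiles, which should agree up to Monte Carlo error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
alpha, beta = 2.0, 3.0                       # Gamma shape and rate
xi, sigma = 1.0 / alpha, beta / alpha        # implied GPD parameters (mu = 0)

# mixture: X | Lambda ~ Exp(Lambda), Lambda ~ Gamma(alpha, rate = beta)
lam = rng.gamma(alpha, 1.0 / beta, n)        # NumPy's gamma takes scale = 1/rate
x_mix = rng.exponential(1.0 / lam)

# ratio: sigma * Y / (xi * Z) with Y ~ Exp(1) and Z ~ Gamma(1/xi, 1)
x_ratio = sigma * rng.exponential(1.0, n) / (xi * rng.gamma(1.0 / xi, 1.0, n))

for prob in (0.5, 0.9, 0.99):                # matching GPD(0, sigma, xi) quantiles
    print(prob, np.quantile(x_mix, prob), np.quantile(x_ratio, prob))
```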
Exponentiated generalized Pareto distribution.
The exponentiated generalized Pareto distribution (exGPD).
If formula_45 formula_32formula_33, formula_0, formula_1 formula_34, then formula_46 is distributed according to the exponentiated generalized Pareto distribution, denoted by formula_47 formula_30 formula_48 formula_32formula_0, formula_1 formula_34.
The probability density function(pdf) of formula_49 formula_30 formula_48 formula_32formula_0, formula_1 formula_50 is
formula_51
where the support is formula_52 for formula_53, and formula_54 for formula_55.
For all formula_1, formula_56 plays the role of the location parameter. See the right panel for the pdf when the shape formula_1 is positive.
The exGPD has finite moments of all orders for all formula_13 and formula_57.
The moment-generating function of formula_58 is
formula_59
where formula_60 and formula_61 denote the beta function and gamma function, respectively.
The expected value of formula_49 formula_30 formula_48 formula_32formula_0, formula_1 formula_34 depends on the scale formula_62 and shape formula_63 parameters; the shape formula_63 enters through the digamma function:
formula_64
Note that for a fixed value of formula_65, formula_66 acts as the location parameter under the exponentiated generalized Pareto distribution.
The variance of formula_49 formula_30 formula_48 formula_32formula_0, formula_1 formula_34 depends on the shape parameter formula_63 only through the polygamma function of order 1 (also called the trigamma function):
formula_67
See the right panel for the variance as a function of formula_1. Note that formula_68.
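The trigamma expression is simple to confirm numerically for positive formula_1: simulate formula_49 as the log of inverse-cdf GPD draws and compare the sample variance with the formula. The parameter values below are illustrative.

```python
import numpy as np
from scipy.special import polygamma          # polygamma(1, .) is the trigamma function

rng = np.random.default_rng(0)
sigma, xi = 2.0, 0.5
u = 1.0 - rng.random(500_000)                # U uniform on (0, 1]
x = sigma * (u ** (-xi) - 1.0) / xi          # GPD(0, sigma, xi) via inverse cdf
y = np.log(x)                                # exGPD(sigma, xi)

print(np.var(y), polygamma(1, 1.0) + polygamma(1, 1.0 / xi))   # both ~ 2.29
```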
Note that the roles of the scale parameter formula_0 and the shape parameter formula_1 under formula_69 are separately interpretable, which may lead to more robust and efficient estimation of formula_1 than using formula_70. The roles of the two parameters are associated with each other under formula_71 (at least up to the second central moment); see the formula of the variance formula_72, in which both parameters participate.
The Hill's estimator.
Assume that formula_73 are formula_74 observations (not necessarily i.i.d.) from an unknown heavy-tailed distribution formula_75 such that its tail distribution is regularly varying with tail index formula_76 (hence, the corresponding shape parameter is formula_77). To be specific, the tail distribution is described as
formula_78
It is of particular interest in extreme value theory to estimate the shape parameter formula_1, especially when formula_1 is positive (the so-called heavy-tailed case).
Let formula_79 be the conditional excess distribution function above a threshold formula_81. The Pickands–Balkema–de Haan theorem (Pickands, 1975; Balkema and de Haan, 1974) states that for a large class of underlying distribution functions formula_80 and large formula_81, formula_79 is well approximated by the generalized Pareto distribution (GPD), which motivated Peak Over Threshold (POT) methods for estimating formula_1: the GPD plays the key role in the POT approach.
A renowned estimator using the POT methodology is the Hill estimator. Its technical formulation is as follows. For formula_82, write formula_83 for the formula_84-th largest value of formula_85. Then, with this notation, the Hill estimator (see page 190 of Reference 5 by Embrechts et al.) based on the formula_86 upper order statistics is defined as
formula_87
In practice, the Hill estimator is used as follows. First, calculate the estimator formula_88 at each integer formula_89, and then plot the ordered pairs formula_90. Then, select from the set of Hill estimators formula_91 the values which are roughly constant with respect to formula_86: these stable values are regarded as reasonable estimates for the shape parameter formula_1. If formula_85 are i.i.d., then the Hill estimator is a consistent estimator for the shape parameter formula_1.
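A minimal implementation of the Hill estimator, following the definition above and tested on i.i.d. classical Pareto data whose true shape parameter is known (the sample sizes and parameter values are illustrative):

```python
import numpy as np

def hill(x, k):
    """Hill estimator from the k upper order statistics (2 <= k <= len(x))."""
    xs = np.sort(x)[::-1]                        # descending: xs[0] is the largest
    return np.mean(np.log(xs[: k - 1] / xs[k - 1]))

rng = np.random.default_rng(0)
xi_true = 0.5                                    # tail index 1/xi = 2
x = rng.pareto(1.0 / xi_true, 100_000) + 1.0     # classical Pareto, survival x**(-2)
for k in (100, 1_000, 10_000):
    print(k, hill(x, k))                         # values hover near 0.5
```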
Note that the Hill estimator formula_88 makes use of the log-transformation of the observations formula_73. (The Pickands estimator formula_92 also employs the log-transformation, but in a slightly different way.)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma"
},
{
"math_id": 1,
"text": "\\xi"
},
{
"math_id": 2,
"text": "\\mu"
},
{
"math_id": 3,
"text": " \\kappa = - \\xi \\,"
},
{
"math_id": 4,
"text": "F_{\\xi}(z) = \\begin{cases}\n1 - \\left(1 + \\xi z\\right)^{-1/\\xi} & \\text{for }\\xi \\neq 0, \\\\\n1 - e^{-z} & \\text{for }\\xi = 0.\n\\end{cases}\n"
},
{
"math_id": 5,
"text": " z \\geq 0 "
},
{
"math_id": 6,
"text": " \\xi \\geq 0"
},
{
"math_id": 7,
"text": " 0 \\leq z \\leq - 1 /\\xi "
},
{
"math_id": 8,
"text": " \\xi < 0"
},
{
"math_id": 9,
"text": "f_{\\xi}(z) = \\begin{cases}\n(1 + \\xi z)^{-\\frac{\\xi +1}{\\xi }} & \\text{for }\\xi \\neq 0, \\\\\ne^{-z} & \\text{for }\\xi = 0.\n\\end{cases}\n"
},
{
"math_id": 10,
"text": "\\frac{x-\\mu}{\\sigma}"
},
{
"math_id": 11,
"text": "X \\sim GPD(\\mu, \\sigma, \\xi)"
},
{
"math_id": 12,
"text": "\\mu\\in\\mathbb R"
},
{
"math_id": 13,
"text": "\\sigma>0"
},
{
"math_id": 14,
"text": "\\xi\\in\\mathbb R"
},
{
"math_id": 15,
"text": "F_{(\\mu,\\sigma,\\xi)}(x) = \\begin{cases}\n1 - \\left(1+ \\frac{\\xi(x-\\mu)}{\\sigma}\\right)^{-1/\\xi} & \\text{for }\\xi \\neq 0, \\\\\n1 - \\exp \\left(-\\frac{x-\\mu}{\\sigma}\\right) & \\text{for }\\xi = 0,\n\\end{cases}\n"
},
{
"math_id": 16,
"text": "X"
},
{
"math_id": 17,
"text": " x \\geqslant \\mu "
},
{
"math_id": 18,
"text": " \\xi \\geqslant 0 \\,"
},
{
"math_id": 19,
"text": " \\mu \\leqslant x \\leqslant \\mu - \\sigma /\\xi "
},
{
"math_id": 20,
"text": "f_{(\\mu,\\sigma,\\xi)}(x) = \\frac{1}{\\sigma}\\left(1 + \\frac{\\xi (x-\\mu)}{\\sigma}\\right)^{\\left(-\\frac{1}{\\xi} - 1\\right)}"
},
{
"math_id": 21,
"text": " \\xi \\geqslant 0"
},
{
"math_id": 22,
"text": "\\left\\{\\begin{array}{l}\nf'(x) (-\\mu \\xi +\\sigma+\\xi x)+(\\xi+1) f(x)=0, \\\\\nf(0)=\\frac{\\left(1-\\frac{\\mu \\xi}{\\sigma}\\right)^{-\\frac{1}{\\xi }-1}}{\\sigma}\n\\end{array}\\right\\}\n"
},
{
"math_id": 23,
"text": "\\xi = -1"
},
{
"math_id": 24,
"text": "U(0, \\sigma)"
},
{
"math_id": 25,
"text": "\\xi > 0"
},
{
"math_id": 26,
"text": "\\mu = \\sigma"
},
{
"math_id": 27,
"text": "x_m=\\sigma/\\xi"
},
{
"math_id": 28,
"text": "\\alpha=1/\\xi"
},
{
"math_id": 29,
"text": " X "
},
{
"math_id": 30,
"text": "\\sim"
},
{
"math_id": 31,
"text": "GPD"
},
{
"math_id": 32,
"text": "("
},
{
"math_id": 33,
"text": "\\mu = 0"
},
{
"math_id": 34,
"text": ")"
},
{
"math_id": 35,
"text": " Y = \\log (X) \\sim exGPD(\\sigma, \\xi)"
},
{
"math_id": 36,
"text": " X = \\mu + \\frac{\\sigma (U^{-\\xi}-1)}{\\xi} \\sim GPD(\\mu, \\sigma, \\xi \\neq 0)"
},
{
"math_id": 37,
"text": " X = \\mu - \\sigma \\ln(U) \\sim GPD(\\mu,\\sigma,\\xi =0)."
},
{
"math_id": 38,
"text": "X|\\Lambda \\sim \\operatorname{Exp}(\\Lambda) "
},
{
"math_id": 39,
"text": "\\Lambda \\sim \\operatorname{Gamma}(\\alpha, \\beta) "
},
{
"math_id": 40,
"text": "X \\sim \\operatorname{GPD}(\\xi = 1/\\alpha, \\ \\sigma = \\beta/\\alpha) "
},
{
"math_id": 41,
"text": "Y \\sim \\text{Exponential}(1)"
},
{
"math_id": 42,
"text": "Z \\sim \\text{Gamma}(1/\\xi, 1)"
},
{
"math_id": 43,
"text": "\\mu + \\sigma \\frac{Y}{\\xi Z} \\sim \\text{GPD}(\\mu,\\sigma,\\xi)"
},
{
"math_id": 44,
"text": "\\beta=\\alpha"
},
{
"math_id": 45,
"text": " X \\sim GPD"
},
{
"math_id": 46,
"text": " Y = \\log (X)"
},
{
"math_id": 47,
"text": " Y"
},
{
"math_id": 48,
"text": "exGPD"
},
{
"math_id": 49,
"text": " Y "
},
{
"math_id": 50,
"text": ")\\,\\, (\\sigma >0) "
},
{
"math_id": 51,
"text": " g_{(\\sigma, \\xi)}(y) = \\begin{cases} \\frac{e^y}{\\sigma}\\bigg( 1 + \\frac{\\xi e^y}{\\sigma} \\bigg)^{-1/\\xi -1}\\,\\,\\,\\, \\text{for } \\xi \\neq 0, \\\\ \n \\frac{1}{\\sigma}e^{y - e^{y}/\\sigma} \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\,\\,\\,\\, \\text{for } \\xi = 0 ,\\end{cases}"
},
{
"math_id": 52,
"text": " -\\infty < y < \\infty "
},
{
"math_id": 53,
"text": " \\xi \\geq 0 "
},
{
"math_id": 54,
"text": " -\\infty < y \\leq \\log(-\\sigma/\\xi)"
},
{
"math_id": 55,
"text": " \\xi < 0 "
},
{
"math_id": 56,
"text": "\\log \\sigma "
},
{
"math_id": 57,
"text": "-\\infty< \\xi < \\infty "
},
{
"math_id": 58,
"text": " Y \\sim exGPD(\\sigma,\\xi)"
},
{
"math_id": 59,
"text": " M_Y(s) = E[e^{sY}] = \\begin{cases} -\\frac{1}{\\xi}\\bigg(-\\frac{\\sigma}{\\xi}\\bigg)^{s} B(s+1, -1/\\xi) \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\text{for } s \\in (-1, \\infty), \\xi < 0 , \\\\ \n \\frac{1}{\\xi}\\bigg(\\frac{\\sigma}{\\xi}\\bigg)^{s} B(s+1, 1/\\xi - s) \\,\\,\\,\\,\\,\\, \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\text{for } s \\in (-1, 1/\\xi), \\xi > 0 , \\\\ \n \\sigma^{s} \\Gamma(1+s) \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\text{for } s \\in (-1, \\infty), \\xi = 0, \\end{cases}"
},
{
"math_id": 60,
"text": "B(a,b) "
},
{
"math_id": 61,
"text": " \\Gamma (a) "
},
{
"math_id": 62,
"text": " \\sigma"
},
{
"math_id": 63,
"text": " \\xi "
},
{
"math_id": 64,
"text": " E[Y] = \\begin{cases} \\log\\ \\bigg(-\\frac{\\sigma}{\\xi} \\bigg)+ \\psi(1) - \\psi(-1/\\xi+1) \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\,\\, \\text{for }\\xi < 0 , \\\\ \n \\log\\ \\bigg(\\frac{\\sigma}{\\xi} \\bigg)+ \\psi(1) - \\psi(1/\\xi) \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\,\\,\\, \\,\\,\\, \\,\\,\\, \\,\\,\\, \\,\\,\\, \\,\\,\\,\\,\\,\\, \\,\\,\\, \\text{for }\\xi > 0 , \\\\ \n \\log \\sigma + \\psi(1) \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\,\\,\\, \\,\\,\\, \\,\\,\\, \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\,\\,\\, \\,\\,\\,\\,\\,\\,\\, \\text{for }\\xi = 0. \\end{cases}"
},
{
"math_id": 65,
"text": " \\xi \\in (-\\infty,\\infty) "
},
{
"math_id": 66,
"text": " \\log\\ \\sigma "
},
{
"math_id": 67,
"text": " Var[Y] = \\begin{cases} \\psi'(1) - \\psi'(-1/\\xi +1) \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\, \\text{for }\\xi < 0 , \\\\ \n \\psi'(1) + \\psi'(1/\\xi) \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\text{for }\\xi > 0 , \\\\ \n \\psi'(1) \\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\,\\, \\,\\,\\,\\,\\,\\text{for }\\xi = 0. \\end{cases}"
},
{
"math_id": 68,
"text": " \\psi'(1) = \\pi^2/6 \\approx 1.644934 "
},
{
"math_id": 69,
"text": "Y \\sim exGPD(\\sigma, \\xi)"
},
{
"math_id": 70,
"text": "X \\sim GPD(\\sigma, \\xi)"
},
{
"math_id": 71,
"text": "X \\sim GPD(\\mu=0,\\sigma, \\xi)"
},
{
"math_id": 72,
"text": "Var(X)"
},
{
"math_id": 73,
"text": " X_{1:n} = (X_1, \\cdots, X_n) "
},
{
"math_id": 74,
"text": "n"
},
{
"math_id": 75,
"text": " F "
},
{
"math_id": 76,
"text": "1/\\xi "
},
{
"math_id": 77,
"text": "\\xi "
},
{
"math_id": 78,
"text": "\n\\bar{F}(x) = 1 - F(x) = L(x) \\cdot x^{-1/\\xi}, \\,\\,\\,\\,\\,\\text{for some }\\xi>0,\\,\\,\\text{where } L \\text{ is a slowly varying function.}\n "
},
{
"math_id": 79,
"text": "F_u"
},
{
"math_id": 80,
"text": "F"
},
{
"math_id": 81,
"text": "u"
},
{
"math_id": 82,
"text": " 1\\leq i \\leq n "
},
{
"math_id": 83,
"text": " X_{(i)} "
},
{
"math_id": 84,
"text": "i"
},
{
"math_id": 85,
"text": " X_1, \\cdots, X_n "
},
{
"math_id": 86,
"text": "k"
},
{
"math_id": 87,
"text": "\n\\widehat{\\xi}_{k}^{\\text{Hill}} = \\widehat{\\xi}_{k}^{\\text{Hill}}(X_{1:n}) = \\frac{1}{k-1} \\sum_{j=1}^{k-1} \\log \\bigg(\\frac{X_{(j)}}{X_{(k)}} \\bigg), \\,\\,\\,\\,\\,\\,\\,\\, \\text{for } 2 \\leq k \\leq n.\n "
},
{
"math_id": 88,
"text": "\\widehat{\\xi}_{k}^{\\text{Hill}}"
},
{
"math_id": 89,
"text": "k \\in \\{ 2, \\cdots, n\\}"
},
{
"math_id": 90,
"text": "\\{(k,\\widehat{\\xi}_{k}^{\\text{Hill}})\\}_{k=2}^{n}"
},
{
"math_id": 91,
"text": "\\{\\widehat{\\xi}_{k}^{\\text{Hill}}\\}_{k=2}^{n}"
},
{
"math_id": 92,
"text": "\\widehat{\\xi}_{k}^{\\text{Pickand}}"
}
] | https://en.wikipedia.org/wiki?curid=8599355 |
860138 | Sesquilinear form | Generalization of a bilinear form
In mathematics, a sesquilinear form is a generalization of a bilinear form that, in turn, is a generalization of the concept of the dot product of Euclidean space. A bilinear form is linear in each of its arguments, but a sesquilinear form allows one of the arguments to be "twisted" in a semilinear manner, thus the name, which originates from the Latin numerical prefix "sesqui-", meaning "one and a half". The basic concept of the dot product – producing a scalar from a pair of vectors – can be generalized by allowing a broader range of scalar values and, perhaps simultaneously, by widening the definition of a vector.
A motivating special case is a sesquilinear form on a complex vector space, "V". This is a map "V" × "V" → C that is linear in one argument and "twists" the linearity of the other argument by complex conjugation (referred to as being antilinear in the other argument). This case arises naturally in mathematical physics applications. Another important case allows the scalars to come from any field and the twist is provided by a field automorphism.
An application in projective geometry requires that the scalars come from a division ring (skew field), "K", and this means that the "vectors" should be replaced by elements of a "K"-module. In a very general setting, sesquilinear forms can be defined over "R"-modules for arbitrary rings "R".
Informal introduction.
Sesquilinear forms abstract and generalize the basic notion of a Hermitian form on complex vector space. Hermitian forms are commonly seen in physics, as the inner product on a complex Hilbert space. In such cases, the standard Hermitian form on C"n" is given by
formula_0
where formula_1 denotes the complex conjugate of formula_2 This product may be generalized to situations where one is not working with an orthonormal basis for C"n", or even any basis at all. By inserting an extra factor of formula_3 into the product, one obtains the skew-Hermitian form, defined more precisely below. There is no particular reason to restrict the definition to the complex numbers; it can be defined for arbitrary rings carrying an antiautomorphism, informally understood to be a generalized concept of "complex conjugation" for the ring.
Convention.
Conventions differ as to which argument should be linear. In the commutative case, we shall take the first to be linear, as is common in the mathematical literature, except in the section devoted to sesquilinear forms on complex vector spaces. There we use the other convention and take the first argument to be conjugate-linear (i.e. antilinear) and the second to be linear. This is the convention used mostly by physicists and originates in Dirac's bra–ket notation in quantum mechanics. It is also consistent with the definition of the usual (Euclidean) product of formula_4 as formula_5.
In the more general noncommutative setting, with right modules we take the second argument to be linear and with left modules we take the first argument to be linear.
Assumption: In this section, sesquilinear forms are antilinear in their first argument and linear in their second.
Complex vector spaces.
Over a complex vector space formula_6 a map formula_7 is sesquilinear if
formula_8
for all formula_9 and all formula_10 Here, formula_11 is the complex conjugate of a scalar formula_12
A complex sesquilinear form can also be viewed as a complex bilinear map
formula_13
where formula_14 is the complex conjugate vector space to formula_15 By the universal property of tensor products these are in one-to-one correspondence with complex linear maps
formula_16
For a fixed formula_17 the map formula_18 is a linear functional on formula_6 (i.e. an element of the dual space formula_19). Likewise, the map formula_20 is a conjugate-linear functional on formula_15
Given any complex sesquilinear form formula_21 on formula_6 we can define a second complex sesquilinear form formula_22 via the conjugate transpose:
formula_23
In general, formula_22 and formula_21 will be different. If they are the same then formula_21 is said to be Hermitian. If they are negatives of one another, then formula_21 is said to be skew-Hermitian. Every sesquilinear form can be written as a sum of a Hermitian form and a skew-Hermitian form.
Matrix representation.
If formula_6 is a finite-dimensional complex vector space, then relative to any basis formula_24 of formula_25 a sesquilinear form is represented by a matrix formula_26 and given by
formula_27
where formula_28 is the conjugate transpose. The components of the matrix formula_29 are given by formula_30
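The matrix representation, the sesquilinearity, and the Hermitian/skew-Hermitian decomposition above can all be checked numerically. A short NumPy sketch follows (using the physics convention of an antilinear first argument; the random matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # arbitrary complex matrix

def phi(w, z):
    """phi(w, z) = w^dagger A z: antilinear in w, linear in z."""
    return np.conj(w) @ A @ z

w = rng.normal(size=n) + 1j * rng.normal(size=n)
z = rng.normal(size=n) + 1j * rng.normal(size=n)
a = 2.0 + 1.0j

assert np.isclose(phi(a * w, z), np.conj(a) * phi(w, z))     # conjugate-linear in w
assert np.isclose(phi(w, a * z), a * phi(w, z))              # linear in z

# every form splits into Hermitian plus skew-Hermitian parts: A = H + S
H = (A + A.conj().T) / 2
S = (A - A.conj().T) / 2
assert np.allclose(H, H.conj().T) and np.allclose(S, -S.conj().T)

# the Hermitian part takes real values on a single vector
assert np.isclose((np.conj(z) @ H @ z).imag, 0.0)
```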
"The term Hermitian form may also refer to a different concept than that explained below: it may refer to a certain differential form on a Hermitian manifold."
Hermitian form.
A complex Hermitian form (also called a symmetric sesquilinear form), is a sesquilinear form formula_31 such that
formula_32
The standard Hermitian form on formula_33 is given (again, using the "physics" convention of linearity in the second and conjugate linearity in the first variable) by
formula_34
More generally, the inner product on any complex Hilbert space is a Hermitian form.
A minus sign is introduced in the Hermitian form formula_35 to define the group SU(1,1).
A vector space with a Hermitian form formula_36 is called a Hermitian space.
The matrix representation of a complex Hermitian form is a Hermitian matrix.
A complex Hermitian form applied to a single vector
formula_37
is always a real number. One can show that a complex sesquilinear form is Hermitian if and only if the associated quadratic form is real for all formula_38
Skew-Hermitian form.
A complex skew-Hermitian form (also called an antisymmetric sesquilinear form), is a complex sesquilinear form formula_39 such that
formula_40
Every complex skew-Hermitian form can be written as the imaginary unit formula_41 times a Hermitian form.
The matrix representation of a complex skew-Hermitian form is a skew-Hermitian matrix.
A complex skew-Hermitian form applied to a single vector
formula_42
is always a purely imaginary number.
Over a division ring.
This section applies unchanged when the division ring "K" is commutative. More specific terminology then also applies: the division ring is a field, the anti-automorphism is also an automorphism, and the right module is a vector space. The following applies to a left module with suitable reordering of expressions.
Definition.
A "σ"-sesquilinear form over a right "K"-module "M" is a bi-additive map "φ" : "M" × "M" → "K" with an associated anti-automorphism "σ" of a division ring "K" such that, for all "x", "y" in "M" and all "α", "β" in "K",
formula_43
The associated anti-automorphism "σ" for any nonzero sesquilinear form "φ" is uniquely determined by "φ".
Orthogonality.
Given a sesquilinear form "φ" over a module "M" and a subspace (submodule) "W" of "M", the orthogonal complement of "W" with respect to "φ" is
formula_44
Similarly, "x" ∈ "M" is orthogonal to "y" ∈ "M" with respect to "φ", written "x" ⊥"φ" "y" (or simply "x" ⊥ "y" if "φ" can be inferred from the context), when "φ"("x", "y") = 0. This relation need not be symmetric, i.e. "x" ⊥ "y" does not imply "y" ⊥ "x" (but see "" below).
Reflexivity.
A sesquilinear form "φ" is reflexive if, for all "x", "y" in "M",
formula_45 implies formula_46
That is, a sesquilinear form is reflexive precisely when the derived orthogonality relation is symmetric.
Hermitian variations.
A "σ"-sesquilinear form "φ" is called ("σ", "ε")-Hermitian if there exists "ε" in "K" such that, for all "x", "y" in "M",
formula_47
If "ε" = 1, the form is called "σ"-"Hermitian", and if "ε" = −1, it is called "σ"-"anti-Hermitian". (When "σ" is implied, respectively simply "Hermitian" or "anti-Hermitian".)
For a nonzero ("σ", "ε")-Hermitian form, it follows that for all "α" in "K",
formula_48
formula_49
It also follows that "φ"("x", "x") is a fixed point of the map "α" ↦ "σ"("α")"ε". The fixed points of this map form a subgroup of the additive group of "K".
A ("σ", "ε")-Hermitian form is reflexive, and every reflexive "σ"-sesquilinear form is ("σ", "ε")-Hermitian for some "ε".
In the special case that "σ" is the identity map (i.e., "σ" = id), "K" is commutative, "φ" is a bilinear form and "ε"2 = 1. Then for "ε" = 1 the bilinear form is called "symmetric", and for "ε" = −1 is called "skew-symmetric".
Example.
Let "V" be the three dimensional vector space over the finite field "F" = GF("q"2), where "q" is a prime power. With respect to the standard basis we can write "x" = ("x"1, "x"2, "x"3) and "y" = ("y"1, "y"2, "y"3) and define the map "φ" by:
formula_50
The map "σ" : "t" ↦ "t""q" is an involutory automorphism of "F". The map "φ" is then a "σ"-sesquilinear form. The matrix "M""φ" associated to this form is the identity matrix. This is a Hermitian form.
Assumption: In this section, sesquilinear forms are antilinear (resp. linear) in their second (resp. first) argument.
In projective geometry.
In a projective geometry "G", a permutation "δ" of the subspaces that inverts inclusion, i.e.
"S" ⊆ "T" ⇒ "T""δ" ⊆ "S""δ" for all subspaces "S", "T" of "G",
is called a correlation. A result of Birkhoff and von Neumann (1936) shows that the correlations of desarguesian projective geometries correspond to the nondegenerate sesquilinear forms on the underlying vector space. A sesquilinear form "φ" is "nondegenerate" if "φ"("x", "y") = 0 for all "y" in "V" (if and) only if "x" = 0.
To achieve full generality of this statement, and since every desarguesian projective geometry may be coordinatized by a division ring, Reinhold Baer extended the definition of a sesquilinear form to a division ring, which requires replacing vector spaces by "R"-modules. (In the geometric literature these are still referred to as either left or right vector spaces over skewfields.)
Over arbitrary rings.
The specialization of the above section to skewfields was a consequence of the application to projective geometry, and not intrinsic to the nature of sesquilinear forms. Only the minor modifications needed to take into account the non-commutativity of multiplication are required to generalize the arbitrary field version of the definition to arbitrary rings.
Let "R" be a ring, "V" an "R"-module and "σ" an antiautomorphism of "R".
A map "φ" : "V" × "V" → "R" is "σ"-sesquilinear if
formula_51
formula_52
for all "x", "y", "z", "w" in "V" and all "c", "d" in "R".
An element "x" is orthogonal to another element "y" with respect to the sesquilinear form "φ" (written "x" ⊥ "y") if "φ"("x", "y") = 0. This relation need not be symmetric, i.e. "x" ⊥ "y" does not imply "y" ⊥ "x".
A sesquilinear form "φ" : "V" × "V" → "R" is reflexive (or "orthosymmetric") if "φ"("x", "y") = 0 implies "φ"("y", "x") = 0 for all "x", "y" in "V".
A sesquilinear form "φ" : "V" × "V" → "R" is Hermitian if there exists "σ" such that
formula_53
for all "x", "y" in "V". A Hermitian form is necessarily reflexive, and if it is nonzero, the associated antiautomorphism "σ" is an involution (i.e. of order 2).
Since for an antiautomorphism "σ" we have "σ"("st") = "σ"("t")"σ"("s") for all "s", "t" in "R", if "σ" = id, then "R" must be commutative and "φ" is a bilinear form. In particular, if, in this case, "R" is a skewfield, then "R" is a field and "V" is a vector space with a bilinear form.
An antiautomorphism "σ" : "R" → "R" can also be viewed as an isomorphism "R" → "R"op, where "R"op is the opposite ring of "R", which has the same underlying set and the same addition, but whose multiplication operation (∗) is defined by "a" ∗ "b" = "ba", where the product on the right is the product in "R". It follows from this that a right (left) "R"-module "V" can be turned into a left (right) "R"op-module, "V"o. Thus, the sesquilinear form "φ" : "V" × "V" → "R" can be viewed as a bilinear form "φ"′ : "V" × "V"o → "R".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\langle w,z \\rangle = \\sum_{i=1}^n \\overline{w}_i z_i."
},
{
"math_id": 1,
"text": "\\overline{w}_i"
},
{
"math_id": 2,
"text": "w_i ~."
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "w,z\\in\\mathbb C^n"
},
{
"math_id": 5,
"text": "w^*z"
},
{
"math_id": 6,
"text": "V"
},
{
"math_id": 7,
"text": "\\varphi : V \\times V \\to \\Complex"
},
{
"math_id": 8,
"text": "\\begin{align}\n&\\varphi(x + y, z + w) = \\varphi(x, z) + \\varphi(x, w) + \\varphi(y, z) + \\varphi(y, w)\\\\\n&\\varphi(a x, b y) = \\overline{a}b\\,\\varphi(x,y)\\end{align}"
},
{
"math_id": 9,
"text": "x, y, z, w \\in V"
},
{
"math_id": 10,
"text": "a, b \\in \\Complex."
},
{
"math_id": 11,
"text": "\\overline{a}"
},
{
"math_id": 12,
"text": "a."
},
{
"math_id": 13,
"text": "\\overline{V} \\times V \\to \\Complex"
},
{
"math_id": 14,
"text": "\\overline{V}"
},
{
"math_id": 15,
"text": "V."
},
{
"math_id": 16,
"text": "\\overline{V} \\otimes V \\to \\Complex."
},
{
"math_id": 17,
"text": "z \\in V"
},
{
"math_id": 18,
"text": "w \\mapsto \\varphi(z, w)"
},
{
"math_id": 19,
"text": "V^*"
},
{
"math_id": 20,
"text": "w \\mapsto \\varphi(w, z)"
},
{
"math_id": 21,
"text": "\\varphi"
},
{
"math_id": 22,
"text": "\\psi"
},
{
"math_id": 23,
"text": "\\psi(w,z) = \\overline{\\varphi(z,w)}."
},
{
"math_id": 24,
"text": "\\left\\{ e_i \\right\\}_i"
},
{
"math_id": 25,
"text": "V,"
},
{
"math_id": 26,
"text": "A,"
},
{
"math_id": 27,
"text": "\\varphi(w,z) = \\varphi \\left(\\sum_i w_i e_i, \\sum_j z_j e_j \\right) = \\sum_i \\sum_j \\overline{w_i} z_j \\varphi\\left(e_i, e_j\\right) = w^\\dagger A z ."
},
{
"math_id": 28,
"text": "w^\\dagger"
},
{
"math_id": 29,
"text": "A"
},
{
"math_id": 30,
"text": "A_{ij} := \\varphi\\left(e_i, e_j\\right)."
},
{
"math_id": 31,
"text": "h : V \\times V \\to \\Complex"
},
{
"math_id": 32,
"text": "h(w,z) = \\overline{h(z, w)}."
},
{
"math_id": 33,
"text": "\\Complex^n"
},
{
"math_id": 34,
"text": "\\langle w,z \\rangle = \\sum_{i=1}^n \\overline{w}_i z_i."
},
{
"math_id": 35,
"text": "w w^* - z z^*"
},
{
"math_id": 36,
"text": "(V, h)"
},
{
"math_id": 37,
"text": "|z|_h = h(z, z)"
},
{
"math_id": 38,
"text": "z \\in V."
},
{
"math_id": 39,
"text": "s : V \\times V \\to \\Complex"
},
{
"math_id": 40,
"text": "s(w,z) = -\\overline{s(z, w)}."
},
{
"math_id": 41,
"text": "i := \\sqrt{-1}"
},
{
"math_id": 42,
"text": "|z|_s = s(z, z)"
},
{
"math_id": 43,
"text": "\\varphi(x \\alpha, y \\beta) = \\sigma(\\alpha) \\, \\varphi(x, y) \\, \\beta ."
},
{
"math_id": 44,
"text": "W^{\\perp}=\\{\\mathbf{v} \\in M \\mid \\varphi (\\mathbf{v}, \\mathbf{w})=0,\\ \\forall \\mathbf{w}\\in W\\} . "
},
{
"math_id": 45,
"text": "\\varphi(x, y) = 0"
},
{
"math_id": 46,
"text": "\\varphi(y, x) = 0."
},
{
"math_id": 47,
"text": "\\varphi(x, y) = \\sigma ( \\varphi (y, x)) \\, \\varepsilon ."
},
{
"math_id": 48,
"text": " \\sigma ( \\varepsilon ) = \\varepsilon^{-1} "
},
{
"math_id": 49,
"text": " \\sigma ( \\sigma ( \\alpha ) ) = \\varepsilon \\alpha \\varepsilon^{-1} ."
},
{
"math_id": 50,
"text": "\\varphi(x, y) = x_1 y_1{}^q + x_2 y_2{}^q + x_3 y_3{}^q."
},
{
"math_id": 51,
"text": "\\varphi(x + y, z + w) = \\varphi(x, z) + \\varphi(x, w) + \\varphi(y, z) + \\varphi(y, w)"
},
{
"math_id": 52,
"text": "\\varphi(c x, d y) = c \\, \\varphi(x,y) \\, \\sigma(d)"
},
{
"math_id": 53,
"text": "\\varphi(x, y) = \\sigma(\\varphi(y, x))"
}
] | https://en.wikipedia.org/wiki?curid=860138 |
860249 | Booster engine | A locomotive booster for steam locomotives is a small supplementary two-cylinder steam engine back-gear-connected to the trailing truck axle on the locomotive or one of the trucks on the tender. It was invented in 1918 by Howard L. Ingersoll, assistant to the president of the New York Central Railroad.
A rocking idler gear permits the booster engine to be put into operation by the driver (engineer). A geared booster engine drives one axle only and can be non-reversible, with one idler gear, or reversible, with two idler gears. There were variations built by the Franklin company which utilized side rods to transmit tractive force to all axles of the booster truck. These rod boosters were predominately used on the leading truck of the tender, though there is an example of a Lehigh Valley 4-8-4 using it as a trailing tender truck.
A booster engine is used to start a heavy train or maintain low speed under demanding conditions. Rated at about at speeds from , it can be cut in while moving at speeds under and is semi-automatically cut out via the engineer notching back the reverse gear or manually through knocking down the control latch up to a speed between , depending on the model and gearing of the booster. A tractive effort rating of was common, although ratings of up to around were possible.
Tender boosters are equipped with side-rods connecting axles on the lead truck. Such small side-rods restrict speed and are therefore confined mostly to switching locomotives, often used in transfer services between yards. Tender boosters were far less common than engine boosters; the inherent weight of the tenders would decrease as coal and water were consumed during operation, effectively lowering the adhesion of the booster-powered truck.
Reasons for use.
The booster was intended to make up for fundamental flaws in the design of the standard steam locomotive. To start off, most steam locomotives do not provide power to all wheels. The amount of force that can be applied to the rail depends on the weight on the driven wheels and the factor of adhesion of the wheels against the track. Unpowered wheels are generally needed to provide stability at speed, but at low speed they are not required, so they effectively 'waste' weight which could be used for traction. Therefore, the application of a booster engine to the previously unpowered axle meant that overall starting tractive effort was increased with zero penalty to the adhesion levels of the main engine.
Additionally, the "gearing" of a steam locomotive is fixed, because the pistons are linked directly to the wheels via rods and cranks. Therefore, a compromise must be struck between ability to exert high tractive effort at low speed and the ability to run fast without inducing excessive piston speeds (which would cause failure), or the exhaustion of steam. That compromise means that, at low speeds, a steam locomotive is not able to use all the power the boiler is capable of producing; it simply cannot use steam that quickly, so there is a substantial difference between the amount of steam the boiler can produce and the amount that can be used. The booster engine enabled that wasted potential to be put to use.
The increased starting tractive effort provided by the booster meant that, in some instances, railroads were able to reduce the number of, or eliminate the use of additional helper locomotives on heavier trains. This resulted in lower operating and maintenance costs, higher locomotive availability and productivity (ton-miles), and ultimately, greater profitability.
Disadvantages.
Boosters were costly to maintain, with their flexible steam and exhaust pipes, idler gear, etc. Improper operation could also result in undesirable drops in boiler pressure and/or damage to the booster. The booster and its associated components also added several tons of weight to the locomotive which would be considered "dead weight" at speeds above which the booster could not be used. Additionally, if the booster suffered a failure where the idler gear could not be disengaged, the entire locomotive would be speed restricted to 20 mph or less, until it could be taken out of service to facilitate repairs, decreasing locomotive availability.
Calculating tractive effort and operating speeds.
A rough calculation of booster tractive effort could be made with the following formula:
formula_0
where t is the tractive effort in pounds-force, c is a cylinder constant (commonly taken as about 0.85 to account for mean effective pressure), d is the booster cylinder diameter (bore) in inches, s is the piston stroke in inches, p is the boiler pressure in psi, r is the booster gear ratio, and w is the diameter of the boosted wheels in inches.
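Worked through with illustrative numbers (every input below, including the assumed 10 in × 12 in cylinder pair, pressure, gearing, and wheel diameter, is a hypothetical chosen only to exercise the formula, not a documented specification):

```python
def booster_te(c, d, s, p, r, w):
    """t = c * d**2 * s * p * r / w, with inch and psi inputs giving pounds-force."""
    return c * d ** 2 * s * p * r / w

# assumed inputs: 10 in x 12 in cylinders, 200 psi, 2.5:1 gearing, 36 in wheels
print(round(booster_te(c=0.85, d=10, s=12, p=200, r=2.5, w=36)))   # ~14167 lbf
```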
The typical locomotive booster employed a pair of by cylinders. Available gear ratios and associated operating speeds for both the Franklin type C and type E booster models are detailed in the table below.
Usage.
North America.
The booster saw most use in North America. Railway systems elsewhere often considered the expense and complexity unjustified.
Even in the North American region, booster engines were applied to only a fraction of all locomotives built. Some railroads used boosters extensively while others did not. The New York Central was the first railroad to use a booster in 1919 and remained a proponent of the device, applying them to all of its high-drivered 4-6-4 Hudson locomotives to increase their acceleration out of stations with crack passenger trains. The rival Pennsylvania Railroad, however, used few booster-equipped locomotives. Similarly, the Chesapeake & Ohio specified boosters on all of its Superpower locomotives aside from the Allegheny to increase tonnage ratings over some of the hilly terrain found on their main lines, while rival Norfolk & Western experimented with boosters briefly and found their cost unjustified, instead choosing to increase engine tractive effort through the raising of boiler pressure.
Canadian Pacific Railway rostered 3,257 steam locomotives acquired between 1881 and 1949, yet only 55 were equipped with boosters: 17 H1-class 4-6-4s, 2 K1-class 4-8-4s, and all 36 Selkirk 2-10-4s.
Australia.
In Australia, Victorian Railways equipped all but one of its X class 2-8-2 locomotives (built between 1929 and 1947) with a 'Franklin' two-cylinder booster engine, following a successful trial of the device on a smaller N class 2-8-2 in 1927. From 1929 onwards, South Australian Railways 500 class 4-8-2 heavy passenger locomotives were rebuilt into 4-8-4s with the addition of a booster truck.
New Zealand.
NZR's KB class of 1939 were built with a booster truck to enable the locomotives to handle the steeper grades of some South Island lines (particularly the "Cass Bank" of the Midland Line). Some boosters were later removed because of gear jamming.
Great Britain.
In Great Britain, eight locomotives of four different classes on the London and North Eastern Railway were equipped with booster units by Nigel Gresley. Four were existing locomotives rebuilt with boosters between 1923 and 1932: one of class C1 (in 1923); both of the conversions from class C7 to class C9 (in 1931); and one of class S1 (in 1932). The remaining four were all fitted to new locomotives: the two P1 2-8-2 locomotives, built in 1925; and two class S1 locomotives built in 1932. The boosters were removed between 1935 and 1938, apart from those on class S1 which were retained until 1943.
An early type of booster used in Great Britain was the steam tender, which was tried in 1859 by Benjamin Connor of the Caledonian Railway on four 2-4-0 locomotives. Archibald Sturrock of the Great Northern Railway (GNR) patented a similar system on 6 May 1863 (patent no. 1135). It was used on fifty GNR 0-6-0 locomotives: thirty converted from existing locomotives between 1863 and 1866, and twenty built new in 1865 (nos. 400–419). The equipment was removed from all fifty during 1867–68. | [
{
"math_id": 0,
"text": "t = \\frac {cd^2spr} {w},"
}
] | https://en.wikipedia.org/wiki?curid=860249 |
8603 | Diffraction | Phenomenon of the motion of waves
Diffraction is the interference or bending of waves around the corners of an obstacle or through an aperture into the region of geometrical shadow of the obstacle/aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. Italian scientist Francesco Maria Grimaldi coined the word "diffraction" and was the first to record accurate observations of the phenomenon in 1660.
In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic bending pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength, as shown in the inserted image. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. If there are multiple, closely spaced openings (e.g., a diffraction grating), a complex pattern of varying intensity can result.
These effects also occur when a light wave travels through a medium with a varying refractive index, or when a sound wave travels through a medium with varying acoustic impedance – all waves diffract, including gravitational waves, water waves, and other electromagnetic waves such as X-rays and radio waves. Furthermore, quantum mechanics also demonstrates that matter possesses wave-like properties and, therefore, undergoes diffraction (which is measurable at subatomic to molecular levels).
The amount of diffraction depends on the size of the gap. Diffraction is greatest when the size of the gap is similar to the wavelength of the wave. In this case, when the waves pass through the gap they become semi-circular.
History.
Leonardo da Vinci may have observed diffraction as a broadening of shadows. The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term "diffraction", from the Latin "diffringere", 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665. Isaac Newton studied these effects and attributed them to "inflexion" of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating to be discovered. Thomas Young performed a celebrated experiment in 1803 demonstrating interference from two closely spaced slits. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves. Augustin-Jean Fresnel did more definitive studies and calculations of diffraction, made public in 1816 and 1818, and thereby gave great support to the wave theory of light that had been advanced by Christiaan Huygens and reinvigorated by Young, against Newton's corpuscular theory of light.
Mechanism.
In classical physics diffraction arises because of how waves propagate; this is described by the Huygens–Fresnel principle and the principle of superposition of waves. The propagation of a wave can be visualized by considering every particle of the transmitted medium on a wavefront as a point source for a secondary spherical wave. The wave displacement at any subsequent point is the sum of these secondary waves. When waves are added together, their sum is determined by the relative phases as well as the amplitudes of the individual waves so that the summed amplitude of the waves can have any value between zero and the sum of the individual amplitudes. Hence, diffraction patterns usually have a series of maxima and minima.
In the modern quantum mechanical understanding of light propagation through a slit (or slits) every photon is described by its wavefunction that determines the probability distribution for the photon: the light and dark bands are the areas where the photons are more or less likely to be detected. The wavefunction is determined by the physical surroundings such as slit geometry, screen distance, and initial conditions when the photon is created. The wave nature of individual photons (as opposed to wave properties only arising from the interactions between multitudes of photons) was implied by a low-intensity double-slit experiment first performed by G. I. Taylor in 1909. The quantum approach has some striking similarities to the Huygens-Fresnel principle; based on that principle, as light travels through slits and boundaries, secondary point light sources are created near or along these obstacles, and the resulting diffraction pattern is going to be the intensity profile based on the collective interference of all these light sources that have different optical paths. In the quantum formalism, that is similar to considering the limited regions around the slits and boundaries from which photons are more likely to originate, and calculating the probability distribution (that is proportional to the resulting intensity of classical formalism).
There are various analytical models which allow the diffracted field to be calculated, including the Kirchhoff diffraction equation (derived from the wave equation), the Fraunhofer diffraction approximation of the Kirchhoff equation (applicable to the far field), the Fresnel diffraction approximation (applicable to the near field) and the Feynman path integral formulation. Most configurations cannot be solved analytically, but can yield numerical solutions through finite element and boundary element methods.
It is possible to obtain a qualitative understanding of many diffraction phenomena by considering how the relative phases of the individual secondary wave sources vary, and, in particular, the conditions in which the phase difference equals half a cycle in which case waves will cancel one another out.
The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem. For water waves, this is already the case; water waves propagate only on the surface of the water. For light, we can often neglect one direction if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes, we will have to take into account the full three-dimensional nature of the problem.
Examples.
The effects of diffraction are often seen in everyday life. The most striking examples of diffraction are those that involve light; for example, the closely spaced tracks on a CD or DVD act as a diffraction grating to form the familiar rainbow pattern seen when looking at a disc.
This principle can be extended to engineer a grating with a structure such that it will produce any diffraction pattern desired; the hologram on a credit card is an example.
Diffraction in the atmosphere by small particles can cause a corona - a bright disc and rings around a bright light source like the sun or the moon. At the opposite point one may also observe glory - bright rings around the shadow of the observer. In contrast to the corona, glory requires the particles to be transparent spheres (like fog droplets), since the backscattering of the light that forms the glory involves refraction and internal reflection within the droplet.
A shadow of a solid object, using light from a compact source, shows small fringes near its edges.
Diffraction spikes are diffraction patterns caused by a non-circular aperture in a camera or by support struts in a telescope; in normal vision, diffraction through the eyelashes may produce such spikes.
The speckle pattern which is observed when laser light falls on an optically rough surface is also a diffraction phenomenon. When deli meat appears to be iridescent, that is diffraction off the meat fibers. All these effects are a consequence of the fact that light propagates as a wave.
Diffraction can occur with any kind of wave. Ocean waves diffract around jetties and other obstacles. Sound waves can diffract around objects, which is why one can still hear someone calling even when hiding behind a tree.
Diffraction can also be a concern in some technical applications; it sets a fundamental limit to the resolution of a camera, telescope, or microscope.
Other examples of diffraction are considered below.
Single-slit diffraction.
A long slit of infinitesimal width which is illuminated by light diffracts the light into a series of circular waves and the wavefront which emerges from the slit is a cylindrical wave of uniform intensity, in accordance with the Huygens–Fresnel principle.
An illuminated slit that is wider than a wavelength produces interference effects in the space downstream of the slit. Assuming that the slit behaves as though it has a large number of point sources spaced evenly across its width, interference effects can be calculated. The analysis of this system is simplified if we consider light of a single wavelength. If the incident light is coherent, these sources all have the same phase. Light incident at a given point in the space downstream of the slit is made up of contributions from each of these point sources, and if the relative phases of these contributions vary by formula_2 or more, we may expect to find minima and maxima in the diffracted light. Such phase differences are caused by differences in the path lengths over which contributing rays reach the point from the slit.
We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. The light from a source located at the top edge of the slit interferes destructively with a source located at the middle of the slit, when the path difference between them is equal to formula_3. Similarly, the source just below the top of the slit will interfere destructively with the source located just below the middle of the slit at the same angle. We can continue this reasoning along the entire height of the slit to conclude that the condition for destructive interference for the entire slit is the same as the condition for destructive interference between two narrow slits a distance apart that is half the width of the slit. The path difference is approximately formula_4 so that the minimum intensity occurs at an angle formula_5 given by
formula_6
where formula_0 is the width of the slit, formula_5 is the angle of incidence at which the minimum intensity occurs, and formula_7 is the wavelength of the light.
A similar argument can be used to show that if we imagine the slit to be divided into four, six, eight parts, etc., minima are obtained at angles formula_8 given by
formula_9
where formula_10 is an integer other than zero.
There is no such simple argument to enable us to find the maxima of the diffraction pattern. The intensity profile can be calculated using the Fraunhofer diffraction equation as
formula_11
where formula_12 is the intensity at a given angle, formula_13 is the intensity at the central maximum (formula_14), which is also a normalization factor of the intensity profile that can be determined by an integration from formula_15 to formula_16 and conservation of energy, and formula_17, which is the unnormalized sinc function.
This analysis applies only to the far field (Fraunhofer diffraction), that is, at a distance much larger than the width of the slit.
From the intensity profile above, if formula_18, the intensity will have little dependence on formula_1, hence the wavefront emerging from the slit would resemble a cylindrical wave with azimuthal symmetry; if formula_19, only formula_20 would have appreciable intensity, hence the wavefront emerging from the slit would resemble that of geometrical optics.
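The Fraunhofer profile and the minima condition can be evaluated directly. A NumPy sketch follows (wavelength and slit width are assumed example values; NumPy's sinc is the normalized sin(πx)/(πx), which matches the form above):

```python
import numpy as np

lam = 532e-9                                  # wavelength (assumed: green laser light)
d = 5e-6                                      # slit width, roughly ten wavelengths
theta = np.linspace(-0.5, 0.5, 10_001)        # angle in radians

I = np.sinc(d * np.sin(theta) / lam) ** 2     # intensity relative to I(0) = 1

theta_min = np.arcsin(lam / d)                # first minimum: d * sin(theta) = lam
print(theta_min, I[np.argmin(np.abs(theta - theta_min))])   # intensity ~ 0 there
```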
When the incident angle formula_21 of the light onto the slit is non-zero (which causes a change in the path length), the intensity profile in the Fraunhofer regime (i.e. far field) becomes:
formula_22
The choice of plus/minus sign depends on the definition of the incident angle formula_21.
Diffraction grating.
A diffraction grating is an optical component with a regular pattern. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles "θ""m" which are given by the grating equation
formula_23
where formula_24 is the angle at which the light is incident, formula_0 is the separation of grating elements, and formula_25 is an integer which can be positive or negative.
The light diffracted by a grating is found by summing the light diffracted from each of the elements, and is essentially a convolution of diffraction and interference patterns.
The figure shows the light diffracted by 2-element and 5-element gratings where the grating spacings are the same; it can be seen that the maxima are in the same position, but the detailed structures of the intensities are different.
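The grating equation is straightforward to evaluate. The sketch below lists the propagating diffraction orders for assumed example values (a spacing roughly equal to a CD track pitch, normal incidence, green laser light):

```python
import numpy as np

lam = 532e-9                    # wavelength (assumed example)
d = 1.6e-6                      # grating spacing, about a CD track pitch (assumed)
theta_i = 0.0                   # normal incidence

m = np.arange(-3, 4)
s = m * lam / d - np.sin(theta_i)    # from d * (sin(theta_m) + sin(theta_i)) = m * lam
ok = np.abs(s) <= 1.0                # only these orders propagate
for mi, si in zip(m[ok], np.degrees(np.arcsin(s[ok]))):
    print(f"order {mi:+d}: {si:7.2f} deg")
```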
Circular aperture.
The far-field diffraction of a plane wave incident on a circular aperture is often referred to as the Airy disk. The variation in intensity with angle is given by
formula_26
where formula_27 is the radius of the circular aperture, formula_28 is equal to formula_29, and formula_30 is a Bessel function of the first kind. The smaller the aperture, the larger the spot size at a given distance, and the greater the divergence of the diffracted beams.
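A minimal numerical sketch of the Airy pattern, assuming SciPy is available for the Bessel function (aperture and wavelength values are illustrative); the first dark ring falls where ka sin θ ≈ 3.8317, i.e. at sin θ ≈ 1.22λ/(2a):

```python
import numpy as np
from scipy.special import j1   # Bessel function of the first kind, order one

def airy_intensity(theta, a, wavelength):
    """I/I0 for a circular aperture of radius a (the Airy pattern)."""
    x = (2 * np.pi / wavelength) * a * np.sin(theta)
    x = np.where(x == 0, 1e-12, x)        # avoid 0/0 at the central maximum
    return (2 * j1(x) / x) ** 2

theta_first_null = 1.22 * 500e-9 / 1e-3   # 1 mm aperture diameter (illustrative)
print(airy_intensity(np.array([0.0, theta_first_null]), a=0.5e-3, wavelength=500e-9))
```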
General aperture.
The wave that emerges from a point source has amplitude formula_31 at location formula_32 that is given by the solution of the frequency domain wave equation for a point source (the Helmholtz equation),
formula_33
where formula_34 is the 3-dimensional delta function. The delta function has only radial dependence, so the Laplace operator (a.k.a. scalar Laplacian) in the spherical coordinate system simplifies to
formula_35
(See del in cylindrical and spherical coordinates.) By direct substitution, the solution to this equation can be readily shown to be the scalar Green's function, which in the spherical coordinate system (and using the physics time convention formula_36) is
formula_37
This solution assumes that the delta function source is located at the origin. If the source is located at an arbitrary source point, denoted by the vector formula_38 and the field point is located at the point formula_32, then we may represent the scalar Green's function (for arbitrary source location) as
formula_39
Therefore, if an electric field formula_40 is incident on the aperture, the field produced by this aperture distribution is given by the surface integral
formula_41
where the source point in the aperture is given by the vector
formula_42
In the far field, wherein the parallel rays approximation can be employed, the Green's function,
formula_43
simplifies to
formula_44
as can be seen in the adjacent figure.
The expression for the far-zone (Fraunhofer region) field becomes
formula_45
Now, since
formula_46
and
formula_47
the expression for the Fraunhofer region field from a planar aperture now becomes
formula_48
Letting
formula_49
and
formula_50
the Fraunhofer region field of the planar aperture assumes the form of a Fourier transform
formula_51
In the far-field / Fraunhofer region, this becomes the spatial Fourier transform of the aperture distribution. Huygens' principle when applied to an aperture simply says that the far-field diffraction pattern is the spatial Fourier transform of the aperture shape, and this is a direct by-product of using the parallel-rays approximation, which is identical to doing a plane wave decomposition of the aperture plane fields (see Fourier optics).
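Since the far-zone field is the spatial Fourier transform of the aperture distribution, it can be sketched numerically with a 2-D FFT. A minimal illustration for a uniformly illuminated square aperture (grid size and aperture dimensions are arbitrary assumptions):

```python
import numpy as np

# The Fraunhofer field is proportional to the 2-D spatial Fourier transform of
# the aperture field, so a square aperture can be sketched with an FFT.
n, half = 512, 16                      # grid size and aperture half-width (samples)
aperture = np.zeros((n, n))
c = n // 2
aperture[c - half:c + half, c - half:c + half] = 1.0   # uniformly illuminated square

far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
intensity = np.abs(far_field) ** 2
intensity /= intensity.max()           # normalize the central maximum to 1

# A cut through the centre shows the sinc^2 fall-off expected for a slit.
print(np.round(intensity[c, c:c + 40:8], 4))
```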
Propagation of a laser beam.
The way in which the beam profile of a laser beam changes as it propagates is determined by diffraction. When the entire emitted beam has a planar, spatially coherent wave front, it approximates a Gaussian beam profile and has the lowest divergence for a given diameter. The smaller the output beam, the quicker it diverges. It is possible to reduce the divergence of a laser beam by first expanding it with one convex lens, and then collimating it with a second convex lens whose focal point is coincident with that of the first lens. The resulting beam has a larger diameter, and hence a lower divergence. Divergence of a laser beam may be reduced below the diffraction-limited divergence of a Gaussian beam, or even reversed to convergence, if the refractive index of the propagation media increases with the light intensity. This may result in a self-focusing effect.
When the wave front of the emitted beam has perturbations, only the transverse coherence length (where the wave front perturbation is less than 1/4 of the wavelength) should be considered as a Gaussian beam diameter when determining the divergence of the laser beam. If the transverse coherence length in the vertical direction is higher than in the horizontal direction, the laser beam divergence will be lower in the vertical direction than in the horizontal.
Diffraction-limited imaging.
The ability of an imaging system to resolve detail is ultimately limited by diffraction. This is because a plane wave incident on a circular lens or mirror is diffracted as described above. The light is not focused to a point but forms an Airy disk having a central spot in the focal plane whose radius (as measured to the first null) is
formula_52
where formula_7 is the wavelength of the light and formula_53 is the f-number (focal length formula_54 divided by aperture diameter formula_55) of the imaging optics; this is strictly accurate for formula_56 (paraxial case). In object space, the corresponding angular resolution is
formula_57
where formula_55 is the diameter of the entrance pupil of the imaging lens (e.g., of a telescope's main mirror).
Two point sources will each produce an Airy pattern – see the photo of a binary star. As the point sources move closer together, the patterns will start to overlap, and ultimately they will merge to form a single pattern, in which case the two point sources cannot be resolved in the image. The Rayleigh criterion specifies that two point sources are considered "resolved" if the separation of the two images is at least the radius of the Airy disk, i.e. if the first minimum of one coincides with the maximum of the other.
Thus, the larger the aperture of the lens compared to the wavelength, the finer the resolution of an imaging system. This is one reason astronomical telescopes require large objectives, and why microscope objectives require a large numerical aperture (large aperture diameter compared to working distance) in order to obtain the highest possible resolution.
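A minimal numerical sketch of the angular resolution limit (the 100 mm aperture and 550 nm wavelength are illustrative assumptions, not from the text):

```python
import math

def rayleigh_limit(wavelength, aperture_diameter):
    """Diffraction-limited angular resolution in radians: ~1.22 * lambda / D."""
    return 1.22 * wavelength / aperture_diameter

theta = rayleigh_limit(550e-9, 0.1)    # 100 mm objective at 550 nm (illustrative)
print(f"{theta:.2e} rad = {math.degrees(theta) * 3600:.2f} arcseconds")
```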
Speckle patterns.
The speckle pattern seen when using a laser pointer is another diffraction phenomenon. It is a result of the superposition of many waves with different phases, which are produced when a laser beam illuminates a rough surface. They add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly.
Babinet's principle.
Babinet's principle is a useful theorem stating that the diffraction pattern from an opaque body is identical to that from a hole of the same size and shape, but with differing intensities. This means that the interference conditions of a single obstruction would be the same as that of a single slit.
"Knife edge".
The knife-edge effect or knife-edge diffraction is a truncation of a portion of the incident radiation that strikes a sharp well-defined obstacle, such as a mountain range or the wall of a building.
The knife-edge effect is explained by the Huygens–Fresnel principle, which states that a well-defined obstruction to an electromagnetic wave acts as a secondary source, and creates a new wavefront. This new wavefront propagates into the geometric shadow area of the obstacle.
Knife-edge diffraction is an outgrowth of the "half-plane problem", originally solved by Arnold Sommerfeld using a plane wave spectrum formulation. A generalization of the half-plane problem is the "wedge problem", solvable as a boundary value problem in cylindrical coordinates. The solution in cylindrical coordinates was then extended to the optical regime by Joseph B. Keller, who introduced the notion of diffraction coefficients through his geometrical theory of diffraction (GTD). Pathak and Kouyoumjian extended the (singular) Keller coefficients via the uniform theory of diffraction (UTD).
Patterns.
Several qualitative observations can be made of diffraction in general: the angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction; the diffraction angles are invariant under scaling, depending only on the ratio of the wavelength to the size of the diffracting object; and when the diffracting object has a periodic structure, for example in a diffraction grating, the features generally become sharper.
Matter wave diffraction.
According to quantum theory every particle exhibits wave properties and can therefore diffract. Diffraction of electrons and neutrons is one of the powerful arguments in favor of quantum mechanics. The wavelength associated with a particle is the de Broglie wavelength
formula_58
where formula_59 is the Planck constant and formula_60 is the momentum of the particle (mass × velocity for slow-moving particles). For example, a sodium atom traveling at about 300 m/s would have a de Broglie wavelength of about 50 picometres.
Diffraction of matter waves has been observed for small particles, like electrons, neutrons, atoms, and even large molecules. The short wavelength of these matter waves makes them ideally suited to study the atomic crystal structure of solids, small molecules and proteins.
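A quick check of the sodium-atom example above (a minimal sketch using standard constant values):

```python
import math

h = 6.62607015e-34               # Planck constant, J s
u = 1.66053907e-27               # atomic mass unit, kg

def de_broglie(mass_kg, velocity):
    """de Broglie wavelength h / (m v) for a slow (non-relativistic) particle."""
    return h / (mass_kg * velocity)

lam = de_broglie(22.99 * u, 300.0)     # sodium atom at ~300 m/s, as in the text
print(f"{lam * 1e12:.0f} pm")          # ~58 pm, i.e. about 50 picometres
```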
Bragg diffraction.
Diffraction from a large three-dimensional periodic structure such as many thousands of atoms in a crystal is called Bragg diffraction.
It is similar to what occurs when waves are scattered from a diffraction grating. Bragg diffraction is a consequence of interference between waves reflecting from many different crystal planes.
The condition of constructive interference is given by "Bragg's law":
formula_61
where formula_7 is the wavelength, formula_0 is the distance between crystal planes, formula_1 is the angle of the diffracted wave, and formula_25 is an integer known as the "order" of the diffracted beam.
Bragg diffraction may be carried out using either electromagnetic radiation of very short wavelength like X-rays or matter waves like neutrons (and electrons) whose wavelength is on the order of (or much smaller than) the atomic spacing. The pattern produced gives information of the separations of crystallographic planes formula_0, allowing one to deduce the crystal structure.
For completeness, Bragg diffraction is a limit for a large number of atoms with X-rays or neutrons, and is rarely valid for electron diffraction or with solid particles in the size range of less than 50 nanometers.
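A minimal sketch of using Bragg's law to deduce the plane spacing from a measured diffraction angle (the X-ray wavelength and angle are illustrative assumptions, not from the text):

```python
import math

def bragg_spacing(wavelength, theta_deg, m=1):
    """Solve m * lambda = 2 * d * sin(theta) for the plane spacing d."""
    return m * wavelength / (2.0 * math.sin(math.radians(theta_deg)))

# Cu K-alpha X-rays (~0.154 nm) diffracted at theta = 22.3 degrees, first order
print(f"d = {bragg_spacing(0.154e-9, 22.3) * 1e9:.3f} nm")
```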
Coherence.
The description of diffraction relies on the interference of waves emanating from the same source taking different paths to the same point on a screen. In this description, the difference in phase between waves that took different paths is only dependent on the effective path length. This does not take into account the fact that waves that arrive at the screen at the same time were emitted by the source at different times. The initial phase with which the source emits waves can change over time in an unpredictable way. This means that waves emitted by the source at times that are too far apart can no longer form a constant interference pattern since the relation between their phases is no longer time independent.
The length over which the phase in a beam of light is correlated is called the coherence length. In order for interference to occur, the path length difference must be smaller than the coherence length. This is sometimes referred to as spectral coherence, as it is related to the presence of different frequency components in the wave. In the case of light emitted by an atomic transition, the coherence length is related to the lifetime of the excited state from which the atom made its transition.
If waves are emitted from an extended source, this can lead to incoherence in the transversal direction. When looking at a cross section of a beam of light, the length over which the phase is correlated is called the transverse coherence length. In the case of Young's double-slit experiment, this would mean that if the transverse coherence length is smaller than the spacing between the two slits, the resulting pattern on a screen would look like two single-slit diffraction patterns.
In the case of particles like electrons, neutrons, and atoms, the coherence length is related to the spatial extent of the wave function that describes the particle.
Applications.
Diffraction before destruction.
A new way to image single biological particles has emerged since the 2010s, utilising the bright X-rays generated by X-ray free-electron lasers. Because these pulses last only femtoseconds, radiation damage can be outrun, allowing diffraction patterns of single biological macromolecules to be recorded before the sample is destroyed.
| [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "2\\pi"
},
{
"math_id": 3,
"text": "\\lambda/2"
},
{
"math_id": 4,
"text": "\\frac{d \\sin(\\theta)}{2}"
},
{
"math_id": 5,
"text": "\\theta_\\text{min}"
},
{
"math_id": 6,
"text": "d\\,\\sin\\theta_\\text{min} = \\lambda,"
},
{
"math_id": 7,
"text": "\\lambda"
},
{
"math_id": 8,
"text": "\\theta_{n}"
},
{
"math_id": 9,
"text": "d\\,\\sin\\theta_{n} = n \\lambda,"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "I(\\theta) = I_0 \\, \\operatorname{sinc}^2 \\left( \\frac{d \\pi}{\\lambda} \\sin\\theta \\right),"
},
{
"math_id": 12,
"text": "I(\\theta)"
},
{
"math_id": 13,
"text": "I_0"
},
{
"math_id": 14,
"text": "\\theta = 0"
},
{
"math_id": 15,
"text": "\\theta = -\\frac{\\pi}{2}"
},
{
"math_id": 16,
"text": "\\theta = \\frac{\\pi}{2}"
},
{
"math_id": 17,
"text": "\\operatorname{sinc} x = \\frac{\\sin x}{x}"
},
{
"math_id": 18,
"text": "d \\ll \\lambda"
},
{
"math_id": 19,
"text": "d \\gg \\lambda"
},
{
"math_id": 20,
"text": "\\theta \\approx 0"
},
{
"math_id": 21,
"text": "\\theta_\\text{i}"
},
{
"math_id": 22,
"text": "I(\\theta) = I_0 \\, \\operatorname{sinc}^2 \\left[ \\frac{d \\pi}{\\lambda} (\\sin\\theta \\pm \\sin\\theta_\\text{i})\\right]"
},
{
"math_id": 23,
"text": " d \\left( \\sin{\\theta_m} \\pm \\sin{\\theta_i} \\right) = m \\lambda,"
},
{
"math_id": 24,
"text": "\\theta_{i}"
},
{
"math_id": 25,
"text": "m"
},
{
"math_id": 26,
"text": "I(\\theta) = I_0 \\left ( \\frac{2 J_1(ka \\sin \\theta)}{ka \\sin \\theta} \\right )^2 ,"
},
{
"math_id": 27,
"text": "a"
},
{
"math_id": 28,
"text": "k"
},
{
"math_id": 29,
"text": "2\\pi/\\lambda"
},
{
"math_id": 30,
"text": "J_1"
},
{
"math_id": 31,
"text": "\\psi"
},
{
"math_id": 32,
"text": "\\mathbf r"
},
{
"math_id": 33,
"text": "\\nabla^2 \\psi + k^2 \\psi = \\delta(\\mathbf r),"
},
{
"math_id": 34,
"text": " \\delta(\\mathbf r)"
},
{
"math_id": 35,
"text": "\\nabla ^2\\psi = \\frac{1}{r} \\frac {\\partial ^2}{\\partial r^2} (r \\psi) ."
},
{
"math_id": 36,
"text": "e^{-i \\omega t}"
},
{
"math_id": 37,
"text": "\\psi(r) = \\frac{e^{ikr}}{4 \\pi r}."
},
{
"math_id": 38,
"text": "\\mathbf r'"
},
{
"math_id": 39,
"text": "\\psi(\\mathbf r | \\mathbf r') = \\frac{e^{ik | \\mathbf r - \\mathbf r' | }}{4 \\pi | \\mathbf r - \\mathbf r' |}."
},
{
"math_id": 40,
"text": "E_\\mathrm{inc}(x, y)"
},
{
"math_id": 41,
"text": "\\Psi(r)\\propto \\iint\\limits_\\mathrm{aperture} \\!\\! E_\\mathrm{inc}(x',y') ~ \\frac{e^{ik | \\mathbf r - \\mathbf r'|}}{4 \\pi | \\mathbf r - \\mathbf r' |} \\,dx'\\, dy',"
},
{
"math_id": 42,
"text": "\\mathbf{r}' = x' \\mathbf{\\hat{x}} + y' \\mathbf{\\hat{y}}."
},
{
"math_id": 43,
"text": "\\psi(\\mathbf r | \\mathbf r') = \\frac{e^{ik | \\mathbf r - \\mathbf r' |} }{4 \\pi | \\mathbf r - \\mathbf r' |},"
},
{
"math_id": 44,
"text": " \\psi(\\mathbf{r} | \\mathbf{r}') = \\frac{e^{ik r}}{4 \\pi r} e^{-ik ( \\mathbf{r}' \\cdot \\mathbf{\\hat{r}})}"
},
{
"math_id": 45,
"text": "\\Psi(r)\\propto \\frac{e^{ik r}}{4 \\pi r} \\iint\\limits_\\mathrm{aperture} \\!\\! E_\\mathrm{inc}(x',y') e^{-ik ( \\mathbf{r}' \\cdot \\mathbf{\\hat{r}} ) } \\, dx' \\,dy'."
},
{
"math_id": 46,
"text": "\\mathbf{r}' = x' \\mathbf{\\hat{x}} + y' \\mathbf{\\hat{y}}"
},
{
"math_id": 47,
"text": "\\mathbf{\\hat{r}} = \\sin \\theta \\cos \\phi \\mathbf{\\hat{x}} + \\sin \\theta ~ \\sin \\phi ~ \\mathbf{\\hat{y}} + \\cos \\theta \\mathbf{\\hat{z}},"
},
{
"math_id": 48,
"text": "\\Psi(r) \\propto \\frac{e^{ik r}}{4 \\pi r} \\iint\\limits_\\mathrm{aperture} \\!\\! E_\\mathrm{inc}(x',y') e^{-ik \\sin \\theta (\\cos \\phi x' + \\sin \\phi y')} \\, dx' \\, dy'."
},
{
"math_id": 49,
"text": "k_x = k \\sin \\theta \\cos \\phi "
},
{
"math_id": 50,
"text": "k_y = k \\sin \\theta \\sin \\phi \\,,"
},
{
"math_id": 51,
"text": "\\Psi(r)\\propto \\frac{e^{ik r}}{4 \\pi r} \\iint\\limits_\\mathrm{aperture} \\!\\! E_\\mathrm{inc}(x',y') e^{-i (k_x x' + k_y y') } \\, dx' \\, dy' ,"
},
{
"math_id": 52,
"text": " \\Delta x = 1.22 \\lambda N ,"
},
{
"math_id": 53,
"text": "N"
},
{
"math_id": 54,
"text": "f"
},
{
"math_id": 55,
"text": "D"
},
{
"math_id": 56,
"text": "N \\gg 1"
},
{
"math_id": 57,
"text": " \\theta \\approx \\sin \\theta = 1.22 \\frac{\\lambda}{D},"
},
{
"math_id": 58,
"text": "\\lambda=\\frac{h}{p} \\, ,"
},
{
"math_id": 59,
"text": "h"
},
{
"math_id": 60,
"text": "p"
},
{
"math_id": 61,
"text": " m \\lambda = 2 d \\sin \\theta ,"
}
] | https://en.wikipedia.org/wiki?curid=8603 |
860507 | Centered polygonal number | Class of series of figurate numbers, each having a central dot
The centered polygonal numbers are a class of series of figurate numbers, each formed by a central dot, surrounded by polygonal layers of dots with a constant number of sides. Each side of a polygonal layer contains one more dot than each side in the previous layer; so starting from the second polygonal layer, each layer of a centered "k"-gonal number contains "k" more dots than the previous layer.
Examples.
Each centered "k"-gonal number in the series is "k" times the previous triangular number, plus 1. This can be formalized by the expression formula_0, where "n" is the series rank, starting with 0 for the initial 1. For example, each centered square number in the series is four times the previous triangular number, plus 1. This can be formalized by the expression formula_1.
These series consist of the centered triangular numbers (1, 4, 10, 19, 31, ...), the centered square numbers (1, 5, 13, 25, 41, ...), the centered pentagonal numbers (1, 6, 16, 31, 51, ...), the centered hexagonal numbers (1, 7, 19, 37, 61, ...), and so on.
The following diagrams show a few examples of centered polygonal numbers and their geometric construction. Compare these diagrams with the diagrams in Polygonal number.
Formulas.
As can be seen in the above diagrams, the "n"th centered "k"-gonal number can be obtained by placing "k" copies of the ("n"−1)th triangular number around a central point; therefore, the "n"th centered "k"-gonal number is equal to
formula_2
The difference of the "n"-th and the ("n"+1)-th consecutive centered "k"-gonal numbers is "kn", since the ("n"+1)-th figure's outermost layer contains "kn" dots; this follows directly from the formula above.
The "n"-th centered "k"-gonal number is equal to the "n"-th regular "k"-gonal number plus ("n"-1)2.
Just as is the case with regular polygonal numbers, the first centered "k"-gonal number is 1. Thus, for any "k", 1 is both "k"-gonal and centered "k"-gonal. The next number to be both "k"-gonal and centered "k"-gonal can be found using the formula:
formula_3
which tells us that 10 is both triangular and centered triangular, 25 is both square and centered square, etc.
Whereas a prime number "p" cannot be a polygonal number (except the trivial case, i.e. each "p" is the second "p"-gonal number), many centered polygonal numbers are primes. In fact, if "k" ≥ 3, "k" ≠ 8, "k" ≠ 9, then there are infinitely many centered "k"-gonal numbers which are primes (assuming the Bunyakovsky conjecture). Since all centered octagonal numbers are also square numbers, and all centered nonagonal numbers are also triangular numbers (and not equal to 3), neither of these can be prime.
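These identities are easy to verify numerically. A minimal sketch (the helper function names are ad hoc):

```python
def centered(k, n):
    """n-th centered k-gonal number: C(k, n) = k*n*(n-1)/2 + 1, for n = 1, 2, ..."""
    return k * n * (n - 1) // 2 + 1

def polygonal(k, n):
    """n-th regular k-gonal number."""
    return ((k - 2) * n * n - (k - 4) * n) // 2

for k in (3, 4, 5, 6):
    print(k, [centered(k, n) for n in range(1, 6)])
    # identity: C(k, n) equals the regular k-gonal number plus (n - 1)^2
    assert all(centered(k, n) == polygonal(k, n) + (n - 1) ** 2 for n in range(1, 100))
    # k^2 (k - 1) / 2 + 1 is both k-gonal and centered k-gonal
    x = k * k * (k - 1) // 2 + 1
    assert x in {polygonal(k, n) for n in range(1, 100)}
    assert x in {centered(k, n) for n in range(1, 100)}
```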
Sum of reciprocals.
The sum of reciprocals for the centered "k"-gonal numbers is
formula_4, if "k" ≠ 8
formula_5, if "k" = 8
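A quick numerical check of both cases (a minimal sketch; the partial sums converge slowly, so a large term count is used):

```python
import math

def reciprocal_sum(k, terms=1_000_000):
    """Partial sum of 1 / C(k, n) over the centered k-gonal numbers."""
    return sum(1.0 / (k * n * (n - 1) // 2 + 1) for n in range(1, terms))

# k = 8: the centered octagonal numbers are the odd squares, so the sum is pi^2/8.
print(reciprocal_sum(8), math.pi ** 2 / 8)            # both ~1.2337

# Any other k, e.g. k = 12, matches the closed form quoted above.
k = 12
closed = (2 * math.pi / (k * math.sqrt(1 - 8 / k))) * math.tan(
    (math.pi / 2) * math.sqrt(1 - 8 / k))
print(reciprocal_sum(k), closed)                      # both ~1.159
```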
| [
{
"math_id": 0,
"text": "\\frac{kn(n+1)}{2} +1"
},
{
"math_id": 1,
"text": "\\frac{4n(n+1)}{2} +1"
},
{
"math_id": 2,
"text": "C_{k,n} =\\frac{kn}{2}(n-1)+1."
},
{
"math_id": 3,
"text": "\\frac{k^2}{2}(k-1)+1"
},
{
"math_id": 4,
"text": "\\frac{2\\pi}{k\\sqrt{1-\\frac{8}{k}}}\\tan\\left(\\frac{\\pi}{2}\\sqrt{1-\\frac{8}{k}}\\right)"
},
{
"math_id": 5,
"text": "\\frac{\\pi^2}{8}"
}
] | https://en.wikipedia.org/wiki?curid=860507 |
8605453 | Solar neutrino unit | The solar neutrino unit (SNU) is a unit of solar neutrino flux widely used in neutrino astronomy and radiochemical neutrino experiments. It is equal to the neutrino flux producing 10^−36 captures per target atom per second. It is convenient given the very low event rates in radiochemical experiments. Typical rates are expected to range from tens of SNU to a hundred SNU.
There are two ways of detecting solar neutrinos: radiochemical and real time experiments. The principle of radiochemical experiments is the reaction of the form
formula_0.
The daughter nucleus's decay is used in the detection. Production rate of the daughter nucleus is given by
formula_1,
where formula_2 is the solar neutrino flux, formula_3 is the energy-dependent interaction cross section, and formula_4 is the number of target atoms.
With a typical neutrino flux of 10^10 cm^−2 s^−1 and a typical interaction cross section of about 10^−45 cm^2, about 10^30 target atoms are required to produce one event per day. Taking into account that 1 mole is equal to 6.022×10^23 atoms, this number corresponds to kilotons of the target substances, whereas present neutrino detectors operate with much smaller quantities.
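A back-of-the-envelope version of this estimate, assuming an energy-independent cross section so that the integral collapses to R ≈ NΦσ (values are the illustrative ones from the text):

```python
# Back-of-the-envelope capture rate, assuming an energy-independent cross
# section so R = N * integral(Phi * sigma dE) collapses to R ~ N * Phi * sigma.
flux = 1e10        # neutrinos cm^-2 s^-1
sigma = 1e-45      # interaction cross section, cm^2
n_atoms = 1e30     # number of target atoms

rate = n_atoms * flux * sigma
print(rate * 86400, "captures per day")   # ~0.86, i.e. about one event per day

snu = flux * sigma / 1e-36                # per-atom rate expressed in SNU
print(snu, "SNU")                         # 10 SNU for these numbers
```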
| [
{
"math_id": 0,
"text": "^{A}_{N}Z + \\nu_{e}\\longrightarrow^{A}_{N-1}(Z+1)+e^{-}"
},
{
"math_id": 1,
"text": "R = N\\int\\Phi(E)\\sigma(E)dE"
},
{
"math_id": 2,
"text": "\\Phi"
},
{
"math_id": 3,
"text": "\\sigma"
},
{
"math_id": 4,
"text": "N"
}
] | https://en.wikipedia.org/wiki?curid=8605453 |
86058 | Binoculars | Pair of telescopes mounted side-by-side
Binoculars or field glasses are two refracting telescopes mounted side-by-side and aligned to point in the same direction, allowing the viewer to use both eyes (binocular vision) when viewing distant objects. Most binoculars are sized to be held using both hands, although sizes vary widely from opera glasses to large pedestal-mounted military models.
Unlike a (monocular) telescope, binoculars give users a three-dimensional image: each eyepiece presents a slightly different image to each of the viewer's eyes and the parallax allows the visual cortex to generate an impression of depth.
Optical design evolution.
Galilean.
Almost from the invention of the telescope in the 17th century, the advantages of mounting two of them side by side for binocular vision seem to have been explored. Most early binoculars used Galilean optics; that is, they used a convex objective and a concave eyepiece lens. The Galilean design has the advantage of presenting an erect image but has a narrow field of view and is not capable of very high magnification. This type of construction is still used in very cheap models and in opera glasses or theater glasses. The Galilean design is also used in low magnification binocular surgical and jewelers' loupes because they can be very short and produce an upright image without extra or unusual erecting optics, reducing expense and overall weight. They also have large exit pupils, making centering less critical, and the narrow field of view works well in those applications. These are typically mounted on an eyeglass frame or custom-fit onto eyeglasses.
Keplerian.
An improved image and higher magnification are achieved in binoculars employing Keplerian optics, where the image formed by the objective lens is viewed through a positive eyepiece lens (ocular).
Since the Keplerian configuration produces an inverted image, different methods are used to turn the image the right way up.
Erecting lenses.
In aprismatic binoculars with Keplerian optics (which were sometimes called "twin telescopes"), each tube has one or two additional lenses (relay lenses) between the objective and the eyepiece. These lenses are used to erect the image. Binoculars with erecting lenses had a serious disadvantage: they were too long. Such binoculars were popular in the 1800s (for example, G. & S. Merz models). The Keplerian "twin telescope" binoculars were optically and mechanically hard to manufacture, and it took until the 1890s to supersede them with better prism-based technology.
Prism.
Optical prisms added to the design enabled the display of the image the right way up without needing as many lenses, and decreasing the overall length of the instrument, typically using Porro prism or roof prism systems. The Italian inventor of optical instruments Ignazio Porro worked during the 1860s with Hofmann in Paris to produce monoculars using the same prism configuration used in modern Porro prism binoculars. At the 1873 Vienna Trade Fair German optical designer and scientist Ernst Abbe displayed a prism telescope with two cemented Porro prisms. The optical solutions of Porro and Abbe were theoretically sound, but the employed prism systems failed in practice primarily due to insufficient glass quality.
Porro.
"Porro prism binoculars" are named after Ignazio Porro, who patented this image erecting system in 1854. The later refinement by Ernst Abbe and his cooperation with glass scientist Otto Schott, who managed to produce a better type of Crown glass in 1888, and instrument maker Carl Zeiss resulted in 1894 in the commercial introduction of improved 'modern' Porro prism binoculars by the Carl Zeiss company. Binoculars of this type use a pair of Porro prisms in a Z-shaped configuration to erect the image. This results in wide binoculars, with objective lenses that are well separated and offset from the eyepieces, giving a better sensation of depth. Porro prism designs have the added benefit of folding the optical path so that the physical length of the binoculars is less than the focal length of the objective. Porro prism binoculars were made in such a way to erect an image in a relatively small space, thus binoculars using prisms started in this way.
Porro prisms typically require alignment tolerances of 10 arcminutes (1/6 of 1 degree) for their optical elements (collimation) at the factory. Sometimes Porro prism binoculars need their prisms re-aligned to bring them into collimation. Good-quality Porro prism design binoculars often feature grooves or notches ground across the width of the hypotenuse face center of the prisms, to eliminate image quality reducing abaxial non-image-forming reflections. Porro prism binoculars can offer good optical performance with relatively little manufacturing effort, and as human eyes are ergonomically limited by their interpupillary distance, the offset and separation of big (60+ mm wide) diameter objective lenses and the eyepieces becomes a practical advantage in a stereoscopic optical product.
In the early 2020s, the commercial market share of Porro prism-type binoculars had become the second most numerous compared to other prism-type optical designs.
There are alternative Porro prism-based systems available that find application in binoculars on a small scale, like the Perger prism, which offers a significantly reduced axial offset compared to traditional Porro prism designs.
Roof.
"Roof prism binoculars" may have appeared as early as the 1870s in a design by Achille Victor Emile Daubresse. In 1897 Moritz Hensoldt began marketing pentaprism based roof prism binoculars.
Most roof prism binoculars use either the Schmidt–Pechan prism (invented in 1899) or the Abbe–Koenig prism (named after Ernst Karl Abbe and Albert König and patented by Carl Zeiss in 1905) designs to erect the image and fold the optical path. They have objective lenses that are approximately in a line with the eyepieces.
Binoculars with roof prisms have been in use to a large extent since the second half of the 20th century. Roof prism designs result in objective lenses that are almost or totally in line with the eyepieces, creating an instrument that is narrower and more compact than Porro prisms and lighter. There is also a difference in image brightness. Porro prism and Abbe–Koenig roof-prism binoculars will inherently produce a brighter image than Schmidt–Pechan roof prism binoculars of the same magnification, objective size, and optical quality, because the Schmidt-Pechan roof-prism design employs mirror-coated surfaces that reduce light transmission.
In roof prism designs, optically relevant prism angles must be correct within 2 arcseconds (1/1800 of 1 degree) to avoid seeing an obstructive double image. Maintaining such tight production tolerances for the alignment of their optical elements by laser or interference (collimation) at an affordable price point is challenging. To avoid the need for later re-collimation, the prisms are generally aligned at the factory and then permanently fixed to a metal plate. These complicating production requirements make high-quality roof prism binoculars more costly to produce than Porro prism binoculars of equivalent optical quality, and until phase correction coatings were invented in 1988, Porro prism binoculars optically offered superior resolution and contrast to non-phase corrected roof prism binoculars.
In the early 2020s, the commercial offering of Schmidt-Pechan designs exceeds the Abbe-Koenig design offerings and had become the dominant optical design compared to other prism-type designs.
Alternative roof prism-based designs like the Uppendahl prism system composed of three prisms cemented together were and are commercially offered on a small scale.
Optical systems and their practical effect on binoculars housing shapes.
The optical system of modern binoculars consists of three main optical assemblies: the objective, the prism-based image erecting system, and the eyepiece (ocular).
Although the different prism systems have design-induced optical advantages and disadvantages when compared, technological progress in fields like optical coatings and optical glass manufacturing has, by the early 2020s, made these differences practically irrelevant in high-quality binoculars. At high-quality price points, similar optical performance can be achieved with every commonly applied optical system. This was not possible 20–30 years earlier, as the optical disadvantages and problems that occurred could not then be technically mitigated to practical irrelevance. Relevant differences in optical performance in the sub-high-quality price categories can still be observed with roof prism-type binoculars today, because well-executed technical problem mitigation measures and narrow manufacturing tolerances remain difficult and cost-intensive.
Optical parameters.
Binoculars are usually designed for specific applications. These different designs require certain optical parameters which may be listed on the prism cover plate of the binoculars. Those parameters are:
Magnification.
Given as the first number in a binocular description (e.g., 7×35, 10×50), magnification is the ratio of the focal length of the objective divided by the focal length of the eyepiece. This gives the magnifying power of binoculars (sometimes expressed as "diameters"). A magnification factor of 7, for example, produces an image 7 times larger than the original seen from that distance. The desirable amount of magnification depends upon the intended application, and in most binoculars is a permanent, non-adjustable feature of the device (zoom binoculars are the exception). Hand-held binoculars typically have magnifications ranging from 7× to 10×, so they will be less susceptible to the effects of shaking hands. A larger magnification leads to a smaller field of view and may require a tripod for image stability. Some specialized binoculars for astronomy or military use have magnifications ranging from 15× to 25×.
Objective diameter.
Given as the second number in a binocular description (e.g., 7×35, 10×50), the diameter of the objective lens determines the resolution (sharpness) and how much light can be gathered to form an image. When two different binoculars have equal magnification, equal quality, and produce a sufficiently matched exit pupil (see below), the larger objective diameter produces a "brighter" and sharper image. An 8×40, then, will produce a "brighter" and sharper image than an 8×25, even though both enlarge the image an identical eight times. The larger front lenses in the 8×40 also produce wider beams of light (exit pupil) that leave the eyepieces. This makes it more comfortable to view with an 8×40 than an 8×25. A pair of 10×50 binoculars is better than a pair of 8×40 binoculars for magnification, sharpness and luminous flux. Objective diameter is usually expressed in millimeters. It is customary to categorize binoculars by the "magnification" × "the objective diameter"; e.g., "7×50". Smaller binoculars may have a diameter of as low as 22 mm; 35 mm and 50 mm are common diameters for field binoculars; astronomical binoculars have diameters ranging from 70 mm to 150 mm.
Field of view.
The field of view of a pair of binoculars depends on its optical design and in general is inversely proportional to the magnifying power. It is usually notated in a linear value, such as how many feet (meters) in width will be seen at 1,000 yards (or 1,000 m), or in an angular value of how many degrees can be viewed.
Exit pupil.
Binoculars concentrate the light gathered by the objective into a beam, of which the diameter, the exit pupil, is the objective diameter divided by the magnifying power. For maximum effective light-gathering and brightest image, and to maximize the sharpness, the exit pupil should at least equal the diameter of the pupil of the human eye: about 7 mm at night and about 3 mm in the daytime, decreasing with age. If the cone of light streaming out of the binoculars is "larger" than the pupil it is going into, any light larger than the pupil is wasted. In daytime use, the human pupil is typically dilated about 3 mm, which is about the exit pupil of a 7×21 binocular. Much larger 7×50 binoculars will produce a 7.14 mm cone of light bigger than the pupil it is entering, and this light will, in the daytime, be wasted. An exit pupil that is too "small" also will present an observer with a dimmer view, since only a small portion of the light-gathering surface of the retina is used. For applications where equipment must be carried (birdwatching, hunting), users opt for much smaller (lighter) binoculars with an exit pupil that matches their expected iris diameter so they will have maximum resolution but are not carrying the weight of wasted aperture.
A larger exit pupil makes it easier to put the eye where it can receive the light; anywhere in the large exit pupil cone of light will do. This ease of placement helps avoid, especially in large field of view binoculars, vignetting, which brings to the viewer an image with its borders darkened because the light from them is partially blocked, and it means that the image can be quickly found, which is important when looking at birds or game animals that move rapidly, or for a seafarer on the deck of a pitching vessel or observing from a moving vehicle. Narrow exit pupil binoculars also may be fatiguing because the instrument must be held exactly in place in front of the eyes to provide a useful image. Finally, many people use their binoculars at dawn, at dusk, in overcast conditions, or at night, when their pupils are larger. Thus, the daytime exit pupil is not a universally desirable standard. For comfort, ease of use, and flexibility in applications, larger binoculars with larger exit pupils are satisfactory choices even if their capability is not fully used by day.
Twilight factor and relative brightness.
Before innovations like anti-reflective coatings were commonly used in binoculars, their performance was often expressed mathematically. Nowadays, the practically achievable, instrumentally measurable brightness of binoculars relies on a complex mix of factors, like the quality of the optical glass used and the various applied optical coatings, and not just the magnification and the size of the objective lenses.
The twilight factor for binoculars can be calculated by first multiplying the magnification by the objective lens diameter and then finding the square root of the result. For instance, the twilight factor of 7×50 binoculars is therefore the square root of 7 × 50: the square root of 350 = 18.71. The higher the twilight factor, mathematically, the better the resolution of the binoculars when observing under dim light conditions. Mathematically, 7×50 binoculars have exactly the same twilight factor as 70×5 ones, but 70×5 binoculars are useless during twilight and also in well-lit conditions as they would offer only a 0.07 mm exit pupil. Without knowing the accompanying, more decisive exit pupil, the twilight factor does not permit a practical determination of the low light capability of binoculars. Ideally, the exit pupil should be at least as large as the pupil diameter of the user's dark-adapted eyes in circumstances with no extraneous light.
A primarily historic, more meaningful mathematical approach to indicate the level of clarity and brightness in binoculars was relative brightness. It is calculated by squaring the diameter of the exit pupil. In the above 7×50 binoculars example, this means that their relative brightness index is 51 (7.14 × 7.14 = 51). The higher the relative brightness index number, mathematically, the better the binoculars are suited for low light use.
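The exit pupil, twilight factor, and relative brightness figures discussed above follow directly from the two numbers in a binocular designation. A minimal sketch reproducing the worked examples from the text:

```python
import math

def binocular_figures(magnification, objective_mm):
    exit_pupil = objective_mm / magnification                  # mm
    twilight_factor = math.sqrt(magnification * objective_mm)
    relative_brightness = exit_pupil ** 2
    return exit_pupil, twilight_factor, relative_brightness

for mag, obj in [(7, 50), (8, 42), (10, 25), (70, 5)]:
    ep, tf, rb = binocular_figures(mag, obj)
    print(f"{mag}x{obj}: exit pupil {ep:.2f} mm, "
          f"twilight factor {tf:.2f}, relative brightness {rb:.0f}")
```

Note how the hypothetical 70×5 glass shares the 18.71 twilight factor of a 7×50 while its tiny exit pupil makes it useless in practice, as the text points out.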
Eye relief.
Eye relief is the distance from the rear eyepiece lens to the exit pupil or eye point. It is the distance the observer must position his or her eye behind the eyepiece in order to see an unvignetted image. The longer the focal length of the eyepiece, the greater the potential eye relief. Binoculars may have eye relief ranging from a few millimeters to 25 mm or more. Eye relief can be particularly important for eyeglasses wearers. The eye of an eyeglasses wearer is typically farther from the eye piece which necessitates a longer eye relief in order to avoid vignetting and, in the extreme cases, to conserve the entire field of view. Binoculars with short eye relief can also be hard to use in instances where it is difficult to hold them steady.
Eyeglasses wearers who intend to wear their glasses when using binoculars should look for binoculars with an eye relief that is long enough so that their eyes are not behind the point of focus (also called the eyepoint). Otherwise, their glasses will occupy the space where their eyes should be. Generally, an eye relief over 16 mm should be adequate for any eyeglass wearer. However, if glasses frames are thicker and so significantly protrude from the face, an eye relief over 17 mm should be considered. Eyeglasses wearers should also look for binoculars with twist-up eye cups that ideally have multiple settings, so they can be partially or fully retracted to adjust eye relief to individual ergonomic preferences.
Close focus distance.
Close focus distance is the closest point that the binocular can focus on. This distance varies depending upon the design of the binoculars. If the close focus distance is short with respect to the magnification, the binocular can also be used to see details not visible to the naked eye.
Eyepieces.
Binocular eyepieces usually consist of three or more lens elements in two or more groups. The lens furthest from the viewer's eye is called the "field lens" or "objective lens" and that closest to the eye the "eye lens" or "ocular lens". The most common Kellner configuration is that invented in 1849 by Carl Kellner. In this arrangement, the eye lens is a plano-concave/double-convex achromatic doublet (the flat part of the former facing the eye) and the field lens is a double-convex singlet. A reversed Kellner eyepiece was developed in 1975 and in it the field lens is a double-concave/double-convex achromatic doublet and the eye lens is a double-convex singlet. The reverse Kellner provides 50% more eye relief and works better with small focal ratios as well as having a slightly wider field.
Wide field binoculars typically utilize some kind of Erfle configuration, patented in 1921. These have five or six elements in three groups. The groups may be two achromatic doublets with a double convex singlet between them or may all be achromatic doublets. These eyepieces tend not to perform as well as Kellner eyepieces at high power because they suffer from astigmatism and ghost images. However they have large eye lenses, excellent eye relief, and are comfortable to use at lower powers.
Field flattener lens.
High-end binoculars often incorporate a field flattener lens in the eyepiece behind their prism configuration, designed to improve image sharpness and reduce image distortion at the outer regions of the field of view.
Mechanical design.
Focus and adjustment.
Binoculars have a focusing arrangement which changes the distance between eyepiece and objective lenses or internally mounted lens elements. Normally there are two different arrangements used to provide focus: "independent focus", where each eyepiece is focused separately, a robust arrangement traditionally favored for military and marine use, and "central focusing", where a central wheel focuses both barrels together while a diopter adjustment on one eyepiece compensates for any difference between the user's two eyes.
With increasing magnification, the depth of field – the distance between the nearest and the farthest objects that are in acceptably sharp focus in an image – decreases. The depth of field reduces quadratically with the magnification, so compared to 7× binoculars, 10× binoculars offer about half (7² ÷ 10² = 0.49) the depth of field. However, independent of the binoculars' optical system, the practical depth of field or depth of acceptable view perceived by the user also depends on the eye's accommodation ability (which varies from person to person and decreases significantly with age) and on the effective pupil size or diameter of the user's eyes, which depends on light conditions.
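A one-line illustration of this quadratic depth-of-field scaling relative to 7× binoculars (a minimal sketch):

```python
# Depth of field scales as 1 / magnification^2, so relative to a 7x glass:
for mag in (7, 8, 10, 12):
    print(f"{mag}x: {7 ** 2 / mag ** 2:.2f} of the 7x depth of field")
```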
There are "focus-free" or "fixed-focus" binoculars that have no focusing mechanism other than the eyepiece adjustments that are meant to be set for the user's eyes and left fixed. These are considered to be compromise designs, suited for convenience, but not well suited for work that falls outside their designed hyperfocal distance range (for hand held binoculars generally from about to infinity without performing eyepiece adjustments for a given viewer).
Binoculars can be generally used without eyeglasses by myopic (near-sighted) or hyperopic (far-sighted) users simply by adjusting the focus a little farther. Most manufacturers leave a little extra available focal-range beyond the infinity-stop/setting to account for this when focusing for infinity. People with severe astigmatism, however, will still need to use their glasses while using binoculars.
Some binoculars have adjustable magnification, "zoom binoculars", such as 7-21×50 intended to give the user the flexibility of having a single pair of binoculars with a wide range of magnifications, usually by moving a "zoom" lever. This is accomplished by a complex series of adjusting lenses similar to a zoom camera lens. These designs are noted to be a compromise and even a gimmick since they add bulk, complexity and fragility to the binocular. The complex optical path also leads to a narrow field of view and a large drop in brightness at high zoom. Models also have to match the magnification for both eyes throughout the zoom range and hold collimation to avoid eye strain and fatigue. These almost always perform much better at the low power setting than they do at the higher settings. This is natural, since the front objective cannot enlarge to let in more light as the power is increased, so the view gets dimmer. At 7×, the 50mm front objective provides a 7.14 mm exit pupil, but at 21×, the same front objective provides only a 2.38 mm exit pupil. Also, the optical quality of a zoom binocular at any given power is inferior to that of a fixed power binocular of that power.
Interpupillary distance.
Most modern binoculars are also adjustable via a hinged construction that enables the distance between the two telescope halves to be adjusted to accommodate viewers with different eye separation or "interpupillary distance (IPD)" (the distance measured in millimeters between the centers of the pupils of the eyes). Most are optimized for the interpupillary distance (typically about 63 mm) for adults. Interpupillary distance varies with respect to age, gender and race. The binoculars industry has to take IPD variance (most adults have IPDs in the 50–75 mm range) and its extrema into account, because stereoscopic optical products need to be able to cope with many possible users, including those with the smallest and largest IPDs.
Children and adults with narrow IPDs can experience problems with the IPD adjustment range of binocular barrels, which must match the width between the centers of the pupils of their eyes, impairing the use of some binoculars. Adults with average or wide IPDs generally experience no eye separation adjustment range problems, but straight-barreled roof prism binoculars featuring over 60 mm diameter objectives can be dimensionally problematic to adjust correctly for adults with relatively narrow IPDs. Anatomic conditions like hypertelorism and hypotelorism can affect IPD and, in cases of extreme IPDs, result in practical impairment of using stereoscopic optical products like binoculars.
Alignment.
The two telescopes in binoculars are aligned in parallel (collimated), to produce a single circular, apparently three-dimensional, image. Misalignment will cause the binoculars to produce a double image. Even slight misalignment will cause vague discomfort and visual fatigue as the brain tries to combine the skewed images.
Alignment is performed by small movements to the prisms, by adjusting an internal support cell or by turning external set screws, or by adjusting the position of the objective via eccentric rings built into the objective cell.
"Unconditional aligning" (3-axis collimation, meaning both optical axes are aligned parallel with the axis of the hinge used to select various interpupillary distance settings) binoculars requires specialized equipment. Unconditional alignment is usually done by a professional, although the externally mounted adjustment features can usually be accessed by the end user.
"Conditional alignment" ignores the third axis (the hinge) in the alignment process. Such a conditional alignment comes down to a 2-axis pseudo-collimation and will only be serviceable within a small range of interpupillary distance settings, as conditional aligned binoculars are not collimated for the full interpupillary distance setting range.
Image stability.
Some binoculars use image-stabilization technology to reduce shake at higher magnifications. This is done by having a gyroscope move part of the instrument, or by powered mechanisms driven by gyroscopic or inertial detectors, or via a mount designed to oppose and damp the effect of shaking movements. Stabilization may be enabled or disabled by the user as required. These techniques allow binoculars up to 20× to be hand-held, and much improve the image stability of lower-power instruments. There are some disadvantages: the image may not be quite as good as that of the best unstabilized binoculars when tripod-mounted, and stabilized binoculars also tend to be more expensive and heavier than similarly specified non-stabilized binoculars.
Housing.
Binoculars housings can be made of various structural materials. Old binoculars barrels and hinge bridges were often made of brass. Later steel and relatively light metals like aluminum and magnesium alloys were used, as well as polymers like (fibre-reinforced) polycarbonate and acrylonitrile butadiene styrene. The housing can be rubber armored externally as outer covering to provide a non-slip gripping surface, absorption of undesired sounds and additional cushioning/protection against dents, scrapes, bumps and minor impacts.
Optical coatings.
Because a typical binocular has 6 to 10 optical elements with special characteristics and up to 20 atmosphere-to-glass surfaces, binocular manufacturers use different types of optical coatings for technical reasons and to improve the image they produce.
Lens and prism optical coatings on binoculars can increase light transmission, minimize detrimental reflections and interference effects, optimize beneficial reflections, repel water and grease and even protect the lens from scratches. Modern optical coatings are composed of a combination of very thin layers of materials such as oxides, metals, or rare earth materials. The performance of an optical coating is dependent on the number of layers, manipulating their exact thickness and composition, and the refractive index difference between them. These coatings have become a key technology in the field of optics and manufacturers often have their own designations for their optical coatings. The various lens and prism optical coatings used in high-quality 21st century binoculars, when added together, can total about 200 (often superimposed) coating layers.
Anti-reflective.
Anti-reflective interference coatings reduce light lost at every optical surface through reflection at each surface. Reducing reflection via anti-reflective coatings also reduces the amount of "lost" light present inside the binocular which would otherwise make the image appear hazy (low contrast). A pair of binoculars with good optical coatings may yield a brighter image than uncoated binoculars with a larger objective lens, on account of superior light transmission through the assembly. The first transparent interference-based coating "Transparentbelag (T)" used by Zeiss was invented in 1935 by Olexander Smakula. A classic lens-coating material is magnesium fluoride, which reduces reflected light from about 4% to 1.5%. Across 16 atmosphere-to-glass surface passes, a 4% reflection loss per surface theoretically means 52% light transmission (0.96^16 = 0.52) and a 1.5% reflection loss a much better 78.5% light transmission (0.985^16 = 0.785). Reflection can be further reduced over a wider range of wavelengths and angles by using several superimposed layers with different refractive indices. The anti-reflective multi-coating "Transparentbelag* (T*)" used by Zeiss in the late 1970s consisted of six superimposed layers. In general, the outer coating layers have slightly lower index of refraction values and the layer thickness is adapted to the range of wavelengths in the visible spectrum to promote optimal destructive interference via reflection in the beams reflected from the interfaces, and constructive interference in the corresponding transmitted beams. There is no simple formula for the optimal layer thickness for a given choice of materials. These parameters are therefore determined with the help of simulation programs. Determined by the optical properties of the lenses used and intended primary use of the binoculars, different coatings are preferred, to optimize light transmission dictated by the human eye luminous efficiency function variance. Maximal light transmission around wavelengths of 555 nm (green) is important for obtaining optimal photopic vision using the eye cone cells for observation in well-lit conditions. Maximal light transmission around wavelengths of 498 nm (cyan) is important for obtaining optimal scotopic vision using the eye rod cells for observation in low light conditions. As a result, effective modern anti-reflective lens coatings consist of complex multi-layers and reflect only 0.25% or less to yield an image with maximum brightness and natural colors. These allow high-quality 21st century binoculars to achieve light transmission values of over 90%, measured at the eye lens or ocular lens, in low light conditions. Depending on the coating, the character of the image seen in the binoculars under normal daylight can either look "warmer" or "colder" and appear either with higher or lower contrast. Subject to the application, the coating is also optimized for maximum color fidelity through the visible spectrum, for example in the case of lenses specially designed for bird watching.
A common application technique is physical vapor deposition, including evaporative deposition, of one or more superimposed anti-reflective coating layers, making it a complex production process.
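The transmission arithmetic above generalizes to any per-surface loss. A minimal sketch using the figures quoted in the text (16 air-to-glass surfaces; 4%, 1.5%, and 0.25% losses):

```python
# Total transmission through 16 air-to-glass surfaces for the per-surface
# reflection losses quoted in the text (4% uncoated, 1.5% single-layer MgF2,
# 0.25% modern multi-coating).
def transmission(loss_per_surface, surfaces=16):
    return (1.0 - loss_per_surface) ** surfaces

for loss in (0.04, 0.015, 0.0025):
    print(f"{loss:.2%} loss per surface -> {transmission(loss):.1%} transmission")
```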
Phase correction.
In binoculars with roof prisms the light path is split into two paths that reflect on either side of the roof prism ridge. One half of the light reflects from roof surface 1 to roof surface 2. The other half of the light reflects from roof surface 2 to roof surface 1. If the roof faces are uncoated, the mechanism of reflection is Total Internal Reflection (TIR). In TIR, light polarized in the plane of incidence (p-polarized) and light polarized orthogonal to the plane of incidence (s-polarized) experience different phase shifts. As a consequence, linearly polarized light emerges from a roof prism elliptically polarized. Furthermore, the state of elliptical polarization of the two paths through the prism is different. When the two paths recombine on the retina (or a detector) there is interference between light from the two paths causing a distortion of the Point Spread Function and a deterioration of the image. Resolution and contrast significantly suffer. These unwanted interference effects can be suppressed by vapor depositing a special dielectric coating known as a "phase-correction coating" or "P-coating" on the roof surfaces of the roof prism. To approximately correct a roof prism for polychromatic light several phase-correction coating layers are superimposed, since every layer is wavelength and angle of incidence specific.
The "P-coating" was developed in 1988 by Adolf Weyrauch at Carl Zeiss.
Other manufacturers followed soon, and since then phase-correction coatings have been used across the board in medium and high-quality roof prism binoculars. This coating suppresses the difference in phase shift between s- and p-polarization so both paths have the same polarization and no interference degrades the image. In this way, since the 1990s, roof prism binoculars have also achieved resolution values that were previously only achievable with Porro prisms. The presence of a phase-correction coating can be checked on unopened binoculars using two polarization filters. Dielectric phase-correction prism coatings are applied in a vacuum chamber with perhaps thirty or more superimposed vapor-deposited coating layers, making it a complex production process.
Binoculars using either a Schmidt–Pechan roof prism, Abbe–Koenig roof prism or an Uppendahl roof prism benefit from phase coatings that compensate for a loss of resolution and contrast caused by the interference effects that occur in untreated roof prisms. Porro prism and Perger prism binoculars do not split beams and therefore they do not require any phase coatings.
Metallic mirror.
In binoculars with Schmidt–Pechan or Uppendahl roof prisms, mirror coatings are added to some surfaces of the roof prism because the light is incident at one of the prism's glass-air boundaries at an angle less than the critical angle so total internal reflection does not occur. Without a mirror coating most of that light would be lost. Roof prism aluminum mirror coating (reflectivity of 87% to 93%) or silver mirror coating (reflectivity of 95% to 98%) is used.
In older designs silver mirror coatings were used but these coatings oxidized and lost reflectivity over time in unsealed binoculars. Aluminum mirror coatings were used in later unsealed designs because they did not tarnish even though they have a lower reflectivity than silver. Using vacuum-vaporization technology, modern designs use either aluminum, enhanced aluminum (consisting of aluminum overcoated with a multilayer dielectric film) or silver. Silver is used in modern high-quality designs which are sealed and filled with nitrogen or argon to provide an inert atmosphere so that the silver mirror coating does not tarnish.
Porro prism and Perger prism binoculars and roof prism binoculars using the Abbe–Koenig roof prism configuration do not use mirror coatings because these prisms reflect with 100% reflectivity using total internal reflection in the prism rather than requiring a (metallic) mirror coating.
Dielectric mirror.
Dielectric coatings are used in Schmidt–Pechan and Uppendahl roof prisms to cause the prism surfaces to act as a dielectric mirror. This coating was introduced in 2004 in Zeiss Victory FL binoculars featuring Schmidt–Pechan prisms. Other manufacturers followed soon, and since then dielectric coatings have been used across the board in medium and high-quality Schmidt–Pechan and Uppendahl roof prism binoculars. The non-metallic dielectric reflective coating is formed from several multilayers of alternating high and low refractive index materials deposited on a prism's reflective surfaces. The manufacturing techniques for dielectric mirrors are based on thin-film deposition methods. A common application technique is physical vapor deposition, which includes evaporative deposition, with perhaps seventy or more superimposed vapor-deposited coating layers, making it a complex production process. This multilayer coating increases reflectivity from the prism surfaces by acting as a distributed Bragg reflector. A well-designed multilayer dielectric coating can provide a reflectivity of over 99% across the visible light spectrum. This reflectivity is an improvement compared to either an aluminium mirror coating or silver mirror coating.
Porro prism and Perger prism binoculars and roof prism binoculars using the Abbe–Koenig roof prism do not use dielectric coatings because these prisms reflect with 100% reflectivity using total internal reflection in the prism rather than requiring a (dielectric) mirror coating.
Terms.
All binoculars.
The presence of any coatings is typically denoted on binoculars by the following terms:
The presence of optical high transmittance crown glass offering relatively low refractive index (≈1.52) and low dispersion (with Abbe numbers around 60) is typically denoted on binoculars by the following terms:
Accessories.
Common accessories for binoculars are:
Applications.
General use.
Hand-held binoculars range from small 3 × 10 Galilean opera glasses, used in theaters, to glasses with 7 to 12 times magnification and 30 to 50 mm diameter objectives for typical outdoor use.
Compact or pocket binoculars are small, light binoculars suitable for daytime use. Most compact binoculars feature magnifications of 7× to 10×, and objective diameters of a relatively modest 20 mm to 25 mm, resulting in small exit pupil sizes that limit low light suitability. Roof prism designs tend to be narrower and more compact than equivalent Porro prism designs. Thus, compact binoculars are mostly roof prism designs. The telescope tubes of compact binoculars can often be folded closely together to radically reduce the binoculars' volume when not in use, for easy carriage and storage.
Many tourist attractions have installed pedestal-mounted, coin-operated binocular tower viewers to allow visitors to obtain a closer view of the attraction.
Land surveys and geographic data collection.
Although technology has surpassed using binoculars for data collection, historically these were advanced tools used by geographers and other geoscientists. Field glasses can still provide a visual aid today when surveying large areas.
Bird watching.
Birdwatching is a very popular hobby among nature and animal lovers; binoculars are their most basic tool because most human eyes cannot resolve sufficient detail to fully appreciate and/or study small birds. To view birds in flight well, the ability to quickly acquire rapidly moving objects and a generous depth of field are important. Typically, binoculars with a magnification of 8× to 10× are used, though many manufacturers produce models with 7× magnification for a wider field of view and increased depth of field. The other main consideration for birdwatching binoculars is the size of the objective that collects light. A larger (e.g. 40–45 mm) objective works better in low light and for seeing into foliage, but also makes for heavier binoculars than a 30–35 mm objective. Weight may not seem a primary consideration when first hefting a pair of binoculars, but birdwatching involves a lot of holding up the binoculars while standing in one place. Careful shopping is advised by the birdwatching community.
Hunting.
Hunters commonly use binoculars in the field as a way to observe distant game animals. Hunters most commonly use binoculars of about 8× magnification with 40–45 mm objectives to be able to find and observe game in low light conditions. European manufacturers have produced, and still produce, 7×42 binoculars with good low light performance that are not too bulky for mobile use such as extended carrying and stalking, as well as much bulkier 8×56 and 9×63 low-light binoculars optically optimized for excellent low light performance for more stationary hunting at dusk and at night. For hunting binoculars optimized for observation in twilight, coatings are preferred that maximize light transmission in the wavelength range around 460–540 nm.
Range finding.
Some binoculars have a range finding reticle (scale) superimposed upon the view. This scale allows the distance to the object to be estimated if the object's height is known (or estimable). The common mariner 7×50 binoculars have these scales with the angle between marks equal to 5 mil. One mil is equivalent to the angle between the top and bottom of an object one meter in height at a distance of 1000 meters.
Therefore, to estimate the distance to an object that is a known height the formula is:
formula_0
where: formula_1 is the distance to the object in meters, formula_2 is the known height of the object in meters, and formula_3 is the angular height of the object in mils.
With the typical 5 mil scale (each mark is 5 mil), a lighthouse that is 3 marks high and known to be 120 meters tall is 8000 meters distant.
formula_4
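A minimal Python sketch of this calculation (the function name and the default reticle spacing are illustrative, not from the original):

```python
def distance_m(object_height_m, marks, mil_per_mark=5):
    """D = OH / Mil * 1000, with Mil read from the reticle as marks * mil_per_mark."""
    return object_height_m / (marks * mil_per_mark) * 1000

# Lighthouse example from the text: 3 marks high and 120 m tall -> 8000.0 m
print(distance_m(120, 3))
```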
Military.
Binoculars have a long history of military use. Galilean designs were widely used up to the end of the 19th century, when they gave way to Porro prism types. Binoculars constructed for general military use tend to be more rugged than their civilian counterparts. They generally avoid fragile center focus arrangements in favor of independent focus, which also makes for easier, more effective weatherproofing. Prism sets in military binoculars may have redundant aluminized coatings to guarantee they do not lose their reflective qualities if they get wet.
One variant form was called "trench binoculars", a combination of binoculars and periscope, often used for artillery spotting purposes. It projected only a few inches above the parapet, thus keeping the viewer's head safely in the trench.
Military binoculars can be, and have been, used as measuring and aiming devices, and can feature filters and (illuminated) reticles.
Military binoculars of the Cold War era were sometimes fitted with passive sensors that detected active IR emissions, while modern ones are usually fitted with filters blocking laser beams used as weapons. Further, binoculars designed for military usage may include a stadiametric reticle in one eyepiece in order to facilitate range estimation.
Modern binoculars designed for military usage can also feature laser rangefinders, compasses, and data exchange interfaces to send measurements to other peripheral devices.
Very large binocular naval rangefinders (up to 15 meters separation of the two objective lenses, weight 10 tons, for ranging World War II naval gun targets 25 km away) have been used, although late-20th century radar and laser range finding technology made this application mostly redundant.
Marine.
There are binoculars designed specifically for civilian and military use under harsh environmental conditions at sea. Hand-held models will be 5× to 8× magnification, but with very large prism sets combined with eyepieces designed to give generous eye relief. This optical combination prevents the image from vignetting or going dark when the binoculars are pitching and vibrating relative to the viewer's eyes due to a vessel's motion.
Marine binoculars often contain one or more features to aid in navigation on ships and boats.
Hand held marine binoculars typically feature:
Mariners also often deem adequate low light performance of the optical combination important, which explains the many 7×50 hand-held marine binocular offerings featuring a large 7.14 mm exit pupil, corresponding to the average pupil size of a youthful dark-adapted human eye in circumstances with no extraneous light.
Civilian and military ships can also use large, high-magnification binocular models with large objectives in fixed mountings.
Astronomical.
Binoculars are widely used by amateur astronomers; their wide field of view makes them useful for comet and supernova seeking (giant binoculars) and general observation (portable binoculars). Binoculars specifically geared towards astronomical viewing will have larger aperture objectives (in the 70 mm or 80 mm range) because the diameter of the objective lens increases the total amount of light captured, and therefore determines the faintest star that can be observed. Binoculars designed specifically for astronomical viewing (often 80 mm and larger) are sometimes designed without prisms in order to allow maximum light transmission. Such binoculars also usually have changeable eyepieces to vary magnification. Binoculars with high magnification and heavy weight usually require some sort of mount to stabilize the image. A magnification of 10× is generally considered the practical limit for observation with handheld binoculars. Binoculars more powerful than 15×70 require support of some type. Much larger binoculars have been made by amateur telescope makers, essentially using two refracting or reflecting astronomical telescopes.
Of particular relevance for low-light and astronomical viewing is the ratio between magnifying power and objective lens diameter. A lower magnification facilitates a larger field of view, which is useful in viewing the Milky Way and large nebulous objects (referred to as deep sky objects) such as nebulae and galaxies. The large (typically 7.14 mm using 7×50) exit pupil [objective (mm)/power] of these devices results in a portion of the gathered light not being usable by individuals whose pupils do not sufficiently dilate. For example, the pupils of those over 50 rarely dilate over 5 mm wide. The large exit pupil also collects more light from the background sky, effectively decreasing contrast, making the detection of faint objects more difficult except perhaps in remote locations with negligible light pollution. Many astronomical objects of 8th magnitude or brighter, such as the star clusters, nebulae and galaxies listed in the Messier Catalog, are readily viewed in hand-held binoculars in the 35 to 40 mm range, such as are found in many households for birding, hunting, and viewing sports events. For observing smaller star clusters, nebulae, and galaxies, binocular magnification is an important factor for visibility because these objects appear tiny at typical binocular magnifications.
Some open clusters, such as the bright double cluster (NGC 869 and NGC 884) in the constellation Perseus, and globular clusters, such as M13 in Hercules, are easy to spot. Among nebulae, M17 in Sagittarius and the North America Nebula (NGC 7000) in Cygnus are also readily viewed. Binoculars can show a few of the wider-split binary stars such as Albireo in the constellation Cygnus.
A number of Solar System objects that are mostly to completely invisible to the human eye are reasonably detectable with medium-size binoculars, including larger craters on the Moon; the dim outer planets Uranus and Neptune; the inner "minor planets" Ceres, Vesta and Pallas; Saturn's largest moon Titan; and the Galilean moons of Jupiter. Although visible unaided in pollution-free skies, Uranus and Vesta require binoculars for easy detection. 10×50 binoculars are limited to an apparent magnitude of +9.5 to +11 depending on sky conditions and observer experience. Asteroids like Interamnia, Davida, Europa and, unless under exceptional conditions, Hygiea, are too faint to be seen with commonly sold binoculars. Likewise too faint to be seen with most binoculars are the planetary moons, except the Galileans and Titan, and the dwarf planets Pluto and Eris. Other difficult binocular targets include the phases of Venus and the rings of Saturn. Only binoculars with very high magnification, 20× or higher, are capable of discerning Saturn's rings to a recognizable extent. High-power binoculars can sometimes show one or two cloud belts on the disk of Jupiter, if optics and observing conditions are sufficiently good.
Binoculars can also aid in observation of human-made space objects, such as spotting satellites in the sky as they pass.
List of binocular manufacturers.
There are many companies that manufacture binoculars, both past and present. They include:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D = \\frac{OH}{\\text{Mil}}\\times 1000"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "OH"
},
{
"math_id": 3,
"text": "\\text{Mil}"
},
{
"math_id": 4,
"text": "8000 \\text{m} = \\frac{120 \\text{m}}{15 \\text{mil}} \\times 1000"
}
] | https://en.wikipedia.org/wiki?curid=86058 |
8605953 | Pairing-based cryptography | Pairing-based cryptography is the use of a pairing between elements of two cryptographic groups to a third group with a mapping formula_0 to construct or analyze cryptographic systems.
Definition.
The following definition is commonly used in most academic papers.
Let formula_1 be a finite field of prime order formula_2, formula_3 two additive cyclic groups of prime order formula_2 and formula_4 another cyclic group of order formula_2 written multiplicatively. A pairing is a map: formula_5, which satisfies the following properties:
Bilinearity: formula_6
Non-degeneracy: formula_7
Computability: there is an efficient algorithm to compute formula_8.
Classification.
If the same group is used for the first two groups (i.e. formula_9), the pairing is called "symmetric" and is a mapping from two elements of one group to an element from a second group.
Some researchers classify pairing instantiations into three (or more) basic types:
Type 1: formula_9;
Type 2: formula_10, but there is an efficiently computable homomorphism formula_11;
Type 3: formula_10, and there are no efficiently computable homomorphisms between formula_12 and formula_13.
Usage in cryptography.
If symmetric, pairings can be used to reduce a hard problem in one group to a different, usually easier problem in another group.
For example, in groups equipped with a bilinear mapping such as the Weil pairing or Tate pairing, generalizations of the computational Diffie–Hellman problem are believed to be infeasible while the simpler decisional Diffie–Hellman problem can be easily solved using the pairing function. The first group is sometimes referred to as a Gap Group because of the assumed difference in difficulty between these two problems in the group.
Let formula_8 be a non-degenerate, efficiently computable, bilinear pairing. Let formula_14 be a generator of formula_15. Consider an instance of the CDH problem, formula_14, formula_16, formula_17. Intuitively, the pairing function formula_8 does not help us compute formula_18, the solution to the CDH problem. It is conjectured that this instance of the CDH problem is intractable. Given formula_19, we may check to see if formula_20 without knowledge of formula_21, formula_22, and formula_23, by testing whether formula_24 holds.
By using the bilinear property formula_25 times, we see that if formula_26, then, since formula_4 is a prime order group, formula_27.
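The sketch below illustrates this consistency check in Python with a deliberately insecure toy symmetric pairing: the source groups are tiny additive groups in which discrete logarithms are trivial, so it demonstrates only the bilinearity-based test, not a secure construction, and all parameter choices are assumptions for illustration.

```python
# Toy symmetric pairing e: (Z_n, +) x (Z_n, +) -> order-n subgroup of Z_p^*.
# Insecure by construction -- discrete logs in (Z_n, +) are trivial.
n = 101                          # prime order of the source groups
p = 607                          # prime with n | p - 1
g_T = pow(3, (p - 1) // n, p)    # generator of the order-n target group
assert g_T != 1 and pow(g_T, n, p) == 1

def pairing(x, y):
    """e(x, y) = g_T^(x*y): bilinear and non-degenerate on Z_n x Z_n."""
    return pow(g_T, (x * y) % n, p)

# Bilinearity: e(a*P, b*Q) == e(P, Q)^(a*b), with P, Q written additively.
a, b, P, Q = 5, 7, 11, 13
assert pairing(a * P % n, b * Q % n) == pow(pairing(P, Q), a * b, p)

# DDH-style check: with generator 1 of (Z_n, +), "g^x" is represented by x.
# Test whether z == x*y without using x, y, z individually.
x, y = 42, 77
z_good, z_bad = (x * y) % n, (x * y + 1) % n
assert pairing(x, y) == pairing(1, z_good)   # accepts a true DH triple
assert pairing(x, y) != pairing(1, z_bad)    # rejects a false one
```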
While first used for cryptanalysis, pairings have also been used to construct many cryptographic systems for which no other efficient implementation is known, such as identity-based encryption or attribute-based encryption schemes. In light of cryptanalytic advances, the security level of some pairing-friendly elliptic curves has since been reduced.
Pairing-based cryptography is used in the KZG cryptographic commitment scheme.
A contemporary example of using bilinear pairings is exemplified in the BLS digital signature scheme.
Pairing-based cryptography relies on hardness assumptions separate from those of, for example, ordinary elliptic-curve cryptography, which is older and has been studied for a longer time.
Cryptanalysis.
In June 2012 the National Institute of Information and Communications Technology (NICT), Kyushu University, and Fujitsu Laboratories Limited improved the previous bound for successfully computing a discrete logarithm on a supersingular elliptic curve from 676 bits to 923 bits.
In 2016, the Extended Tower Number Field Sieve algorithm made it possible to reduce the complexity of finding discrete logarithms in the target groups of some pairings. There are several variants of the multiple and extended tower number field sieve algorithm expanding the applicability and improving the complexity of the algorithm. A unified description of all such algorithms with further improvements was published in 2019. In view of these advances, several works provided revised concrete estimates on the key sizes of secure pairing-based cryptosystems.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e :G_1 \\times G_2 \\to G_T"
},
{
"math_id": 1,
"text": "\\mathbb{F}_q"
},
{
"math_id": 2,
"text": "q"
},
{
"math_id": 3,
"text": "G_1, G_2"
},
{
"math_id": 4,
"text": "G_T"
},
{
"math_id": 5,
"text": " e: G_1 \\times G_2 \\rightarrow G_T "
},
{
"math_id": 6,
"text": " \\forall a,b \\in \\mathbb{F}_q^*, P\\in G_1, Q\\in G_2:\\ e\\left(aP, bQ\\right) = e\\left(P, Q\\right)^{ab}"
},
{
"math_id": 7,
"text": "e \\neq 1"
},
{
"math_id": 8,
"text": "e"
},
{
"math_id": 9,
"text": " G_1 = G_2"
},
{
"math_id": 10,
"text": " G_1 \\ne G_2"
},
{
"math_id": 11,
"text": "\\phi : G_2 \\to G_1"
},
{
"math_id": 12,
"text": "G_1"
},
{
"math_id": 13,
"text": "G_2"
},
{
"math_id": 14,
"text": "g"
},
{
"math_id": 15,
"text": "G"
},
{
"math_id": 16,
"text": "g^x"
},
{
"math_id": 17,
"text": "g^y"
},
{
"math_id": 18,
"text": "g^{xy}"
},
{
"math_id": 19,
"text": "g^z"
},
{
"math_id": 20,
"text": "g^z=g^{xy}"
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "y"
},
{
"math_id": 23,
"text": "z"
},
{
"math_id": 24,
"text": "e(g^x,g^y)=e(g,g^z)"
},
{
"math_id": 25,
"text": "x+y+z"
},
{
"math_id": 26,
"text": "e(g^x,g^y)=e(g,g)^{xy}=e(g,g)^{z}=e(g,g^z)"
},
{
"math_id": 27,
"text": "xy=z"
}
] | https://en.wikipedia.org/wiki?curid=8605953 |
8606325 | Kutta–Joukowski theorem | Formula relating lift on an airfoil to fluid speed, density, and circulation
The Kutta–Joukowski theorem is a fundamental theorem in aerodynamics used for the calculation of lift of an airfoil (and any two-dimensional body including circular cylinders) translating in a uniform fluid at a constant speed so large that the flow seen in the body-fixed frame is steady and unseparated. The theorem relates the lift generated by an airfoil to the speed of the airfoil through the fluid, the density of the fluid and the circulation around the airfoil. The circulation is defined as the line integral around a closed loop enclosing the airfoil of the component of the velocity of the fluid tangent to the loop. It is named after Martin Kutta and Nikolai Zhukovsky (or Joukowski), who first developed its key ideas in the early 20th century. The Kutta–Joukowski theorem is an inviscid theory, but it is a good approximation for real viscous flow in typical aerodynamic applications.
The Kutta–Joukowski theorem relates lift to circulation much like the Magnus effect relates side force (called Magnus force) to rotation. However, the circulation here is not induced by rotation of the airfoil. The fluid flow in the presence of the airfoil can be considered to be the superposition of a translational flow and a rotating flow. This rotating flow is induced by the effects of camber, angle of attack and the sharp trailing edge of the airfoil. It should not be confused with a vortex like a tornado encircling the airfoil. At a large distance from the airfoil, the rotating flow may be regarded as induced by a line vortex (with the rotating line perpendicular to the two-dimensional plane). In the derivation of the Kutta–Joukowski theorem the airfoil is usually mapped onto a circular cylinder. In many textbooks, the theorem is proved for a circular cylinder and the Joukowski airfoil, but it holds true for general airfoils.
Lift force formula.
The theorem applies to two-dimensional flow around a fixed airfoil (or any shape of infinite span). The lift per unit span formula_0 of the airfoil is given by
formula_0 = \rho_\infty V_\infty \Gamma, \qquad (1)
where formula_1 and formula_2 are the fluid density and the fluid velocity far upstream of the airfoil, and formula_3 is the circulation defined as the line integral
formula_4
around a closed contour formula_5 enclosing the airfoil and followed in the negative (clockwise) direction. As explained below, this path must be in a region of potential flow and not in the boundary layer of the cylinder. The integrand formula_6 is the component of the local fluid velocity in the direction tangent to the curve formula_5, and formula_7 is an infinitesimal length on the curve formula_5. Equation (1) is a form of the "Kutta–Joukowski theorem".
Kuethe and Schetzer state the Kutta–Joukowski theorem as follows:
"The force per unit length acting on a right cylinder of any cross section whatsoever is equal to formula_8 and is perpendicular to the direction of formula_9"
Circulation and the Kutta condition.
A lift-producing airfoil either has camber or operates at a positive angle of attack, the angle between the chord line and the fluid flow far upstream of the airfoil. Moreover, the airfoil must have a sharp trailing edge.
Any real fluid is viscous, which implies that the fluid velocity vanishes on the airfoil. Prandtl showed that for large Reynolds number, defined as formula_10, and small angle of attack, the flow around a thin airfoil is composed of a narrow viscous region called the boundary layer near the body and an inviscid flow region outside. In applying the Kutta-Joukowski theorem, the loop must be chosen outside this boundary layer. (For example, the circulation calculated using the loop corresponding to the surface of the airfoil would be zero for a viscous fluid.)
The sharp trailing edge requirement corresponds physically to a flow in which the fluid moving along the lower and upper surfaces of the airfoil meet smoothly, with no fluid moving around the trailing edge of the airfoil. This is known as the Kutta condition.
Kutta and Joukowski showed that for computing the pressure and lift of a thin airfoil for flow at large Reynolds number and small angle of attack, the flow can be assumed inviscid in the entire region outside the airfoil provided the Kutta condition is imposed. This is known as the potential flow theory and works remarkably well in practice.
Derivation.
Two derivations are presented below. The first is a heuristic argument, based on physical insight. The second is a formal and technical one, requiring basic vector analysis and complex analysis.
Heuristic argument.
For a heuristic argument, consider a thin airfoil of chord formula_11 and infinite span, moving through air of density formula_12. Let the airfoil be inclined to the oncoming flow to produce an air speed formula_13 on one side of the airfoil, and an air speed formula_14 on the other side. The circulation is then
formula_15
The difference in pressure formula_16 between the two sides of the airfoil can be found by applying Bernoulli's equation:
formula_17
so the downward force on the air, per unit span, is
formula_18
and the upward force (lift) on the airfoil is formula_19
A differential version of this theorem applies on each element of the plate and is the basis of thin-airfoil theory.
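As a quick numerical check of this heuristic result, the sketch below compares the lift obtained from the pressure difference with the Kutta–Joukowski expression (all flow values are illustrative):

```python
rho = 1.225   # air density, kg/m^3
V = 50.0      # oncoming flow speed, m/s
v = 2.0       # extra speed on one side of the airfoil, m/s
c = 1.5       # chord, m

gamma = V * c - (V + v) * c    # circulation = -v*c, as in the text
dP = rho * V * v               # Bernoulli pressure difference (rho*v^2/2 ignored)
lift_pressure = c * dP         # lift per unit span from the pressure difference
lift_kj = -rho * V * gamma     # lift per unit span from Kutta-Joukowski
print(lift_pressure, lift_kj)  # both 183.75 N/m
```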
Formal derivation.
Formal derivation of Kutta–Joukowski theorem
First of all, the force exerted on each unit length of a cylinder of arbitrary cross section is calculated. Let this force per unit length (from now on referred to simply as force) be formula_20. So then the total force is:
formula_21
where "C" denotes the borderline of the cylinder, formula_22 is the static pressure of the fluid, formula_23 is the unit vector normal to the cylinder, and "ds" is the arc element of the borderline of the cross section. Now let formula_24 be the angle between the normal vector and the vertical. Then the components of the above force are:
formula_25
Now comes a crucial step: consider the used two-dimensional space as a complex plane. So every vector can be represented as a complex number, with its first component equal to the real part and its second component equal to the imaginary part of the complex number. Then, the force can be represented as:
formula_26
The next step is to take the complex conjugate of the force formula_27 and do some manipulation:
formula_28
Surface segments "ds" are related to changes "dz" along them by:
formula_29
Plugging this back into the integral, the result is:
formula_30
Now the Bernoulli equation is used, in order to remove the pressure from the integral. Throughout the analysis it is assumed that there is no outer force field present. The mass density of the flow is formula_31 Then pressure formula_22 is related to velocity formula_32 by:
formula_33
With this the force formula_27 becomes:
formula_34
Only one step is left to do: introduce formula_35 the complex potential of the flow. This is related to the velocity components as formula_36 where the apostrophe denotes differentiation with respect to the complex variable "z". The velocity is tangent to the borderline "C", so this means that formula_37 Therefore, formula_38 and the desired expression for the force is obtained:
formula_39
which is called the Blasius theorem.
To arrive at the Joukowski formula, this integral has to be evaluated. From complex analysis it is known that a holomorphic function can be presented as a Laurent series. From the physics of the problem it is deduced that the derivative of the complex potential formula_40 will look thus:
formula_41
The function does not contain higher order terms, since the velocity stays finite at infinity. So formula_42 represents the derivative of the complex potential at infinity: formula_43.
The next task is to find out the meaning of formula_44. Using the residue theorem on the above series:
formula_45
Now perform the above integration:
formula_46
The first integral is recognized as the circulation denoted by formula_47 The second integral can be evaluated after some manipulation:
formula_48
Here formula_49 is the stream function. Since the "C" border of the cylinder is a streamline itself, the stream function does not change on it, and formula_50. Hence the above integral is zero. As a result:
formula_51
Take the square of the series:
formula_52
Plugging this back into the Blasius–Chaplygin formula, and performing the integration using the residue theorem:
formula_53
And so the Kutta–Joukowski formula is:
formula_54
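As a numerical sanity check of this result, the Blasius integral can be evaluated on a contour around a circular cylinder with circulation and compared against the final formulas; the sketch below assumes the standard complex potential for that flow, and all numerical values are illustrative:

```python
import cmath
import math

rho, U, R, Gamma = 1.2, 10.0, 1.0, 4.0   # density, free stream, radius, circulation

def w_prime(z):
    """Derivative of w(z) = U*(z + R^2/z) - i*Gamma/(2*pi)*ln(z)."""
    return U * (1 - R**2 / z**2) - 1j * Gamma / (2 * math.pi * z)

# Evaluate F_bar = (i*rho/2) * contour integral of w'(z)^2 dz on |z| = Rc > R.
N, Rc = 20000, 2.0
F_bar = 0j
for k in range(N):
    z = Rc * cmath.exp(2j * math.pi * k / N)
    dz = 1j * z * (2 * math.pi / N)
    F_bar += 0.5j * rho * w_prime(z) ** 2 * dz

Fx, Fy = F_bar.real, -F_bar.imag   # F_bar = F_x - i*F_y
print(Fx, Fy)                      # ~0 and ~ -rho*Gamma*U = -48.0, matching the formulas
```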
Lift forces for more complex situations.
The lift predicted by the Kutta-Joukowski theorem within the framework of inviscid potential flow theory is quite accurate, even for real viscous flow, provided the flow is steady and unseparated.
In deriving the Kutta–Joukowski theorem, the assumption of irrotational flow was used. When there are free vortices outside of the body, as may be the case for a large number of unsteady flows, the flow is rotational. When the flow is rotational, more complicated theories should be used to derive the lift forces. Below are several important examples.
For an impulsively started flow such as obtained by suddenly accelerating an airfoil or setting an angle of attack, there is a vortex sheet continuously shed at the trailing edge and the lift force is unsteady or time-dependent. For small angle of attack starting flow, the vortex sheet follows a planar path, and the curve of the lift coefficient as function of time is given by the Wagner function. In this case the initial lift is one half of the final lift given by the Kutta–Joukowski formula. The lift attains 90% of its steady state value when the wing has traveled a distance of about seven chord lengths.
When the angle of attack is high enough, the trailing edge vortex sheet is initially in a spiral shape and the lift is singular (infinitely large) at the initial time. The lift drops for a very short time period before the usually assumed monotonically increasing lift curve is reached.
If, as for a flat plate, the leading edge is also sharp, then vortices also shed at the leading edge and the role of leading edge vortices is two-fold: 1) they are lift increasing when they are still close to the leading edge, so that they elevate the Wagner lift curve, and 2) they are detrimental to lift when they are convected to the trailing edge, inducing a new trailing edge vortex spiral moving in the lift decreasing direction. For this type of flow a vortex force line (VFL) map can be used to understand the effect of the different vortices in a variety of situations (including more situations than starting flow) and may be used to improve vortex control to enhance or reduce the lift. The vortex force line map is a two dimensional map on which vortex force lines are displayed. For a vortex at any point in the flow, its lift contribution is proportional to its speed, its circulation and the cosine of the angle between the streamline and the vortex force line. Hence the vortex force line map clearly shows whether a given vortex is lift producing or lift detrimental.
When a (mass) source is fixed outside the body, a force correction due to this source can be expressed as the product of the strength of the outside source and the velocity induced at this source by all causes except this source. This is known as the Lagally theorem. For two-dimensional inviscid flow, the classical Kutta–Joukowski theorem predicts a zero drag. When, however, there is a vortex outside the body, there is a vortex-induced drag, in a form similar to the induced lift.
For free vortices and other bodies outside one body without bound vorticity and without vortex production, a generalized Lagally theorem holds, with which the forces are expressed as the products of strength of inner singularities (image vortices, sources and doublets inside each body) and the induced velocity at these singularities by all causes except those inside this body. The contribution due to each inner singularity sums up to give the total force. The motion of outside singularities also contributes to forces, and the force component due to this contribution is proportional to the speed of the singularity.
When in addition to multiple free vortices and multiple bodies, there are bound vortices and vortex production on the body surface, the generalized Lagally theorem still holds, but a force due to vortex production exists. This vortex production force is proportional to the vortex production rate and the distance between the vortex pair in production. With this approach, an explicit and algebraic force formula, taking into account of all causes (inner singularities, outside vortices and bodies, motion of all singularities and bodies, and vortex production) holds individually for each body with the role of other bodies represented by additional singularities. Hence a force decomposition according to bodies is possible.
For general three-dimensional, viscous and unsteady flow, force formulas are expressed in integral forms. The volume integration of certain flow quantities, such as vorticity moments, is related to forces. Various forms of integral approach are now available for unbounded domain and for artificially truncated domain. The Kutta–Joukowski theorem can be recovered from these approaches when applied to a two-dimensional airfoil and when the flow is steady and unseparated.
A wing has a finite span, and the circulation at any section of the wing varies with the spanwise direction. This variation is compensated by the release of streamwise vortices, called trailing vortices, due to conservation of vorticity or Kelvin's circulation theorem. These streamwise vortices merge to two counter-rotating strong spirals separated by distance close to the wingspan and their cores may be visible if relative humidity is high. Treating the trailing vortices as a series of semi-infinite straight line vortices leads to the well-known lifting line theory. By this theory, the wing has a lift force smaller than that predicted by a purely two-dimensional theory using the Kutta–Joukowski theorem. This is due to the upstream effects of the trailing vortices' added downwash on the angle of attack of the wing. This reduces the wing's effective angle of attack, decreasing the amount of lift produced at a given angle of attack and requiring a higher angle of attack to recover this lost lift. At this new higher angle of attack, drag has also increased. Induced drag effectively reduces the slope of the lift curve of a 2-D airfoil and increases the angle of attack of formula_55 (while also decreasing the value of formula_55).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L'\\,"
},
{
"math_id": 1,
"text": "\\rho_\\infty"
},
{
"math_id": 2,
"text": "V_\\infty"
},
{
"math_id": 3,
"text": "\\Gamma"
},
{
"math_id": 4,
"text": "\\Gamma = \\oint_{C} V \\cdot d\\mathbf{s} = \\oint_{C} V\\cos\\theta\\, ds"
},
{
"math_id": 5,
"text": "C"
},
{
"math_id": 6,
"text": "V\\cos\\theta"
},
{
"math_id": 7,
"text": "ds"
},
{
"math_id": 8,
"text": "\\rho_\\infty V_\\infty \\Gamma"
},
{
"math_id": 9,
"text": "V_\\infty."
},
{
"math_id": 10,
"text": "\\mathord{\\text{Re}} = \\frac{\\rho V_{\\infty}c_A}{\\mu}\\,"
},
{
"math_id": 11,
"text": "c"
},
{
"math_id": 12,
"text": "\\rho"
},
{
"math_id": 13,
"text": "V"
},
{
"math_id": 14,
"text": "V + v"
},
{
"math_id": 15,
"text": "\\Gamma = Vc - (V + v)c = -v c.\\,"
},
{
"math_id": 16,
"text": "\\Delta P"
},
{
"math_id": 17,
"text": "\\begin{align}\n \\frac {\\rho}{2}(V)^2 + (P + \\Delta P) &= \\frac {\\rho}{2}(V + v)^2 + P,\\, \\\\\n \\frac {\\rho}{2}(V)^2 + \\Delta P &= \\frac {\\rho}{2}(V^2 + 2 V v + v^2),\\, \\\\\n \\Delta P &= \\rho V v \\qquad \\text{(ignoring } \\frac{\\rho}{2}v^2),\\,\n\\end{align}"
},
{
"math_id": 18,
"text": "L' = c \\Delta P = \\rho V v c = -\\rho V\\Gamma\\,"
},
{
"math_id": 19,
"text": "\\rho V\\Gamma.\\,"
},
{
"math_id": 20,
"text": "\\mathbf{F}"
},
{
"math_id": 21,
"text": " \\mathbf{F} = -\\oint_C p \\mathbf{n}\\, ds, "
},
{
"math_id": 22,
"text": "p"
},
{
"math_id": 23,
"text": "\\mathbf{n}\\,"
},
{
"math_id": 24,
"text": "\\phi"
},
{
"math_id": 25,
"text": " F_x = -\\oint_C p \\sin\\phi\\, ds\\,, \\qquad F_y = \\oint_C p \\cos\\phi\\, ds. "
},
{
"math_id": 26,
"text": "F = F_x + iF_y = -\\oint_Cp(\\sin\\phi - i\\cos\\phi)\\,ds ."
},
{
"math_id": 27,
"text": "F"
},
{
"math_id": 28,
"text": "\\bar{F} = -\\oint_C p(\\sin\\phi + i\\cos\\phi)\\,ds = -i\\oint_C p(\\cos\\phi - i\\sin\\phi)\\, ds = -i\\oint_C p e^{-i\\phi}\\,ds."
},
{
"math_id": 29,
"text": "\\begin{align}\n dz &= dx + idy = ds(\\cos\\phi + i\\sin\\phi) = ds\\,e^{i\\phi} \\\\\n {} \\Rightarrow d\\bar{z} &= e^{-i\\phi}ds.\n\\end{align}"
},
{
"math_id": 30,
"text": "\\bar{F} = -i\\oint_C p \\, d\\bar{z}."
},
{
"math_id": 31,
"text": "\\rho."
},
{
"math_id": 32,
"text": "v = v_x + iv_y"
},
{
"math_id": 33,
"text": "p = p_0 - \\frac{\\rho |v|^2}{2}."
},
{
"math_id": 34,
"text": "\\bar{F} = -ip_0\\oint_C d\\bar{z} + i \\frac{\\rho}{2} \\oint_C |v|^2\\, d\\bar{z} = \\frac{i\\rho}{2}\\oint_C |v|^2\\,d\\bar{z}."
},
{
"math_id": 35,
"text": "w = f(z),"
},
{
"math_id": 36,
"text": "w' = v_x - iv_y = \\bar{v},"
},
{
"math_id": 37,
"text": "v = \\pm |v| e^{i\\phi}."
},
{
"math_id": 38,
"text": "v^2 d\\bar{z} = |v|^2 dz, "
},
{
"math_id": 39,
"text": " \\bar{F}=\\frac{i\\rho}{2}\\oint_C w'^2\\,dz,"
},
{
"math_id": 40,
"text": "w"
},
{
"math_id": 41,
"text": "w'(z) = a_0 + \\frac{a_1}{z} + \\frac{a_2}{z^2} + \\cdots ."
},
{
"math_id": 42,
"text": "a_0\\,"
},
{
"math_id": 43,
"text": "a_0 = v_{x\\infty} - iv_{y\\infty}\\,"
},
{
"math_id": 44,
"text": "a_1\\,"
},
{
"math_id": 45,
"text": "a_1 = \\frac{1}{2\\pi i} \\oint_C w'\\, dz. "
},
{
"math_id": 46,
"text": "\\begin{align}\n \\oint_C w'(z)\\,dz &= \\oint_C (v_x - iv_y)(dx + idy) \\\\\n &= \\oint_C (v_x\\,dx + v_y\\,dy) + i\\oint_C(v_x\\,dy - v_y\\,dx) \\\\\n &= \\oint_C \\mathbf{v}\\,{ds} + i\\oint_C(v_x\\,dy - v_y\\,dx).\n\\end{align}"
},
{
"math_id": 47,
"text": "\\Gamma."
},
{
"math_id": 48,
"text": "\\oint_C(v_x\\,dy - v_y\\,dx) = \\oint_C\\left(\\frac{\\partial\\psi}{\\partial y}dy + \\frac{\\partial\\psi}{\\partial x}dx\\right) = \\oint_C d\\psi = 0."
},
{
"math_id": 49,
"text": "\\psi\\,"
},
{
"math_id": 50,
"text": "d\\psi = 0 \\,"
},
{
"math_id": 51,
"text": "a_1 = \\frac{\\Gamma}{2\\pi i}."
},
{
"math_id": 52,
"text": "w'^2(z) = a_0^2 + \\frac{a_0\\Gamma}{\\pi i z} + \\cdots."
},
{
"math_id": 53,
"text": " \\bar{F} = \\frac{i\\rho}{2}\\left[2\\pi i \\frac{a_0\\Gamma}{\\pi i}\\right] = i\\rho a_0 \\Gamma = i\\rho \\Gamma(v_{x\\infty} - iv_{y\\infty}) = \\rho\\Gamma v_{y\\infty} + i\\rho\\Gamma v_{x\\infty} = F_x - iF_y."
},
{
"math_id": 54,
"text": "\\begin{align}\n F_x &= \\rho \\Gamma v_{y\\infty}\\,, &\n F_y &= -\\rho \\Gamma v_{x\\infty}.\n\\end{align}"
},
{
"math_id": 55,
"text": "C_{L_\\max}"
}
] | https://en.wikipedia.org/wiki?curid=8606325 |
8606878 | Giovanni Frattini | Italian mathematician
Giovanni Frattini (8 January 1852 – 21 July 1925) was an Italian mathematician, noted for his contributions to group theory.
Biography.
Frattini entered the University of Rome in 1869, where he studied mathematics with Giuseppe Battaglini, Eugenio Beltrami, and Luigi Cremona, obtaining his Laurea in 1875.
In 1885 he published a paper where he defined a certain subgroup of a finite group. This subgroup, now known as the Frattini subgroup, is the subgroup formula_0 generated by all the non-generators of the group formula_1. He showed that formula_0 is nilpotent and, in so doing, developed a method of proof known today as Frattini's argument.
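A brute-force illustration in Python for a small cyclic group, checking that the intersection of the maximal subgroups (a standard equivalent characterization of the Frattini subgroup) coincides with the set of non-generators; the choice of group and all names are illustrative:

```python
from math import gcd
from itertools import combinations

n = 12                      # illustrative choice; any small n works
G = frozenset(range(n))

def generated(S):
    """Subgroup of (Z_n, +) generated by the elements in S."""
    d = n
    for s in S:
        d = gcd(d, s)
    return frozenset(range(0, n, d))   # dZ_n; an empty S yields {0}

subgroups = {generated({d}) for d in range(n)}    # every subgroup of Z_n is cyclic
proper = [H for H in subgroups if H != G]
maximal = [H for H in proper if not any(H < K for K in proper)]

frattini = G
for H in maximal:
    frattini = frattini & H   # intersection of all maximal subgroups

def is_non_generator(g):
    """g is a non-generator: adding it to a non-generating set never makes it generating."""
    return all(generated(set(S) | {g}) != G
               for r in range(len(G))
               for S in combinations(G, r)
               if generated(set(S)) != G)

assert frozenset(g for g in G if is_non_generator(g)) == frattini
print(sorted(frattini))   # [0, 6]: the Frattini subgroup of Z_12
```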
Besides group theory, he also studied differential geometry and the analysis of second degree indeterminates.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Phi(G)"
},
{
"math_id": 1,
"text": "G"
}
] | https://en.wikipedia.org/wiki?curid=8606878 |
8607762 | Position sensitive device | A position sensitive device and/or position sensitive detector (PSD) is an optical position sensor (OPS) that can measure a position of a light spot in one or two-dimensions on a sensor surface.
Principles.
PSDs can be divided into two classes which work according to different principles: in the first class, the sensors have an isotropic sensor surface that supplies continuous position data. The second class has discrete sensors in a raster-like structure on the sensor surface that supply local discrete data.
Isotropic Sensors.
The technical term PSD was first used in a 1957 publication by J.T. Wallmark for the "lateral photoelectric effect" used for local measurements. On a laminar semiconductor, a so-called PIN diode is exposed to a tiny spot of light. This exposure causes a change in local resistance and thus in the electron flow to four electrodes. From the currents formula_0, formula_1, formula_2 and formula_3 in the electrodes, the location of the light spot is computed using the following equations.
formula_4
and
formula_5
The factors formula_6 and formula_7 are simple scaling factors that permit transformation into coordinates.
An advantage of this process is the continuous measurement of the light spot position at measuring rates above 100 kHz. Disadvantages are the dependence of the measurement on the shape and size of the light spot and the nonlinear response, which can be partly compensated by special electrode shapes.
2-D tetra-lateral Position Sensitive Device (PSD).
A 2-D tetra-lateral PSD is capable of providing continuous
position measurement of the incident light spot in 2-D. It consists of a single square PIN diode with a resistive layer. When there is an incident light on the active area of the sensor, photocurrents are generated and collected from four electrodes placed along each side of the square near the boundary. The incident light position can be estimated based on currents collected from the electrodes:
formula_8
and
formula_9
The 2-D tetra-lateral PSD has the advantages of fast response, much lower dark current, easy bias application and lower fabrication cost. Its measurement accuracy and resolution are independent of the spot shape and size, unlike those of the quadrant detector, which are easily affected by changes in the spot profile such as those caused by air turbulence. However, it suffers from a nonlinearity problem: while the position estimate is approximately linear with respect to the real position when the spot is in the center area of the PSD, the relationship becomes nonlinear when the light spot is away from the center. This seriously limits its applications, and there are urgent demands for linearity improvement in many applications.
To reduce the nonlinearity of 2-D PSD, a new set of formulae have been proposed to estimate the incident light position (Song Cui, Yeng Chai Soh:"Linearity indices and linearity improvement of 2-D tetra-lateral position sensitive detector." IEEE Transactions on Electron Devices, Vol. 57, No. 9, pp. 2310-2316, 2010):
formula_10
and
formula_11
where formula_12, and formula_13 are new scale factors.
Position estimates obtained with this set of formulae can be visualized by assuming the light spot moves in steps in both directions and plotting the estimates on a 2-D plane: a regular grid pattern is obtained if the estimated position is perfectly linear in the true position. The performance is much better than that of the previous formulae. Detailed simulations and experimental results can be found in S. Cui's paper.
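A minimal Python sketch of both estimators, using unit scale factors and illustrative photocurrents (in practice the k-values come from calibration):

```python
def psd_standard(I1, I2, I3, I4, kx=1.0, ky=1.0):
    """Classical tetra-lateral estimate; nonlinear away from the center."""
    return kx * (I4 - I3) / (I4 + I3), ky * (I2 - I1) / (I2 + I1)

def psd_improved(I1, I2, I3, I4, kx1=1.0, ky1=1.0):
    """Improved estimate following the formulae above (Cui and Soh, 2010)."""
    I0 = I1 + I2 + I3 + I4
    x = (kx1 * (I4 - I3) / (I0 - 1.02 * (I2 - I1))
         * (0.7 * (I2 + I1) + I0) / (I0 + 1.02 * (I2 - I1)))
    y = (ky1 * (I2 - I1) / (I0 - 1.02 * (I4 - I3))
         * (0.7 * (I4 + I3) + I0) / (I0 + 1.02 * (I4 - I3)))
    return x, y

currents = (1.0, 1.4, 0.9, 1.1)   # illustrative photocurrents I1..I4
print(psd_standard(*currents))
print(psd_improved(*currents))
```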
Discrete Sensors.
Serial Processing.
The most common discrete sensors, used in applications with sampling rates below 1000 Hz, are CCD or CMOS cameras. The sensor is partitioned into individual pixels whose exposure values can be read out sequentially. The position of the light spot can be computed with the methods of photogrammetry directly from the brightness distribution.
Parallel Processing.
For faster applications, matrix sensors with parallel processing were developed. Line by line and column by column, the brightness of each pixel is compared with a global threshold value, and the comparison results are combined along the rows and columns with logical OR links. The coordinates of the light spot are then computed as the average of the row and column coordinates of the elements that are brighter than the threshold.
Fabrication of isotropic sensors.
Various semiconductor structures, including p-n junctions, Schottky barriers, and metal-oxide-semiconductor structures have been utilized in position-sensitive detectors. More recent hybrid structures based on PEDOT:PSS/n-Si heterojunctions exhibit ultrahigh sensitivity and excellent linearity. These hybrid configurations also benefit from a straightforward low-temperature fabrication process, eliminating the high-temperature and costly processes used to manufacture conventional p-n sensors.
{
"math_id": 0,
"text": "I_a"
},
{
"math_id": 1,
"text": "I_b"
},
{
"math_id": 2,
"text": "I_c"
},
{
"math_id": 3,
"text": "I_d"
},
{
"math_id": 4,
"text": "\nx = k_x \\cdot \\frac{I_b - I_d}{I_b + I_d} \n"
},
{
"math_id": 5,
"text": "\ny = k_y \\cdot \\frac{I_a - I_c}{I_a + I_c} \n"
},
{
"math_id": 6,
"text": "k_x"
},
{
"math_id": 7,
"text": "k_y"
},
{
"math_id": 8,
"text": "\nx = k_x \\cdot \\frac{I_4 - I_3}{I_4+ I_3} \n"
},
{
"math_id": 9,
"text": "\ny = k_y \\cdot \\frac{I_2 - I_1}{I_2 + I_1} \n"
},
{
"math_id": 10,
"text": "\nx = k_{x1} \\cdot \\frac{I_4 - I_3}{I_0 - 1.02(I_2-I_1)} \\cdot \\frac{0.7(I_2+I_1) + I_0}{I_0 + 1.02(I_2-I_1)} \n"
},
{
"math_id": 11,
"text": "\ny = k_{y1} \\cdot \\frac{I_2 - I_1}{I_0 - 1.02(I_4-I_3)} \\cdot \\frac{0.7(I_4+I_3) + I_0}{I_0 + 1.02(I_4-I_3)} \n"
},
{
"math_id": 12,
"text": " I_0 = I_1 +I_2 + I_3 + I_4 "
},
{
"math_id": 13,
"text": " k_{x1}, k_{y1}"
}
] | https://en.wikipedia.org/wiki?curid=8607762 |
860861 | Doping (semiconductor) | Intentional introduction of impurities into an intrinsic semiconductor
In semiconductor production, doping is the intentional introduction of impurities into an intrinsic (undoped) semiconductor for the purpose of modulating its electrical, optical and structural properties. The doped material is referred to as an extrinsic semiconductor.
Small numbers of dopant atoms can change the ability of a semiconductor to conduct electricity. When on the order of one dopant atom is added per 100 million atoms, the doping is said to be "low" or "light". When many more dopant atoms are added, on the order of one per ten thousand atoms, the doping is referred to as "high" or "heavy". This is often shown as "n+" for n-type doping or "p+" for p-type doping. ("See the article on semiconductors for a more detailed description of the doping mechanism.") A semiconductor doped to such high levels that it acts more like a conductor than a semiconductor is referred to as a degenerate semiconductor. A semiconductor can be considered i-type semiconductor if it has been doped in equal quantities of p and n.
In the context of phosphors and scintillators, doping is better known as activation; this is not to be confused with dopant activation in semiconductors. Doping is also used to control the color in some pigments.
History.
The effects of impurities in semiconductors (doping) were long known empirically in such devices as crystal radio detectors and selenium rectifiers. For instance, in 1885 Shelford Bidwell, and in 1930 the German scientist Bernhard Gudden, each independently reported that the properties of semiconductors were due to the impurities they contained.
A doping process was formally developed by John Robert Woodyard working at Sperry Gyroscope Company during World War II. Though the word "doping" is not used in it, his US Patent issued in 1950 describes methods for adding tiny amounts of solid elements from the nitrogen column of the periodic table to germanium to produce rectifying devices. The demands of his work on radar prevented Woodyard from pursuing further research on semiconductor doping.
Similar work was performed at Bell Labs by Gordon K. Teal and Morgan Sparks, with a US Patent issued in 1953.
Woodyard's prior patent proved to be the grounds of extensive litigation by Sperry Rand.
Carrier concentration.
The concentration of the dopant used affects many electrical properties. Most important is the material's charge carrier concentration. In an intrinsic semiconductor under thermal equilibrium, the concentrations of electrons and holes are equivalent. That is,
formula_0
In a non-intrinsic semiconductor under thermal equilibrium, the relation becomes (for low doping):
formula_1
where "n"0 is the concentration of conducting electrons, "p"0 is the conducting hole concentration, and "ni" is the material's intrinsic carrier concentration. The intrinsic carrier concentration varies between materials and is dependent on temperature. Silicon's "ni", for example, is roughly 1.08×1010 cm−3 at 300 kelvins, about room temperature.
In general, increased doping leads to increased conductivity due to the higher concentration of carriers. Degenerate (very highly doped) semiconductors have conductivity levels comparable to metals and are often used in integrated circuits as a replacement for metal. Often superscript plus and minus symbols are used to denote relative doping concentration in semiconductors. For example, "n"+ denotes an n-type semiconductor with a high, often degenerate, doping concentration. Similarly, "p"− would indicate a very lightly doped p-type material. Even degenerate levels of doping imply low concentrations of impurities with respect to the base semiconductor. In intrinsic crystalline silicon, there are approximately 5×10^22 atoms/cm^3. Doping concentration for silicon semiconductors may range anywhere from 10^13 cm^−3 to 10^18 cm^−3. Doping concentration above about 10^18 cm^−3 is considered degenerate at room temperature. Degenerately doped silicon contains a proportion of impurity to silicon on the order of parts per thousand. This proportion may be reduced to parts per billion in very lightly doped silicon. Typical concentration values fall somewhere in this range and are tailored to produce the desired properties in the device that the semiconductor is intended for.
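A minimal numeric sketch of these relations for n-type silicon at room temperature; complete dopant ionization and the donor level are assumptions chosen for illustration:

```python
ni = 1.08e10   # intrinsic carrier concentration of Si at 300 K, cm^-3
Nd = 1e16      # assumed donor (e.g. phosphorus) concentration, cm^-3

n0 = Nd            # Nd >> ni, so electrons come almost entirely from donors
p0 = ni**2 / n0    # minority holes from the mass-action law n0*p0 = ni^2
print(n0, p0)      # 1e16 electrons/cm^3 versus ~1.2e4 holes/cm^3
```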
Effect on band structure.
Doping a semiconductor in a good crystal introduces allowed energy states within the band gap, but very close to the energy band that corresponds to the dopant type. In other words, electron donor impurities create states near the conduction band while electron acceptor impurities create states near the valence band. The gap between these energy states and the nearest energy band is usually referred to as dopant-site bonding energy or "EB" and is relatively small. For example, the "EB" for boron in silicon bulk is 0.045 eV, compared with silicon's band gap of about 1.12 eV. Because "EB" is so small, room temperature is hot enough to thermally ionize practically all of the dopant atoms and create free charge carriers in the conduction or valence bands.
Dopants also have the important effect of shifting the energy bands relative to the Fermi level. The energy band that corresponds with the dopant with the greatest concentration ends up closer to the Fermi level. Since the Fermi level must remain constant in a system in thermodynamic equilibrium, stacking layers of materials with different properties leads to many useful electrical properties induced by band bending, if the interfaces can be made cleanly enough. For example, the p-n junction's properties are due to the band bending that happens as a result of the necessity to line up the bands in contacting regions of p-type and n-type material.
This effect is shown in a band diagram. The band diagram typically indicates the variation in the valence band and conduction band edges versus some spatial dimension, often denoted "x". The Fermi level is also usually indicated in the diagram. Sometimes the "intrinsic Fermi level", "Ei", which is the Fermi level in the absence of doping, is shown. These diagrams are useful in explaining the operation of many kinds of semiconductor devices.
Relationship to carrier concentration (low doping).
For low levels of doping, the relevant energy states are populated sparsely by electrons (conduction band) or holes (valence band). It is possible to write simple expressions for the electron and hole carrier concentrations, by ignoring Pauli exclusion (via Maxwell–Boltzmann statistics):
formula_2
where "E"F is the Fermi level, "E"C is the minimum energy of the conduction band, and "E"V is the maximum energy of the valence band. These are related to the value of the intrinsic concentration via
formula_3
an expression which is independent of the doping level, since "E"C – "E"V (the band gap) does not change with doping.
The concentration factors "N"C("T") and "N"V("T") are given by
formula_4
where "m""e"* and "m""h"* are the density of states effective masses of electrons and holes, respectively, quantities that are roughly constant over temperature.
Techniques of doping and synthesis.
Doping during crystal growth.
Some dopants are added as the (usually silicon) boule is grown by Czochralski method, giving each wafer an almost uniform initial doping.
Alternately, synthesis of semiconductor devices may involve the use of vapor-phase epitaxy. In vapor-phase epitaxy, a gas containing the dopant precursor can be introduced into the reactor. For example, in the case of n-type gas doping of gallium arsenide, hydrogen sulfide is added, and sulfur is incorporated into the structure. This process is characterized by a constant concentration of sulfur on the surface. In the case of semiconductors in general, only a very thin layer of the wafer needs to be doped in order to obtain the desired electronic properties.
Post-growth doping.
To define circuit elements, selected areas — typically controlled by photolithography — are further doped by such processes as diffusion and ion implantation, the latter method being more popular in large production runs because of increased controllability.
Neutron transmutation doping.
Neutron transmutation doping (NTD) is an unusual doping method for special applications. Most commonly, it is used to dope silicon n-type in high-power electronics and semiconductor detectors. It is based on the conversion of the Si-30 isotope into a phosphorus atom by neutron absorption as follows:
formula_5
In practice, the silicon is typically placed near a nuclear reactor to receive the neutrons. As neutrons continue to pass through the silicon, more and more phosphorus atoms are produced by transmutation, and therefore the doping becomes more and more strongly n-type. NTD is a far less common doping method than diffusion or ion implantation, but it has the advantage of creating an extremely uniform dopant distribution.
Dopant elements.
Group IV semiconductors.
For the Group IV semiconductors such as diamond, silicon, germanium, silicon carbide, and silicon–germanium, the most common dopants are acceptors from Group III or donors from Group V elements. Boron, arsenic, phosphorus, and occasionally gallium are used to dope silicon. Boron is the p-type dopant of choice for silicon integrated circuit production because it diffuses at a rate that makes junction depths easily controllable. Phosphorus is typically used for bulk-doping of silicon wafers, while arsenic is used to diffuse junctions, because it diffuses more slowly than phosphorus and is thus more controllable.
By doping pure silicon with Group V elements such as phosphorus, extra valence electrons are added that become unbound from individual atoms and allow the compound to be an electrically conductive n-type semiconductor. Doping with Group III elements, which are missing the fourth valence electron, creates "broken bonds" (holes) in the silicon lattice that are free to move. The result is an electrically conductive p-type semiconductor. In this context, a Group V element is said to behave as an electron donor, and a Group III element as an acceptor. This is a key concept in the physics of a diode.
A very heavily doped semiconductor behaves more like a good conductor (metal) and thus exhibits a more linear positive temperature coefficient. This effect is used, for instance, in sensistors. Lower doping dosages are used in other types of thermistors (NTC or PTC).
Other semiconductors.
In the following list the "(substituting X)" refers to all of the materials preceding said parenthesis.
Compensation.
In most cases many types of impurities will be present in the resultant doped semiconductor. If an equal number of donors and acceptors are present in the semiconductor, the extra electrons provided by the former will be used to satisfy the broken bonds due to the latter, so that doping produces no free carriers of either type. This phenomenon is known as "compensation", and occurs at the p-n junction in the vast majority of semiconductor devices.
Partial compensation, where donors outnumber acceptors or vice versa, allows device makers to repeatedly reverse (invert) the type of a certain layer under the surface of a bulk semiconductor by diffusing or implanting successively higher doses of dopants, so-called counterdoping. Most modern semiconductor devices are made by successive selective counterdoping steps to create the necessary P and N type areas under the surface of bulk silicon. This is an alternative to successively growing such layers by epitaxy.
Although compensation can be used to increase or decrease the number of donors or acceptors, the electron and hole mobility is always decreased by compensation because mobility is affected by the sum of the donor and acceptor ions.
Doping in conductive polymers.
Conductive polymers can be doped by adding chemical reactants to oxidize, or sometimes reduce, the system so that electrons are pushed into the conducting orbitals within the already potentially conducting system. There are two primary methods of doping a conductive polymer, both of which use an oxidation-reduction (i.e., redox) process.
N-doping is much less common because the Earth's atmosphere is oxygen-rich, thus creating an oxidizing environment. An electron-rich, n-doped polymer will react immediately with elemental oxygen to "de-dope" (i.e., reoxidize to the neutral state) the polymer. Thus, chemical n-doping must be performed in an environment of inert gas (e.g., argon). Electrochemical n-doping is far more common in research, because it is easier to exclude oxygen from a solvent in a sealed flask. However, it is unlikely that n-doped conductive polymers are available commercially.
Doping in organic molecular semiconductors.
Molecular dopants are preferred in doping molecular semiconductors due to their compatibilities of processing with the host, that is, similar evaporation temperatures or controllable solubility. Additionally, the relatively large sizes of molecular dopants compared with those of metal ion dopants (such as Li+ and Mo6+) are generally beneficial, yielding excellent spatial confinement for use in multilayer structures, such as OLEDs and Organic solar cells. Typical p-type dopants include F4-TCNQ and Mo(tfd)3. However, similar to the problem encountered in doping conductive polymers, air-stable n-dopants suitable for materials with low electron affinity (EA) are still elusive. Recently, photoactivation with a combination of cleavable dimeric dopants, such as [RuCp∗Mes]2, suggests a new path to realize effective n-doping in low-EA materials.
Magnetic doping.
Research on magnetic doping has shown that considerable alteration of certain properties such as specific heat may be affected by small concentrations of an impurity; for example, dopant impurities in semiconducting ferromagnetic alloys can generate different properties as first predicted by White, Hogan, Suhl and Nakamura.
The inclusion of dopant elements to impart dilute magnetism is of growing significance in the field of magnetic semiconductors. The presence of disperse ferromagnetic species is key to the functionality of emerging spintronics, a class of systems that utilise electron spin in addition to charge. Using density functional theory (DFT) the temperature dependent magnetic behaviour of dopants within a given lattice can be modeled to identify candidate semiconductor systems.
Single dopants in semiconductors.
The sensitive dependence of a semiconductor's properties on dopants has provided an extensive range of tunable phenomena to explore and apply to devices. It is possible to identify the effects of a solitary dopant on commercial device performance as well as on the fundamental properties of a semiconductor material. New applications have become available that require the discrete character of a single dopant, such as single-spin devices in the area of quantum information or single-dopant transistors. Dramatic advances in the past decade towards observing, controllably creating and manipulating single dopants, as well as their application in novel devices have allowed opening the new field of solotronics (solitary dopant optoelectronics).
Modulation doping.
Electrons or holes introduced by doping are mobile, and can be spatially separated from dopant atoms they have dissociated from. Ionized donors and acceptors however attract electrons and holes, respectively, so this spatial separation requires abrupt changes of dopant levels, of band gap (e.g. a quantum well), or built-in electric fields (e.g. in case of noncentrosymmetric crystals). This technique is called modulation doping and is advantageous owing to suppressed carrier-donor scattering, allowing very high mobility to be attained.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n = p = n_i.\\ "
},
{
"math_id": 1,
"text": "n_0 \\cdot p_0 = n_i^2\\ "
},
{
"math_id": 2,
"text": "n_e = N_{\\rm C}(T) \\exp((E_{\\rm F} - E_{\\rm C})/kT), \\quad n_h = N_{\\rm V}(T) \\exp((E_{\\rm V} - E_{\\rm F})/kT),"
},
{
"math_id": 3,
"text": "n_i^2 = n_h n_e = N_{\\rm V}(T) N_{\\rm C}(T) \\exp((E_{\\rm V}-E_{\\rm C})/kT),"
},
{
"math_id": 4,
"text": "N_{\\rm C}(T) = 2(2\\pi m_e^* kT/h^2)^{3/2} \\quad N_{\\rm V}(T) = 2(2\\pi m_h^* kT/h^2)^{3/2}."
},
{
"math_id": 5,
"text": "^{30}\\mathrm{Si} \\, (n,\\gamma) \\, ^{31}\\mathrm{Si} \\rightarrow \\, ^{31}\\mathrm{P} + \\beta^- \\; (T_{1/2} = 2.62 \\mathrm{h}). "
}
] | https://en.wikipedia.org/wiki?curid=860861 |
860895 | Subjective expected utility | Concept in decision theory
In decision theory, subjective expected utility is the attractiveness of an economic opportunity as perceived by a decision-maker in the presence of risk. Characterizing the behavior of decision-makers as using subjective expected utility was promoted and axiomatized by L. J. Savage in 1954 following previous work by Ramsey and von Neumann. The theory of subjective expected utility combines two subjective concepts: first, a personal utility function, and second a personal probability distribution (usually based on Bayesian probability theory).
Savage proved that, if the decision-maker adheres to axioms of rationality, believing an uncertain event has possible outcomes formula_0 each with a utility of formula_1 then the person's choices can be explained as arising from this utility function combined with the subjective belief that there is a probability of each outcome, formula_2 The subjective expected utility is the resulting expected value of the utility,
formula_3
If instead of choosing formula_0 the person were to choose formula_4 the person's subjective expected utility would be
formula_5
Which decision the person prefers depends on which subjective expected utility is higher. Different people may make different decisions because they may have different utility functions or different beliefs about the probabilities of different outcomes.
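A minimal numerical sketch of this comparison (the outcomes, probabilities, and the square-root utility below are illustrative assumptions):

```python
def subjective_expected_utility(outcomes, probs, utility):
    """E[u(X)] = sum_i u(x_i) * P(x_i) for a discrete subjective distribution."""
    return sum(utility(x) * p for x, p in zip(outcomes, probs))

u = lambda x: x ** 0.5  # an illustrative risk-averse utility function

seu_x = subjective_expected_utility([100, 0], [0.5, 0.5], u)  # choosing {x_i}
seu_y = subjective_expected_utility([49, 36], [0.5, 0.5], u)  # choosing {y_j}
print(seu_x, seu_y)  # 5.0 vs 6.5 -> this decision-maker prefers {y_j}
```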
Savage assumed that it is possible to take convex combinations of decisions and that preferences would be preserved. So if a person prefers formula_6 to formula_7 and formula_8 to formula_9, then that person will prefer formula_10 to formula_11, for any formula_12.
Experiments have shown that many individuals do not behave in a manner consistent with Savage's axioms of subjective expected utility, most prominently Allais (1953) and Ellsberg (1961).
Notes.
<templatestyles src="Reflist/styles.css" />
de Finetti, Bruno. "Foresight: its Logical Laws, Its Subjective Sources," (translation of the 1937 article in French) in H. E. Kyburg and H. E. Smokler (eds), "Studies in Subjective Probability," New York: Wiley, 1964. | [
{
"math_id": 0,
"text": "\\{x_i\\}"
},
{
"math_id": 1,
"text": "u(x_i),"
},
{
"math_id": 2,
"text": "P(x_i)."
},
{
"math_id": 3,
"text": "\\Epsilon[u(X)] = \\sum_i \\; u(x_i) \\; P(x_i) ."
},
{
"math_id": 4,
"text": "\\{y_j\\},"
},
{
"math_id": 5,
"text": "\\Epsilon[u(Y)] = \\sum_j \\; u(y_j) \\; P(y_j)."
},
{
"math_id": 6,
"text": "x(=\\{x_i\\})"
},
{
"math_id": 7,
"text": "y (=\\{y_i\\})"
},
{
"math_id": 8,
"text": "s(=\\{s_i\\})"
},
{
"math_id": 9,
"text": "t(=\\{t_i\\})"
},
{
"math_id": 10,
"text": "\\lambda x + (1-\\lambda )s"
},
{
"math_id": 11,
"text": "\\lambda y + (1-\\lambda )t"
},
{
"math_id": 12,
"text": "0<\\lambda<1"
}
] | https://en.wikipedia.org/wiki?curid=860895 |
86113 | Winding number | Number of times a curve wraps around a point in the plane
In mathematics, the winding number or winding index of a closed curve in the plane around a given point is an integer representing the total number of times that the curve travels counterclockwise around the point, i.e., the curve's number of turns. For certain open plane curves, the number of turns may be a non-integer. The winding number depends on the orientation of the curve, and it is negative if the curve travels around the point clockwise.
Winding numbers are fundamental objects of study in algebraic topology, and they play an important role in vector calculus, complex analysis, geometric topology, differential geometry, and physics (such as in string theory).
Intuitive description.
Suppose we are given a closed, oriented curve in the "xy" plane. We can imagine the curve as the path of motion of some object, with the orientation indicating the direction in which the object moves. Then the winding number of the curve is equal to the total number of counterclockwise turns that the object makes around the origin.
When counting the total number of turns, counterclockwise motion counts as positive, while clockwise motion counts as negative. For example, if the object first circles the origin four times counterclockwise, and then circles the origin once clockwise, then the total winding number of the curve is three.
Using this scheme, a curve that does not travel around the origin at all has winding number zero, while a curve that travels clockwise around the origin has negative winding number. Therefore, the winding number of a curve may be any integer. The following pictures show curves with winding numbers between −2 and 3:
Formal definition.
Let formula_0 be a continuous closed path on the plane minus one point. The winding number of formula_1 around formula_2 is the integer
formula_3
where formula_4 is the path written in polar coordinates, i.e. the lifted path through the covering map
formula_5
The winding number is well defined because of the existence and uniqueness of the lifted path (given the starting point in the covering space) and because all the fibers of formula_6 are of the form formula_7 (so the above expression does not depend on the choice of the starting point). It is an integer because the path is closed.
Alternative definitions.
Winding number is often defined in different ways in various parts of mathematics. All of the definitions below are equivalent to the one given above:
Alexander numbering.
A simple combinatorial rule for defining the winding number was proposed by August Ferdinand Möbius in 1865, and again independently by James Waddell Alexander II in 1928.
Any curve partitions the plane into several connected regions, one of which is unbounded. The winding numbers of the curve around two points in the same region are equal. The winding number around (any point in) the unbounded region is zero. Finally, the winding numbers for any two adjacent regions differ by exactly 1; the region with the larger winding number appears on the left side of the curve (with respect to motion down the curve).
Differential geometry.
In differential geometry, parametric equations are usually assumed to be differentiable (or at least piecewise differentiable). In this case, the polar coordinate "θ" is related to the rectangular coordinates "x" and "y" by the equation:
formula_8
This is found by differentiating the following definition of "θ":
formula_9
By the fundamental theorem of calculus, the total change in "θ" is equal to the integral of "dθ". We can therefore express the winding number of a differentiable curve as a line integral:
formula_10
The one-form "dθ" (defined on the complement of the origin) is closed but not exact, and it generates the first de Rham cohomology group of the punctured plane. In particular, if "ω" is any closed differentiable one-form defined on the complement of the origin, then the integral of "ω" along closed loops gives a multiple of the winding number.
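For a curve known only through sample points, the line integral above can be approximated by accumulating the principal-value angle increment between successive samples. A minimal sketch (NumPy assumed; the samples must trace a closed loop that avoids the origin and is finely enough sampled that each step turns by less than π):

```python
import numpy as np

def winding_number(xs, ys):
    """Approximate (1/2pi) * closed-loop integral of d(theta) from samples
    (xs[k], ys[k]) of a closed curve not passing through the origin."""
    theta = np.arctan2(ys, xs)
    dtheta = np.diff(theta, append=theta[:1])        # wrap around to close the loop
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi  # principal value of each step
    return dtheta.sum() / (2 * np.pi)

t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
print(round(winding_number(np.cos(3 * t), np.sin(3 * t))))  # circle traced 3 times -> 3
```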
Complex analysis.
Winding numbers play a very important role throughout complex analysis (c.f. the statement of the residue theorem). In the context of complex analysis, the winding number of a closed curve formula_1 in the complex plane can be expressed in terms of the complex coordinate "z" = "x" + "iy". Specifically, if we write "z" = "re""iθ", then
formula_11
and therefore
formula_12
As formula_1 is a closed curve, the total change in formula_13 is zero, and thus the integral of formula_14 is equal to formula_15 multiplied by the total change in formula_16. Therefore, the winding number of closed path formula_1 about the origin is given by the expression
formula_17
More generally, if formula_1 is a closed curve parameterized by formula_18, the winding number of formula_1 about formula_19, also known as the "index" of formula_19 with respect to formula_1, is defined for complex formula_20 as
formula_21
This is a special case of the famous Cauchy integral formula.
Some of the basic properties of the winding number in the complex plane are given by the following theorem:
Theorem. "Let formula_22 be a closed path and let formula_23 be the set complement of the image of formula_1, that is, formula_24. Then the index of formula_25 with respect to formula_1,"formula_26"is (i) integer-valued, i.e., formula_27 for all formula_28; (ii) constant over each component (i.e., maximal connected subset) of formula_23; and (iii) zero if formula_25 is in the unbounded component of formula_23."
As an immediate corollary, this theorem gives the winding number of a circular path formula_1 about a point formula_25. As expected, the winding number counts the number of (counterclockwise) loops formula_1 makes around formula_25:
Corollary. "If formula_1 is the path defined by formula_29, then" formula_30
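The corollary can be checked numerically by discretizing formula_21 as a Riemann sum over the parameterization. A small sketch (NumPy assumed; the circle with centre 0, radius 1 and n = 2 is an illustrative choice):

```python
import numpy as np

def index(gamma, dgamma, z0, a=0.0, b=2 * np.pi, num=2000):
    """Numerical Ind_gamma(z0) = (1/(2 pi i)) * integral of gamma'(t)/(gamma(t)-z0) dt."""
    t = np.linspace(a, b, num, endpoint=False)
    dt = (b - a) / num
    return (np.sum(dgamma(t) / (gamma(t) - z0)) * dt / (2j * np.pi)).real

g = lambda t: np.exp(2j * t)        # gamma(t) = e^{2it}: winds twice about the origin
dg = lambda t: 2j * np.exp(2j * t)  # gamma'(t)
print(round(index(g, dg, 0)))       # 2, since |z0 - a| < r
print(round(index(g, dg, 3)))       # 0, since |z0 - a| > r
```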
Topology.
In topology, the winding number is an alternate term for the degree of a continuous mapping. In physics, winding numbers are frequently called topological quantum numbers. In both cases, the same concept applies.
The above example of a curve winding around a point has a simple topological interpretation. The complement of a point in the plane is homotopy equivalent to the circle, such that maps from the circle to itself are really all that need to be considered. It can be shown that each such map can be continuously deformed to (is homotopic to) one of the standard maps formula_31, where multiplication in the circle is defined by identifying it with the complex unit circle. The set of homotopy classes of maps from a circle to a topological space form a group, which is called the first homotopy group or fundamental group of that space. The fundamental group of the circle is the group of the integers, Z; and the winding number of a complex curve is just its homotopy class.
Maps from the 3-sphere to itself are also classified by an integer which is also called the winding number or sometimes Pontryagin index.
Turning number.
One can also consider the winding number of the path with respect to the tangent of the path itself. As a path followed through time, this would be the winding number with respect to the origin of the velocity vector. In this case the example illustrated at the beginning of this article has a winding number of 3, because the small loop "is" counted.
This is only defined for immersed paths (i.e., for differentiable paths with nowhere vanishing derivatives), and is the degree of the tangential Gauss map.
This is called the turning number, rotation number, rotation index or index of the curve, and can be computed as the total curvature divided by 2π.
Polygons.
In polygons, the turning number is referred to as the polygon density. For convex polygons, and more generally simple polygons (not self-intersecting), the density is 1, by the Jordan curve theorem. By contrast, for a regular star polygon {"p"/"q"}, the density is "q".
Space curves.
Turning number cannot be defined for space curves as degree requires matching dimensions. However, for locally convex, closed space curves, one can define tangent turning sign as formula_32, where formula_33 is the turning number of the stereographic projection of its tangent indicatrix. Its two values correspond to the two non-degenerate homotopy classes of locally convex curves.
Winding number and Heisenberg ferromagnet equations.
The winding number is closely related with the (2 + 1)-dimensional continuous Heisenberg ferromagnet equations and its integrable extensions: the Ishimori equation etc. Solutions of the last equations are classified by the winding number or topological charge (topological invariant and/or topological quantum number).
Applications.
Point in polygon.
A point's winding number with respect to a polygon can be used to solve the point in polygon (PIP) problem – that is, it can be used to determine if the point is inside the polygon or not.
Generally, the ray casting algorithm is a better alternative to the PIP problem as it does not require trigonometric functions, contrary to the winding number algorithm. Nevertheless, the winding number algorithm can be sped up so that it, too, does not require calculations involving trigonometric functions. The sped-up version of the algorithm, also known as Sunday's algorithm, is recommended in cases where non-simple polygons should also be accounted for.
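A minimal sketch of such a trig-free winding-number test in the spirit of Sunday's algorithm, counting signed upward and downward edge crossings of a horizontal ray (the vertex-list representation is an assumption of this sketch):

```python
def is_left(p0, p1, pt):
    """> 0 if pt lies left of the directed line p0 -> p1, < 0 if right, 0 if on it."""
    return (p1[0] - p0[0]) * (pt[1] - p0[1]) - (pt[0] - p0[0]) * (p1[1] - p0[1])

def winding_number(pt, poly):
    """Winding number of the closed polygon `poly` (list of (x, y) vertices) about pt."""
    wn = 0
    for i in range(len(poly)):
        p0, p1 = poly[i], poly[(i + 1) % len(poly)]
        if p0[1] <= pt[1]:
            if p1[1] > pt[1] and is_left(p0, p1, pt) > 0:  # upward crossing, pt to the left
                wn += 1
        elif p1[1] <= pt[1] and is_left(p0, p1, pt) < 0:   # downward crossing, pt to the right
            wn -= 1
    return wn

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(winding_number((2, 2), square), winding_number((5, 2), square))  # 1 0
```

A nonzero result means the point is inside; for self-intersecting polygons the value can exceed 1 in magnitude.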
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma:[0,1] \\to \\Complex \\setminus \\{a\\}"
},
{
"math_id": 1,
"text": "\\gamma"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "\\text{wind}(\\gamma,a) = s(1) - s(0),"
},
{
"math_id": 4,
"text": "(\\rho,s)"
},
{
"math_id": 5,
"text": "p:\\Reals_{>0} \\times \\Reals \\to \\Complex \\setminus \\{a\\}: (\\rho_0,s_0) \\mapsto a+\\rho_0 e^{i2\\pi s_0}."
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "\\rho_0 \\times (s_0 + \\Z)"
},
{
"math_id": 8,
"text": "d\\theta = \\frac{1}{r^2} \\left( x\\,dy - y\\,dx \\right)\\quad\\text{where }r^2 = x^2 + y^2."
},
{
"math_id": 9,
"text": " \\theta(t)=\\arctan\\bigg(\\frac{y(t)}{x(t)}\\bigg)"
},
{
"math_id": 10,
"text": "\\text{wind}(\\gamma,0) = \\frac{1}{2\\pi} \\oint_{\\gamma} \\,\\left(\\frac{x}{r^2}\\,dy - \\frac{y}{r^2}\\,dx\\right)."
},
{
"math_id": 11,
"text": "dz = e^{i\\theta} dr + ire^{i\\theta} d\\theta"
},
{
"math_id": 12,
"text": "\\frac{dz}{z} = \\frac{dr}{r} + i\\,d\\theta = d[ \\ln r ] + i\\,d\\theta."
},
{
"math_id": 13,
"text": "\\ln (r)"
},
{
"math_id": 14,
"text": "\\frac{dz}{z}"
},
{
"math_id": 15,
"text": "i"
},
{
"math_id": 16,
"text": "\\theta"
},
{
"math_id": 17,
"text": "\\frac{1}{2\\pi i} \\oint_\\gamma \\frac{dz}{z} \\, ."
},
{
"math_id": 18,
"text": "t\\in[\\alpha,\\beta]"
},
{
"math_id": 19,
"text": "z_0"
},
{
"math_id": 20,
"text": "z_0\\notin \\gamma([\\alpha, \\beta])"
},
{
"math_id": 21,
"text": "\\mathrm{Ind}_\\gamma(z_0) = \\frac{1}{2\\pi i} \\oint_\\gamma \\frac{d\\zeta}{\\zeta - z_0} = \\frac{1}{2\\pi i} \\int_{\\alpha}^{\\beta} \\frac{\\gamma'(t)}{\\gamma(t) - z_0} dt."
},
{
"math_id": 22,
"text": "\\gamma:[\\alpha,\\beta]\\to\\mathbb{C}"
},
{
"math_id": 23,
"text": "\\Omega"
},
{
"math_id": 24,
"text": "\\Omega:=\\mathbb{C}\\setminus\\gamma([\\alpha,\\beta])"
},
{
"math_id": 25,
"text": "z"
},
{
"math_id": 26,
"text": "\\mathrm{Ind}_\\gamma:\\Omega\\to \\mathbb{C},\\ \\ z\\mapsto \\frac{1}{2\\pi i}\\oint_\\gamma \\frac{d\\zeta}{\\zeta-z},"
},
{
"math_id": 27,
"text": "\\mathrm{Ind}_\\gamma(z)\\in\\mathbb{Z}"
},
{
"math_id": 28,
"text": "z\\in\\Omega"
},
{
"math_id": 29,
"text": "\\gamma(t)=a+re^{int},\\ \\ 0\\leq t\\leq 2\\pi, \\ \\ n\\in\\mathbb{Z}"
},
{
"math_id": 30,
"text": "\\mathrm{Ind}_\\gamma(z) = \\begin{cases} n, & |z-a|< r; \\\\ 0, & |z-a|> r. \\end{cases}"
},
{
"math_id": 31,
"text": "S^1 \\to S^1 : s \\mapsto s^n"
},
{
"math_id": 32,
"text": "(-1)^d"
},
{
"math_id": 33,
"text": "d"
}
] | https://en.wikipedia.org/wiki?curid=86113 |
861162 | Internal resistance | Impedance of a linear circuit's Thévenin representation
In electrical engineering, a practical electric power source which is a linear circuit may, according to Thévenin's theorem, be represented as an ideal voltage source in series with an impedance. This impedance is termed the internal resistance of the source. When the power source delivers current, the measured voltage output is lower than the no-load voltage; the difference is the voltage drop (the product of current and resistance) caused by the internal resistance. The concept of internal resistance applies to all kinds of electrical sources and is useful for analyzing many types of circuits.
Battery.
A battery may be modeled as a voltage source in series with a resistance. This kind of model is known as an equivalent circuit model; another common class is the physicochemical model, which describes the cell physically in terms of concentrations and reaction rates. In practice, the internal resistance of a battery is dependent on its size, state of charge, chemical properties, age, temperature, and the discharge current. It has an electronic component due to the resistivity of the component materials and an ionic component due to electrochemical factors such as electrolyte conductivity, ion mobility, speed of electrochemical reaction and electrode surface area. Measurement of the internal resistance of a battery is a guide to its condition, but may not apply at other than the test conditions. Measurement with an alternating current, typically at a frequency of , may underestimate the resistance, as the frequency may be too high to take into account slower electrochemical processes. Internal resistance depends on temperature; for example, a fresh Energizer E91 AA alkaline primary battery drops from about 0.9 Ω at -40 °C, when the low temperature reduces ion mobility, to about 0.15 Ω at room temperature and about 0.1 Ω at 40 °C. A large part of this drop is due to the increase in the magnitude of the electrolyte diffusion coefficient.
The internal resistance of a battery may be calculated from its open circuit voltage "V"NL, load voltage "V"FL, and the load resistance "R"L:
formula_0
This can also be expressed in terms of the Overpotential η and the current I:
formula_1
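A small sketch of both estimates (the measurement values are illustrative assumptions, in volts, amps and ohms):

```python
def r_internal_from_load(v_no_load, v_full_load, r_load):
    """Internal resistance from R_int = (V_NL / V_FL - 1) * R_L."""
    return (v_no_load / v_full_load - 1.0) * r_load

def r_internal_from_overpotential(eta, current):
    """Internal resistance from R_int = eta / I (eta = overpotential)."""
    return eta / current

# Illustrative cell: 1.60 V open circuit, 1.50 V across a 10-ohm load.
print(r_internal_from_load(1.60, 1.50, 10.0))     # ~0.667 ohm
print(r_internal_from_overpotential(0.10, 0.15))  # same cell: 0.10 V drop at 0.15 A
```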
Many equivalent series resistance (ESR) meters, essentially AC milliohm-meters normally used to measure the ESR of capacitors, can be used to estimate battery internal resistance, particularly to check the state of discharge of a battery rather than obtain an accurate DC value. Some chargers for rechargeable batteries indicate the ESR.
In use, the voltage across the terminals of a disposable battery driving a load decreases until it drops too low to be useful; this is largely due to an increase in internal resistance rather than a drop in the voltage of the equivalent source.
In rechargeable lithium polymer batteries, the internal resistance is largely independent of the state of charge but increases as the battery ages due to the build up of a passivation layer on the electrodes called the "solid electrolyte interphase"; thus, it is a good indicator of expected life. | [
{
"math_id": 0,
"text": " R_{\\text{int}} = \\left({\\frac{ V_{\\text{NL}} } { V_{\\text{FL}} } - 1 } \\right) { R_{\\text{L}} } "
},
{
"math_id": 1,
"text": " R_{int}= \\frac{\\eta}{I} "
}
] | https://en.wikipedia.org/wiki?curid=861162 |
8612907 | Relative interior | Generalization of topological interior
In mathematics, the relative interior of a set is a refinement of the concept of the interior, which is often more useful when dealing with low-dimensional sets placed in higher-dimensional spaces.
Formally, the relative interior of a set formula_0 (denoted formula_1) is defined as its interior within the affine hull of formula_2 In other words,
formula_3
where formula_4 is the affine hull of formula_5 and formula_6 is a ball of radius formula_7 centered on formula_8. Any metric can be used for the construction of the ball; all metrics define the same set as the relative interior.
A set is relatively open iff it is equal to its relative interior. Note that when formula_4 is a closed subspace of the full vector space (always the case when the full vector space is finite dimensional) then being relatively closed is equivalent to being closed.
For any convex set formula_9 the relative interior is equivalently defined as
formula_10
where formula_11 means that there exists some formula_12 such that formula_13.
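The distinction from the ordinary interior is easy to check for a low-dimensional convex set embedded in a larger space. A minimal sketch for a line segment in three-dimensional space (NumPy assumed; the segment is one-dimensional, so its affine hull is a line):

```python
import numpy as np

def in_relint_segment(x, p, q, tol=1e-12):
    """Test x in relint([p, q]): x must equal p + t (q - p) with 0 < t < 1."""
    x, p, q = (np.asarray(v, dtype=float) for v in (x, p, q))
    d = q - p
    t = np.dot(x - p, d) / np.dot(d, d)       # parameter along the affine hull
    on_line = np.allclose(p + t * d, x, atol=tol)
    return on_line and tol < t < 1.0 - tol

p, q = [0, 0, 0], [1, 0, 0]                   # a segment in R^3
print(in_relint_segment([0.5, 0, 0], p, q))   # True: a relative interior point
print(in_relint_segment([1.0, 0, 0], p, q))   # False: an endpoint
# The topological interior of this segment in R^3 is empty, while its
# relative interior (the open segment) is not.
```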
Properties.
<templatestyles src="Math_theorem/styles.css" />
Theorem — If formula_14 is nonempty and convex, then its relative interior formula_15 is the union of a nested sequence of nonempty compact convex subsets formula_16.
<templatestyles src="Math_proof/styles.css" />Proof
Since we can always pass to the affine span of formula_17, assume without loss of generality that formula_17 has full dimension formula_18, so that its relative interior is its ordinary interior. Now let formula_19.
<templatestyles src="Math_theorem/styles.css" />
Theorem — Here "+" denotes Minkowski sum.
<templatestyles src="Math_theorem/styles.css" />
Theorem — For a convex set formula_0, formula_25; here formula_23 denotes the positive cone, that is, formula_24.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "\\operatorname{relint}(S)"
},
{
"math_id": 2,
"text": "S."
},
{
"math_id": 3,
"text": "\\operatorname{relint}(S) := \\{ x \\in S : \\text{ there exists } \\epsilon > 0 \\text{ such that } B_\\epsilon(x) \\cap \\operatorname{aff}(S) \\subseteq S \\},"
},
{
"math_id": 4,
"text": "\\operatorname{aff}(S)"
},
{
"math_id": 5,
"text": "S,"
},
{
"math_id": 6,
"text": "B_\\epsilon(x)"
},
{
"math_id": 7,
"text": "\\epsilon"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "C \\subseteq \\mathbb{R}^n"
},
{
"math_id": 10,
"text": "\\begin{align}\\operatorname{relint}(C) &:= \\{x \\in C : \\text{ for all } y \\in C, \\text{ there exists some } \\lambda > 1 \\text{ such that } \\lambda x + (1 - \\lambda) y \\in C\\}\\\\\n&= \\{x \\in C : \\text{ for all } y\\neq x \\in C, \\text{ there exists some } z \\in C \\text{ such that } x\\in (y,z)\\}.\n\\end{align}"
},
{
"math_id": 11,
"text": " x\\in (y,z) "
},
{
"math_id": 12,
"text": " 0< \\lambda < 1 "
},
{
"math_id": 13,
"text": " x=\\lambda z + (1 - \\lambda) y "
},
{
"math_id": 14,
"text": "A\\subset \\R^n"
},
{
"math_id": 15,
"text": "\\mathrm{relint}(A)"
},
{
"math_id": 16,
"text": "K_1\\subset K_2\\subset K_3\\subset\\cdots \\subset \\mathrm{relint}(A)"
},
{
"math_id": 17,
"text": "A"
},
{
"math_id": 18,
"text": "n"
},
{
"math_id": 19,
"text": " K_j \\equiv [-j,j]^n \\cap \\left\\{ x \\in \\text{int}(K) : \\mathrm{dist}(x, (\\text{int}(K))^c) \\ge \\frac{1}{j} \\right\\} "
},
{
"math_id": 20,
"text": "\\mathrm{relint}(S_1) + \\mathrm{relint}(S_2) \\subset \\mathrm{relint}(S_1 + S_2)"
},
{
"math_id": 21,
"text": "S_1, S_2"
},
{
"math_id": 22,
"text": "S_1 + S_2"
},
{
"math_id": 23,
"text": "\\mathrm{Cone}"
},
{
"math_id": 24,
"text": "\\mathrm{Cone}(S) = \\{rx: x\\in S, r > 0\\}"
},
{
"math_id": 25,
"text": " \\mathrm{Cone}(\\mathrm{relint}(S)) \\subset \\mathrm{relint}(\\mathrm{Cone}(S))"
}
] | https://en.wikipedia.org/wiki?curid=8612907 |
8613752 | Elongatedness | In image processing, elongatedness for a region is the ratio between the length and width of the region's minimum bounding rectangle. It is considered a feature of the region. It can be evaluated as the ratio of the region's area to the square of twice its maximum thickness:
formula_0.
where the maximum thickness, formula_1, of a holeless region is given by the number of times the region can be eroded before disappearing.
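A minimal sketch of this erosion-based evaluation (assuming SciPy's "ndimage" module is available and the region is given as a binary mask; the rectangle below is illustrative):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def elongatedness(region):
    """area / (2d)^2, where d counts the erosions needed to make the region vanish."""
    region = np.asarray(region, dtype=bool)
    area = int(region.sum())
    d, eroded = 0, region
    while eroded.any():
        eroded = binary_erosion(eroded)  # peel one boundary layer (cross-shaped element)
        d += 1
    return area / (2 * d) ** 2

mask = np.zeros((20, 40), dtype=bool)
mask[5:10, 5:35] = True    # a 5 x 30 rectangle: area 150, vanishes after 3 erosions
print(elongatedness(mask)) # 150 / 36 ~ 4.17
```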
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "elongatedness = \\frac{length}{width} = \\frac{area}{(2 d)^2}"
},
{
"math_id": 1,
"text": "d"
}
] | https://en.wikipedia.org/wiki?curid=8613752 |
861530 | Barometric formula | Formula used to model how air pressure varies with altitude
The barometric formula is a formula used to model how the pressure (or density) of the air changes with altitude.
Pressure equations.
There are two equations for computing pressure as a function of height. The first equation is applicable to the atmospheric layers in which the temperature is assumed to vary with altitude at a non-zero lapse rate formula_0:
formula_1
The second equation is applicable to the atmospheric layers in which the temperature is assumed not to vary with altitude (lapse rate is null):
formula_2
where:
formula_3 = reference pressure at the bottom of layer "b" (Pa)
formula_4 = reference temperature within layer "b" (K)
formula_5 = temperature lapse rate within layer "b" (K/m)
formula_6 = geopotential height at which the pressure is evaluated (m)
formula_7 = geopotential height at the bottom of layer "b" (m)
formula_8 = universal gas constant: 8.31432 J/(mol·K)
formula_9 = gravitational acceleration: 9.80665 m/s2
formula_10 = molar mass of Earth's air: 0.0289644 kg/mol
The same quantities may equivalently be expressed in imperial units.
The value of subscript "b" ranges from 0 to 6 in accordance with each of seven successive layers of the atmosphere shown in the table below. In these equations, "g"0, "M" and "R"* are each single-valued constants, while "P", "L," "T," and "h" are multivalued constants in accordance with the table below. The values used for "M", "g"0, and "R"* are in accordance with the U.S. Standard Atmosphere, 1976, and the value for "R"* in particular does not agree with standard values for this constant. The reference value for "Pb" for "b" = 0 is the defined sea level value, "P"0 = 101 325 Pa or 29.92126 inHg. Values of "Pb" of "b" = 1 through "b" = 6 are obtained from the application of the appropriate member of the pair equations 1 and 2 for the case when "h" = "h""b"+1.
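A minimal sketch of the two pressure equations for a single layer (the constants are the U.S. Standard Atmosphere, 1976 values given above; the troposphere defaults in the first function are illustrative):

```python
import math

G0 = 9.80665    # gravitational acceleration, m/s^2
M = 0.0289644   # molar mass of Earth's air, kg/mol
R = 8.31432     # universal gas constant (U.S. Standard Atmosphere value), J/(mol K)

def pressure_lapse(h, p_b=101325.0, t_b=288.15, l_b=0.0065, h_b=0.0):
    """Equation 1: layer with a non-zero lapse rate (troposphere defaults)."""
    return p_b * (1.0 - l_b * (h - h_b) / t_b) ** (G0 * M / (R * l_b))

def pressure_isothermal(h, p_b, t_b, h_b):
    """Equation 2: isothermal layer (zero lapse rate)."""
    return p_b * math.exp(-G0 * M * (h - h_b) / (R * t_b))

print(pressure_lapse(5000.0))  # ~5.40e4 Pa at 5 km, roughly half of sea-level pressure
```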
Density equations.
The expressions for calculating density are nearly identical to calculating pressure. The only difference is the exponent in Equation 1.
There are two equations for computing density as a function of height. The first equation is applicable to the standard model of the troposphere in which the temperature is assumed to vary with altitude at a lapse rate of formula_0; the second equation is applicable to the standard model of the stratosphere in which the temperature is assumed not to vary with altitude.
Equation 1:
formula_11
which is equivalent to the ratio of the relative pressure and temperature changes
formula_12
Equation 2:
formula_13
where:
formula_14 = mass density (kg/m3)
formula_15 = reference temperature within layer "b" (K)
formula_16 = temperature lapse rate within layer "b" (K/m)
formula_19 = geopotential height (m)
formula_20 = universal gas constant: 8.31432 J/(mol·K)
formula_21 = gravitational acceleration: 9.80665 m/s2
formula_22 = molar mass of Earth's air: 0.0289644 kg/mol
The same quantities may equivalently be expressed in U.S. gravitational foot-pound-second units (no longer used in the U.K.).
The value of subscript "b" ranges from 0 to 6 in accordance with each of seven successive layers of the atmosphere shown in the table below. The reference value for "ρb" for "b" = 0 is the defined sea level value, "ρ"0 = 1.2250 kg/m3 or 0.0023768908 slug/ft3. Values of "ρb" of "b" = 1 through "b" = 6 are obtained from the application of the appropriate member of the pair equations 1 and 2 for the case when "h" = "h""b"+1.
In these equations, "g"0, "M" and "R"* are each single-valued constants, while "ρ", "L", "T" and "h" are multi-valued constants in accordance with the table below. The values used for "M", "g"0 and "R"* are in accordance with the U.S. Standard Atmosphere, 1976, and that the value for "R"* in particular does not agree with standard values for this constant.
Derivation.
The barometric formula can be derived using the ideal gas law:
formula_23
Assuming that all pressure is hydrostatic:
formula_24
and dividing this equation by formula_25 we get:
formula_26
Integrating this expression from the surface to the altitude "z" we get:
formula_27
Assuming linear temperature change formula_28 and constant molar mass and gravitational acceleration, we get the first barometric formula:
formula_29
Instead, assuming constant temperature, integrating gives the second barometric formula:
formula_30
In this formulation, "R*" is the gas constant, and the term "R*T"/"Mg" gives the scale height (approximately equal to 8.4 km for the troposphere). | [
{
"math_id": 0,
"text": "L_b"
},
{
"math_id": 1,
"text": "P = P_{b} \\left[ 1 - \\frac{L_{M,b}}{T_{M,b}} (h - h_{b})\\right]^{\\frac{g_{0}' M_{0}}{R^{*} L_{M,b}}}"
},
{
"math_id": 2,
"text": "P = P_b \\exp \\left[\\frac{-g_0 M \\left(h-h_b\\right)}{R^* {T_{M,b}}}\\right]"
},
{
"math_id": 3,
"text": "P_b"
},
{
"math_id": 4,
"text": "T_{M,b}"
},
{
"math_id": 5,
"text": "L_{M,b}"
},
{
"math_id": 6,
"text": "h"
},
{
"math_id": 7,
"text": "h_b"
},
{
"math_id": 8,
"text": "R^*"
},
{
"math_id": 9,
"text": "g_0"
},
{
"math_id": 10,
"text": "M"
},
{
"math_id": 11,
"text": "\\rho = \\rho_b \\left[\\frac{T_b - (h-h_b) L_b}{T_b}\\right]^{\\left(\\frac{g_0 M}{R^* L_b}-1\\right)}"
},
{
"math_id": 12,
"text": "\\rho = \\rho_b \\frac{P}{T} \\frac{T_b}{P_b} "
},
{
"math_id": 13,
"text": "\\rho =\\rho_b \\exp\\left[\\frac{-g_0 M \\left(h-h_b\\right)}{R^* T_b}\\right]"
},
{
"math_id": 14,
"text": "{\\rho}"
},
{
"math_id": 15,
"text": "T_b"
},
{
"math_id": 16,
"text": "L"
},
{
"math_id": 17,
"text": "{T_b}"
},
{
"math_id": 18,
"text": "{L}"
},
{
"math_id": 19,
"text": "{h}"
},
{
"math_id": 20,
"text": "{R^*}"
},
{
"math_id": 21,
"text": "{g_0}"
},
{
"math_id": 22,
"text": "{M}"
},
{
"math_id": 23,
"text": " P = \\frac{\\rho}{M} {R^*} T"
},
{
"math_id": 24,
"text": " dP = - \\rho g\\,dz"
},
{
"math_id": 25,
"text": " P "
},
{
"math_id": 26,
"text": " \\frac{dP}{P} = - \\frac{M g\\,dz}{R^*T}"
},
{
"math_id": 27,
"text": " P = P_0 e^{-\\int_{0}^{z}{M g dz/R^*T}}"
},
{
"math_id": 28,
"text": "T = T_0 - L z"
},
{
"math_id": 29,
"text": " P = P_0 \\cdot \\left[\\frac{T}{T_0}\\right]^{\\textstyle \\frac{M g}{R^* L}}"
},
{
"math_id": 30,
"text": " P = P_0 e^{-M g z/R^*T}"
}
] | https://en.wikipedia.org/wiki?curid=861530 |
8615729 | Roy's safety-first criterion | Risk management technique in investing
Roy's safety-first criterion is a risk management technique, devised by A. D. Roy, that allows an investor to select one portfolio rather than another based on the criterion that the probability of the portfolio's return falling below a minimum desired threshold is minimized.
For example, suppose there are two available investment strategies—portfolio A and portfolio B, and suppose the investor's threshold return level (the minimum return that the investor is willing to tolerate) is −1%. Then, the investor would choose the portfolio that would provide the maximum probability of the portfolio return being at least as high as −1%.
Thus, the problem of an investor using Roy's safety criterion can be summarized symbolically as:
formula_0
where Pr("Ri" < ) is the probability of Ri (the actual return of asset i) being less than (the minimum acceptable return).
Normally distributed return and SFRatio.
If the portfolios under consideration have normally distributed returns, Roy's safety-first criterion can be reduced to the maximization of the safety-first ratio, defined by:
formula_1
where formula_2 is the expected return (the mean return) of the portfolio, formula_3 is the standard deviation of the portfolio's return, and "R" is the minimum acceptable return.
Example.
If Portfolio A has an expected return of 10% and standard deviation of 15%, while portfolio B has a mean return of 8% and a standard deviation of 5%, and the investor is willing to invest in a portfolio that maximizes the probability of a return no lower than 0%:
SFRatio(A) = (10 − 0)/15 = 0.67,
SFRatio(B) = (8 − 0)/5 = 1.6
By Roy's safety-first criterion, the investor would choose portfolio B as the correct investment opportunity.
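Under the normality assumption, the shortfall probability itself is Φ(−SFRatio), so maximizing the ratio minimizes that probability. A small sketch reproducing the example (SciPy's normal CDF is assumed available):

```python
from scipy.stats import norm

def sf_ratio(mean, std, threshold):
    """Roy's safety-first ratio: (E[R] - R_min) / sigma."""
    return (mean - threshold) / std

for name, mu, sigma in [("A", 0.10, 0.15), ("B", 0.08, 0.05)]:
    r = sf_ratio(mu, sigma, 0.0)
    print(name, round(r, 2), "P(shortfall) =", round(norm.cdf(-r), 3))
# A 0.67 P(shortfall) = 0.252
# B 1.6  P(shortfall) = 0.055  -> B minimizes the probability of returns below 0%
```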
Similarity to Sharpe ratio.
Under normality,
formula_4
The Sharpe ratio is defined as excess return per unit of risk, or in other words:
formula_5.
The SFRatio has a striking similarity to the Sharpe ratio. Thus for normally distributed returns, Roy's Safety-first criterion—with the minimum acceptable return equal to the risk-free rate—provides the same conclusions about which portfolio to invest in as if we were picking the one with the maximum Sharpe ratio.
Asset Pricing.
Roy’s work is the foundation of asset pricing under loss aversion. It was followed by Lester G. Telser’s proposal of maximizing expected return subject to the constraint that Pr("Ri" < "R") be less than a certain safety level.
See also Chance-constrained portfolio selection.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\underset{i}{\\min}\\Pr(R_{i}<\\underline{R})"
},
{
"math_id": 1,
"text": "\\text{SFRatio}_{i}=\\frac{\\text{E}(R_{i})-\\underline{R}}{\\sqrt{\\text{Var}(R_{i})}}"
},
{
"math_id": 2,
"text": "\\text{E}(R_{i})"
},
{
"math_id": 3,
"text": "\\sqrt{\\text{Var}(R_{i})}"
},
{
"math_id": 4,
"text": "\\text{SFRatio} =\\frac{(\\text{Expected Return}) - (\\text{Minimum Return})}{\\text{standard deviation of Return}}."
},
{
"math_id": 5,
"text": "\\text{Sharpe ratio} =\\frac{(\\text{Expected Return}) - (\\text{Risk-Free Return})}{\\text{standard deviation of Return}}"
}
] | https://en.wikipedia.org/wiki?curid=8615729 |
8620449 | Leray–Hirsch theorem | Relates the homology of a fiber bundle with the homologies of its base and fiber
In mathematics, the Leray–Hirsch theorem is a basic result on the algebraic topology of fiber bundles. It is named after Jean Leray and Guy Hirsch, who independently proved it in the late 1940s. It can be thought of as a mild generalization of the Künneth formula, which computes the cohomology of a product space as a tensor product of the cohomologies of the direct factors. It is a very special case of the Leray spectral sequence.
Statement.
Setup.
Let formula_0
be a fibre bundle with fibre formula_1. Assume that for each degree formula_2, the singular cohomology rational vector space
formula_3
is finite-dimensional, and that the inclusion
formula_4
induces a "surjection" in rational cohomology
formula_5.
Consider a "section" of this surjection
formula_6,
by definition, this map satisfies
formula_7.
The Leray–Hirsch isomorphism.
The Leray–Hirsch theorem states that the linear map
formula_8
is an isomorphism of formula_9-modules.
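A standard illustration (chosen for this sketch, not drawn from this article's sources) is the projective bundle formula, written here in LaTeX:

```latex
% Let V -> B be a complex vector bundle of rank r and \pi : P(V) -> B its
% projectivization, a fibre bundle with fibre F = CP^{r-1}.
% Put x = c_1(O_{P(V)}(1)). The classes 1, x, ..., x^{r-1} restrict on each
% fibre to a basis of H^*(CP^{r-1}; Q), so Leray--Hirsch applies and gives
\[
  H^{*}\big(P(V);\mathbb{Q}\big)
  \;\cong\; \bigoplus_{k=0}^{r-1} \pi^{*}H^{*}(B;\mathbb{Q})\smile x^{k},
\]
% i.e. H^*(P(V)) is a free H^*(B)-module with basis 1, x, ..., x^{r-1}.
```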
Statement in coordinates.
In other words, if for every formula_2, there exist classes
formula_10
that restrict, on each fiber formula_1, to a basis of the cohomology in degree formula_2, the map given below is then an isomorphism of formula_9-modules.
formula_11
where formula_12 is a basis for formula_9 and thus, induces a basis formula_13 for formula_14 | [
{
"math_id": 0,
"text": "\\pi\\colon E\\longrightarrow B"
},
{
"math_id": 1,
"text": "F"
},
{
"math_id": 2,
"text": "p"
},
{
"math_id": 3,
"text": "H^p(F) = H^p(F; \\mathbb{Q})"
},
{
"math_id": 4,
"text": "\\iota\\colon F \\longrightarrow E"
},
{
"math_id": 5,
"text": "\\iota^* \\colon H^*(E) \\longrightarrow H^*(F)"
},
{
"math_id": 6,
"text": " s\\colon H^*(F) \\longrightarrow H^*(E)"
},
{
"math_id": 7,
"text": "\\iota^* \\circ s = \\mathrm {Id}"
},
{
"math_id": 8,
"text": "\\begin{array}{ccc}\nH^* (F)\\otimes H^*(B) & \\longrightarrow & H^* (E) \\\\\n\\alpha \\otimes \\beta & \\longmapsto & s (\\alpha)\\smallsmile \\pi^*(\\beta) \n\\end{array}"
},
{
"math_id": 9,
"text": "H^*(B)"
},
{
"math_id": 10,
"text": "c_{1,p},\\ldots,c_{m_p,p} \\in H^p(E)"
},
{
"math_id": 11,
"text": "\\begin{array}{ccc}\nH^*(F)\\otimes H^*(B) & \\longrightarrow & H^*(E) \\\\\n\\sum_{i,j,k}a_{i,j,k}\\iota^*(c_{i,j})\\otimes b_k & \\longmapsto & \\sum_{i,j,k}a_{i,j,k}c_{i,j}\\wedge\\pi^*(b_k)\n\\end{array}"
},
{
"math_id": 12,
"text": "\\{b_k\\}"
},
{
"math_id": 13,
"text": "\\{\\iota^*(c_{i,j})\\otimes b_k\\}"
},
{
"math_id": 14,
"text": "H^*(F)\\otimes H^*(B)."
}
] | https://en.wikipedia.org/wiki?curid=8620449 |
862140 | Thomas Stevenson | Scottish civil engineer, lighthouse designer and meteorologist (1818–1887)
Thomas Stevenson (22 July 1818 – 8 May 1887) was a pioneering Scottish civil engineer, lighthouse designer and meteorologist, who designed over thirty lighthouses in and around Scotland, as well as the Stevenson screen used in meteorology. His designs, celebrated as ground breaking, ushered in a new era of lighthouse creation.
He served as president of the Royal Scottish Society of Arts (1859–60), as president of the Royal Society of Edinburgh (1884–86), and was a co-founder of the Scottish Meteorological Society.
He was the father of writer Robert Louis Stevenson.
Life and career.
He was born at 2 Baxters Place in Edinburgh, on 22 July 1818, the youngest son of engineer Robert Stevenson, and his wife (and step-sister) Jean Smith. He was educated at the Royal High School in Edinburgh.
Thomas Stevenson was a devout and regular attendee at St. Stephen's Church in Stockbridge, at the north end of St Vincent Street, Edinburgh.
He lived with his family at Baxters Place until his marriage in 1848, when he took a house at 8 Howard Place. By 1855 he had moved to 1 Inverleith Terrace. From at least 1860 he lived at 17 Heriot Row, a large Georgian terraced townhouse in Edinburgh's New Town.
In 1864, he published "The design and construction of harbours: a treatise on maritime engineering". The book was based on an article he had originally written for the Encyclopædia Britannica, and covered the principles and practices involved in harbour design and construction. The work discussed the geological and physical features affecting harbour design, the generation and impact of waves, along with construction materials and masonry types for quay walls. The book also explored the efficacy of tides and fresh water in maintaining outfalls. A second edition of the book was published in 1874.
In 1869, as a successful experiment into using the newly invented electric light for lighthouses, Stevenson had an underwater cable installed from the eastern part of Granton Harbour, and a light on the end of the Trinity Chain Pier was controlled from half a mile away by an operator on the harbour. He designed the Stevenson screen as a shelter to shield meteorological instruments, and this has been widely adopted.
He died at 17 Heriot Row in Edinburgh on 8 May 1887 and is buried in the Stevenson family vault in New Calton Cemetery. The vault lies midway along the eastern wall.
Stevenson's formula for the prediction of wave heights.
In the course of his work as a lighthouse and harbour engineer, Stevenson had made observations of wave heights at various locations in Scotland over a number of years. In 1852, he published a paper in which he suggested that wave height increases roughly in proportion to the square root of the distance from the windward shore. Stevenson developed this into the simple formula formula_0, in which formula_1 is the wave height in feet and formula_2 is the fetch in miles.
Essential components for wave height prediction, most notably wind speed, are missing from Stevenson's formula. In 1852, mathematical analysis of the theory of water waves, and methods for numerical assessment of factors such as shoaling and surge, were in their infancy. Stevenson's analysis is possibly the first quantitative discussion of wave height as a (square root) function of fetch, and his paper is one of the first quantitative studies of wind speeds in the planetary boundary layer.
Modern analysis of Stevenson's formula indicates that it appears to conservatively estimate wave heights for wind speeds up to around 30 miles per hour, being based on his observations which most likely were taken for fetch lengths under 100 kilometres, without fully developed seas. The breakwater at Wick was exposed to a fetch length of approximately 500 kilometres, and wind speeds far in excess of 30 miles per hour, prior to its eventual destruction.
In 1965, the South African engineer Basil Wrigley Wilson proposed a method which can be used to approximate the significant wave height "H1/3" and period "T1/3" of wind waves generated by a constant wind of speed "U" blowing over a fetch length "F". The formulae below are written for SI units: "H1/3" in metres, "T1/3" in seconds, "U" in metres per second, and "F" in metres.
Wilson's formulae apply when the duration of the wind blowing is sufficiently long; when the wind blows for only a limited time, waves cannot attain the full height and period corresponding to the wind speed and fetch length. Under conditions where the wind blows for a sufficiently long time, for example during a prolonged storm, the wave height and period can be calculated as follows:
formula_3
formula_4
In these formulae, "g" denotes the acceleration due to gravity, which is approximately 9.807 m/s2. The wind speed "U" is measured at an elevation of 10 metres above the sea surface. For conditions approximate to those for the Wick breakwater during a storm (fetch length of 500km, wind speed of around 75mph), the graph below shows that Wilson's method predicts a significant wave height ("H1/3") of around 1.5 times that of Stevenson's.
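A small sketch reproducing this comparison (the mile, foot and mph conversions are assumptions of the sketch):

```python
import math

G = 9.807  # acceleration due to gravity, m/s^2

def stevenson_height_m(fetch_m):
    """Stevenson: H = 1.5 sqrt(F), with H in feet and F in miles, converted to metres."""
    return 1.5 * math.sqrt(fetch_m / 1609.344) * 0.3048

def wilson_height_m(wind_ms, fetch_m):
    """Wilson (1965) significant wave height H_1/3 for a sufficiently long wind duration."""
    gf_u2 = G * fetch_m / wind_ms ** 2
    return 0.30 * (1.0 - (1.0 + 0.004 * gf_u2 ** 0.5) ** -2) * wind_ms ** 2 / G

u = 75 * 0.44704                            # ~75 mph storm wind, in m/s
print(round(stevenson_height_m(500e3), 1))  # ~8.1 m for a 500 km fetch
print(round(wilson_height_m(u, 500e3), 1))  # ~12.9 m, about 1.5x Stevenson's estimate
```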
Nonetheless, whilst Stevenson's formula is highly limited and unsuitable for engineering design application, it was notable for being an early attempt to apply mathematical theory to hydraulic engineering problems, and shows some limited agreement (albeit within a narrow range) with a more advanced formula developed by Ramón Iribarren in 1942. A major flaw in Stevenson's formula is the absence of consideration of wind speed, and comparison with Wilson's formula at 3 different wind speeds (30, 50 and 75 mph) shows only a reasonable level of agreement for 50 mph winds at fetch lengths up to around 100 kilometres.
Stevenson himself noted that the formula was an approximation, and actively encouraged further research into similar problems, imploring young engineers to redouble efforts in the advancement of coastal engineering during an 1885 address to the Institution of Civil Engineers in London. In addition to his work on wave growth, he also undertook research into the phenomenon of wave decay inside harbour basins.
The breakwater at Wick, Caithness.
Stevenson designed and supervised the construction of a breakwater at Wick in 1863, which at the time was the largest herring fishery in Europe. The inner harbour, designed by Thomas Telford, was completed in 1811, followed by the construction of the expanded outer harbour by James Bremner between 1825 and 1834. However, by 1857, the need for increased capacity became evident, leading the British Fishery Society to propose a new breakwater. In 1862 Stevenson, along with his brother David, prepared detailed plans, sections, and specifications for the harbour's extension. This design received support from Sir John Coode and John Hawkshaw. A loan of £62,000 was sanctioned by A. M. Rendel, the engineer for the Public Works Loan Commission.
Construction commenced in April 1863, aiming for a final length of 460 metres. Stevenson's design featured a rubble mound extending to 5.5 metres above the low water mark, following the Crane Rocks. This was capped with block walls and in-filled with rubble, providing a superstructure up to 16 metres wide. The rubble for the mound was sourced from local quarries and transported by steam locomotives. This was then deposited onto the breakwater mound using travelling gantries that ran along the staging, marking a possible first in Scotland for this technique. The seaward wall was constructed with a 6:1 batter. Below the waterline, the blocks were dry-jointed, whereas above the high-water mark, initially Roman and later Portland cement mortar was used.
The breakwater failed progressively as a result of several storms, and by 1870 it had lost one third of its length. It was eventually abandoned in 1877, after further severe storm damage, despite repeated failed attempts at its reconstruction. Stevenson noted, in correspondence with the Institution of Civil Engineers, that a single storm had at one stage removed 1,350 tonnes of material from the breakwater, but he was unable to provide the height of the waves during the event.
Applying present-day techniques to calculate local wave conditions demonstrates that the breakwater as built would not have survived without mobilising additional restraint, or a mechanism to abate wave forces. Stevenson's own wave formula would have predicted offshore wave heights for Wick of around 8 to 10 metres, whereas modern observations show that the North Sea exhibits wave heights of up to two to three times this figure.
Family.
He was brother of the lighthouse engineers Alan and David Stevenson, between 1854 and 1886 he designed many lighthouses, with his brother David, and then with David's son David Alan Stevenson.
He married Margaret Isabella "Maggie" Balfour in 1848, daughter of Rev Lewis Balfour. Their son was the writer Robert Louis Stevenson, who initially caused him much disappointment by failing to follow the engineering interests of his family.
His wife's younger brother, James Melville Balfour (i.e. his brother-in-law), trained under D. & T. Stevenson and then emigrated to New Zealand, where he was first the marine engineer for Otago Province before he was appointed Colonial Marine Engineer.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H = 1.5\\sqrt F"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "gH_{1/3} / U^2 = 0.30 \\left\\{1-\\left[1+0.004 \\left(gF/U^2\\right)^{1/2}\\right]^{-2}\\right\\}"
},
{
"math_id": 4,
"text": "gT_{1/3} / (2\\pi U) = 1.37 \\left\\{1-\\left[1+0.008 \\left(gF/U^2\\right)^{1/3}\\right]^{-5}\\right\\}"
}
] | https://en.wikipedia.org/wiki?curid=862140 |
8624624 | Traditional games in the Philippines | Children's competitions in the Southeast Asian country
Traditional Filipino games or indigenous games in the Philippines ("laro ng lahi") are games that are played across multiple generations, usually using native materials or instruments. In the Philippines, due to limited resources for toys, children usually invent games that do not require anything but players. There are many kinds of Filipino traditional games well-suited for children, and they stand among the cultural and traditional games of the Philippines. Due to the variety of skills used in these games, they serve an important purpose in the physical and mental development of Filipino children. These games are also an important part of Filipino culture.
"Laro ng Lahi" was coined and popularized by the Samahang Makasining (commonly known "Makasining") with the help of the National Commission for Culture and the Arts, Philippine Local Government Units, other organizations and other institutions. Imparting these Filipino games to young Filipinos is one of the organization's main activities. The Makasining also created time-based scoring for patintero, syatong, dama, lusalos and holen butas.
Traditional Philippine games, such as "luksong baka", "patintero", "piko", and "tumbang preso" are played primarily as children's games. The yo-yo, a popular toy in the Philippines, was introduced in its modern form by Pedro Flores with its name coming from the Ilocano language.
<templatestyles src="Template:TOC limit/styles.css" />
Advocates.
Dickie Aguado, executive director of Philippine NGO Magna Kultura Foundation (Arts and Culture), says that traditional Filipino games are "very much alive in the Philippines". In many urban and rural areas, a majority of Filipino children play outdoor street games, as most of them have little access to technology. Games such as patintero, tumbang preso, piko, sipa, turumpo, and many others, are played daily. One of the main reasons why some children stop playing Filipino games is because Western sports (e.g. basketball or volleyball) are featured in local barangays and in schools. With a lack of organized sports activities for Filipino street games, Filipino children may adapt to modernity by abandoning their childhood games.
Games.
Traditional Filipino games are usually played by children of younger age outdoors together with their neighbor and friends. The games have no definite rules nor any strict regulations. Different communities and regions have varying versions of the games that are agreed upon between themselves. Most games and matches have two-team gameplay in which players can divide themselves into a reasonably certain number, usually predetermined by two separate team leaders first playing Jack 'n' poy then selecting a teammate after each match. Another common variation of creating two teams is by 'win-lose' in which each player will pick another person to play Jack 'n' poy with and then grouping the winners and losers. Filipino games number more than thirty-eight. A non-exhaustive list includes:
Baril-barilan.
Players pretend to be in a gun-fight or battle which is usually done with a toy gun, mock-up gun, pellet gun, or anything closely resembling a firearm. Sometimes, children will accompany shooting sounds by saying "Bang, bang, bang" or "Pew, pew, pew". Players who were shot will rarely pretend to be dead by lying on the floor.
Declan ruki.
Declan ruki (lit. "I declare, do it!"): Participants are told to do something by the winner of the previous games, similar to the Western game Simon Says.
Gagamba.
Gagamba (lit. 'Spider')
Hand clapping games.
A hand-clapping game generally involves four people. They are split into pairs, with each pair facing each other. Members from both pairs face the center (the two pairs are perpendicular to each other). Each pair then does a hand clapping "routine" while singing "Bahay Kubo" or "Leron-leron Sinta". In the middle of the song, each pair exchanges "routines" with the other.
The lyrics to Bahay Kubo are:
Bahay Kubo, kahit munti
Ang halaman doon ay sari-sari
Singkamas at talong
Sigarilyas at mani
Sitaw, bataw, patani
Kundol, patola
Upo't kalabasa
At saka meron pa, labanos, mustasa
Sibuyas, kamatis
Bawang at luya
Sa paligid-ligid ay puno ng linga
<templatestyles src="Col-begin/styles.css"/>
A variation on the game is an incorporated action according to the lyrics. An example is "Si Nena", a song about a girl named Nena, starting when she was born. The song progresses with her life story, (i.e. when she grew up, got married, got children, got old, died, and became a ghost). After she died, one player would act like a ghost and catch the other players.
Lyrics:
Si Nena ay bata pa, kaya ang sabi nya ay um um um ah ah (players perform a baby action)
Si Nena ay dalaga na, kaya ang sabi nya ay um um um ah ah (players perform a lady action)
Si Nena ay nanay na, kaya ang sabi nya ay um um um ah ah (players perform a mother action)
Si Nena ay namatay na, kaya ang sabi nya ay um um um ah ah (players perform a dead action)
Si Nena ay mumu na, kaya ang sabi nya ay um um um ah ah (players perform a ghost action)
Nanay tatay.
Another version of the song is:
Nanay, Tatay, gusto ko tinapay
Ate, Kuya, gusto ko kape,
Lahat ng gusto ko ay susundin niyo.
Ang magkamali ay pipingutin ko... (clap 5x)
...and so forth
(lit. "game of rings") is noticeably Spanish in influence, deriving from medieval running at the ring. It involves riding a horse while holding a dagger and "catching" rings hanging from a tree or some other structure using the dagger. In recent years, a bicycle typically replaces the horse.
Jakempoy.
Jakempoy, jak en poy, dyak en poy, or dyakempoy, from the words "Jack 'n' Poy", is the local version of rock-paper-scissors ("bato, papel, at gunting"). Though the spelling reflects American influence, the game is really Japanese in origin ("janken"), with the chant in the Japanese version sounding akin to "hong butt".
The lyrics:
Jakempoy, hale-hale-hoy! (Jack and Poy, hale-hale-hoy!)Sinong matalo s'yang unggoy! (Whoever loses is the monkey!)
Juego de prenda.
Juego de prenda (lit. "game of looking for the missing bird"): Any number of players can play. Players sit in a circle with the leader in the middle. Each player adopts a tree or flower that is given by the leader. The leader recounts the story of a lost bird that was owned by a king. He or she says, "The bird of the king was lost yesterday. Did you find it, ylang-ylang?" The player who adopted the ylang-ylang tree at once answers that he or she has not found it, so the leader continues to ask the other "trees" whether the bird has hidden in them. If a player cannot answer after the third count, he or she is made to deposit an object he or she owns to the leader until the leader has been able to gather multiple possessions from the players.
Piko.
Piko is the Philippine variation of the game hopscotch. The players stand behind the edge of a set of boxes (commonly in the shape of a cross or a little girl), and each throws their cue ball. The first to play is determined by the players' agreement (e.g. nearest to the markings of the moon, wings or chest). Whoever succeeds in throwing the cue ball nearest to the agreed-upon spot plays first; the next nearest is second, and so on. The players must throw their cue ball across all boxes and complete the whole course. A player is out for the round if they stand with both feet in boxes that allow only a single foot, or when a part of their foot touches the edge lines of a box.
Sambunot.
Sambunot is a Philippine game played outdoors by ten to twenty players. The goal of the game is to get the coconut husk out of the circle.
A circle is drawn on the floor, big enough to accommodate the number of players. A coconut husk is placed at the center of the circle. The players position themselves inside the circle. At the signal of "go", players rush to the center to get the coconut husk. Players may steal the coconut husk from another player to be the one to take the husk out of the circle. A player who is successful in getting out of the circle with the coconut husk wins, and the game starts again.
Sipa.
Sipa (lit. "game of kicking"): The object used to play the game is also called "sipa". It is made of a washer with colorful threads, usually plastic straw, attached to it. Alternatively, "sipa" can be played using a "rattan" ball or a lead washer covered in cloth or plastic. The "sipa" is thrown upwards and kept aloft with the player's foot. The player must not allow the "sipa" to touch the ground, hitting it several times with their foot or just above the knee. The player must count the number of times they kick the "sipa", and the one with the most kicks wins the game. "Sipa" was the national sport of the Philippines until 2009.
The game mechanics of sipa is similar to the Western game hackysack. Sipa is also played professionally by Filipino athletes with a woven ball, called sepak takraw, with game rules borrowed from Indonesia.
Sikaran.
Sikaran is a Filipino traditional martial art that involves hand and foot fighting. Sikaran is a general term for kicking. It is also used as the name of the kicking aspects of other Filipino traditional martial arts.
Hari Osias Banaag, originator of the Global Sikaran Federation and a diplomat for the game, was warmly received at the UNESCO Collective Consultation Meeting on the Preservation and Promotion of Traditional Sports and Games (TSG). Banaag is an appointed member of the UNESCO Ad hoc Advisory Committee on Traditional Sports and Games (TSG).
Taguan.
Taguan, or tagu-taguan (lit. "twilight game", "look out, cover yourself!" or "take-cover game!"): Participants usually step on couches, hide under tables, or wrap themselves in curtains. It is similar to hide and seek. What is unique in taguan is that this game is usually played at sunset or at night as a challenge for the "it" to locate those who are hiding under the caves in Laguna and Cavite, which is a popular site for professional taguan players. The "it" needs to sing the following before they start seeking:
"Tagu-taguan, maliwanag ang buwan" (Hide and seek, the moon is bright)"Masarap maglaro sa kadiliman ng buwan" (It is fun to play in the semi-dark night)"'Pag kabilang kong sampu" (When I finish counting up to ten)"Nakatago na kayo" (All of you should already been hidden)"Isa, dalawa, ... tatlo!" (One, two, ... three!)
Another version of the chant goes:"Tagu-taguan, maliwanag ang buwan" (Hide and seek, the moon is bright)"Wala sa likod, wala sa harap" (Nobody in front, nobody behind)"'Pag kabilang kong sampu" (When I finish counting up to ten)"Nakatago na kayo" (All of you should already been hidden)"Isa, dalawa, ... tatlo!" (One, two, ... three!)
Another version of the chant goes:"Tagu-taguan, maliwanag ang buwan" (Hide and seek, the moon is bright)"Tayo maglaro ng tagutaguan" (let's play hide and seek)"isa, dalawa, ...umalis kana sa puwestohan mo" (one, two, ... leave that place)
Stick games.
Bati-cobra.
Bati-cobra is a hitting and catching game. This game is played outdoors only by two or more players.
To play this game, two bamboo sticks (one long, one short) are required. A player acts as the batter and stands opposite the other players at a distance. The batter holds the long bamboo stick with one hand and tosses the short one with the other hand. The batter then strikes the shorter stick with the longer one. The other players attempt to catch the flying shorter stick. Whoever catches it gets the turn to be the next batter. If nobody catches the stick, any player can pick it up. The batter then lays the longer stick on the ground. The holder of the shorter stick throws it, attempting to hit the longer stick on the ground. If the longer stick is hit, the thrower becomes the next batter. If the player with the shorter stick fails to hit the longer one, the same batter continues.
Pityaw/Pikyaw.
Pityaw (pikyaw) is popularly known as syatong or syato in Tagalog and Ilocano, and as pitiw, chato, chatong, or shatung in Bisaya. The game comes from rural areas, where it was traditionally played by farmers.
Tsato / syato.
Tsato (lit. "stick game", "better be good at it"): Two players, one long flat stick and one short flat piece of wood (usually a piece cut from the flat stick).
Player A becomes the hitter and Player B the catcher. The game is played outside on the ground where one digs a small square hole (which is slanted), in which they insert the small wood so that it sticks out.
Player A strikes the end of the wood with the stick so that it flips into the air, high enough to be hit again with the stick and driven forward.
The farther the wood gets hit, the more points one gets (usually counted by the number of stick lengths).
Player A may try to add a multiplier to their score: by hitting the wood upwards twice in one turn before striking it forward, the points are counted by the number of wood lengths instead.
Player B, on the other hand, has to anticipate and catch the small piece of wood to nullify the points and take his turn "or" waits for Player A to miss the wood.
Sometimes the losing player is punished. The penalty is hopping on one foot from a designated spot marked by the winning player. This is done again by hitting the wood with the stick in midair as far away as possible. The spot where it lands is where the losing player starts until he reaches the hole.
Throwing games.
Holen.
Derived from the phrase "hole in", players hold the ball or marble, called "holen", in their hand and throw it to hit another player's ball out of the playing area. Holen is the Philippine variation of the game of marbles played in the United States. It is played in a more precise way by tucking the marble with the player's middle finger, with the thumb under the marble and the fourth finger used to stabilize it. Players aim at grouped marbles inside a circle and flick the marble from their fingers. Anything they hit out of the circle is theirs. Whoever obtains the most marbles wins the game. Players ("manlalaro") can also win the game by eliminating their opponents, hitting another player's marble.
Another version of this game requires three holes in the ground, lined up and separated by some distance. Each player tries to complete a circuit, travelling to all the holes and back in order. Players decide on the starting line and the distance between holes. The first to complete the circuit wins. Players can knock another player's "holen" (marble) away using their own marble. Generally the distance between holes allows for several shots to arrive at the next hole, and each player shoots from where their prior shot landed. A variant requires players to bring their "holen" back past the starting line to finish.
Kalahoyo.
Kalahoyo (lit. "hole-in") is an outdoor game played by two to ten players. Accurate targeting is the critical skill, because the objective is to hit the "anak" (small stones or objects) with the use of the "pamato" (big, flat stone), trying to send it to the hole.
A small hole is dug in the ground, and a throwing line is drawn opposite the hole. A longer line is drawn between the hole and the throwing line. Each player has a "pamato" and an "anak". All the "anak" are placed on the throwing line, and players try to throw their "pamato" into the hole from the throwing line. The player whose "pamato" lands in the hole or nearest it gets the first throw. Using the "pamato", the first thrower tries to hit the "anak", attempting to send it to the hole. Players take turns hitting their "anak" until one of them knocks it into the hole. The game goes on until only one "anak" is left outside the hole. Players who get their "anak" inside the hole are declared winners, while the "alila" or "muchacho" (loser) is the one whose "anak" is left outside. The "alila" or "muchacho" is "punished" by all the winners as follows:
Winners stand at the throwing line with their "anak" beyond line A-B (longer line between hole and throwing line). The winners hit their "anak" with their "pamato". The muchacho picks up the "pamato" and returns it to the owner. The winners repeat throwing as the muchacho keeps on picking up and returning the "pamato" as punishment. Winners who fail to hit their respective "anak" stop throwing. The objective is to tire the loser as punishment. When all are through, the game starts again.
Siklot.
Siklot is a game of throwing stones similar to knucklebones. "Siklot" means "to flick". It uses a large number of small stones that are tossed in the air and then caught on the back of the hand. The stones that remain on the hand are collected by the player and are known as "biik" ("piglets") or "baboy" ("pig"). The player with the most "biik" plays the second stage first. The second stage involves the stones that fall on the ground. These are flicked into each other and collected if they hit each other. This is done until the player fails to hit a stone, then the next player does the same thing with the remaining stones, and so on. "Siklot" is also the name of a traditional game of pick up sticks among the Lumad people of Mindanao.
Sintak.
Sintak is another game that is similar to modern knucklebones, but is indigenous in origin. It is also called "kuru" or "balinsay", among other names. Instead of a bouncing ball, it uses a larger stone called "ina-ina" ("mother") that the player tosses up into the air and must catch before it hits the ground. During the throw, the player gathers smaller stones (also seeds or cowries) called "anak" ("children"). All of these actions are done with one hand. The game has multiple stages known by different names, each ranking up in difficulty and mechanics. The first stage picks up the smaller stones by ones, twos, threes, and so on. Other stages include "kuhit-kuhit", "agad-silid", "hulog-bumbong", "sibara", "laglag-bunga", and "lukob". For example, in "kuhit-kuhit" the player must touch a forefinger on the ground at each throw while also collecting the stones. The last stage of the game is known as "pipi", where the losing player is flicked on the knuckles by the other player. A variant of the game just throws the collected pebbles (more than one at a time in later stages) without an "ina-ina" stone.
Slipper Game (also known as "Slipper Box").
This game is popular among kids and teenagers in the Philippines, especially in the Visayas. It is an outdoor team game played by two groups. There is no limit to the number of participants, but the teams should have equal numbers of members; if they do not, the team with more members plays first. Otherwise, the team to play first is usually decided by jack en poy. The rules of the game are simple and vary heavily according to the agreement of the participants. Players draw a line as the boundary for the game, and slippers are used as the primary object. The team playing first stacks the slippers (commonly 2-3 levels) and throws them high, marking the start of the game. The first players take turns entering the boundary, their goal being to dodge the slippers thrown at them by their opponents. If they are hit, they are out, but they can be redeemed if their teammates gather the slippers thrown at the start of the game and throw them again. Players are also eliminated if they cross the boundary. If the opposing team hits all the players, it wins, and the teams exchange roles.
Maneuvering games.
Chinese garter.
Two people hold the ends of a stretched garter horizontally while the others attempt to cross over it.
Luksong tinik.
Luksong tinik (lit. "jump over the thorns of a plant"): two players serve as the base of the "tinik" (thorn) by putting their right or left feet and hands together (soles touching, gradually building up the "tinik"). A starting point is set by all the players, giving enough runway to achieve a higher jump so as not to hit the "tinik". Players of one team jump over the "tinik", followed by the other team. A player who hits either the hands or feet of the base players' "tinik" is penalized.
Luksong-baka.
Luksong-baka (lit. "jump over the cow") is a popular variation of luksong-tinik. One player crouches while the other players jump over them. The crouching player gradually stands up as the game progresses, making it harder for the others to jump over them. A player becomes the "it" when they touch the "baka" as they jump. The game repeats until the players decide to stop, usually once they get tired. It is the Filipino version of leapfrog.
Palosebo.
Palosebo (lit. "greased bamboo pole climbing"): This game involves a greased bamboo pole that players attempt to climb. These games are usually played during town fiestas, particularly in the provinces. The objective of the participants is to be the first person to reach the prize—a small bag—located at the top of the bamboo pole. The small bag usually contains money or toys.
Ten-twenty.
A game involving two pairs, one of which uses a stretched length of garter. One pair faces each other from a distance, with the garter stretched around them in such a way that a pair of parallel lengths of garter is between them. The members of the other pair then begin a jumping "routine" over the garters while singing a song ("ten, twenty, thirty," and so on until one hundred). Each level begins with the garters at ankle height and progresses to higher positions, with the players jumping nimbly over the garters while doing their routines.
Tinikling.
A game variant of the tinikling dance, with the same goal—for the players to dance nimbly over the clapping bamboo "maw" without having their ankles caught.
Once one of the players' ankles gets caught, they replace the players who hold the bamboo. The game will continue until the players decide to stop.
Tiyakad.
Kadang-kadang, or karang (in Bisaya), and tiyakad (in Tagalog), means "bamboo stilts game" in English. This racing game originated in Cebu. It was introduced as a team game during the Laro ng Lahi (Games of the Races), a traditional sports event initiated by the then Bureau of Physical Education and School Sports (BPESS). The game was popular long before its inclusion in the Laro ng Lahi: elders claim they used to play it when they were younger, walking on kadang for fun, without rules, especially when they were done with household chores. Balance and concentration are the two most important skills a player must possess, but teamwork is also necessary to successfully bring the game to the finish line.
Guessing games.
Bulong-Pari.
Bulong-Pari (lit. "whisper it to the priest") is composed of two teams and an "it", or "priest". The leader of team A goes to the priest and whispers the name of one of the players of team B. They then return to their place, and the priest calls out, "Lapit!" ("Approach!"). One of the players of team B approaches the priest, and if it happens to be the one whom the leader of team A named, the priest says "Boom!" or "Bung!" The player then falls out of line and stays near the priest as a prisoner.
Patay patayan (Guess the killer).
Patay patayan, also referred to as "killer eye", involves at least four players. Players cut pieces of paper according to the number of players. One player is the judge, at least one is the killer, and at least one is a police officer; the others play regular players. The objective is for the police to find and catch the killer by saying "I caught you" and naming the killer before the killer winks at the judge. The killer kills people by winking at the person they want to kill. If they kill a regular player, the player says "I'm dead!" If they kill the judge without being caught, the judge says "I'm dead, but I'm the judge", and the game repeats.
Pitik-bulag.
This game involves two players. One covers his eyes with a hand while the other flicks a finger ("pitik") over the hand covering the eyes. The person with the covered eyes gives a number with his hand at the same time the other does. If their numbers are the same, then they exchange roles in the game. Another version of this is that the one with eyes covered ("bulag") will try to guess the finger that the other person used to flick them.
Takip-silim.
Takip-silim: One player is called the taya (the "it"). The "it" is blindfolded and counts to 10 while the other players hide. The "it" needs to find at least one player and guess who it is. If the guess is correct, the player becomes the new "it".
Games involving simple objects.
Teks.
Teks or teks game cards (lit. "texted game cards"): Filipino children collect playing cards which contain comic strips and text placed within speech balloons. The game is played by tossing the cards in the air until they hit the ground. The cards are flipped upwards through the air using the thumb and the forefinger which creates a snapping sound as the nail of the thumb hits the surface of the card. The winner or gainer collects the other players' card depending on how the cards are laid out upon hitting or landing on the ground.
As a children's game, the bets are just for teks, or playing cards as well. Adults can also play for money.
A variant of the game, pogs, uses circular cards instead of rectangular ones.
Trompo.
A trompo is a top that is spun by winding a length of string around its body and launching it so that it lands spinning on its point. If the string is attached to a stick, the rotation can be maintained by whipping the side of the top's body. The string may also be wound around the point while the trompo is spinning in order to control its position, or even to lift the spinning top onto another surface.
Variations of tag.
Agawan base.
(lit. "catch and own a corner"): the "it" or tagger stands in the middle of the ground. Players in the corners try to exchange places by running from one base to another. The "it" tries to secure a corner or base by rushing to any of those when it is vacant. This is called ""agawang" sulok" in some variants, and "bilaran" in others.
Sekyu base or Moro Moro.
Sekyu base is a version of Agawan Base without score limits. If a team scores five points, the game continues. The players can hide near the enemy base and ambush them.
Araw-lilim.
Araw-lilim (lit. "sun and shade"): The "it" or tagger tries to tag or touch any of the players in the light.
Bahay-bahayan.
Players make imaginary houses using curtains, spare wood, ropes, or other items, deciding among themselves what each object represents and what role each player takes in the pretend household.
Iring-iring.
Iring-iring (lit. "go round and round until the hanky drops"): After the "it" is determined, they go around the circle and drop the handkerchief behind another player. When that player notices the handkerchief behind them, they pick it up and chase the "it" around the circle. The "it" has to reach the vacant spot left by the player before being tagged; otherwise, the "it" takes the handkerchief once more.
Kapitang bakod.
Kapitang bakod (lit. "touch the post, or you're it!" or "hold on to the fence"): When the "it" or tagger is chosen, the other players run from place to place and save themselves from being tagged by holding on to a fence, a post, or any object made of wood or bamboo.
Langit-lupa.
Langit-lupa (lit. "heaven and earth") one "it" chases after players who are allowed to run on level ground ("lupa") and clamber over objects ("langit"). The "it" may tag players who remain on the ground, but not those who are standing in the "langit" (heaven). The tagged player then becomes "it" and the game continues.
In choosing the first "it", a chant is usually sung, while pointing at the players one by one:
"Langit, lupa impyerno, im – im – impyerno" (Heaven, earth, hell, he-he-hell)"Sak-sak puso tulo ang dugo" (Stabbed heart, dripping in blood)"Patay, buhay, Umalis ka na sa pwesto mong mabaho !" (Dead, alive, get out of your stinky spot ! )
Another version of the song goes:
"Langit, lupa, impyerno, im – im – impyerno" (Heaven, earth, hell, he-he-hell)"Max Alvarado, barado ang ilong" (Max Alvarado has a stuffy nose!)"Tony Ferrer, mahilig sa baril" (Tony Ferrer is fond of guns!)"Vivian Velez, mahilig sa alis!" (Vivian Velez is fond of... Get out!)
When the song stops and a player is pointed at, they are "out" and the last person left is the "taya" or "it".
To prevent cheating, some players require anyone standing on a "langit" to count to three, four, or five and then step down; the count stops only if another player is standing on the same "langit".
Lagundi.
A game of Indian influence. It is basically a game of tag, except that the players divide into two teams. Members of the "it" team hold a ball, passing it among themselves and trying to touch members of the other team on the head with it.
Lawin at sisiw.
(lit. "Hawk and Chicken"):
This game is played by ten or more players. It can be played indoors or outdoors.
One player is chosen as the "hawk" and another as the "hen". The other players are the "chickens". The chickens stand one behind the other, each holding the waist of the one in front. The hen stands in front of the file of chickens.
The hawk "buys" a chicken from the hen. The hawk takes the chicken and asks it to hunt for food, and go to sleep. While the hawk is asleep, the chicken returns to the hen. The hawk wakes up and tries to get back the chicken he bought while the hen and other chickens prevent the hawk from catching the chicken. If the hawk succeeds, the chicken is taken and punished. If the hawk fails to catch the chicken, the hawk will try to buy the chicken.
Patintero.
Patintero, also called harangang taga or tubigan (lit. "try to cross my line without letting me touch or catch you"): Two teams play: an attack team and a defense team; with five players for each team. The attack team must try to run along the perpendicular lines from the home-base to the back-end, and return without being tagged by the defense players.
Members of the defense team are called "it", and must stand on the water lines (also "fire lines") with both feet each time they try to tag attacking players. The player at the center line is called the "patotot". The perpendicular line in the middle allows the "it" assigned to it to cross every parallel line it intersects, thus increasing the chances of trapping the runners. If even one member of a group is tagged, the whole group becomes the "it".
Patintero is one of the most popular Filipino street games. It is a similar game to the Korean game squid and the Indian game atya-patya.
In 1997, Samahang Makasining (Artist Club), Inc. created time-based scoring similar to that of basketball and modified the game as follows: each team is composed of six people (four players and two substitutes), and the attacking team is given 20 minutes to cross the perpendicular lines from the home-base to the back-end and return. Each team can play for three games. The court consists of four horizontal water lines (also "fire lines"), two vertical lines (the left and right outside lines), and one perpendicular line through the middle of the vertical lines; each box measures 6 meters by 6 meters.
The team can win based on the highest score of one player who reaches the farthest distance. Scoring is two points per line for each of the four lines going away from home-base and three points per line for each of the four lines coming back toward home-base, plus five additional points for reaching home-base.
Someone who makes it all the way across and back: (2 points × 4 lines) + (3 points × 4 lines) + 5 points home-base = 25 total points.
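The tally above reduces to a one-line computation. Below is a minimal sketch in Python of the 1997 scoring rule; the function name and signature are illustrative, not part of any official ruleset.

```python
def patintero_score(lines_out: int, lines_back: int, reached_home: bool) -> int:
    """Score one runner: 2 points per line crossed going out, 3 points per
    line crossed coming back, plus a 5-point bonus for reaching home-base."""
    score = 2 * lines_out + 3 * lines_back
    if reached_home:
        score += 5
    return score

# A runner who crosses all four lines out and back scores the full 25 points.
assert patintero_score(4, 4, True) == 25
```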
Presohan.
See tumbang preso and patay patayan
Sawsaw-suka.
Sawsaw-suka (lit. "dip it into vinegar"): This is sometimes done to determine who is "it" in the game of tag. One player holds one or both hands open while the other players tap that person's palms repeatedly with their index fingers, chanting "sawsaw suka mahuli taya!" When the last syllable of "taya" is shouted, the person with open hands quickly closes them, and whoever is caught becomes the next "it". After someone is caught, everyone else runs while the new "it" chases them. If no chasing game follows, the process simply repeats around the players. The tapping of the palm emulates dipping food into vinegar, hence the name "sawsaw suka", which means "dip into vinegar". Another variation of the chant goes "sawsaw suka, mapaso taya", or in English, "dip into the vinegar, whoever gets burned is it."
Tumbang preso.
"Tumbang preso" or "presohan" in Luzon, and "tumba-patis" or "tumba-lata" in most Visayan regions (in English "Hit The Can"). This is one of the most popular Filipino street games, played by children using their slippers to hit a can at the center.
Like other Filipino games, players (at least three here) take the following roles: one as the "taya" (it), who is responsible for guarding the "lata" (can), and at least two others as strikers. The game is played by having the strikers use a "pamato" (their own slipper) to knock down the can guarded by the "taya".
The "taya" is obligated to catch another player to give them their responsibility of chasing the can. However, the "taya" is privileged to do so only if the player is holding a "pamato" while approaching when the can is in its upright position. Therefore, while running after another player, the "taya" must keep an eye on the can's position. The other spend their time kicking the can and running away from the "taya", keeping themselves safe with their "pamato," since making the can fall down helps another player recover. Having everyone's turns end can become the climax of the game that leads them to panic, since the "taya" has all their rights to capture whether or not the players have their "pamato".
The mechanics of the game give each side privileges. The "taya" starts on one side of the road, with the can centered on the median; on the other side, a line limits the players when throwing. Players can break the rules and be punished by becoming the "taya" in several ways: stepping on or beyond the boundary line when throwing, kicking the can, striking the can before reaching the line, or touching the can.
Regional variations, especially those in Visayan regions and Southern Luzon, add complexity to the part of the "taya". The "taya" has to make the can stand upright together with their own "pamato" on top of it. The idea is that even when the "taya" has already stood the can up, when the slipper falls from the can, they are not allowed to catch any player until the "taya" puts it back.
Ubusan lahi.
Ubusan lahi (lit. "clannicide"): One player tries to conquer the members of a group, as in claiming the members of another's clan. Out of five to ten players, a tagged player from the main group automatically becomes an ally of the tagger. The more players there are, the more chaotic the game becomes. The game starts with only one "it", who tries to find and tag the other players. Once a player is tagged, they help the "it" tag the others until no participant is left. The game is also known as "bansai" or "lipunan".
Board games.
Dama.
Dama is a game with leaping captures played in the Philippines. It is similar to draughts or checkers. In it, a kinged piece may capture by the flying leap in one direction.
The board consists of a grid of points with four points in each row, the rows alternating position so that each row has an end point on either the left or the right edge. Points are connected with diagonal lines.
Twelve pieces per player are positioned on the first three rows closest to the player. Players take turns moving a piece of their own forward to an empty adjacent spot along the lines. A player may capture an opponent's piece by hopping over it to an empty spot on the opposite side of it along the lines.
Multiple captures are allowed if possible. When a player's piece reaches the edge of the board opposite from where it started, it becomes a king. Kings may move any distance diagonally forward or backward and may capture any number of the opponent's pieces they leap over, though a king cannot change direction within a single turn. The first player to capture all of the opponent's pieces wins.
Damath/Scidama.
A variation of dama played in schools and institutions and briefly introduced by DepEd. It uses an 8x8 board, similar to a chessboard, with each landing square labeled with a math sign, formula_0. Each player's pieces are labelled with different numbers of up to three digits. When a capture is made, the operation on the landing square is applied to the numbers on the two pieces involved, and the result is added to the capturing player's score. After computation and tallying, the player with the greater total value wins the match.
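As a rough sketch of how such a capture could be scored (the operand order and the sign symbols here are assumptions; actual Damath rulesets fix these details), consider:

```python
import operator

# Hypothetical mapping from the signs printed on landing squares to operations.
OPS = {"+": operator.add, "-": operator.sub,
       "x": operator.mul, "/": operator.truediv}

def capture_value(capturer: float, captured: float, square_sign: str) -> float:
    """Value of one capture: apply the landing square's operation to the
    numbers on the capturing and captured pieces (capturer first, assumed)."""
    return OPS[square_sign](capturer, captured)

# Example: a piece numbered 8 capturing a piece numbered 2 on a "x" square
# contributes 16 points to the capturing player's running total.
print(capture_value(8, 2, "x"))  # 16
```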
Lusalos.
Generally equivalent to the game nine men's morris.
Sungka.
"Sungka" is a Philippine "mancala" game popular in the diaspora; e.g. in Macau, Taiwan, Germany, and the United States. Like the closely related "congkak", it is traditionally a women's game. "Sungka" is used by fortunetellers and prophets, also called "bailan" or "maghuhula", for divinatory purposes. Older people hope to find out through the game, with their help, whether the life of a youth is favorable at a certain day, whether they will marry one day, and, in case they will, when this will be. The game is usually played outdoors because of a Filipino superstition about a house burning down if it is played indoors. In the Anay district in Panay, the loser is said to be "patay" ("dead"). The belief is that he will have a death in his family or that his house will burn down.
The game is played with a carved wooden board (e.g. of mahogany) with seven small dips or holes on each side, called "bahay" (houses), two bigger holes at either end, and shells or stones. The premise of the game is to collect more shells than the opponent. The large store for the captured stones at each end of the board is known as the "ulo" (head) or "inay" (mother); a player owns the store to their left. Each small pit initially contains seven "sigay" (counters), usually cowrie shells. On their turn, a player empties one of their small pits and distributes its contents in a clockwise direction, one by one, into the following pits, including their own store but skipping the opponent's store.
According to the National Historical Commission of the Philippines the game is also played counterclockwise with each player owning the store to his right.
If the last stone falls into a non-empty small pit, its contents are lifted and distributed in another lap. If the last stone is dropped into the player's own store, the player gets a bonus move. If the last stone is dropped into an empty pit, the move ends: it is "patay" (dead). If the move ends with the last stone dropping into one of the player's own empty small pits, the player "katak" or "taktak" (literally "exhausts", i.e. captures) the stones in the opponent's pit directly across the board together with the player's own stone. The captured stones are "subi" (deposited) in the player's store. However, if the opponent's pit is empty, nothing is captured.
The first move is played simultaneously. After that players alternate. The first player to finish the first move may start the second move. However, in face-to-face play one player might start shortly after his opponent so that he could choose a response which would give him an advantage. No rule prevents such a tactic. So, in fact, the decision-making may be non-simultaneous.
Players must move when they can. If one cannot, a player must pass until he can move again.
The game ends when no stones are left in the small pits.
The player who captures the most stones wins the game.
Often the game is played in rounds. Pits that a player cannot refill from their captures are "sunog" (closed; literally "burnt"), and the leftover stones are put in the player's store. This continues until a player is unable to fill even a single hole.
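The sowing rules above amount to a small algorithm. The following Python sketch implements one simplified sungka move under an assumed board layout and the clockwise convention described above; the hole indexing and return codes are illustrative only.

```python
def sungka_move(board, player, start):
    """One simplified sungka move. Assumed layout: indices 0-6 are player 0's
    small pits, 7 is player 0's store, 8-14 are player 1's small pits, and
    15 is player 1's store. Returns "again", "capture", or "patay"."""
    own_store = 7 if player == 0 else 15
    opp_store = 15 if player == 0 else 7
    pos = start if player == 0 else start + 8
    in_hand, board[pos] = board[pos], 0
    while True:
        while in_hand:                    # sow one stone per pit...
            pos = (pos + 1) % 16
            if pos == opp_store:          # ...but always skip the opponent's store
                continue
            board[pos] += 1
            in_hand -= 1
        if pos == own_store:
            return "again"                # last stone in own store: bonus move
        if board[pos] > 1:                # landed in a non-empty pit:
            in_hand, board[pos] = board[pos], 0   # lift it and sow another lap
            continue
        own_side = range(0, 7) if player == 0 else range(8, 15)
        if pos in own_side:               # lands alone in an own empty pit:
            board[own_store] += board[14 - pos] + 1   # capture across + own stone
            board[14 - pos] = board[pos] = 0
            return "capture"
        return "patay"                    # dead: lands in an opponent's empty pit

# Seven shells in each of the fourteen small pits, empty stores.
board = [7] * 7 + [0] + [7] * 7 + [0]
print(sungka_move(board, 0, 2), board)
```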
| [
{
"math_id": 0,
"text": "+-\\times\\div"
}
] | https://en.wikipedia.org/wiki?curid=8624624 |
8624952 | Current differencing transconductance amplifier | Current differencing transconductance amplifier (CDTA) is a new active circuit element.
Properties.
The CDTA is free from parasitic input capacitances and can operate in a wide frequency range due to current-mode operation. Some voltage- and current-mode applications using this element have already been reported in the literature, particularly in the area of frequency filtering: general higher-order filters, biquad circuits, all-pass sections, gyrators, simulation of grounded and floating inductances, and LCR ladder structures. Other studies propose CDTA-based high-frequency oscillators. Nonlinear CDTA applications are also expected, particularly precise rectifiers, current-mode Schmitt triggers for measuring purposes and signal generation, current-mode multipliers, etc.
Basic operation.
The CDTA element, with its schematic symbol in Fig. 1, has a pair of low-impedance current inputs p and n, and an auxiliary terminal z whose outgoing current is the difference of the input currents. The output terminal currents are equal in magnitude but flow in opposite directions, and their magnitude is the product of the transconductance (formula_0) and the voltage at the z terminal. This active element can therefore be characterized by the following equations: formula_1, formula_2, formula_3 and formula_4,
where formula_5, with formula_6 the external impedance connected to the z terminal of the CDTA. The CDTA can be thought of as a combination of a current differencing unit followed by a dual-output operational transconductance amplifier (DO-OTA). Ideally, the OTA is assumed to be an ideal voltage-controlled current source and can be described by formula_7, where Ix is the output current and formula_8 and formula_9 denote the non-inverting and inverting input voltages of the OTA, respectively. Note that gm is a function of the bias current. When this element is used in a CDTA, one of its input terminals is grounded (e.g., formula_10). With dual output availability, the condition formula_11 is assumed (a small numerical sketch of these terminal relations follows the formula list below). | [
{
"math_id": 0,
"text": "gm\\,"
},
{
"math_id": 1,
"text": "Vp=Vn=0\\,"
},
{
"math_id": 2,
"text": "Iz=Ip-In\\,"
},
{
"math_id": 3,
"text": "Ix+=gm.Vz\\,"
},
{
"math_id": 4,
"text": "Ix-=-gm.Vz\\,"
},
{
"math_id": 5,
"text": "Vz-=Iz.Zz\\,"
},
{
"math_id": 6,
"text": "Zz\\,"
},
{
"math_id": 7,
"text": "Ix=gm.(V+ - V-)\\,"
},
{
"math_id": 8,
"text": "V+\\,"
},
{
"math_id": 9,
"text": "V-\\,"
},
{
"math_id": 10,
"text": "V-=0\\;V\\,"
},
{
"math_id": 11,
"text": "Ix+=-Ix-\\,"
}
] | https://en.wikipedia.org/wiki?curid=8624952 |
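To make the terminal relations above concrete, here is a minimal numerical sketch in Python of an ideal CDTA; the function name and the example values are illustrative, and real devices add parasitics and finite impedances.

```python
def ideal_cdta(i_p, i_n, z_z, gm):
    """Ideal CDTA terminal relations: Vp = Vn = 0, Iz = Ip - In,
    Vz = Iz * Zz for an external impedance Zz, and Ix+/- = +/- gm * Vz."""
    v_p = v_n = 0.0          # low-impedance current inputs
    i_z = i_p - i_n          # current differencing stage
    v_z = i_z * z_z          # voltage developed across the external impedance
    i_x_pos = gm * v_z       # dual-output transconductance stage
    i_x_neg = -gm * v_z
    return {"Vp": v_p, "Vn": v_n, "Iz": i_z, "Ix+": i_x_pos, "Ix-": i_x_neg}

# Example: Ip = 2 mA, In = 0.5 mA, Zz = 1 kOhm, gm = 1 mS gives
# Iz = 1.5 mA, Vz = 1.5 V, and Ix = +/-1.5 mA.
print(ideal_cdta(2e-3, 0.5e-3, 1e3, 1e-3))
```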
8625 | Differential geometry | Branch of mathematics dealing with functions and geometric structures on differentiable manifolds
Differential geometry is a mathematical discipline that studies the geometry of smooth shapes and smooth spaces, otherwise known as smooth manifolds. It uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra. The field has its origins in the study of spherical geometry as far back as antiquity. It also relates to astronomy, the geodesy of the Earth, and later the study of hyperbolic geometry by Lobachevsky. The simplest examples of smooth spaces are the plane and space curves and surfaces in the three-dimensional Euclidean space, and the study of these shapes formed the basis for development of modern differential geometry during the 18th and 19th centuries.
Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. A geometric structure is one which defines some notion of size, distance, shape, volume, or other rigidifying structure. For example, in Riemannian geometry distances and angles are specified, in symplectic geometry volumes may be computed, in conformal geometry only angles are specified, and in gauge theory certain fields are given over the space. Differential geometry is closely related to, and is sometimes taken to include, differential topology, which concerns itself with properties of differentiable manifolds that do not rely on any additional geometric structure (see that article for more discussion on the distinction between the two subjects). Differential geometry is also related to the geometric aspects of the theory of differential equations, otherwise known as geometric analysis.
Differential geometry finds applications throughout mathematics and the natural sciences. Most prominently the language of differential geometry was used by Albert Einstein in his theory of general relativity, and subsequently by physicists in the development of quantum field theory and the standard model of particle physics. Outside of physics, differential geometry finds applications in chemistry, economics, engineering, control theory, computer graphics and computer vision, and recently in machine learning.
History and development.
The history and development of differential geometry as a subject begins at least as far back as classical antiquity. It is intimately linked to the development of geometry more generally, of the notion of space and shape, and of topology, especially the study of manifolds. In this section we focus primarily on the history of the application of infinitesimal methods to geometry, and later to the ideas of tangent spaces, and eventually the development of the modern formalism of the subject in terms of tensors and tensor fields.
Classical antiquity until the Renaissance (300 BC – 1600 AD).
The study of differential geometry, or at least the study of the geometry of smooth shapes, can be traced back at least to classical antiquity. In particular, much was known about the geometry of the Earth, a spherical geometry, in the time of the ancient Greek mathematicians. Famously, Eratosthenes calculated the circumference of the Earth around 200 BC, and around 150 AD Ptolemy in his "Geography" introduced the stereographic projection for the purposes of mapping the shape of the Earth. Implicitly throughout this time principles that form the foundation of differential geometry and calculus were used in geodesy, although in a much simplified form. Namely, as far back as Euclid's "Elements" it was understood that a straight line could be defined by its property of providing the shortest distance between two points, and applying this same principle to the surface of the Earth leads to the conclusion that great circles, which are only locally similar to straight lines in a flat plane, provide the shortest path between two points on the Earth's surface. Indeed, the measurements of distance along such geodesic paths by Eratosthenes and others can be considered a rudimentary measure of arclength of curves, a concept which did not see a rigorous definition in terms of calculus until the 1600s.
Around this time there were only minimal overt applications of the theory of infinitesimals to the study of geometry, a precursor to the modern calculus-based study of the subject. In Euclid's "Elements" the notion of tangency of a line to a circle is discussed, and Archimedes applied the method of exhaustion to compute the areas of smooth shapes such as the circle, and the volumes of smooth three-dimensional solids such as the sphere, cones, and cylinders.
There was little development in the theory of differential geometry between antiquity and the beginning of the Renaissance. Before the development of calculus by Newton and Leibniz, the most significant development in the understanding of differential geometry came from Gerardus Mercator's development of the Mercator projection as a way of mapping the Earth. Mercator had an understanding of the advantages and pitfalls of his map design, and in particular was aware of the conformal nature of his projection, as well as the difference between "praga", the lines of shortest distance on the Earth, and the "directio", the straight line paths on his map. Mercator noted that the praga were "oblique curvatur" in this projection. This fact reflects the lack of a metric-preserving map of the Earth's surface onto a flat plane, a consequence of the later Theorema Egregium of Gauss.
After calculus (1600–1800).
The first systematic or rigorous treatment of geometry using the theory of infinitesimals and notions from calculus began around the 1600s when calculus was first developed by Gottfried Leibniz and Isaac Newton. At this time, the recent work of René Descartes introducing analytic coordinates to geometry allowed geometric shapes of increasing complexity to be described rigorously. In particular around this time Pierre de Fermat, Newton, and Leibniz began the study of plane curves and the investigation of concepts such as points of inflection and circles of osculation, which aid in the measurement of curvature. Indeed, already in his first paper on the foundations of calculus, Leibniz notes that the infinitesimal condition formula_0 indicates the existence of an inflection point. Shortly after this time the Bernoulli brothers, Jacob and Johann made important early contributions to the use of infinitesimals to study geometry. In lectures by Johann Bernoulli at the time, later collated by L'Hopital into the first textbook on differential calculus, the tangents to plane curves of various types are computed using the condition formula_1, and similarly points of inflection are calculated. At this same time the orthogonality between the osculating circles of a plane curve and the tangent directions is realised, and the first analytical formula for the radius of an osculating circle, essentially the first analytical formula for the notion of curvature, is written down.
In the wake of the development of analytic geometry and plane curves, Alexis Clairaut began the study of space curves at just the age of 16. In his book Clairaut introduced the notion of tangent and subtangent directions to space curves in relation to the directions which lie along a surface on which the space curve lies. Thus Clairaut demonstrated an implicit understanding of the tangent space of a surface and studied this idea using calculus for the first time. Importantly Clairaut introduced the terminology of "curvature" and "double curvature", essentially the notion of principal curvatures later studied by Gauss and others.
Around this same time, Leonhard Euler, originally a student of Johann Bernoulli, provided many significant contributions not just to the development of geometry, but to mathematics more broadly. In regards to differential geometry, Euler studied the notion of a geodesic on a surface deriving the first analytical geodesic equation, and later introduced the first set of intrinsic coordinate systems on a surface, beginning the theory of "intrinsic geometry" upon which modern geometric ideas are based. Around this time Euler's study of mechanics in the "Mechanica" led to the realization that a mass traveling along a surface not under the effect of any force would traverse a geodesic path, an early precursor to the important foundational ideas of Einstein's general relativity, and also to the Euler–Lagrange equations and the first theory of the calculus of variations, which in modern differential geometry underpins many techniques in symplectic geometry and geometric analysis. This theory was used by Lagrange, a co-developer of the calculus of variations, to derive the first differential equation describing a minimal surface in terms of the Euler–Lagrange equation. In 1760 Euler proved a theorem expressing the curvature of a space curve on a surface in terms of the principal curvatures, known as Euler's theorem.
Later in the 1700s, the new French school led by Gaspard Monge began to make contributions to differential geometry. Monge made important contributions to the theory of plane curves, surfaces, and studied surfaces of revolution and envelopes of plane curves and space curves. Several students of Monge made contributions to this same theory, and for example Charles Dupin provided a new interpretation of Euler's theorem in terms of the principal curvatures, which is the modern form of the equation.
Intrinsic geometry and non-Euclidean geometry (1800–1900).
The field of differential geometry became an area of study considered in its own right, distinct from the more broad idea of analytic geometry, in the 1800s, primarily through the foundational work of Carl Friedrich Gauss and Bernhard Riemann, and also in the important contributions of Nikolai Lobachevsky on hyperbolic geometry and non-Euclidean geometry and throughout the same period the development of projective geometry.
Dubbed the single most important work in the history of differential geometry, in 1827 Gauss produced the "Disquisitiones generales circa superficies curvas" detailing the general theory of curved surfaces. In this work and his subsequent papers and unpublished notes on the theory of surfaces, Gauss has been dubbed the inventor of non-Euclidean geometry and the inventor of intrinsic differential geometry. In his fundamental paper Gauss introduced the Gauss map, Gaussian curvature, first and second fundamental forms, proved the Theorema Egregium showing the intrinsic nature of the Gaussian curvature, and studied geodesics, computing the area of a geodesic triangle in various non-Euclidean geometries on surfaces.
At this time Gauss was already of the opinion that the standard paradigm of Euclidean geometry should be discarded, and was in possession of private manuscripts on non-Euclidean geometry which informed his study of geodesic triangles. Around this same time János Bolyai and Lobachevsky independently discovered hyperbolic geometry and thus demonstrated the existence of consistent geometries outside Euclid's paradigm. Concrete models of hyperbolic geometry were produced by Eugenio Beltrami later in the 1860s, and Felix Klein coined the term non-Euclidean geometry in 1871, and through the Erlangen program put Euclidean and non-Euclidean geometries on the same footing. Implicitly, the spherical geometry of the Earth that had been studied since antiquity was a non-Euclidean geometry, an elliptic geometry.
The development of intrinsic differential geometry in the language of Gauss was spurred on by his student, Bernhard Riemann in his Habilitationsschrift, "On the hypotheses which lie at the foundation of geometry". In this work Riemann introduced the notion of a Riemannian metric and the Riemannian curvature tensor for the first time, and began the systematic study of differential geometry in higher dimensions. This intrinsic point of view in terms of the Riemannian metric, denoted by formula_2 by Riemann, was the development of an idea of Gauss's about the linear element formula_3 of a surface. At this time Riemann began to introduce the systematic use of linear algebra and multilinear algebra into the subject, making great use of the theory of quadratic forms in his investigation of metrics and curvature. At this time Riemann did not yet develop the modern notion of a manifold, as even the notion of a topological space had not been encountered, but he did propose that it might be possible to investigate or measure the properties of the metric of spacetime through the analysis of masses within spacetime, linking with the earlier observation of Euler that masses under the effect of no forces would travel along geodesics on surfaces, and predicting Einstein's fundamental observation of the equivalence principle a full 60 years before it appeared in the scientific literature.
In the wake of Riemann's new description, the focus of techniques used to study differential geometry shifted from the ad hoc and extrinsic methods of the study of curves and surfaces to a more systematic approach in terms of tensor calculus and Klein's Erlangen program, and progress increased in the field. The notion of groups of transformations was developed by Sophus Lie and Jean Gaston Darboux, leading to important results in the theory of Lie groups and symplectic geometry. The notion of differential calculus on curved spaces was studied by Elwin Christoffel, who introduced the Christoffel symbols which describe the covariant derivative in 1868, and by others including Eugenio Beltrami who studied many analytic questions on manifolds. In 1899 Luigi Bianchi produced his "Lectures on differential geometry" which studied differential geometry from Riemann's perspective, and a year later Tullio Levi-Civita and Gregorio Ricci-Curbastro produced their textbook systematically developing the theory of absolute differential calculus and tensor calculus. It was in this language that differential geometry was used by Einstein in the development of general relativity and pseudo-Riemannian geometry.
Modern differential geometry (1900–2000).
The subject of modern differential geometry emerged from the early 1900s in response to the foundational contributions of many mathematicians, including importantly the work of Henri Poincaré on the foundations of topology. At the start of the 1900s there was a major movement within mathematics to formalise the foundational aspects of the subject to avoid crises of rigour and accuracy, known as Hilbert's program. As part of this broader movement, the notion of a topological space was distilled by Felix Hausdorff in 1914, and by 1942 there were many different notions of manifold of a combinatorial and differential-geometric nature.
Interest in the subject was also focused by the emergence of Einstein's theory of general relativity and the importance of the Einstein Field equations. Einstein's theory popularised the tensor calculus of Ricci and Levi-Civita and introduced the notation formula_4 for a Riemannian metric, and formula_5 for the Christoffel symbols, both coming from "G" in "Gravitation". Élie Cartan helped reformulate the foundations of the differential geometry of smooth manifolds in terms of exterior calculus and the theory of moving frames, leading in the world of physics to Einstein–Cartan theory.
Following this early development, many mathematicians contributed to the development of the modern theory, including Jean-Louis Koszul who introduced connections on vector bundles, Shiing-Shen Chern who introduced characteristic classes to the subject and began the study of complex manifolds, Sir William Vallance Douglas Hodge and Georges de Rham who expanded understanding of differential forms, Charles Ehresmann who introduced the theory of fibre bundles and Ehresmann connections, and others. Of particular importance was Hermann Weyl who made important contributions to the foundations of general relativity, introduced the Weyl tensor providing insight into conformal geometry, and first defined the notion of a gauge leading to the development of gauge theory in physics and mathematics.
In the middle and late 20th century differential geometry as a subject expanded in scope and developed links to other areas of mathematics and physics. The development of gauge theory and Yang–Mills theory in physics brought bundles and connections into focus, leading to developments in gauge theory. Many analytical results were investigated including the proof of the Atiyah–Singer index theorem. The development of complex geometry was spurred on by parallel results in algebraic geometry, and results in the geometry and global analysis of complex manifolds were proven by Shing-Tung Yau and others. In the latter half of the 20th century new analytic techniques were developed in regards to curvature flows such as the Ricci flow, which culminated in Grigori Perelman's proof of the Poincaré conjecture. During this same period primarily due to the influence of Michael Atiyah, new links between theoretical physics and differential geometry were formed. Techniques from the study of the Yang–Mills equations and gauge theory were used by mathematicians to develop new invariants of smooth manifolds. Physicists such as Edward Witten, the only physicist to be awarded a Fields medal, made new impacts in mathematics by using topological quantum field theory and string theory to make predictions and provide frameworks for new rigorous mathematics, which has resulted for example in the conjectural mirror symmetry and the Seiberg–Witten invariants.
Branches.
Riemannian geometry.
Riemannian geometry studies Riemannian manifolds, smooth manifolds with a "Riemannian metric". This is a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Riemannian geometry generalizes Euclidean geometry to spaces that are not necessarily flat, though they still resemble Euclidean space at each point infinitesimally, i.e. in the first order of approximation. Various concepts based on length, such as the arc length of curves, area of plane regions, and volume of solids all possess natural analogues in Riemannian geometry. The notion of a directional derivative of a function from multivariable calculus is extended to the notion of a covariant derivative of a tensor. Many concepts of analysis and differential equations have been generalized to the setting of Riemannian manifolds.
A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined "locally", i.e. for small neighborhoods of points. Any two regular curves are locally isometric. However, the Theorema Egregium of Carl Friedrich Gauss showed that for surfaces, the existence of a local isometry imposes that the Gaussian curvatures at the corresponding points must be the same. In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant. These are the closest analogues to the "ordinary" plane and space considered in Euclidean and non-Euclidean geometry.
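A standard worked example (not from the source) makes the obstruction concrete: the Gaussian curvatures of the plane and of a round sphere differ, so no map of the Earth can be a local isometry.

```latex
% Gaussian curvature of the round sphere of radius r: both principal
% curvatures equal 1/r, so
K_{\mathrm{sphere}} = \kappa_1 \kappa_2 = \frac{1}{r^2} > 0,
\qquad K_{\mathrm{plane}} = 0 .
% Since K is intrinsic by the Theorema Egregium, no neighborhood of the
% sphere is isometric to a piece of the flat plane.
```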
Pseudo-Riemannian geometry.
Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite.
A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity.
Finsler geometry.
Finsler geometry has "Finsler manifolds" as the main object of study. This is a differential manifold with a "Finsler metric", that is, a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold formula_6 is a function formula_7 such that:
Symplectic geometry.
Symplectic geometry is the study of symplectic manifolds. An almost symplectic manifold is a differentiable manifold equipped with a smoothly varying non-degenerate skew-symmetric bilinear form on each tangent space, i.e., a nondegenerate 2-form "ω", called the "symplectic form". A symplectic manifold is an almost symplectic manifold for which the symplectic form "ω" is closed: d"ω" = 0.
A diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces, so symplectic manifolds necessarily have even dimension. In dimension 2, a symplectic manifold is just a surface endowed with an area form and a symplectomorphism is an area-preserving diffeomorphism. The phase space of a mechanical system is a symplectic manifold and they made an implicit appearance already in the work of Joseph Louis Lagrange on analytical mechanics and later in Carl Gustav Jacobi's and William Rowan Hamilton's formulations of classical mechanics.
By contrast with Riemannian geometry, where the curvature provides a local invariant of Riemannian manifolds, Darboux's theorem states that all symplectic manifolds are locally isomorphic. The only invariants of a symplectic manifold are global in nature and topological aspects play a prominent role in symplectic geometry. The first result in symplectic topology is probably the Poincaré–Birkhoff theorem, conjectured by Henri Poincaré and then proved by G.D. Birkhoff in 1912. It claims that if an area preserving map of an annulus twists each boundary component in opposite directions, then the map has at least two fixed points.
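The standard local model behind all of this (a textbook example, not taken from the source) lives on even-dimensional Euclidean space, with the coordinates playing the role of positions and momenta of a mechanical system:

```latex
% The standard symplectic form on R^{2n} with coordinates (q_1,...,q_n,p_1,...,p_n):
\omega_0 = \sum_{i=1}^{n} dq_i \wedge dp_i .
% Its coefficients are constant, so d\omega_0 = 0, and it is non-degenerate;
% by Darboux's theorem every symplectic manifold looks like (R^{2n}, \omega_0)
% in suitable local coordinates.
```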
Contact geometry.
Contact geometry deals with certain manifolds of odd dimension. It is close to symplectic geometry and like the latter, it originated in questions of classical mechanics. A "contact structure" on a (2"n" + 1)-dimensional manifold "M" is given by a smooth hyperplane field "H" in the tangent bundle that is as far as possible from being associated with the level sets of a differentiable function on "M" (the technical term is "completely nonintegrable tangent hyperplane distribution"). Near each point "p", a hyperplane distribution is determined by a nowhere vanishing 1-form formula_15, which is unique up to multiplication by a nowhere vanishing function:
formula_16
A local 1-form on "M" is a "contact form" if the restriction of its exterior derivative to "H" is a non-degenerate two-form and thus induces a symplectic structure on "H""p" at each point. If the distribution "H" can be defined by a global one-form formula_15 then this form is contact if and only if the top-dimensional form
formula_17
is a volume form on "M", i.e. does not vanish anywhere. A contact analogue of the Darboux theorem holds: all contact structures on an odd-dimensional manifold are locally isomorphic and can be brought to a certain local normal form by a suitable choice of the coordinate system.
Complex and Kähler geometry.
"Complex differential geometry" is the study of complex manifolds.
An almost complex manifold is a "real" manifold formula_6, endowed with a tensor of type (1, 1), i.e. a vector bundle endomorphism (called an "almost complex structure")
formula_18, such that formula_19
It follows from this definition that an almost complex manifold is even-dimensional.
An almost complex manifold is called "complex" if formula_20, where formula_21 is a tensor of type (2, 1) related to formula_22, called the Nijenhuis tensor (or sometimes the "torsion").
An almost complex manifold is complex if and only if it admits a holomorphic coordinate atlas.
An "almost Hermitian structure" is given by an almost complex structure "J", along with a Riemannian metric "g", satisfying the compatibility condition
formula_23
An almost Hermitian structure defines naturally a differential two-form
formula_24
The following two conditions are equivalent: that the Nijenhuis tensor formula_21 vanishes and the two-form formula_24 is closed, and that the almost complex structure formula_22 is parallel with respect to formula_27,
where formula_27 is the Levi-Civita connection of formula_4. In this case, formula_28 is called a "Kähler structure", and a "Kähler manifold" is a manifold endowed with a Kähler structure. In particular, a Kähler manifold is both a complex and a symplectic manifold. A large class of Kähler manifolds (the class of Hodge manifolds) is given by all the smooth complex projective varieties.
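The flat model (a standard textbook example, not taken from the source) shows how the three structures interlock on complex Euclidean space:

```latex
% C^n with coordinates z_k = x_k + i y_k carries the compatible triple
g = \sum_k \left( dx_k^2 + dy_k^2 \right), \qquad
\omega = \sum_k dx_k \wedge dy_k = \frac{i}{2} \sum_k dz_k \wedge d\bar z_k ,
% with J given by multiplication by i. Here d\omega = 0 and \nabla J = 0,
% so the flat metric on C^n is Kähler; smooth projective varieties inherit a
% Kähler structure from the Fubini--Study metric on CP^n.
```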
CR geometry.
CR geometry is the study of the intrinsic geometry of boundaries of domains in complex manifolds.
Conformal geometry.
Conformal geometry is the study of the set of angle-preserving (conformal) transformations on a space.
Differential topology.
Differential topology is the study of global geometric invariants without a metric or symplectic form.
Differential topology starts from the natural operations such as the Lie derivative of natural vector bundles and the de Rham differential of forms. Besides Lie algebroids, Courant algebroids also start playing a more important role.
Lie groups.
A Lie group is a group in the category of smooth manifolds. Besides its algebraic properties, it also enjoys differential geometric properties. The most obvious construction is that of a Lie algebra, which is the tangent space at the unit endowed with the Lie bracket between left-invariant vector fields. Besides the structure theory there is also the wide field of representation theory.
Geometric analysis.
Geometric analysis is a mathematical discipline where tools from differential equations, especially elliptic partial differential equations are used to establish new results in differential geometry and differential topology.
Gauge theory.
Gauge theory is the study of connections on vector bundles and principal bundles, and arises out of problems in mathematical physics and physical gauge theories which underpin the standard model of particle physics. Gauge theory is concerned with the study of differential equations for connections on bundles, and the resulting geometric moduli spaces of solutions to these equations as well as the invariants that may be derived from them. These equations often arise as the Euler–Lagrange equations describing the equations of motion of certain physical systems in quantum field theory, and so their study is of considerable interest in physics.
Bundles and connections.
The apparatus of vector bundles, principal bundles, and connections on bundles plays an extraordinarily important role in modern differential geometry. A smooth manifold always carries a natural vector bundle, the tangent bundle. Loosely speaking, this structure by itself is sufficient only for developing analysis on the manifold, while doing geometry requires, in addition, some way to relate the tangent spaces at different points, i.e. a notion of parallel transport. An important example is provided by affine connections. For a surface in R3, tangent planes at different points can be identified using a natural path-wise parallelism induced by the ambient Euclidean space, which has a well-known standard definition of metric and parallelism. In Riemannian geometry, the Levi-Civita connection serves a similar purpose. More generally, differential geometers consider spaces with a vector bundle and an arbitrary affine connection which is not defined in terms of a metric. In physics, the manifold may be spacetime and the bundles and connections are related to various physical fields.
Intrinsic versus extrinsic.
From the beginning and through the middle of the 19th century, differential geometry was studied from the "extrinsic" point of view: curves and surfaces were considered as lying in a Euclidean space of higher dimension (for example a surface in an ambient space of three dimensions). The simplest results are those in the differential geometry of curves and differential geometry of surfaces. Starting with the work of Riemann, the "intrinsic" point of view was developed, in which one cannot speak of moving "outside" the geometric object because it is considered to be given in a free-standing way. The fundamental result here is Gauss's theorema egregium, to the effect that Gaussian curvature is an intrinsic invariant.
The intrinsic point of view is more flexible. For example, it is useful in relativity where space-time cannot naturally be taken as extrinsic. However, there is a price to pay in technical complexity: the intrinsic definitions of curvature and connections become much less visually intuitive.
These two points of view can be reconciled, i.e. the extrinsic geometry can be considered as a structure additional to the intrinsic one. (See the Nash embedding theorem.) In the formalism of geometric calculus both extrinsic and intrinsic geometry of a manifold can be characterized by a single bivector-valued one-form called the shape operator.
Applications.
Below are some examples of how differential geometry is applied to other fields of science and mathematics.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d^2 y = 0"
},
{
"math_id": 1,
"text": "dy=0"
},
{
"math_id": 2,
"text": "ds^2"
},
{
"math_id": 3,
"text": "ds"
},
{
"math_id": 4,
"text": "g"
},
{
"math_id": 5,
"text": "\\Gamma"
},
{
"math_id": 6,
"text": "M"
},
{
"math_id": 7,
"text": "F:\\mathrm{T}M\\to[0,\\infty)"
},
{
"math_id": 8,
"text": "F(x,my)=mF(x,y)"
},
{
"math_id": 9,
"text": "x,y"
},
{
"math_id": 10,
"text": "\\mathrm{T}M"
},
{
"math_id": 11,
"text": "m\\ge 0"
},
{
"math_id": 12,
"text": "F"
},
{
"math_id": 13,
"text": "\\mathrm{T}M\\setminus\\{0\\}"
},
{
"math_id": 14,
"text": "F^2"
},
{
"math_id": 15,
"text": "\\alpha"
},
{
"math_id": 16,
"text": " H_p = \\ker\\alpha_p\\subset T_{p}M."
},
{
"math_id": 17,
"text": "\\alpha\\wedge (d\\alpha)^n"
},
{
"math_id": 18,
"text": " J:TM\\rightarrow TM "
},
{
"math_id": 19,
"text": "J^2=-1. \\,"
},
{
"math_id": 20,
"text": "N_J=0"
},
{
"math_id": 21,
"text": "N_J"
},
{
"math_id": 22,
"text": "J"
},
{
"math_id": 23,
"text": "g(JX,JY)=g(X,Y). \\,"
},
{
"math_id": 24,
"text": "\\omega_{J,g}(X,Y):=g(JX,Y). \\,"
},
{
"math_id": 25,
"text": " N_J=0\\mbox{ and }d\\omega=0 \\,"
},
{
"math_id": 26,
"text": "\\nabla J=0 \\,"
},
{
"math_id": 27,
"text": "\\nabla"
},
{
"math_id": 28,
"text": "(J, g)"
}
] | https://en.wikipedia.org/wiki?curid=8625 |
862717 | Projectile motion | Motion of launched objects due to gravity
Projectile motion is a form of motion experienced by an object or particle (a projectile) that is projected in a gravitational field, such as from Earth's surface, and moves along a curved path (a trajectory) under the action of gravity only. In the particular case of projectile motion on Earth, most calculations assume the effects of air resistance are passive and negligible.
Galileo Galilei showed that the trajectory of a given projectile is parabolic, but the path may also be straight in the special case when the object is thrown directly upward or downward. The study of such motions is called ballistics, and such a trajectory is described as ballistic. The only force of mathematical significance that is actively exerted on the object is gravity, which acts downward, thus imparting to the object a downward acceleration towards Earth's center of mass. Due to the object's inertia, no external force is needed to maintain the horizontal velocity component of the object's motion.
Taking other forces into account, such as aerodynamic drag or internal propulsion (such as in a rocket), requires additional analysis. A ballistic missile is a missile only guided during the relatively brief initial powered phase of flight, and whose remaining course is governed by the laws of classical mechanics.
Ballistics (from Ancient Greek βάλλειν "bállein" 'to throw') is the science of dynamics that deals with the flight, behavior and effects of projectiles, especially bullets, unguided bombs, rockets, or the like; the science or art of designing and accelerating projectiles so as to achieve a desired performance.
The elementary equations of ballistics neglect nearly every factor except for initial velocity and an assumed constant gravitational acceleration. Practical solutions of a ballistics problem often require considerations of air resistance, cross winds, target motion, varying acceleration due to gravity, and in such problems as launching a rocket from one point on the Earth to another, the rotation of the Earth. Detailed mathematical solutions of practical problems typically do not have closed-form solutions, and therefore require numerical methods to address.
Kinematic quantities.
In projectile motion, the horizontal motion and the vertical motion are independent of each other; that is, neither motion affects the other. This is the principle of "compound motion" established by Galileo in 1638, and used by him to prove the parabolic form of projectile motion.
A ballistic trajectory is a parabola with homogeneous acceleration, such as in a space ship with constant acceleration in absence of other forces. On Earth the acceleration changes magnitude with altitude and direction with latitude/longitude. This causes an elliptic trajectory, which is very close to a parabola on a small scale. However, if an object were thrown and the Earth were suddenly replaced with a black hole of equal mass, it would become obvious that the ballistic trajectory is part of an elliptic orbit around that black hole, and not a parabola that extends to infinity. At higher speeds the trajectory can also be circular, parabolic or hyperbolic (unless distorted by other objects like the Moon or the Sun). In this article a homogeneous acceleration is assumed.
Acceleration.
Since there is acceleration only in the vertical direction, the velocity in the horizontal direction is constant, being equal to formula_0. The vertical motion of the projectile is the motion of a particle during its free fall. Here the acceleration is constant, being equal to g. The components of the acceleration are:
formula_1,
formula_2.
The "y" acceleration is due to the gravitational force of the Earth acting on the object.
Velocity.
Let the projectile be launched with an initial velocity formula_3, which can be expressed as the sum of horizontal and vertical components as follows:
formula_4.
The components formula_5 and formula_6 can be found if the initial launch angle, formula_7, is known:
formula_8,
formula_9
The horizontal component of the velocity of the object remains unchanged throughout the motion. The vertical component of the velocity changes linearly, because the acceleration due to gravity is constant. The accelerations in the x and y directions can be integrated to solve for the components of velocity at any time t, as follows:
formula_10,
formula_11.
The magnitude of the velocity follows from the Pythagorean theorem:
formula_12.
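As a concrete illustration, the following sketch evaluates these velocity relations numerically (Python; the launch speed and angle below are arbitrary illustrative values):

```python
import math

g = 9.81                   # gravitational acceleration, m/s^2
v0 = 30.0                  # assumed launch speed, m/s
theta = math.radians(60)   # assumed launch angle

def velocity(t):
    """Velocity components and speed at time t (drag-free motion)."""
    vx = v0 * math.cos(theta)           # horizontal component, constant
    vy = v0 * math.sin(theta) - g * t   # vertical component, linear in t
    return vx, vy, math.hypot(vx, vy)   # Pythagorean magnitude

print(velocity(0.0))   # at launch, the speed equals v0
print(velocity(2.0))   # two seconds into the flight
```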
Displacement.
At any time formula_13, the projectile's horizontal and vertical displacement are:
formula_14,
formula_15.
The magnitude of the displacement is:
formula_16.
Consider the equations,
formula_17.
If t is eliminated between these two equations the following equation is obtained:
formula_18
Here "R" is the horizontal range of the projectile.
Since g, θ, and v0 are constants, the above equation is of the form
formula_19,
in which a and b are constants. This is the equation of a parabola, so the path is parabolic. The axis of the parabola is vertical.
If the projectile's position (x,y) and launch angle (θ or α) are known, the initial velocity can be found solving for v0 in the aforementioned parabolic equation:
formula_20.
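A small sketch of this inversion, assuming illustrative values for the known point and launch angle, and checking the result against the trajectory equation:

```python
import math

g = 9.81

def launch_speed(x, y, theta):
    """Initial speed whose trajectory passes through (x, y) at launch
    angle theta, from the parabolic trajectory equation."""
    denom = x * math.sin(2 * theta) - 2 * y * math.cos(theta) ** 2
    return math.sqrt(x * x * g / denom)

theta = math.radians(45)
v0 = launch_speed(10.0, 5.0, theta)

# Substitute back into y = x tan(theta) - g x^2 / (2 v0^2 cos^2 theta)
y = 10.0 * math.tan(theta) - g * 10.0 ** 2 / (2 * v0 ** 2 * math.cos(theta) ** 2)
print(v0, y)   # y is recovered as 5.0
```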
Displacement in polar coordinates.
The parabolic trajectory of a projectile can also be expressed in polar coordinates instead of Cartesian coordinates. In this case, the position has the general formula
formula_21.
In this equation, the origin is the midpoint of the horizontal range of the projectile, and if the ground is flat, the parabolic arc is plotted in the range formula_22. This expression can be obtained by transforming the Cartesian equation as stated above by formula_23 and formula_24.
Properties of the trajectory.
Time of flight or total time of the whole journey.
The total time t for which the projectile remains in the air is called the time of flight.
formula_15
After the flight, the projectile returns to the horizontal axis (x-axis), so formula_25.
formula_26
formula_27
formula_28
formula_29
Note that we have neglected air resistance on the projectile.
If the starting point is at height y0 with respect to the point of impact, the time of flight is:
formula_30
As above, this expression can be reduced to
formula_31
if θ is 45° and y0 is 0.
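The general expression, with an optional launch height "y"0, is straightforward to evaluate; a minimal sketch (parameter values are illustrative):

```python
import math

g = 9.81

def time_of_flight(v, theta, y0=0.0):
    """Time until the projectile reaches the impact level, launched from
    height y0 above it (drag-free); the positive root of the quadratic."""
    vy = v * math.sin(theta)
    return (vy + math.sqrt(vy * vy + 2 * g * y0)) / g

print(time_of_flight(20.0, math.radians(45)))        # flat ground
print(time_of_flight(20.0, math.radians(45), 10.0))  # launched 10 m higher
```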
Time of flight to the target's position.
As shown above in the Displacement section, the horizontal and vertical velocity of a projectile are independent of each other.
Because of this, we can find the time to reach a target using the displacement formula for the horizontal velocity:
formula_32
formula_33
formula_34
This equation gives the total time "t" the projectile must travel to reach the target's horizontal displacement, neglecting air resistance.
Maximum height of projectile.
The greatest height that the object will reach is known as the peak of the object's motion.
The increase in height will last until formula_35, that is,
formula_36.
Time to reach the maximum height ("h"):
formula_37.
For the vertical displacement of the maximum height of the projectile:
formula_38
formula_39
The maximum reachable height is obtained for "θ"=90°:
formula_40
If the projectile's position (x,y) and launch angle (θ) are known, the maximum height can be found by solving for h in the following equation:
formula_41
Angle of elevation (φ) at the maximum height is given by:
formula_42
Relation between horizontal range and maximum height.
The relation between the range d on the horizontal plane and the maximum height h reached at formula_43 is:
formula_44
Maximum distance of projectile.
The range and the maximum height of the projectile do not depend upon its mass. Hence range and maximum height are equal for all bodies that are thrown with the same velocity and direction.
The horizontal range d of the projectile is the horizontal distance it has traveled when it returns to its initial height (formula_45).
formula_46.
Time to reach ground:
formula_47.
From the horizontal displacement the maximum distance of projectile:
formula_48,
so
formula_49.
Note that d has its maximum value when
formula_50,
which necessarily corresponds to
formula_51,
or
formula_52.
The total horizontal distance (d) traveled.
formula_53
When the surface is flat (initial height of the object is zero), the distance traveled is:
formula_54
Thus the maximum distance is obtained if θ is 45 degrees. This distance is:
formula_55
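A short numerical check of this result (Python; the launch speed is an arbitrary illustrative value) shows the range is symmetric about 45° and maximal there:

```python
import math

g = 9.81
v = 25.0   # assumed launch speed, m/s

def horizontal_range(theta):
    """Flat-ground, drag-free range."""
    return v * v * math.sin(2 * theta) / g

for deg in (15, 30, 45, 60, 75):
    print(deg, round(horizontal_range(math.radians(deg)), 2))

print(v * v / g)   # d_max, attained at 45 degrees
```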
Application of the work energy theorem.
According to the work-energy theorem the vertical component of velocity is:
formula_56.
These formulae ignore aerodynamic drag and also assume that the landing area is at uniform height 0.
Angle of reach.
The "angle of reach" is the angle (θ) at which a projectile must be launched in order to go a distance d, given the initial velocity v.
formula_57
There are two solutions:
formula_58 (shallow trajectory)
and because formula_59,
formula_60 (steep trajectory)
Angle θ required to hit coordinate (x, y).
To hit a target at range x and altitude y when fired from (0,0) and with initial speed v the required angle(s) of launch θ are:
formula_61
The two roots of the equation correspond to the two possible launch angles, so long as they aren't imaginary, in which case the initial speed is not great enough to reach the point (x,y) selected. This formula allows one to find the angle of launch needed without the restriction of formula_25.
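A sketch of this two-root formula, returning both launch angles or reporting an unreachable target (parameter values are illustrative assumptions):

```python
import math

g = 9.81

def launch_angles(v, x, y):
    """Both launch angles hitting (x, y) from the origin at speed v,
    or None when the discriminant is negative (target out of reach)."""
    disc = v ** 4 - g * (g * x * x + 2 * y * v * v)
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return (math.atan((v * v - root) / (g * x)),   # shallow trajectory
            math.atan((v * v + root) / (g * x)))   # steep trajectory

low, high = launch_angles(30.0, 50.0, 10.0)
print(math.degrees(low), math.degrees(high))
```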
One can also ask what launch angle allows the lowest possible launch velocity. This occurs when the two solutions above are equal, implying that the quantity under the square root sign is zero. This requires solving a quadratic equation for formula_62, and we find
formula_63
This gives
formula_64
If we denote the angle whose tangent is y/x by α, then
formula_65
formula_66
formula_67
formula_68
This implies
formula_69
In other words, the launch should be at the angle halfway between the target and the zenith (the direction opposite to gravity).
Total path length of the trajectory.
The length of the parabolic arc traced by a projectile L, given that the height of launch and landing is the same and that there is no air resistance, is given by the formula:
formula_70
where formula_71 is the initial velocity, formula_72 is the launch angle and formula_73 is the acceleration due to gravity as a positive value. The expression can be obtained by evaluating the arc length integral for the height-distance parabola between the bounds "initial" and "final" displacements (i.e. between 0 and the horizontal range of the projectile) such that:
formula_74
If the time of flight is "t",
formula_75
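The closed form can be checked against a direct numerical evaluation of the arc-length integral; a sketch using SciPy quadrature (illustrative launch parameters):

```python
import math
from scipy.integrate import quad

g = 9.81
v0 = 20.0
theta = math.radians(50)

# Closed-form arc length of the drag-free trajectory
s = math.sin(theta)
L_closed = v0 ** 2 / (2 * g) * (2 * s + math.cos(theta) ** 2
                                * math.log((1 + s) / (1 - s)))

# The same length from the arc-length integral over the horizontal range
def integrand(x):
    dydx = math.tan(theta) - g * x / (v0 ** 2 * math.cos(theta) ** 2)
    return math.sqrt(1 + dydx ** 2)

L_numeric, _ = quad(integrand, 0, v0 ** 2 * math.sin(2 * theta) / g)
print(L_closed, L_numeric)   # the two values agree
```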
Trajectory of a projectile with air resistance.
Air resistance creates a force that (for symmetric projectiles) is always directed against the direction of motion in the surrounding medium and has a magnitude that depends on the absolute speed: formula_76. The speed-dependence of the friction force is linear (formula_77) at very low speeds (Stokes drag) and quadratic (formula_78) at larger speeds (Newton drag). The transition between these behaviours is determined by the Reynolds number, which depends on speed, object size and kinematic viscosity of the medium. For Reynolds numbers below about 1000, the dependence is linear, above it becomes quadratic. In air, which has a kinematic viscosity around formula_79, this means that the drag force becomes quadratic in "v" when the product of speed and diameter is more than about formula_80, which is typically the case for projectiles.
The free body diagram on the right is for a projectile that experiences air resistance and the effects of gravity. Here, air resistance is assumed to be in the direction opposite of the projectile's velocity: formula_85
Trajectory of a projectile with Stokes drag.
Stokes drag, where formula_86, only applies at very low speed in air, and is thus not the typical case for projectiles. However, the linear dependence of formula_87 on formula_88 causes a very simple differential equation of motion
formula_89
in which the two Cartesian components become completely independent and thus easier to solve.
Here, formula_71, formula_90 and formula_91 will be used to denote the initial velocity, the velocity along the direction of x and the velocity along the direction of y, respectively. The mass of the projectile will be denoted by m, and formula_92. For the derivation, only the case where formula_93 is considered. Again, the projectile is fired from the origin (0,0).
formula_94 (1b)
formula_95 (3b)
formula_96.
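The "W" in the flight-time expression is the Lambert W function; on its principal branch it picks out the non-trivial root of "y"("t") = 0. A sketch evaluating these closed forms (Python with SciPy; the drag parameter and launch values are illustrative assumptions):

```python
import math
from scipy.special import lambertw

g = 9.81
mu = 0.5   # assumed drag parameter k/m, 1/s
v0, theta = 20.0, math.radians(60)
vx0, vy0 = v0 * math.cos(theta), v0 * math.sin(theta)

def x(t):
    """Horizontal position, equation (1b)."""
    return vx0 / mu * (1 - math.exp(-mu * t))

def y(t):
    """Vertical position, equation (3b)."""
    return -g / mu * t + (vy0 + g / mu) / mu * (1 - math.exp(-mu * t))

# Time of flight from the Lambert-W expression; lambertw returns a
# complex value, so its (purely) real part is taken.
a = 1 + mu * vy0 / g
t_flight = (a + lambertw(-a * math.exp(-a)).real) / mu
print(t_flight, y(t_flight))   # y(t_flight) comes out as ~0
```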
Trajectory of a projectile with Newton drag.
The most typical case of air resistance, for the case of Reynolds numbers above about 1000 is Newton drag with a drag force proportional to the speed squared, formula_97. In air, which has a kinematic viscosity around formula_79, this means that the product of speed and diameter must be more than about formula_80.
Unfortunately, the equations of motion cannot be easily solved analytically for this case. Therefore, a numerical solution will be examined.
The drag force is assumed to take the quadratic form:
formula_98
Where:
*"FD" is the drag force
*"c" is the drag coefficient
*"ρ" is the air density
*"A" is the cross sectional area of the projectile
*"μ" = "k"/"m" = "cρA"/(2"m")
Special cases.
Even though the general case of a projectile with Newton drag cannot be solved analytically, some special cases can. Here we denote the terminal velocity in free fall as formula_99 and the characteristic settling time constant as formula_100.
For nearly horizontal motion, formula_101, the vertical component of the velocity may be neglected and the horizontal equation of motion is:
formula_102
formula_103
formula_104
The same pattern applies for motion with friction along a line in any direction, when gravity is negligible. It also applies when vertical motion is prevented, such as for a moving car with its engine off.
For vertical motion upward, with drag and gravity both opposing the climb:
formula_105
formula_106
formula_107
Here
formula_108
formula_109
formula_110
and
formula_111
where formula_112 is the initial upward velocity at formula_113 and the initial position is formula_114.
A projectile cannot rise for longer than formula_115 before it reaches the peak of its vertical motion.
For vertical motion downward, after the peak, with drag now opposing the fall:
formula_116
formula_117
formula_118
After a time formula_119, the projectile has almost reached its terminal velocity formula_120.
Numerical solution.
A projectile motion with drag can be computed generically by numerical integration of the ordinary differential equation, for instance by applying a reduction to a first-order system. The equation to be solved is
formula_121.
This approach also makes it possible to add the effects of a speed-dependent drag coefficient, altitude-dependent air density and a position-dependent gravity field.
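A minimal sketch of such a first-order reduction, using SciPy's initial-value solver with a terminal event at ground impact (the drag parameter and launch values are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

g, mu = 9.81, 0.01   # mu = k/m for Newton drag, units 1/m (assumed value)
v0, theta = 50.0, np.radians(45)

def rhs(t, s):
    """Right-hand side of the first-order system (x, y, vx, vy)."""
    x, y, vx, vy = s
    speed = np.hypot(vx, vy)
    return [vx, vy, -mu * vx * speed, -g - mu * vy * speed]

def hit_ground(t, s):
    return s[1]              # y = 0 marks impact
hit_ground.terminal = True   # stop the integration there
hit_ground.direction = -1    # trigger only on the way down

sol = solve_ivp(rhs, (0, 60), [0, 0, v0 * np.cos(theta), v0 * np.sin(theta)],
                events=hit_ground, max_step=0.01)
print("range:", sol.y[0, -1], " flight time:", sol.t[-1])
```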
Lofted trajectory.
A special case of a ballistic trajectory for a rocket is a lofted trajectory, a trajectory with an apogee greater than the minimum-energy trajectory to the same range. In other words, the rocket travels higher and by doing so it uses more energy to get to the same landing point. This may be done for various reasons such as increasing distance to the horizon to give greater viewing/communication range or for changing the angle with which a missile will impact on landing. Lofted trajectories are sometimes used in both missile rocketry and in spaceflight.
Projectile motion on a planetary scale.
When a projectile without air resistance travels a range that is significant compared to the Earth's radius (above ≈100 km), the curvature of the Earth and the non-uniform Earth's gravity have to be considered. This is, for example, the case with spacecraft or intercontinental projectiles. The trajectory then generalizes from a parabola to a Kepler-ellipse with one focus at the center of the Earth. The projectile motion then follows Kepler's laws of planetary motion.
The trajectories' parameters have to be adapted from the values of a uniform gravity field stated above. The Earth radius is taken as R, and g as the standard surface gravity. Let formula_122 be the launch velocity relative to the first cosmic velocity.
Total range d between launch and impact:
formula_123
Maximum range of a projectile for optimum launch angle (formula_124):
formula_125 with formula_126, the first cosmic velocity
Maximum height of a projectile above the planetary surface:
formula_127
Maximum height of a projectile for vertical launch (formula_128):
formula_129 with formula_130, the second cosmic velocity
Time of flight:
formula_131
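A short sketch comparing the spherical-Earth range with the flat-Earth formula (Python; the launch values are illustrative, and R and g are the standard Earth values):

```python
import math

g, R = 9.81, 6.371e6   # standard surface gravity and mean Earth radius

def range_spherical(v, theta):
    """Total range between launch and impact on a spherical planet."""
    vt2 = v * v / (R * g)   # squared velocity ratio relative to sqrt(Rg)
    flat = v * v * math.sin(2 * theta) / g
    return flat / math.sqrt(1 - (2 - vt2) * vt2 * math.cos(theta) ** 2)

v, theta = 2000.0, math.radians(45)
print(range_spherical(v, theta))        # spherical-Earth range
print(v * v * math.sin(2 * theta) / g)  # flat-Earth range, for comparison
```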
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbf{v}_0 \\cos\\theta "
},
{
"math_id": 1,
"text": " a_x = 0 "
},
{
"math_id": 2,
"text": " a_y = -g "
},
{
"math_id": 3,
"text": "\\mathbf{v}(0) \\equiv \\mathbf{v}_0 "
},
{
"math_id": 4,
"text": " \\mathbf{v}_0 = v_{0x}\\mathbf{\\hat x} + v_{0y}\\mathbf{\\hat y} "
},
{
"math_id": 5,
"text": " v_{0x} "
},
{
"math_id": 6,
"text": " v_{0y} "
},
{
"math_id": 7,
"text": " \\theta "
},
{
"math_id": 8,
"text": " v_{0x} = v_0\\cos(\\theta)"
},
{
"math_id": 9,
"text": " v_{0y} = v_0\\sin(\\theta)"
},
{
"math_id": 10,
"text": " v_x = v_0 \\cos(\\theta) "
},
{
"math_id": 11,
"text": " v_y = v_0 \\sin(\\theta) - gt "
},
{
"math_id": 12,
"text": " v = \\sqrt{v_x^2 + v_y^2 } "
},
{
"math_id": 13,
"text": " t "
},
{
"math_id": 14,
"text": " x = v_0 t \\cos(\\theta) "
},
{
"math_id": 15,
"text": " y = v_0 t \\sin(\\theta) - \\frac{1}{2}gt^2 "
},
{
"math_id": 16,
"text": " \\Delta r=\\sqrt{x^2 + y^2 } "
},
{
"math_id": 17,
"text": " x = v_0 t \\cos(\\theta) , y = v_0 t\\sin(\\theta) - \\frac{1}{2}gt^2 "
},
{
"math_id": 18,
"text": " y = \\tan(\\theta) \\cdot x-\\frac{g}{2v^2_{0}\\cos^2 \\theta} \\cdot x^2=\\tan\\theta \\cdot x \\left(1-\\frac{x}{R}\\right). "
},
{
"math_id": 19,
"text": " y=ax+bx^2 "
},
{
"math_id": 20,
"text": " v_0 = \\sqrt{{x^2 g} \\over {x \\sin 2\\theta - 2y \\cos^2\\theta}} "
},
{
"math_id": 21,
"text": " r( \\phi ) = \\frac{2v_0^2 \\cos^2\\theta}{|g|} \\left(\\tan\\theta\\sec\\phi -\\tan\\phi\\sec\\phi \\right) "
},
{
"math_id": 22,
"text": " 0 \\leq \\phi \\leq \\pi "
},
{
"math_id": 23,
"text": " y = r \\sin\\phi "
},
{
"math_id": 24,
"text": " x = r \\cos\\phi "
},
{
"math_id": 25,
"text": " y=0 "
},
{
"math_id": 26,
"text": " 0 = v_0 t \\sin(\\theta) - \\frac{1}{2}gt^2 "
},
{
"math_id": 27,
"text": " v_0 t \\sin(\\theta) = \\frac{1}{2}gt^2 "
},
{
"math_id": 28,
"text": " v_0 \\sin(\\theta) = \\frac{1}{2}gt "
},
{
"math_id": 29,
"text": " t = \\frac{2 v_0 \\sin(\\theta)}{|g|} "
},
{
"math_id": 30,
"text": " t = \\frac{d}{v \\cos\\theta} = \\frac{v \\sin \\theta + \\sqrt{(v \\sin \\theta)^2 + 2gy_0}}{g} "
},
{
"math_id": 31,
"text": " t = \\frac{v\\sin{\\theta} + \\sqrt{(v\\sin{\\theta})^{2}}}{|g|} = \\frac{v\\sin{\\theta} + v\\sin{\\theta}}{|g|} = \\frac{2v\\sin{\\theta}}{|g|} = \\frac{2v\\sin{(45)}}{|g|} = \\frac{2v\\frac{\\sqrt{2}}{2}}{|g|} = \\frac{\\sqrt{2}v}{|g|}"
},
{
"math_id": 32,
"text": "x = v_0 t \\cos(\\theta)"
},
{
"math_id": 33,
"text": "\\frac{x}{t}=v_0\\cos(\\theta)"
},
{
"math_id": 34,
"text": "t=\\frac{x}{v_0\\cos(\\theta)}"
},
{
"math_id": 35,
"text": " v_y=0 "
},
{
"math_id": 36,
"text": " 0=v_0 \\sin(\\theta) - gt_h "
},
{
"math_id": 37,
"text": " t_h = \\frac{v_0 \\sin(\\theta)}{|g|} "
},
{
"math_id": 38,
"text": " h = v_0 t_h \\sin(\\theta) - \\frac{1}{2} gt^2_h "
},
{
"math_id": 39,
"text": " h = \\frac{v_0^2 \\sin^2(\\theta)}{2|g|} "
},
{
"math_id": 40,
"text": " h_{\\mathrm{max}} = \\frac{v_0^2}{2|g|} "
},
{
"math_id": 41,
"text": "h=\\frac{(x\\tan\\theta)^2}{4(x\\tan\\theta-y)}. "
},
{
"math_id": 42,
"text": "\\phi = \\arctan{{\\tan\\theta\\over 2}}"
},
{
"math_id": 43,
"text": " \\frac{t_d}{2} "
},
{
"math_id": 44,
"text": " h = \\frac{d\\tan\\theta}{4} "
},
{
"math_id": 45,
"text": "y=0"
},
{
"math_id": 46,
"text": " 0 = v_0 t_d \\sin(\\theta) - \\frac{1}{2}gt_d^2 "
},
{
"math_id": 47,
"text": " t_d = \\frac{2v_0 \\sin(\\theta)}{|g|} "
},
{
"math_id": 48,
"text": " d = v_0 t_d \\cos(\\theta) "
},
{
"math_id": 49,
"text": " d = \\frac{v_0^2}{|g|}\\sin(2\\theta) "
},
{
"math_id": 50,
"text": " \\sin 2\\theta=1 "
},
{
"math_id": 51,
"text": " 2\\theta=90^\\circ "
},
{
"math_id": 52,
"text": " \\theta=45^\\circ "
},
{
"math_id": 53,
"text": " d = \\frac{v \\cos \\theta}{|g|} \\left( v \\sin \\theta + \\sqrt{(v \\sin \\theta)^2 + 2gy_0} \\right) "
},
{
"math_id": 54,
"text": " d = \\frac{v^2 \\sin(2 \\theta)}{|g|} "
},
{
"math_id": 55,
"text": " d_{\\mathrm{max}} = \\frac{v^2}{|g|} "
},
{
"math_id": 56,
"text": " v_y^2 = (v_0 \\sin \\theta)^2-2gy "
},
{
"math_id": 57,
"text": " \\sin(2\\theta) = \\frac{gd}{v^2} "
},
{
"math_id": 58,
"text": " \\theta = \\frac{1}{2} \\arcsin \\left( \\frac{gd}{v^2} \\right) "
},
{
"math_id": 59,
"text": " \\sin(2\\theta) = \\cos (2\\theta - 90^\\circ )"
},
{
"math_id": 60,
"text": " \\theta = 45^\\circ + \\frac{1}{2} \\arccos \\left( \\frac{gd}{v^2} \\right) "
},
{
"math_id": 61,
"text": " \\theta = \\arctan{\\left(\\frac{v^2\\pm\\sqrt{v^4-g(gx^2+2yv^2)}}{gx}\\right)} "
},
{
"math_id": 62,
"text": " v^2 "
},
{
"math_id": 63,
"text": " v^2/g=y+\\sqrt{y^2+x^2}. "
},
{
"math_id": 64,
"text": " \\theta=\\arctan\\left(y/x+\\sqrt{y^2/x^2+1}\\right). "
},
{
"math_id": 65,
"text": " \\tan\\theta=\\frac{\\sin\\alpha+1}{\\cos\\alpha} "
},
{
"math_id": 66,
"text": " \\tan(\\pi/2-\\theta)=\\frac{\\cos\\alpha}{\\sin\\alpha+1} "
},
{
"math_id": 67,
"text": " \\cos^2(\\pi/2-\\theta)=\\frac 12(\\sin\\alpha+1) "
},
{
"math_id": 68,
"text": " 2\\cos^2(\\pi/2-\\theta)-1=\\cos(\\pi/2-\\alpha) "
},
{
"math_id": 69,
"text": " \\theta = \\pi/2 - \\frac 12(\\pi/2-\\alpha). "
},
{
"math_id": 70,
"text": "L = \\frac{v_0^2}{2g} \\left( 2\\sin\\theta + \\cos^2\\theta\\cdot\\ln \\frac{1 + \\sin\\theta}{1 - \\sin\\theta} \\right) = \\frac{v_0^2}{g} \\left( \\sin\\theta + \\cos^2\\theta\\cdot\\tanh^{-1}(\\sin\\theta) \\right)"
},
{
"math_id": 71,
"text": "v_0"
},
{
"math_id": 72,
"text": "\\theta"
},
{
"math_id": 73,
"text": "g"
},
{
"math_id": 74,
"text": "L = \\int_{0}^{\\mathrm{range}} \\sqrt{1 + \\left ( \\frac{\\mathrm{d}y}{\\mathrm{d}x} \\right )^2}\\,\\mathrm{d}x = \\int_{0}^{v_0^2 \\sin(2\\theta)/g} \\sqrt{1+\\left (\\tan\\theta -{g\\over {v_0^2 \\cos^2\\theta}}x\\right)^2}\\,\\mathrm{d}x ."
},
{
"math_id": 75,
"text": "L = \\int_{0}^{t} \\sqrt{v_x^2 + v_y^2}\\,\\mathrm{d}t = \\int_{0}^{2v_0\\sin\\theta/g} \\sqrt{(gt)^2-2gv_0\\sin\\theta t+v_0^2}\\,\\mathrm{d}t."
},
{
"math_id": 76,
"text": "\\mathbf{F_{air}} = -f(v)\\cdot\\mathbf{\\hat v}"
},
{
"math_id": 77,
"text": "f(v) \\propto v"
},
{
"math_id": 78,
"text": "f(v) \\propto v^2"
},
{
"math_id": 79,
"text": "0.15\\,\\mathrm{cm^2/s}"
},
{
"math_id": 80,
"text": "0.015\\,\\mathrm{m^2/s}"
},
{
"math_id": 81,
"text": "\\mathbf{F_{air}} = -k_{\\mathrm{Stokes}}\\cdot\\mathbf{v}\\qquad"
},
{
"math_id": 82,
"text": "Re \\lesssim 1000"
},
{
"math_id": 83,
"text": "\\mathbf{F_{air}} = -k\\,|\\mathbf{v}|\\cdot\\mathbf{v}\\qquad"
},
{
"math_id": 84,
"text": "Re \\gtrsim 1000"
},
{
"math_id": 85,
"text": "\\mathbf{F_{\\mathrm{air}}} = -f(v)\\cdot\\mathbf{\\hat v}"
},
{
"math_id": 86,
"text": "\\mathbf{F_{air}} \\propto \\mathbf{v}"
},
{
"math_id": 87,
"text": "F_\\mathrm{air}"
},
{
"math_id": 88,
"text": "v"
},
{
"math_id": 89,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\begin{pmatrix}v_x \\\\ v_y\\end{pmatrix} = \\begin{pmatrix}-\\mu\\,v_x \\\\ -g-\\mu\\,v_y\\end{pmatrix}"
},
{
"math_id": 90,
"text": "v_x"
},
{
"math_id": 91,
"text": "v_y"
},
{
"math_id": 92,
"text": "\\mu:=k/m"
},
{
"math_id": 93,
"text": "0^o \\le \\theta \\le 180^o "
},
{
"math_id": 94,
"text": " x(t) = \\frac{v_{x0}}{\\mu}\\left(1-e^{-\\mu t}\\right) "
},
{
"math_id": 95,
"text": " y(t) = -\\frac{g}{\\mu}t + \\frac{1}{\\mu}\\left(v_{y0} + \\frac{g}{\\mu}\\right)\\left(1 - e^{-\\mu t}\\right) "
},
{
"math_id": 96,
"text": "t=\\frac{1}{\\mu}\\left(1+\\frac{\\mu}{g}v_{y0}+W\\left(-\\left(1+\\frac{\\mu}{g} v_{y0}\\right)e^{-\\left(1+\\frac{\\mu}{g} v_{y0}\\right)}\\right)\\right)"
},
{
"math_id": 97,
"text": "F_{\\mathrm{air}} = -k v^2"
},
{
"math_id": 98,
"text": "\\mathbf{F_D} = -\\tfrac{1}{2} c \\rho A\\, v\\,\\mathbf{v}"
},
{
"math_id": 99,
"text": "v_\\infty=\\sqrt{g/\\mu}"
},
{
"math_id": 100,
"text": "t_f=1/\\sqrt{g\\mu}"
},
{
"math_id": 101,
"text": "|v_x|\\gg|v_y|"
},
{
"math_id": 102,
"text": "\\dot{v}_x(t) = -\\mu\\,v_x^2(t)"
},
{
"math_id": 103,
"text": "v_x(t) = \\frac{1}{1/v_{x,0}+\\mu\\,t}"
},
{
"math_id": 104,
"text": "x(t) = \\frac{1}{\\mu}\\ln(1+\\mu\\,v_{x,0}\\cdot t)"
},
{
"math_id": 105,
"text": "\\dot{v}_y(t) = -g-\\mu\\,v_y^2(t)"
},
{
"math_id": 106,
"text": "v_y(t) = v_\\infty \\tan\\frac{t_{\\mathrm{peak}}-t}{t_f}"
},
{
"math_id": 107,
"text": "y(t) = y_{\\mathrm{peak}} + \\frac{1}{\\mu}\\ln\\left(\\cos\\frac{t_{\\mathrm{peak}}-t}{t_f}\\right)"
},
{
"math_id": 108,
"text": "v_\\infty \\equiv \\sqrt{\\frac{g}{\\mu}},"
},
{
"math_id": 109,
"text": "t_f \\equiv \\frac{1}{\\sqrt{\\mu g}},"
},
{
"math_id": 110,
"text": "t_{\\mathrm{peak}} \\equiv t_f \\arctan{\\frac{v_{y,0}}{v_\\infty}} = \\frac{1}{\\sqrt{\\mu g}} \\arctan{\\left(\\sqrt\\frac{\\mu}{g}v_{y,0}\\right)},"
},
{
"math_id": 111,
"text": "y_{\\mathrm{peak}} \\equiv -\\frac{1}{\\mu}\\ln{\\cos{\\frac{t_\\mathrm{peak}}{t_f}}} = \\frac{1}{2\\mu}\\ln{\\left(1+\\frac{\\mu}{g} v_{y,0}^2\\right)}"
},
{
"math_id": 112,
"text": "v_{y,0}"
},
{
"math_id": 113,
"text": "t = 0"
},
{
"math_id": 114,
"text": "y(0) = 0"
},
{
"math_id": 115,
"text": "t_\\mathrm{rise}=\\frac{\\pi}{2}t_f"
},
{
"math_id": 116,
"text": "\\dot{v}_y(t) = -g+\\mu\\,v_y^2(t)"
},
{
"math_id": 117,
"text": "v_y(t) = -v_\\infty \\tanh\\frac{t-t_{\\mathrm{peak}}}{t_f}"
},
{
"math_id": 118,
"text": "y(t) = y_{\\mathrm{peak}} - \\frac{1}{\\mu}\\ln\\left(\\cosh\\frac{t-t_{\\mathrm{peak}}}{t_f}\\right)"
},
{
"math_id": 119,
"text": "t_f"
},
{
"math_id": 120,
"text": "-v_\\infty"
},
{
"math_id": 121,
"text": "\\frac{\\mathrm{d}}{\\mathrm{d}t}\\begin{pmatrix}x \\\\ y \\\\ v_x \\\\ v_y\\end{pmatrix} = \\begin{pmatrix}v_x \\\\ v_y \\\\ -\\mu\\,v_x\\sqrt{v_x^2+v_y^2} \\\\ -g-\\mu\\,v_y\\sqrt{v_x^2+v_y^2}\\end{pmatrix}"
},
{
"math_id": 122,
"text": "\\tilde v:=v/\\sqrt{Rg}"
},
{
"math_id": 123,
"text": " d = \\frac{v^2 \\sin(2 \\theta)}{g} \\Big/ \\sqrt{1-\\left(2-\\tilde v^2\\right)\\tilde v^2\\cos^2\\theta}"
},
{
"math_id": 124,
"text": "\\theta=\\tfrac12\\arccos\\left(\\tilde v^2/(2-\\tilde v^2)\\right)"
},
{
"math_id": 125,
"text": " d_{\\mathrm{max}} = \\frac{v^2}{g} \\big/ \\left(1-\\tfrac12\\tilde v^2\\right)"
},
{
"math_id": 126,
"text": "v<\\sqrt{Rg}"
},
{
"math_id": 127,
"text": " h = \\frac{v^2 \\sin^2\\theta}{g} \\Big/ \\left(1-\\tilde v^2+\\sqrt{1-\\left(2-\\tilde v^2\\right)\\tilde v^2\\cos^2\\theta}\\right) "
},
{
"math_id": 128,
"text": "\\theta=90^\\circ"
},
{
"math_id": 129,
"text": " h_{\\mathrm{max}} = \\frac{v^2}{2g} \\big/ \\left(1-\\tfrac12\\tilde v^2\\right) "
},
{
"math_id": 130,
"text": "v<\\sqrt{2Rg}"
},
{
"math_id": 131,
"text": " t = \\frac{2v\\sin\\theta}{g} \\cdot \\frac{1}{2-\\tilde v^2} \\left(1 + \\frac{1}{\\sqrt{2-\\tilde v^2}\\,\\tilde v\\sin\\theta}\\arcsin\\frac{\\sqrt{2-\\tilde v^2}\\,\\tilde v\\sin\\theta}{\\sqrt{1-\\left(2-\\tilde v^2\\right)\\tilde v^2\\cos^2\\theta}}\\right) "
}
] | https://en.wikipedia.org/wiki?curid=862717 |
862898 | Absolute space and time | Theoretical foundation of Newtonian mechanics
Absolute space and time is a concept in physics and philosophy about the properties of the universe. In physics, absolute space and time may be a preferred frame.
Early concept.
A version of the concept of absolute space (in the sense of a preferred frame) can be seen in Aristotelian physics. Robert S. Westman writes that a "whiff" of absolute space can be observed in Copernicus's "De revolutionibus orbium coelestium", where Copernicus uses the concept of an immobile sphere of stars.
Newton.
Originally introduced by Sir Isaac Newton in "Philosophiæ Naturalis Principia Mathematica", the concepts of absolute time and space provided a theoretical foundation that facilitated Newtonian mechanics. According to Newton, absolute time and space respectively are independent aspects of objective reality:
Absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration: relative, apparent and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time ...
According to Newton, absolute time exists independently of any perceiver and progresses at a consistent pace throughout the universe. Unlike relative time, Newton believed absolute time was imperceptible and could only be understood mathematically. According to Newton, humans are only capable of perceiving relative time, which is a measurement of perceivable objects in motion (like the Moon or Sun). From these movements, we infer the passage of time.
<templatestyles src="Template:Blockquote/styles.css" />Absolute space, in its own nature, without regard to anything external, remains always similar and immovable. Relative space is some movable dimension or measure of the absolute spaces; which our senses determine by its position to bodies: and which is vulgarly taken for immovable space ...
Absolute motion is the translation of a body from one absolute place into another: and relative motion, the translation from one relative place into another ...
These notions imply that absolute space and time do not depend upon physical events, but are a backdrop or stage setting within which physical phenomena occur. Thus, every object has an absolute state of motion relative to absolute space, so that an object must be either in a state of absolute rest, or moving at some absolute speed. To support his views, Newton provided some empirical examples: according to Newton, a solitary rotating sphere can be inferred to rotate about its axis relative to absolute space by observing the bulging of its equator, and a solitary pair of spheres tied by a rope can be inferred to be in absolute rotation about their center of gravity (barycenter) by observing the tension in the rope.
Differing views.
Historically, there have been differing views on the concept of absolute space and time. Gottfried Leibniz was of the opinion that space made no sense except as the relative location of bodies, and time made no sense except as the relative movement of bodies. George Berkeley suggested that, lacking any point of reference, a sphere in an otherwise empty universe could not be conceived to rotate, and a pair of spheres could be conceived to rotate relative to one another, but not to rotate about their center of gravity, an example later raised by Albert Einstein in his development of general relativity.
A more recent form of these objections was made by Ernst Mach. Mach's principle proposes that mechanics is entirely about relative motion of bodies and, in particular, mass is an expression of such relative motion. So, for example, a single particle in a universe with no other bodies would have zero mass. According to Mach, Newton's examples simply illustrate relative rotation of spheres and the bulk of the universe.
When, accordingly, we say that a body preserves unchanged its direction and velocity "in space", our assertion is nothing more or less than an abbreviated reference to "the entire universe".—Ernst Mach
These views opposing absolute space and time may be seen from a modern stance as an attempt to introduce operational definitions for space and time, a perspective made explicit in the special theory of relativity.
Even within the context of Newtonian mechanics, the modern view is that absolute space is unnecessary. Instead, the notion of inertial frame of reference has taken precedence, that is, a preferred "set" of frames of reference that move uniformly with respect to one another. The laws of physics transform from one inertial frame to another according to Galilean relativity, leading to the following objections to absolute space, as outlined by Milutin Blagojević:
*The existence of absolute space contradicts the internal logic of classical mechanics since, according to the Galilean principle of relativity, none of the inertial frames can be singled out.
*Absolute space does not explain inertial forces since they are related to acceleration with respect to any one of the inertial frames.
*Absolute space acts on physical objects by inducing their resistance to acceleration but it cannot be acted upon.
Newton himself recognized the role of inertial frames.
The motions of bodies included in a given space are the same among themselves, whether that space is at rest or moves uniformly forward in a straight line.
As a practical matter, inertial frames often are taken as frames moving uniformly with respect to the fixed stars. See Inertial frame of reference for more discussion on this.
Mathematical definitions.
"Space", as understood in Newtonian mechanics, is three-dimensional and Euclidean, with a fixed orientation. It is denoted "E"3. If some point "O" in "E"3 is fixed and defined as an origin, the "position" of any point "P" in "E"3 is uniquely determined by its radius vector formula_0 (the origin of this vector coincides with the point "O" and its end coincides with the point "P"). The three-dimensional linear vector space "R"3 is a set of all radius vectors. The space "R"3 is endowed with a scalar product ⟨ , ⟩.
"Time" is a scalar which is the same in all space "E"3 and is denoted as "t". The ordered set { "t" } is called a time axis.
"Motion" (also "path" or "trajectory") is a function "r" : Δ → "R"3 that maps a point in the interval Δ from the time axis to a position (radius vector) in "R"3.
The above four concepts are the "well-known" objects mentioned by Isaac Newton in his Principia:
"I do not define time, space, place and motion, as being well known to all."
Special relativity.
The concepts of space and time were separate in physical theory prior to the advent of special relativity theory, which connected the two and showed both to be dependent upon the reference frame's motion. In Einstein's theories, the ideas of absolute time and space were superseded by the notion of spacetime in special relativity, and curved spacetime in general relativity.
Absolute simultaneity refers to the concurrence of events in time at different locations in space in a manner agreed upon in all frames of reference. The theory of relativity does not have a concept of absolute time because there is a relativity of simultaneity. An event that is simultaneous with another event in one frame of reference may be in the past or future of that event in a different frame of reference, which negates absolute simultaneity.
Einstein.
Quoted below from his later papers, Einstein identified the term aether with "properties of space", a terminology that is not widely used. Einstein stated that in general relativity the "aether" is not absolute anymore, as the geodesics and therefore the structure of spacetime depend on the presence of matter.
<templatestyles src="Template:Blockquote/styles.css" />To deny the ether is ultimately to assume that empty space has no physical qualities whatever. The fundamental facts of mechanics do not harmonize with this view. For the mechanical behaviour of a corporeal system hovering freely in empty space depends not only on relative positions (distances) and relative velocities, but also on its state of rotation, which physically may be taken as a characteristic not appertaining to the system in itself. In order to be able to look upon the rotation of the system, at least formally, as something real, Newton objectivises space. Since he classes his absolute space together with real things, for him rotation relative to an absolute space is also something real. Newton might no less well have called his absolute space “Ether”; what is essential is merely that besides observable objects, another thing, which is not perceptible, must be looked upon as real, to enable acceleration or rotation to be looked upon as something real.
<templatestyles src="Template:Blockquote/styles.css" />Because it was no longer possible to speak, in any absolute sense, of simultaneous states at different locations in the aether, the aether became, as it were, four-dimensional, since there was no objective way of ordering its states by time alone. According to special relativity too, the aether was absolute, since its influence on inertia and the propagation of light was thought of as being itself independent of physical influence...The theory of relativity resolved this problem by establishing the behaviour of the electrically neutral point-mass by the law of the geodetic line, according to which inertial and gravitational effects are no longer considered as separate. In doing so, it attached characteristics to the aether which vary from point to point, determining the metric and the dynamic behaviour of material points, and determined, in their turn, by physical factors, namely the distribution of mass/energy. Thus the aether of general relativity differs from those of classical mechanics and special relativity in that it is not ‘absolute’ but determined, in its locally variable characteristics, by ponderable matter.
General relativity.
Special relativity eliminates absolute time (although Gödel and others suspect absolute time may be valid for some forms of general relativity) and general relativity further reduces the physical scope of absolute space and time through the concept of geodesics. There appears to be absolute space in relation to the distant stars because the local geodesics eventually channel information from these stars, but it is not necessary to invoke absolute space with respect to any system's physics, as its local geodesics are sufficient to describe its spacetime.
References and notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{r} = \\vec{OP}"
}
] | https://en.wikipedia.org/wiki?curid=862898 |
8631522 | Shallow water equations | Set of partial differential equations that describe the flow below a pressure surface in a fluid
The shallow-water equations (SWE) are a set of hyperbolic partial differential equations (or parabolic if viscous shear is considered) that describe the flow below a pressure surface in a fluid (sometimes, but not necessarily, a free surface). The shallow-water equations in unidirectional form are also called (de) Saint-Venant equations, after Adhémar Jean Claude Barré de Saint-Venant (see the related section below).
The equations are derived from depth-integrating the Navier–Stokes equations, in the case where the horizontal length scale is much greater than the vertical length scale. Under this condition, conservation of mass implies that the vertical velocity scale of the fluid is small compared to the horizontal velocity scale. It can be shown from the momentum equation that vertical pressure gradients are nearly hydrostatic, and that horizontal pressure gradients are due to the displacement of the pressure surface, implying that the horizontal velocity field is constant throughout the depth of the fluid. Vertically integrating allows the vertical velocity to be removed from the equations. The shallow-water equations are thus derived.
While a vertical velocity term is not present in the shallow-water equations, note that this velocity is not necessarily zero. This is an important distinction because, for example, the vertical velocity cannot be zero when the floor changes depth, and thus if it were zero only flat floors would be usable with the shallow-water equations. Once a solution (i.e. the horizontal velocities and free surface displacement) has been found, the vertical velocity can be recovered via the continuity equation.
Situations in fluid dynamics where the horizontal length scale is much greater than the vertical length scale are common, so the shallow-water equations are widely applicable. They are used with Coriolis forces in atmospheric and oceanic modeling, as a simplification of the primitive equations of atmospheric flow.
Shallow-water equation models have only one vertical level, so they cannot directly encompass any factor that varies with height. However, in cases where the mean state is sufficiently simple, the vertical variations can be separated from the horizontal and several sets of shallow-water equations can describe the state.
Equations.
Conservative form.
The shallow-water equations are derived from equations of conservation of mass and conservation of linear momentum (the Navier–Stokes equations), which hold even when the assumptions of shallow-water break down, such as across a hydraulic jump. In the case of a horizontal bed, with negligible Coriolis forces, frictional and viscous forces, the shallow-water equations are:
formula_0
Here "η" is the total fluid column height (instantaneous fluid depth as a function of "x", "y" and "t"), and the 2D vector ("u","v") is the fluid's horizontal flow velocity, averaged across the vertical column. Further "g" is acceleration due to gravity and ρ is the fluid density. The first equation is derived from mass conservation, the second two from momentum conservation.
Non-conservative form.
Expanding the derivatives in the above using the product rule, the non-conservative form of the shallow-water equations is obtained. Since velocities are not subject to a fundamental conservation equation, the non-conservative forms do not hold across a shock or hydraulic jump. Also included are the appropriate terms for Coriolis, frictional and viscous forces, to obtain (for constant fluid density):
formula_1
where
*"u" is the velocity in the "x" direction, and "v" is the velocity in the "y" direction,
*"h" is the height deviation of the horizontal pressure surface from its mean height "H",
*"g" is the acceleration due to gravity,
*"f" is the Coriolis coefficient associated with the Coriolis force,
*"k" is the viscous drag coefficient, and
*"ν" is the kinematic viscosity.
It is often the case that the terms quadratic in "u" and "v", which represent the effect of bulk advection, are small compared to the other terms. This is called geostrophic balance, and is equivalent to saying that the Rossby number is small. Assuming also that the wave height is very small compared to the mean height ("h" ≪ "H"), we have (without lateral viscous forces):
formula_2
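As a minimal sketch, the following steps these linearized equations forward in time in one space dimension, with no Coriolis force and periodic boundaries; grid sizes and parameter values are illustrative assumptions:

```python
import numpy as np

g, H, k = 9.81, 10.0, 0.0        # gravity, mean depth, drag coefficient
nx, dx = 200, 100.0
dt = 0.5 * dx / np.sqrt(g * H)   # time step limited by the wave speed

x = np.arange(nx) * dx
h = np.exp(-((x - nx * dx / 2) / (5 * dx)) ** 2)   # initial surface hump
u = np.zeros(nx)

for _ in range(100):
    # du/dt = -g dh/dx - k u  (centered differences, periodic domain)
    u -= dt * (g * (np.roll(h, -1) - np.roll(h, 1)) / (2 * dx) + k * u)
    # dh/dt = -H du/dx, using the updated u (forward-backward stepping)
    h -= dt * H * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

print(h.max())   # the hump splits into two gravity waves of half height
```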
One-dimensional Saint-Venant equations.
The one-dimensional (1-D) Saint-Venant equations were derived by Adhémar Jean Claude Barré de Saint-Venant, and are commonly used to model transient open-channel flow and surface runoff. They can be viewed as a contraction of the two-dimensional (2-D) shallow-water equations, which are also known as the two-dimensional Saint-Venant equations. The 1-D Saint-Venant equations contain to a certain extent the main characteristics of the channel cross-sectional shape.
The 1-D equations are used extensively in computer models such as TUFLOW, Mascaret (EDF), SIC (Irstea), HEC-RAS, SWMM5, ISIS, InfoWorks, Flood Modeller, SOBEK 1DFlow, MIKE 11, and MIKE SHE because they are significantly easier to solve than the full shallow-water equations. Common applications of the 1-D Saint-Venant equations include flood routing along rivers (including evaluation of measures to reduce the risks of flooding), dam break analysis, storm pulses in an open channel, as well as storm runoff in overland flow.
Equations.
The system of partial differential equations which describe the 1-D incompressible flow in an open channel of arbitrary cross section – as derived and posed by Saint-Venant in his 1871 paper (equations 19 & 20) – is:
∂A/∂t + ∂(Au)/∂x = 0     (1)
and
∂u/∂t + u ∂u/∂x + g ∂ζ/∂x + τP/(ρA) = 0,     (2)
where "x" is the space coordinate along the channel axis, "t" denotes time, "A"("x","t") is the cross-sectional area of the flow at location "x", "u"("x","t") is the flow velocity, "ζ"("x","t") is the free surface elevation and τ("x","t") is the wall shear stress along the wetted perimeter "P"("x","t") of the cross section at "x". Further ρ is the (constant) fluid density and "g" is the gravitational acceleration.
Closure of the hyperbolic system of equations (1)–(2) is obtained from the geometry of cross sections – by providing a functional relationship between the cross-sectional area "A" and the surface elevation ζ at each position "x". For example, for a rectangular cross section, with constant channel width "B" and channel bed elevation "z"b, the cross sectional area is: "A" = "B" (ζ − "z"b) = "B" "h". The instantaneous water depth is "h"("x","t") = ζ("x","t") − "z"b("x"), with "z"b("x") the bed level (i.e. elevation of the lowest point in the bed above datum, see the cross-section figure). For non-moving channel walls the cross-sectional area "A" in equation (1) can be written as:
formula_3
with "b"("x","h") the effective width of the channel cross section at location "x" when the fluid depth is "h" – so "b"("x", "h") = "B"("x") for rectangular channels.
The wall shear stress "τ" depends on the flow velocity "u"; the two can be related by using e.g. the Darcy–Weisbach equation, the Manning formula or the Chézy formula.
Further, equation (1) is the continuity equation, expressing conservation of water volume for this incompressible homogeneous fluid. Equation (2) is the momentum equation, giving the balance between forces and momentum change rates.
The bed slope "S"("x"), friction slope "S"f("x", "t") and hydraulic radius "R"("x", "t") are defined as:
formula_4
formula_5
and
formula_6
Consequently, the momentum equation (2) can be written as:
∂u/∂t + u ∂u/∂x + g ∂h/∂x = g (S − Sf).     (3)
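For example, using the Manning formula (in SI units, with an assumed roughness coefficient "n"), the friction slope and the corresponding wall shear stress can be evaluated as in the following sketch:

```python
def friction_slope_manning(u, R, n=0.03):
    """Friction slope S_f from Manning's formula in SI units; the
    roughness n = 0.03 is an assumed, typical natural-channel value."""
    return n ** 2 * u * abs(u) / R ** (4.0 / 3.0)

def wall_shear_stress(u, R, n=0.03, rho=1000.0, g=9.81):
    """Wall shear stress recovered from the definition S_f = tau/(rho g R)."""
    return rho * g * R * friction_slope_manning(u, R, n)

print(friction_slope_manning(1.5, 2.0))   # S_f for u = 1.5 m/s, R = 2 m
print(wall_shear_stress(1.5, 2.0))        # tau in Pa
```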
Conservation of momentum.
The momentum equation (3) can also be cast in the so-called conservation form, through some algebraic manipulations on the Saint-Venant equations, (1) and (3). In terms of the discharge "Q" = "Au":
∂Q/∂t + ∂/∂x ( Q²/A + g I1 ) = g I2 + g A (S − Sf),     (4)
where "A", "I"1 and "I"2 are functions of the channel geometry, described in the terms of the channel width "B"(σ,"x"). Here σ is the height above the lowest point in the cross section at location "x", see the cross-section figure. So σ is the height above the bed level "z"b("x") (of the lowest point in the cross section):
formula_7
Above – in the momentum equation (4) in conservation form – "A", "I"1 and "I"2 are evaluated at "σ" = "h"("x","t"). The term "g" "I"1 describes the hydrostatic force in a certain cross section. And, for a non-prismatic channel, "g" "I"2 gives the effects of geometry variations along the channel axis "x".
In applications, depending on the problem at hand, there often is a preference for using either the momentum equation in non-conservation form, (2) or (3), or the conservation form (4). For instance in case of the description of hydraulic jumps, the conservation form is preferred since the momentum flux is continuous across the jump.
Characteristics.
The Saint-Venant equations (1)–(2) can be analysed using the method of characteristics. The two celerities d"x"/d"t" on the characteristic curves are:
formula_8 with formula_9
The Froude number "Fr"
|"u"| / "c" determines whether the flow is subcritical ("Fr" < 1) or supercritical ("Fr" > 1).
For a rectangular and prismatic channel of constant width "B", i.e. with "A" = "B h" and "c" = √"gh", the Riemann invariants are:
formula_10 and formula_11
so the equations in characteristic form are:
formula_12
The Riemann invariants and method of characteristics for a prismatic channel of arbitrary cross-section are described by Didenkulova & Pelinovsky (2011).
The characteristics and Riemann invariants provide important information on the behavior of the flow, as well as that they may be used in the process of obtaining (analytical or numerical) solutions.
Hamiltonian structure for frictionless flow.
In case there is no friction and the channel has a rectangular prismatic cross section, the Saint-Venant equations have a Hamiltonian structure. The Hamiltonian "H" is equal to the energy of the free-surface flow:
formula_13
with constant "B" the channel width and "ρ" the constant fluid density. Hamilton's equations then are:
formula_14
since ∂"A"/∂"ζ" = "B").
Derived modelling.
Dynamic wave.
The dynamic wave is the full one-dimensional Saint-Venant equation. It is numerically challenging to solve, but is valid for all channel flow scenarios. The dynamic wave is used for modeling transient storms in modeling programs including Mascaret (EDF), SIC (Irstea), HEC-RAS, InfoWorks ICM, MIKE 11, Wash 123d and SWMM5.
In order of increasing simplification, by removing some terms of the full 1-D Saint-Venant equations (a.k.a. the dynamic wave equation), we obtain the classical diffusive wave equation and kinematic wave equation.
Diffusive wave.
For the diffusive wave it is assumed that the inertial terms are less than the gravity, friction, and pressure terms. The diffusive wave can therefore be more accurately described as a non-inertia wave, and is written as:
formula_15
The diffusive wave is valid when the inertial acceleration is much smaller than all other forms of acceleration, or in other words when there is primarily subcritical flow, with low Froude values. Models that use the diffusive wave assumption include MIKE SHE and LISFLOOD-FP. This option is also available in the SIC (Irstea) software, since either or both of the inertia terms can be removed via the interface.
Kinematic wave.
For the kinematic wave it is assumed that the flow is uniform, and that the friction slope is approximately equal to the slope of the channel. This simplifies the full Saint-Venant equation to the kinematic wave:
formula_16
The kinematic wave is valid when the change in wave height over distance and velocity over distance and time is negligible relative to the bed slope, e.g. for shallow flows over steep slopes. The kinematic wave is used in HEC-HMS.
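A minimal sketch of kinematic-wave routing for a wide rectangular channel, where Manning's formula with "S"f = "S" gives the discharge per unit width "q" = "h"5/3 √"S" / "n"; all parameter values below are illustrative assumptions:

```python
import numpy as np

n_man, S = 0.03, 0.005       # assumed Manning roughness and bed slope
nx, dx, dt = 200, 10.0, 1.0  # grid spacing (m) and time step (s)

h = np.full(nx, 0.1)         # base flow depth
h[:20] = 0.5                 # inflow pulse at the upstream end

for _ in range(300):
    q = h ** (5.0 / 3.0) * np.sqrt(S) / n_man   # kinematic rating curve
    h[1:] -= dt / dx * (q[1:] - q[:-1])         # upwind step (flow to +x)

print(h[50], h[150])   # behind and ahead of the advancing front
```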
Derivation from Navier–Stokes equations.
The 1-D Saint-Venant momentum equation can be derived from the Navier–Stokes equations that describe fluid motion. The "x"-component of the Navier–Stokes equations – when expressed in Cartesian coordinates in the "x"-direction – can be written as:
formula_17
where "u" is the velocity in the "x"-direction, "v" is the velocity in the "y"-direction, "w" is the velocity in the "z"-direction, "t" is time, "p" is the pressure, ρ is the density of water, ν is the kinematic viscosity, and "f"x is the body force in the "x"-direction.
The local acceleration (a) can also be thought of as the "unsteady term" as this describes some change in velocity over time. The convective acceleration (b) is an acceleration caused by some change in velocity over position, for example the speeding up or slowing down of a fluid entering a constriction or an opening, respectively. Both these terms make up the inertia terms of the 1-dimensional Saint-Venant equation.
The pressure gradient term (c) describes how pressure changes with position, and since the pressure is assumed hydrostatic, this is the change in head over position. The friction term (d) accounts for losses in energy due to friction, while the gravity term (e) is the acceleration due to bed slope.
Wave modelling by shallow-water equations.
Shallow-water equations can be used to model Rossby and Kelvin waves in the atmosphere, rivers, lakes and oceans as well as gravity waves in a smaller domain (e.g. surface waves in a bath). In order for shallow-water equations to be valid, the wavelength of the phenomenon they are supposed to model has to be much larger than the depth of the basin where the phenomenon takes place. Somewhat smaller wavelengths can be handled by extending the shallow-water equations using the Boussinesq approximation to incorporate dispersion effects. Shallow-water equations are especially suitable to model tides which have very large length scales (over hundred of kilometers). For tidal motion, even a very deep ocean may be considered as shallow as its depth will always be much smaller than the tidal wavelength.
Turbulence modelling using non-linear shallow-water equations.
Shallow-water equations, in their non-linear form, are an obvious candidate for modelling turbulence in the atmosphere and oceans, i.e. geophysical turbulence. An advantage of this, over quasi-geostrophic equations, is that it allows solutions like gravity waves, while also conserving energy and potential vorticity. However, there are also some disadvantages as far as geophysical applications are concerned: it has a non-quadratic expression for total energy and a tendency for waves to become shock waves. Some alternate models have been proposed which prevent shock formation. One alternative is to modify the "pressure term" in the momentum equation, but it results in a complicated expression for kinetic energy. Another option is to modify the non-linear terms in all equations, which gives a quadratic expression for kinetic energy, avoids shock formation, but conserves only linearized potential vorticity.
| [
{
"math_id": 0,
"text": "\\begin{align}\n\\frac{\\partial (\\rho \\eta) }{\\partial t} &+ \\frac{\\partial (\\rho \\eta u)}{\\partial x} + \\frac{\\partial (\\rho \\eta v)}{\\partial y} = 0,\\\\[3pt]\n\\frac{\\partial (\\rho \\eta u)}{\\partial t} &+ \\frac{\\partial}{\\partial x}\\left( \\rho \\eta u^2 + \\frac{1}{2}\\rho g \\eta^2 \\right) + \\frac{\\partial (\\rho \\eta u v)}{\\partial y} = 0,\\\\[3pt]\n\\frac{\\partial (\\rho \\eta v)}{\\partial t} &+ \\frac{\\partial}{\\partial y}\\left(\\rho \\eta v^2 + \\frac{1}{2}\\rho g \\eta ^2\\right) + \\frac{\\partial (\\rho \\eta uv)}{\\partial x} = 0.\n\\end{align}"
},
{
"math_id": 1,
"text": "\\begin{align}\n\\frac{\\partial h}{\\partial t} &+ \\frac{\\partial}{\\partial x} \\Bigl( (H+h) u \\Bigr) + \\frac{\\partial}{\\partial y} \\Bigl( (H+h) v \\Bigr) = 0,\\\\[3pt]\n\\frac{\\partial u}{\\partial t} &+ u\\frac{\\partial u}{\\partial x} + v\\frac{\\partial u}{\\partial y} - f v = -g \\frac{\\partial h}{\\partial x} - k u + \\nu \\left( \\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} \\right),\\\\[3pt]\n\\frac{\\partial v}{\\partial t} &+ u\\frac{\\partial v}{\\partial x} + v\\frac{\\partial v}{\\partial y} + f u = -g \\frac{\\partial h}{\\partial y} - k v + \\nu \\left( \\frac{\\partial^2 v}{\\partial x^2} + \\frac{\\partial^2 v}{\\partial y^2} \\right),\n\\end{align}"
},
{
"math_id": 2,
"text": "\\begin{align}\n\\frac{\\partial h}{\\partial t} &+ H \\left( \\frac{\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} \\right) = 0,\\\\[3pt]\n\\frac{\\partial u}{\\partial t} &- f v = -g \\frac{\\partial h}{\\partial x} - k u,\\\\[3pt]\n\\frac{\\partial v}{\\partial t} &+ f u = -g \\frac{\\partial h}{\\partial y} - k v.\n\\end{align}"
},
{
"math_id": 3,
"text": "A(x,t) = \\int_0^{h(x,t)} b(x,h')\\, dh', "
},
{
"math_id": 4,
"text": "S = - \\frac{\\mathrm{d} z_\\mathrm{b}}{\\mathrm{d} x},"
},
{
"math_id": 5,
"text": "S_\\mathrm{f} = \\frac{\\tau}{\\rho g R}"
},
{
"math_id": 6,
"text": "R = \\frac{A}{P}."
},
{
"math_id": 7,
"text": "\\begin{align}\n A(\\sigma,x) &= \\int_0^\\sigma B(\\sigma', x)\\; \\mathrm{d}\\sigma',\n \\\\\n I_1(\\sigma,x) &= \\int_0^\\sigma ( \\sigma - \\sigma' )\\, B(\\sigma^\\prime,x)\\; \\mathrm{d}\\sigma'\n \\qquad \\text{and}\n \\\\\n I_2(\\sigma,x) &= \\int_0^\\sigma ( \\sigma - \\sigma' )\\, \\frac{\\partial B(\\sigma', x)}{\\partial x}\\; \\mathrm{d}\\sigma'.\n\\end{align}"
},
{
"math_id": 8,
"text": "\\frac{\\mathrm{d}x}{\\mathrm{d}t} = u \\pm c,"
},
{
"math_id": 9,
"text": "c = \\sqrt{ \\frac{gA}{B} }."
},
{
"math_id": 10,
"text": "r_+ = u + 2\\sqrt{gh}"
},
{
"math_id": 11,
"text": "r_- = u - 2\\sqrt{gh},"
},
{
"math_id": 12,
"text": "\\begin{align}\n &\\frac{\\mathrm{d}}{\\mathrm{d}t} \\left( u + 2 \\sqrt{gh} \\right) = g \\left( S - S_f \\right)\n &&\\text{along} \\quad \\frac{\\mathrm{d}x}{\\mathrm{d}t} = u + \\sqrt{gh} \\quad \\text{and}\n \\\\\n &\\frac{\\mathrm{d}}{\\mathrm{d}t} \\left( u - 2 \\sqrt{gh} \\right) = g \\left( S - S_f \\right)\n &&\\text{along} \\quad \\frac{\\mathrm{d}x}{\\mathrm{d}t} = u - \\sqrt{gh}.\n\\end{align}"
},
{
"math_id": 13,
"text": " H = \\rho \\int \\left( \\frac12 A u^2 + \\frac12 g B \\zeta^2 \\right) \\mathrm{d}x, "
},
{
"math_id": 14,
"text": "\\begin{align}\n&\\rho B \\frac{\\partial \\zeta}{\\partial t} + \\frac{\\partial}{\\partial x} \\left( \\frac{\\partial H}{\\partial u} \\right) =\n\\rho \\left( B \\frac{\\partial\\zeta}{\\partial t} + \\frac{\\partial(Au)}{\\partial x} \\right) =\n\\rho \\left( \\frac{\\partial A}{\\partial t} + \\frac{\\partial (Au)}{\\partial x} \\right) = 0,\n\\\\\n&\\rho B \\frac{\\partial u}{\\partial t} + \\frac{\\partial}{\\partial x} \\left( \\frac{\\partial H}{\\partial \\zeta} \\right) =\n\\rho B \\left( \\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} + g \\frac{\\partial \\zeta}{\\partial x} \\right) = 0,\n\\end{align}"
},
{
"math_id": 15,
"text": "g \\frac{\\partial h}{\\partial x} + g (S_f - S) = 0. "
},
{
"math_id": 16,
"text": "S_f - S = 0. "
},
{
"math_id": 17,
"text": " \\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} + v \\frac{\\partial u}{\\partial y}+ w \\frac{\\partial u}{\\partial z}= -\\frac{\\partial p}{\\partial x} \\frac{1}{\\rho} + \\nu \\left(\\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} + \\frac{\\partial^2 u}{\\partial z^2}\\right)+ f_x,"
},
{
"math_id": 18,
"text": "\\nu"
},
{
"math_id": 19,
"text": " \\nu \\left(\\frac{\\partial^2 u}{\\partial x^2} + \\frac{\\partial^2 u}{\\partial y^2} + \\frac{\\partial^2 u}{\\partial z^2}\\right)= 0. "
},
{
"math_id": 20,
"text": "v\\frac{\\partial u}{\\partial y}+ w \\frac{\\partial u}{\\partial z} = 0"
},
{
"math_id": 21,
"text": " p = \\rho g h"
},
{
"math_id": 22,
"text": "\\partial p = \\rho g (\\partial h)."
},
{
"math_id": 23,
"text": " -\\frac{\\partial p}{\\partial x} \\frac{1}{\\rho} = -\\frac{1}{\\rho}\\frac{\\rho g \\left(\\partial h \\right)}{\\partial x} = -g \\frac{\\partial h}{\\partial x}."
},
{
"math_id": 24,
"text": " f_x =f_{x,g} + f_{x,f} "
},
{
"math_id": 25,
"text": "F_{g} = \\sin(\\theta) gM"
},
{
"math_id": 26,
"text": " \\sin\\theta = \\frac{\\text{opp}}{\\text{hyp}}. "
},
{
"math_id": 27,
"text": "\\sin\\theta = \\tan\\theta = \\frac{\\text{opp}}{\\text{adj}} = S"
},
{
"math_id": 28,
"text": "f_{x,g} = gS."
},
{
"math_id": 29,
"text": "f_{x,f} = S_f g."
},
{
"math_id": 30,
"text": "\\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} + g \\frac{\\partial h}{\\partial x} + g (S_f - S) = 0, "
},
{
"math_id": 31,
"text": "(a)\\quad \\ \\ (b)\\quad \\ \\ \\ (c) \\qquad \\ \\ \\ (d) \\quad (e)\\ "
}
] | https://en.wikipedia.org/wiki?curid=8631522 |
8632032 | Multiphoton intrapulse interference phase scan | Multiphoton intrapulse interference phase scan (MIIPS) is a method used in ultrashort laser technology that simultaneously measures (phase characterization) and compensates (phase correction) femtosecond laser pulses using an adaptive pulse shaper. When an ultrashort laser pulse reaches a duration of less than a few hundred femtoseconds, it becomes critical to characterize its duration, its temporal intensity curve, or its electric field as a function of time. Classical photodetectors measuring the intensity of light are still too slow to allow a direct measurement, even the fastest photodiodes or streak cameras.
Other means have been developed based on quasi-instantaneous nonlinear optical effects such as autocorrelation, FROG, SPIDER, etc. However, these methods can only measure the pulse characteristics; they cannot correct defects in order to make the pulse as short as possible. For instance, the pulse could be linearly chirped or exhibit higher-order group delay dispersion (GDD), so that its duration is longer than that of a bandwidth-limited pulse having the same intensity spectrum. It is therefore highly desirable to have a method which can not only characterize the pulse, but also correct it to specific shapes for applications in which repeatable pulse characteristics are required. MIIPS can not only measure the pulse but also correct the high-order dispersion, and is therefore preferred for applications where a repeatable electromagnetic field is important, such as generating ultrashort pulses which are transform limited or possess specific phase characteristics.
The MIIPS method is also based on second-harmonic generation (SHG) in a nonlinear crystal; however, instead of temporally scanning a replica of the pulse as in autocorrelation, a controllable and varying GDD is applied to the pulse through a pulse shaper. The SHG intensity is maximal when the outgoing pulse is unchirped, i.e., when the applied GDD exactly compensates the GDD of the incoming pulse. The pulse GDD is thus measured and compensated. By spectrally resolving the SHG signal, GDD can be measured as a function of frequency, so that the spectral phase can be measured and dispersion can be compensated to all orders.
Theory.
A MIIPS-based device consists of two basic components controlled by a computer: a pulse shaper (usually a liquid-crystal-based spatial light modulator, SLM) and a spectrometer. The pulse shaper allows manipulation of the spectral phase and/or amplitude of the ultrashort pulses. The spectrometer records the spectrum of a nonlinear optical process, such as second harmonic generation, produced by the laser pulse. The MIIPS process is analogous to the Wheatstone bridge in electronics: a well-known (calibrated) spectral phase function is used to measure the unknown spectral phase distortions of the ultrashort laser pulses. Typically, the known superimposed function is a periodic sinusoidal function that is scanned across the bandwidth of the pulse.
MIIPS is similar to FROG in that a frequency trace is collected for the characterization of the ultrashort pulse. In frequency-resolved optical gating, a FROG trace is collected by scanning the ultrashort pulse across the temporal axis and detecting the spectrum of the nonlinear process. It can be expressed as
formula_0
In MIIPS, instead of scanning in the temporal domain, a series of phase scans is applied to the spectral phase of the pulse. The trace of the MIIPS scan consists of the second-harmonic spectra of each phase scan. The MIIPS signal can be written as
formula_1
The phase scan in MIIPS is realized by using the pulse shaper to introduce a well-known reference function, formula_2, which locally cancels distortions by the unknown spectral phase, formula_3, of the pulse. The sum of the unknown phase and the reference phase is given by formula_4. Because the frequency-doubled spectrum of the pulse depends on formula_5, it is possible to accurately retrieve the unknown formula_3.
Since the phase modulation applied in the physical process is generally a continuous function, the SHG signal can be expanded in a Taylor series around formula_6:
formula_7
And
formula_8
According to this equation, the SHG signal reaches its maximum when the even-order derivative terms of formula_9 vanish; to lowest order this requires formula_10. Through scanning of formula_2, formula_3 can be determined.
The frequency-doubled spectrum recorded for each full scan of the reference phase (over formula_11) results in two replicas of the MIIPS trace (see Figure 1, four replicas shown). From this data, a 2D plot for SHG(formula_12) is constructed, where formula_13. The second-harmonic spectrum of the resulting pulse has a maximum amplitude at the frequency where the second derivative of the spectral phase has been compensated. The lines describing formula_14 are used to obtain the second derivative of the unknown phase analytically. After double integration the phase distortions are known. The system then introduces a correction phase to cancel the distortions and achieve shorter pulses. The absolute accuracy of MIIPS improves as the phase distortions diminish; therefore an iterative procedure of measurement and compensation is applied to reduce phase distortions below 0.1 radian for all frequencies within the bandwidth of the laser.
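The construction of such a trace can be sketched numerically. In the toy example below, a sinusoidal reference phase is scanned across a Gaussian pulse carrying an unknown quadratic spectral phase, and an SHG spectrum is computed for each scan position. All pulse and scan parameters are made-up illustrative values, not those of an actual instrument.

```python
import numpy as np

def miips_trace(n=256, a=1.5, gamma=20.0, phi2=200.0,
                deltas=np.linspace(0.0, 4.0 * np.pi, 128)):
    w = np.linspace(-1.0, 1.0, n)           # detuning from the carrier (arb. units)
    amp = np.exp(-w**2 / 0.1)               # Gaussian spectral amplitude
    unknown = 0.5 * phi2 * w**2             # unknown phase: pure linear chirp
    rows = []
    for d in deltas:
        phase = unknown + a * np.sin(gamma * w - d)   # reference phase f(w)
        Et = np.fft.ifft(amp * np.exp(1j * phase))    # time-domain field
        rows.append(np.abs(np.fft.fft(Et**2))**2)     # SHG spectrum ~ |FT(E^2)|^2
    return np.asarray(rows)  # maxima lie where f''(w) cancels the unknown phase
```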
When all phase distortions have been eliminated, the pulses have the highest possible peak power and are considered to be transform-limited (TL). The MIIPS trace corresponding to TL pulses shows straight parallel lines separated by formula_15. Once spectral phase distortions have been eliminated, the shaper can be used to introduce calibrated phases and amplitudes to control laser-induced processes.
MIIPS technology has been applied successfully to selective excitation in multiphoton imaging and to femtosecond light–matter interaction studies.
Experimental setup.
The expanded laser beam first reaches the diffraction grating (G); the first-order reflection is deflected to the mirror (M) and then to the curved mirror (CM). The curved mirror reflects the laser onto the spatial light modulator (SLM), which applies a phase to each frequency component. The laser is then retro-reflected. Using a nonlinear medium, the nonlinear (SHG, THG, etc.) spectra versus the phase scan can be recorded as a MIIPS trace for the characterization of the pulse. Once the pulse is characterized, a compensatory phase can be applied to the ultrashort pulse through the SLM.
Variants.
There is also an improved MIIPS algorithm that allows for efficient phase retrieval in a single iteration, provided that the laser spectrum at the reference sample is known. This technique is expected to be particularly beneficial for measuring photosensitive samples, and it is also helpful for samples which produce very weak second harmonic spectra. This method of analysis avoids a type of non-trivial ambiguity that arises for structured amplitude pulse profiles and can provide better feedback on the accuracy of the phase retrieval.
Gated-MIIPS (G-MIIPS) is an enhanced variant of MIIPS, designed to address the limitations posed by higher-order phase distortions in ultrashort laser pulse characterization. G-MIIPS employs an amplitude gate scanned across the spectrum, mitigating the influence of higher-order phase terms and enabling efficient compression of broadband laser pulses with a simple 4f pulse shaper setup. G-MIIPS is particularly effective for correcting substantial phase distortions caused by factors like high-NA microscope objectives.
| [
{
"math_id": 0,
"text": "\nI(\\omega,\\tau)=\\left|\\int{E(t)g(t-\\tau)e^{i\\omega t}\\mathrm{d}t}\\right|^2\n"
},
{
"math_id": 1,
"text": "\nI(2\\omega)=\\left|\\int{|E(\\omega)|^2e^{i\\phi}\\mathrm{d}\\phi}\\right|^2\n"
},
{
"math_id": 2,
"text": "f(\\omega)"
},
{
"math_id": 3,
"text": "\\Phi(\\omega)"
},
{
"math_id": 4,
"text": "\\phi(\\omega)=\\Phi(\\omega)+ f(\\omega)"
},
{
"math_id": 5,
"text": "\\phi(\\omega)"
},
{
"math_id": 6,
"text": "\\omega"
},
{
"math_id": 7,
"text": "\nI(\\omega)= \\left| \\int| E(\\omega+\\Omega)| |E(\\omega-\\Omega)| \\times \\text{exp} \\{i[\\phi(\\omega+\\Omega)+\\phi(\\omega -\\Omega)]\\} \\mathrm{d}\\Omega \\right|^2\n"
},
{
"math_id": 8,
"text": "\n\\phi(\\omega+\\Omega)+\\phi(\\omega-\\Omega)=2\\phi0+\\phi''(\\omega)\\Omega^2+...+\\frac{2}{(2n)!}\\phi^{2n'}(\\omega)\\Omega^{2n}\n"
},
{
"math_id": 9,
"text": "\\phi(\\omega+\\Omega)+\\phi(\\omega-\\Omega)"
},
{
"math_id": 10,
"text": "\\Phi''(\\omega)=-f''(\\omega)"
},
{
"math_id": 11,
"text": "4(\\pi)"
},
{
"math_id": 12,
"text": "\\omega,\\omega"
},
{
"math_id": 13,
"text": "\\omega=\\pi c/\\lambda_{SHG}"
},
{
"math_id": 14,
"text": "\\omega_{m}(\\omega)"
},
{
"math_id": 15,
"text": "\\pi"
}
] | https://en.wikipedia.org/wiki?curid=8632032 |
8632578 | Gravitation (book) | Textbook by Misner, Thorne, and Wheeler
Gravitation is a widely adopted textbook on Albert Einstein's general theory of relativity, written by Charles W. Misner, Kip S. Thorne, and John Archibald Wheeler. It was originally published by W. H. Freeman and Company in 1973 and reprinted by Princeton University Press in 2017. It is frequently abbreviated MTW (for its authors' last names). The cover illustration, drawn by Kenneth Gwin, is a line drawing of an apple with cuts in the skin to show the geodesics on its surface.
The book contains 10 parts and 44 chapters, each beginning with a quotation. The bibliography has a long list of original sources and other notable books in the field. Although its coverage may overwhelm a newcomer and parts of it are now out of date, as of 1998 it remained a highly valued reference for advanced graduate students and researchers.
Content.
Subject matter.
After a brief review of special relativity and flat spacetime, physics in curved spacetime is introduced and many aspects of general relativity are covered; particularly about the Einstein field equations and their implications, experimental confirmations, and alternatives to general relativity. Segments of history are included to summarize the ideas leading up to Einstein's theory. The book concludes by questioning the nature of spacetime and suggesting possible frontiers of research. Although the exposition on linearized gravity is detailed, one topic which is not covered is gravitoelectromagnetism. Some quantum mechanics is mentioned, but quantum field theory in curved spacetime and quantum gravity are not included.
The topics covered are broadly divided into two "tracks", the first contains the core topics while the second has more advanced content. The first track can be read independently of the second track. The main text is supplemented by boxes containing extra information, which can be omitted without loss of continuity. Margin notes are also inserted to annotate the main text.
The mathematics, primarily tensor calculus and differential forms in curved spacetime, is developed as required. An introductory chapter on spinors near the end is also given. There are numerous illustrations of advanced mathematical ideas such as alternating multilinear forms, parallel transport, and the orientation of the hypercube in spacetime. Mathematical exercises and physical problems are included for the reader to practice.
The prose in the book is conversational; the authors use plain language and analogies to everyday objects. For example, Lorentz transformed coordinates are described as a "squashed egg-crate" with an illustration. Tensors are described as "machines with slots" to insert vectors or one-forms, and containing "gears and wheels that guarantee the output" of other tensors.
Sign and unit conventions.
"MTW" uses the − + + + sign convention, and discourages the use of the + + + + metric with an imaginary time coordinate formula_0. In the front endpapers, the sign conventions for the Einstein field equations are established and the conventions used by many other authors are listed.
The book also uses geometrized units, in which the gravitational constant formula_1 and speed of light formula_2 are each set to 1. The back endpapers contain a table of unit conversions.
Editions and translations.
The book has been reprinted in English 24 times, in both hardback and softcover editions. It has also been translated into other languages, including Russian (in three volumes), Chinese, and Japanese. The 2017 Princeton University Press reprint includes a new foreword and preface.
Reviews.
The book is still considered influential in the physics community, with generally positive reviews, but with some criticism of the book's length and presentation style. To quote Ed Ehrlich:
<templatestyles src="Template:Blockquote/styles.css" />'Gravitation' is such a prominent book on relativity that the initials of its authors MTW can be used by other books on relativity without explanation.
James Hartle notes in his book:
<templatestyles src="Template:Blockquote/styles.css" />Over thirty years since its publication, "Gravitation" is still the most comprehensive treatise on general relativity. An authoritative and complete discussion of almost any topic in the subject can be found within its 1300 pages. It also contains an extensive bibliography with references to original sources. Written by three twentieth-century masters of the subject, it set the style for many later texts on the subject, including this one.
Sean M. Carroll states in his own introductory text:
<templatestyles src="Template:Blockquote/styles.css" />The book that educated at least two generations of researchers in gravitational physics. Comprehensive and encyclopedic, the book is written in an often-idiosyncratic way that you will either like or not.
Pankaj Sharan writes:
<templatestyles src="Template:Blockquote/styles.css" />This large sized (20cm × 25cm), 1272 page book begins at the very beginning and has everything on gravity (up to 1973). There are hundreds of diagrams and special boxes for additional explanations, exercises, historical and bibliographical asides and bibliographical details.
Ray D'Inverno suggests:
<templatestyles src="Template:Blockquote/styles.css" />I would also recommend looking at the relevant sections of the text of Misner, Thorne, and Wheeler, known for short as ‘MTW’. MTW is a rich resource and is certainly worth consulting for a whole string of topics. However, its style is not perhaps for everyone (I find it somewhat verbose in places and would not recommend it for a first course in general relativity). MTW has a very extensive bibliography.
Many texts on general relativity refer to it in their bibliographies or footnotes. In addition to the four given, other modern references include George Efstathiou et al., Bernard F. Schutz, James Foster et al., Robert Wald, and Stephen Hawking et al.
Other prominent physics books also cite it. For example, "Classical Mechanics" (second edition) by Herbert Goldstein, who comments:
<templatestyles src="Template:Blockquote/styles.css" />This massive treatise (1279 pages! (the pun is irresistible)) is to be praised for the great efforts made to help the reader through the maze. The pedagogic apparatus includes separately marked tracks, boxes of various kinds, marginal comments, and cleverly designed diagrams.
The third edition of Goldstein's text still lists "Gravitation" as an "excellent" resource on field theory in its selected bibliography.
A 2019 review of another work by Gerard F. Gilmore opened: "Every teacher of General Relativity depends heavily on two texts: one, the massive ‘Gravitation’ by Misner, Thorne and Wheeler, the second the diminutive ‘The Meaning of Relativity’ by Einstein."
| [
{
"math_id": 0,
"text": "ict"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "c"
}
] | https://en.wikipedia.org/wiki?curid=8632578 |
8633008 | Weakly contractible | Topological space whose homotopy groups are all trivial
In mathematics, a topological space is said to be weakly contractible if all of its homotopy groups are trivial.
Property.
It follows from Whitehead's theorem that if a CW-complex is weakly contractible then it is contractible.
Example.
Define formula_0 to be the inductive limit of the spheres formula_1. Then this space is weakly contractible. Since formula_0 is moreover a CW-complex, it is also contractible. See Contractibility of unit sphere in Hilbert space for more.
The long line is an example of a space which is weakly contractible, but not contractible. This does not contradict Whitehead's theorem, since the long line does not have the homotopy type of a CW-complex.
Another prominent example of this phenomenon is the Warsaw circle. | [
{
"math_id": 0,
"text": "S^\\infty"
},
{
"math_id": 1,
"text": "S^n, n\\ge 1"
}
] | https://en.wikipedia.org/wiki?curid=8633008 |
8634207 | TCP-Illinois | TCP-Illinois is a variant of the TCP congestion control protocol, developed at the University of Illinois at Urbana–Champaign. It is especially targeted at high-speed, long-distance networks. A sender-side modification to the standard TCP congestion control algorithm, it achieves a higher average throughput than the standard TCP, allocates network resources as fairly as the standard TCP, is compatible with the standard TCP, and provides incentives for TCP users to switch.
Principles of operation.
TCP-Illinois is a loss-delay based algorithm, which uses packet loss as the "primary" congestion signal to determine the "direction" of window size change, and uses queuing delay as the "secondary" congestion signal to adjust the "pace" of window size change. Similarly to the standard TCP, TCP-Illinois increases the window size W by formula_0 for each acknowledgment, and decreases formula_1 by formula_2 for each loss event. Unlike the standard TCP, formula_3 and formula_4 are not constants. Instead, they are functions of average queuing delay formula_5: formula_6, where formula_7 is decreasing and formula_8 is increasing.
There are numerous choices of formula_7 and formula_8. One such class is:
formula_9
formula_10
We let formula_7 and formula_8 be continuous functions, and thus formula_11, formula_12 and formula_13. Suppose formula_14 is the maximum average queuing delay, and denote formula_15; then we also have formula_16. From these conditions, we have
formula_17
This specific choice is demonstrated in Figure 1.
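This choice of curves translates directly into code. The sketch below builds formula_7 and formula_8 from the constants above; the default parameter values are illustrative assumptions, not standardized protocol defaults.

```python
def make_illinois_curves(alpha_min=0.1, alpha_max=10.0,
                         beta_min=0.125, beta_max=0.5,
                         d1=0.01, d2=0.01, d3=0.08, d_m=0.1):
    """Return (f1, f2): average queuing delay d_a -> (alpha, beta)."""
    # kappa constants chosen so f1 and f2 are continuous (see equations above)
    k1 = (d_m - d1) * alpha_min * alpha_max / (alpha_max - alpha_min)
    k2 = (d_m - d1) * alpha_min / (alpha_max - alpha_min) - d1
    k3 = (beta_min * d3 - beta_max * d2) / (d3 - d2)
    k4 = (beta_max - beta_min) / (d3 - d2)

    def f1(d_a):   # additive-increase term: large while congestion is far away
        return alpha_max if d_a <= d1 else k1 / (k2 + d_a)

    def f2(d_a):   # multiplicative-decrease factor: grows as delay builds up
        if d_a <= d2:
            return beta_min
        return k3 + k4 * d_a if d_a < d3 else beta_max

    return f1, f2
```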
Properties and Performance.
TCP-Illinois increases the throughput much more quickly than standard TCP when congestion is far away, and increases it very slowly when congestion is imminent. As a result, the window curve is concave and the average throughput achieved is much larger than that of the standard TCP; see Figure 2.
It also has many other desirable features, such as fairness, compatibility with the standard TCP, an incentive for TCP users to switch, and robustness against inaccurate delay measurement.
{
"math_id": 0,
"text": "\\alpha/W"
},
{
"math_id": 1,
"text": "W"
},
{
"math_id": 2,
"text": "\\beta W"
},
{
"math_id": 3,
"text": "\\alpha"
},
{
"math_id": 4,
"text": "\\beta"
},
{
"math_id": 5,
"text": "d_a"
},
{
"math_id": 6,
"text": "\\alpha=f_1(d_a), \\beta=f_2(d_a)"
},
{
"math_id": 7,
"text": "f_1(\\cdot)"
},
{
"math_id": 8,
"text": "f_2(\\cdot)"
},
{
"math_id": 9,
"text": "\n\\alpha=f_1(d_a)= \\left\\{ \\begin{array}{ll}\n\\alpha_{max} & \\mbox{if } d_a \\leq d_1 \\\\\n\\frac{\\kappa_1}{\\kappa_2+d_a} & \\mbox{otherwise.}\n\\end{array} \\right.\n"
},
{
"math_id": 10,
"text": "\n\\beta=f_2(d_a)= \\left\\{ \\begin{array}{ll}\n\\beta_{min} & \\mbox{if } d_a \\leq d_2 \\\\\n\\kappa_3+\\kappa_4 d_a & \\mbox{if } d_2 < d_a < d_3 \\\\\n\\beta_{max} & \\mbox{otherwise.}\n\\end{array}\\right.\n"
},
{
"math_id": 11,
"text": "\\frac{\\kappa_1}{\\kappa_2+d_1} = \\alpha_{max}"
},
{
"math_id": 12,
"text": "\\beta_{min}=\\kappa_3+\\kappa_4 d_2"
},
{
"math_id": 13,
"text": "\\beta_{max}=\\kappa_3+\\kappa_4 d_3"
},
{
"math_id": 14,
"text": "d_m"
},
{
"math_id": 15,
"text": "\\alpha_{min}=f_1(d_m)"
},
{
"math_id": 16,
"text": "\\frac{\\kappa_1}{\\kappa_2+d_m} = \\alpha_{min}"
},
{
"math_id": 17,
"text": "\n\\begin{array}{lcl}\n\\kappa_1 = \\frac{ (d_m-d_1) \\alpha_{min} \\alpha_{max} }{\\alpha_{max}-\\alpha_{min}} & \\mbox{and} &\n\\kappa_2 = \\frac{(d_m-d_1) \\alpha_{min} }{\\alpha_{max}-\\alpha_{min}} - d_1 \\,, \\\\\n\\kappa_3 = \\frac{ \\beta_{min} d_3- \\beta_{max} d_2}{d_3-d_2} & \\mbox{and} & \\kappa_4 = \\frac{\\beta_{max}-\\beta_{min}}{d_3-d_2} \\,. \\end{array}\n"
}
] | https://en.wikipedia.org/wiki?curid=8634207 |
8635114 | Pompeiu's theorem | On line segments from a point to the vertices of an equilateral triangle
Pompeiu's theorem is a result of plane geometry, discovered by the Romanian mathematician Dimitrie Pompeiu. The theorem is simple, but not classical. It states the following:
"Given an equilateral triangle ABC in the plane, and a point P in the plane of the triangle ABC, the lengths PA, PB, and PC form the sides of a (maybe, degenerate) triangle."
The proof is quick. Consider a rotation of 60° about the point "B". Assume "A" maps to "C", and "P" maps to "P"′. Then formula_0, and formula_1. Hence triangle "PBP"′ is equilateral and formula_2. Then formula_3. Thus, triangle "PCP"′ has sides equal to "PA", "PB", and "PC" and the proof by construction is complete (see drawing).
Further investigations reveal that if "P" is not in the interior of the triangle, but rather on the circumcircle, then "PA", "PB", "PC" form a degenerate triangle, with the largest being equal to the sum of the others; this observation is also known as Van Schooten's theorem.
More generally, the point "P" and the lengths "PA", "PB", and "PC" to the vertices of the equilateral triangle determine two equilateral triangles (a larger and a smaller one) with sides formula_4 and formula_5:
formula_6.
The symbol △ denotes the area of the triangle whose sides have lengths "PA", "PB", "PC".
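The statement is also easy to check numerically. The sketch below samples random points "P" around a unit equilateral triangle and verifies the (possibly degenerate) triangle inequality for "PA", "PB", "PC"; it is a demonstration, not a proof.

```python
import math
import random

def pompeiu_check(trials=10_000):
    """Check that PA, PB, PC satisfy the triangle inequality for random P."""
    A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3.0) / 2.0)
    for _ in range(trials):
        P = (random.uniform(-2.0, 3.0), random.uniform(-2.0, 3.0))
        d = sorted(math.dist(P, V) for V in (A, B, C))
        assert d[2] <= d[0] + d[1] + 1e-12   # largest <= sum of the other two
    return True
```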
Pompeiu published the theorem in 1936; however, August Ferdinand Möbius had published a more general theorem about four points in the Euclidean plane already in 1852. In this paper Möbius also derived the statement of Pompeiu's theorem explicitly as a special case of his more general theorem. For this reason the theorem is also known as the "Möbius–Pompeiu theorem".
{
"math_id": 0,
"text": "\\scriptstyle PB\\ =\\ P'B"
},
{
"math_id": 1,
"text": "\\scriptstyle\\angle PBP'\\ =\\ 60^{\\circ}"
},
{
"math_id": 2,
"text": "\\scriptstyle PP'\\ =\\ PB"
},
{
"math_id": 3,
"text": "\\scriptstyle PA\\ =\\ P'C"
},
{
"math_id": 4,
"text": "a_1"
},
{
"math_id": 5,
"text": "a_2"
},
{
"math_id": 6,
"text": "\\begin{align}\n a_{1,2}^2 &= \\frac{1}{2}\\left(PA^2 + PB^2 + PC^2 \\pm 4\\sqrt{3}\\triangle_{(PA,PB,PC)}\\right)\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=8635114 |
8635379 | Reaction–diffusion system | Type of mathematical model
Reaction–diffusion systems are mathematical models that correspond to several physical phenomena. The most common is the change in space and time of the concentration of one or more chemical substances: local chemical reactions in which the substances are transformed into each other, and diffusion which causes the substances to spread out over a surface in space.
Reaction–diffusion systems are naturally applied in chemistry. However, the system can also describe dynamical processes of non-chemical nature. Examples are found in biology, geology and physics (neutron diffusion theory) and ecology. Mathematically, reaction–diffusion systems take the form of semi-linear parabolic partial differential equations. They can be represented in the general form
formula_0
where q(x, "t") represents the unknown vector function, is a diagonal matrix of diffusion coefficients, and R accounts for all local reactions. The solutions of reaction–diffusion equations display a wide range of behaviours, including the formation of travelling waves and wave-like phenomena as well as other self-organized patterns like stripes, hexagons or more intricate structure like dissipative solitons. Such patterns have been dubbed "Turing patterns". Each function, for which a reaction diffusion differential equation holds, represents in fact a "concentration variable".
One-component reaction–diffusion equations.
The simplest reaction–diffusion equation, in one spatial dimension in plane geometry,
formula_1
is also referred to as the Kolmogorov–Petrovsky–Piskunov equation. If the reaction term vanishes, then the equation represents a pure diffusion process; the corresponding equation is Fick's second law. The choice "R"("u") = "u"(1 − "u") yields Fisher's equation, which was originally used to describe the spreading of biological populations; the Newell–Whitehead-Segel equation with "R"("u") = "u"(1 − "u"2), which describes Rayleigh–Bénard convection; the more general Zeldovich–Frank-Kamenetskii equation with "R"("u") = "u"(1 − "u")e−"β"(1−"u") and 0 < "β" < ∞ (Zeldovich number), which arises in combustion theory; and its particular degenerate case with "R"("u") = "u"2 − "u"3, which is sometimes referred to as the Zeldovich equation as well.
The dynamics of one-component systems are subject to certain restrictions, as the evolution equation can also be written in the variational form
formula_2
and therefore describes a permanent decrease of the "free energy" formula_3 given by the functional
formula_4
with a potential "V"("u") such that "R"("u")
In systems with more than one stationary homogeneous solution, a typical solution is given by travelling fronts connecting the homogeneous states. These solutions move with constant speed without changing their shape and are of the form "u"("x", "t") = "û"("ξ") with "ξ" = "x" − "ct", where "c" is the speed of the travelling wave. Note that while travelling waves are generically stable structures, all non-monotonic stationary solutions (e.g. localized domains composed of a front–antifront pair) are unstable. For "c" = 0, there is a simple proof for this statement: if "u"0("x") is a stationary solution and "u" = "u"0("x") + "ũ"("x", "t") is an infinitesimally perturbed solution, linear stability analysis yields the equation
formula_5
With the ansatz "ũ" = "ψ"("x")exp(−"λt") we arrive at the eigenvalue problem
formula_6
of Schrödinger type, where negative eigenvalues result in the instability of the solution. Due to translational invariance, "ψ" = ∂"x" "u"0("x") is a neutral eigenfunction with the eigenvalue "λ" = 0, and all other eigenfunctions can be sorted according to an increasing number of nodes, with the magnitude of the corresponding real eigenvalue increasing monotonically with the number of zeros. The eigenfunction "ψ" = ∂"x" "u"0("x") should have at least one zero, and for a non-monotonic stationary solution the corresponding eigenvalue "λ" = 0 cannot be the lowest one, thereby implying instability.
To determine the velocity c of a moving front, one may go to a moving coordinate system and look at stationary solutions:
formula_7
This equation has a nice mechanical analogue: the motion of a mass D with position "û" in the course of the "time" ξ under the force R with damping coefficient c. This analogy allows a rather illustrative approach to the construction of different types of solutions and the determination of c.
When going from one to more space dimensions, a number of statements from one-dimensional systems can still be applied. Planar or curved wave fronts are typical structures, and a new effect arises as the local velocity of a curved front becomes dependent on the local radius of curvature (this can be seen by going to polar coordinates). This phenomenon leads to the so-called curvature-driven instability.
Two-component reaction–diffusion equations.
Two-component systems allow for a much larger range of possible phenomena than their one-component counterparts. An important idea that was first proposed by Alan Turing is that a state that is stable in the local system can become unstable in the presence of diffusion.
A linear stability analysis however shows that when linearizing the general two-component system
formula_8
a plane wave perturbation
formula_9
of the stationary homogeneous solution will satisfy
formula_10
Turing's idea can only be realized in four equivalence classes of systems characterized by the signs of the Jacobian R′ of the reaction function. In particular, if a finite wave vector k is supposed to be the most unstable one, the Jacobian must have the signs
formula_11
This class of systems is named "activator-inhibitor system" after its first representative: close to the ground state, one component stimulates the production of both components while the other one inhibits their growth. Its most prominent representative is the FitzHugh–Nagumo equation
formula_12
with "f" ("u")
"λu" − "u"3 − "κ" which describes how an action potential travels through a nerve. Here, "du", "dv", "τ", "σ" and "λ" are positive constants.
When an activator-inhibitor system undergoes a change of parameters, one may pass from conditions under which a homogeneous ground state is stable to conditions under which it is linearly unstable. The corresponding bifurcation may be either a Hopf bifurcation to a globally oscillating homogeneous state with a dominant wave number "k" = 0, or a "Turing bifurcation" to a globally patterned state with a dominant finite wave number. The latter in two spatial dimensions typically leads to stripe or hexagonal patterns.
For the FitzHugh–Nagumo example, the neutral stability curves marking the boundary of the linearly stable region for the Turing and Hopf bifurcations are given by
formula_13
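The linear stability analysis above translates directly into a short numerical test. The sketch below checks, for an illustrative reaction Jacobian and diffusion coefficients (all values are assumptions chosen for demonstration), whether a state that is stable without diffusion acquires a growing mode at some finite wavenumber, which is the signature of a Turing bifurcation.

```python
import numpy as np

def growth_rate(J, Du, Dv, k):
    """Largest real part of the eigenvalues of R' - k^2 diag(Du, Dv)."""
    M = np.asarray(J, dtype=float) - k**2 * np.diag([Du, Dv])
    return np.linalg.eigvals(M).real.max()

def turing_unstable(J, Du, Dv, ks=np.linspace(0.01, 10.0, 1000)):
    stable_without_diffusion = growth_rate(J, Du, Dv, 0.0) < 0.0
    unstable_at_finite_k = any(growth_rate(J, Du, Dv, k) > 0.0 for k in ks)
    return stable_without_diffusion and unstable_at_finite_k

# activator-inhibitor sign pattern with a fast-diffusing inhibitor
print(turing_unstable([[1.0, -2.0], [3.0, -4.0]], Du=0.05, Dv=1.0))  # True
```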
If the bifurcation is subcritical, often localized structures (dissipative solitons) can be observed in the hysteretic region where the pattern coexists with the ground state. Other frequently encountered structures comprise pulse trains (also known as periodic travelling waves), spiral waves and target patterns. These three solution types are also generic features of two- (or more-) component reaction–diffusion equations in which the local dynamics have a stable limit cycle.
Three- and more-component reaction–diffusion equations.
For a variety of systems, reaction–diffusion equations with more than two components have been proposed, e.g. for the Belousov–Zhabotinsky reaction, blood clotting, fission waves, and planar gas discharge systems.
It is known that systems with more components allow for a variety of phenomena not possible in systems with one or two components (e.g. stable running pulses in more than one spatial dimension without global feedback). An introduction and systematic overview of the possible phenomena, in dependence on the properties of the underlying system, is given in the review literature.
Applications and universality.
In recent times, reaction–diffusion systems have attracted much interest as a prototype model for pattern formation. The above-mentioned patterns (fronts, spirals, targets, hexagons, stripes and dissipative solitons) can be found in various types of reaction–diffusion systems in spite of large discrepancies e.g. in the local reaction terms. It has also been argued that reaction–diffusion processes are an essential basis for processes connected to morphogenesis in biology and may even be related to animal coats and skin pigmentation. Other applications of reaction–diffusion equations include ecological invasions, spread of epidemics, tumour growth, dynamics of fission waves, wound healing and visual hallucinations. Another reason for the interest in reaction–diffusion systems is that although they are nonlinear partial differential equations, there are often possibilities for an analytical treatment.
Experiments.
Well-controllable experiments in chemical reaction–diffusion systems have up to now been realized in three ways. First, gel reactors or filled capillary tubes may be used. Second, temperature pulses on catalytic surfaces have been investigated. Third, the propagation of running nerve pulses is modelled using reaction–diffusion systems.
Aside from these generic examples, it has turned out that under appropriate circumstances electric transport systems like plasmas or semiconductors can be described in a reaction–diffusion approach. For these systems various experiments on pattern formation have been carried out.
Numerical treatments.
A reaction–diffusion system can be solved by using methods of numerical mathematics. Several numerical treatments exist in the research literature, and numerical solution methods have also been proposed for complex geometries.
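As a minimal example of such a treatment, the sketch below integrates the one-component Fisher equation with "R"("u") = "u"(1 − "u") using an explicit finite-difference scheme. Grid sizes and parameters are illustrative assumptions; a production solver would use more careful boundary handling and time stepping.

```python
import numpy as np

def fisher_kpp(D=1.0, L=100.0, nx=500, dt=0.01, steps=3_000):
    """Explicit FTCS integration of u_t = D u_xx + u (1 - u)."""
    dx = L / (nx - 1)
    assert D * dt / dx**2 <= 0.5, "explicit scheme would be unstable"
    u = np.zeros(nx)
    u[: nx // 10] = 1.0                     # invaded region on the left
    for _ in range(steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u += dt * (D * lap + u * (1.0 - u))
        u[0], u[-1] = u[1], u[-2]           # crude no-flux boundaries
    return u   # develops a travelling front with speed close to 2*sqrt(D)
```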
At the highest degree of detail, reaction–diffusion systems are described with particle-based simulation tools like SRSim or ReaDDy, which employ, for example, reversible interacting-particle reaction dynamics.
| [
{
"math_id": 0,
"text": "\\partial_t \\boldsymbol{q} = \\underline{\\underline{\\boldsymbol{D}}} \\,\\nabla^2 \\boldsymbol{q} + \\boldsymbol{R}(\\boldsymbol{q}), "
},
{
"math_id": 1,
"text": "\\partial_t u = D \\partial^2_x u + R(u),"
},
{
"math_id": 2,
"text": "\\partial_t u=-\\frac{\\delta\\mathfrak L}{\\delta u}"
},
{
"math_id": 3,
"text": "\\mathfrak L"
},
{
"math_id": 4,
"text": " \\mathfrak L=\\int_{-\\infty}^\\infty \\left[\\tfrac{D}{2} \\left (\\partial_xu \\right )^2-V(u)\\right] \\, \\text{d}x"
},
{
"math_id": 5,
"text": " \\partial_t \\tilde{u}=D\\partial_x^2 \\tilde{u}-U(x)\\tilde{u},\\qquad U(x) = -R^{\\prime}(u)\\Big|_{u=u_0(x)}."
},
{
"math_id": 6,
"text": " \\hat H\\psi=\\lambda\\psi, \\qquad \\hat H=-D\\partial_x^2+U(x),"
},
{
"math_id": 7,
"text": "D \\partial^2_{\\xi}\\hat{u}(\\xi)+ c\\partial_{\\xi} \\hat{u}(\\xi)+R(\\hat{u}(\\xi))=0."
},
{
"math_id": 8,
"text": " \\begin{pmatrix} \\partial_t u \\\\ \\partial_t v \\end{pmatrix} = \\begin{pmatrix} D_u &0 \\\\0&D_v \\end{pmatrix} \\begin{pmatrix} \\partial_{xx} u\\\\ \\partial_{xx} v \\end{pmatrix} + \\begin{pmatrix} F(u,v)\\\\G(u,v)\\end{pmatrix}"
},
{
"math_id": 9,
"text": " \\tilde{\\boldsymbol{q}}_{\\boldsymbol{k}}(\\boldsymbol{x},t) = \\begin{pmatrix} \\tilde{u}(t)\\\\\\tilde{v}(t) \\end{pmatrix} e^{i \\boldsymbol{k} \\cdot \\boldsymbol{x}} "
},
{
"math_id": 10,
"text": "\\begin{pmatrix} \\partial_t \\tilde{u}_{\\boldsymbol{k}}(t)\\\\ \\partial_t \\tilde{v}_{\\boldsymbol{k}}(t) \\end{pmatrix} = -k^2\\begin{pmatrix} D_u \\tilde{u}_{\\boldsymbol{k}}(t)\\\\ D_v\\tilde{v}_{\\boldsymbol{k}}(t) \\end{pmatrix} + \\boldsymbol{R}^{\\prime} \\begin{pmatrix}\\tilde{u}_{\\boldsymbol{k}}(t) \\\\ \\tilde{v}_{\\boldsymbol{k}}(t) \\end{pmatrix}."
},
{
"math_id": 11,
"text": " \\begin{pmatrix} +&-\\\\+&-\\end{pmatrix}, \\quad \\begin{pmatrix} +&+\\\\-&-\\end{pmatrix}, \\quad \\begin{pmatrix} -&+\\\\-&+\\end{pmatrix}, \\quad\n\\begin{pmatrix} -&-\\\\+&+\\end{pmatrix}. "
},
{
"math_id": 12,
"text": "\\begin{align}\n\\partial_t u &= d_u^2 \\,\\nabla^2 u + f(u) - \\sigma v, \\\\\n\\tau \\partial_t v &= d_v^2 \\,\\nabla^2 v + u - v\n\\end{align}"
},
{
"math_id": 13,
"text": "\\begin{align}\nq_{\\text{n}}^H(k): &{}\\quad \\frac{1}{\\tau} + \\left (d_u^2 + \\frac{1}{\\tau} d_v^2 \\right )k^2 & =f^{\\prime}(u_{h}),\\\\[6pt]\nq_{\\text{n}}^T(k): &{}\\quad \\frac{\\kappa}{1 + d_v^2 k^2}+ d_u^2 k^2 & = f^{\\prime}(u_{h}).\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=8635379 |
8635441 | Ackermann set theory | Axiomatic set theory proposed by Wilhelm Ackermann
In mathematics and logic, Ackermann set theory (AST, also known as formula_0) is an axiomatic set theory proposed by Wilhelm Ackermann in 1956.
AST differs from Zermelo–Fraenkel set theory (ZF) in that it allows proper classes, that is, objects that are not sets, including a class of all sets.
It replaces several of the standard ZF axioms for constructing new sets with a principle known as Ackermann's schema. Intuitively, the schema allows a new set to be constructed if it can be defined by a formula which does not refer to the class of all sets.
In its use of classes, AST differs from other alternative set theories such as Morse–Kelley set theory and Von Neumann–Bernays–Gödel set theory in that a class may be an element of another class.
William N. Reinhardt established in 1970 that AST is effectively equivalent in strength to ZF, putting the two theories on an equal footing. In particular, AST is consistent if and only if ZF is consistent.
Preliminaries.
AST is formulated in first-order logic. The language formula_1 of AST contains one binary relation formula_2 denoting set membership and one constant formula_3 denoting the class of all sets. Ackermann used a predicate formula_4 instead of formula_3; this is equivalent as each of formula_4 and formula_3 can be defined in terms of the other.
We will refer to elements of formula_3 as "sets", and general objects as classes. A class that is not a set is called a proper class.
Axioms.
The following formulation is due to Reinhardt.
The five axioms include two axiom schemas.
Ackermann's original formulation included only the first four of these, omitting the axiom of regularity.
1. Axiom of extensionality.
If two classes have the same elements, then they are equal.
formula_5
This axiom is identical to the axiom of extensionality found in many other set theories, including ZF.
2. Heredity.
Any element or a subset of a set is a set.
formula_6
3. Comprehension schema.
For any property, we can form the class of sets satisfying that property. Formally, for any formula formula_7 where formula_8 is not free:
formula_9
That is, the only restriction is that comprehension is restricted to objects in formula_3. But the resulting object is not necessarily a set.
4. Ackermann's schema.
For any formula formula_7 with free variables formula_10 and no occurrences of formula_3:
formula_11
Ackermann's schema is a form of set comprehension that is unique to AST. It allows constructing a new set (not just a class) as long as we can define it by a property that "does not refer" to the symbol formula_3. This is the principle that replaces ZF axioms such as pairing, union, and power set.
5. Regularity.
Any non-empty set contains an element disjoint from itself:
formula_12
Here, formula_13 is shorthand for formula_14. This axiom is identical to the axiom of regularity in ZF.
This axiom is conservative in the sense that without it, we can simply use comprehension (axiom schema 3) to restrict our attention to the subclass of sets that are regular.
Alternative formulations.
Ackermann's original axioms did not include regularity, and used a predicate symbol formula_4 instead of the constant symbol formula_3. We follow Lévy and Reinhardt in replacing instances of formula_15 with formula_16. This is equivalent because formula_4 can be given a definition as formula_16, and conversely, the set formula_3 can be obtained in Ackermann's original formulation by applying comprehension to the predicate formula_17.
In "axiomatic set theory," Ralf Schindler replaces Ackermann's schema (axiom schema 4) with the following reflection principle:
for any formula formula_7 with free variables formula_18,
formula_19
Here, formula_20 denotes the "relativization" of formula_7 to formula_3, which replaces all quantifiers in formula_7 of the form formula_21 and formula_22 by formula_23 and formula_24, respectively.
Relation to Zermelo–Fraenkel set theory.
Let formula_25 be the language of formulas that do not mention formula_3.
In 1959, Azriel Lévy proved that if formula_7 is a formula of formula_25 and AST proves formula_20, then ZF proves formula_7.
In 1970, William N. Reinhardt proved that if formula_7 is a formula of formula_25 and ZF proves formula_7, then AST proves formula_20.
Therefore, AST and ZF are mutually interpretable in conservative extensions of each other. Thus they are equiconsistent.
A remarkable feature of AST is that, unlike NBG and its variants, a proper class can be an element of another proper class.
Extensions.
An extension of AST for category theory called ARC was developed by F.A. Muller. Muller stated that ARC "founds Cantorian set-theory as well as category-theory and therefore can pass as a founding theory of the whole of mathematics".
| [
{
"math_id": 0,
"text": "A^*/V"
},
{
"math_id": 1,
"text": "L_{\\{\\in,V\\}}"
},
{
"math_id": 2,
"text": "\\in"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "M"
},
{
"math_id": 5,
"text": "\\forall x \\; (x \\in A \\leftrightarrow x \\in B) \\to A = B."
},
{
"math_id": 6,
"text": "(x \\in y \\lor x \\subseteq y) \\land y \\in V \\to x \\in V."
},
{
"math_id": 7,
"text": "\\phi"
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "\\exists X \\; \\forall x \\; (x \\in X \\leftrightarrow x \\in V \\land \\phi)."
},
{
"math_id": 10,
"text": "a_1, \\ldots, a_n, x"
},
{
"math_id": 11,
"text": "a_1, \\ldots, a_n \\in V \\land \\forall x \\; (\\phi \\to x \\in V) \\to \\exists X {\\in} V \\; \\forall x \\; (x \\in X \\leftrightarrow \\phi)."
},
{
"math_id": 12,
"text": "\\forall x \\in V \\; (x = \\varnothing \\lor \\exists y (y \\in x \\land y \\cap x = \\varnothing))."
},
{
"math_id": 13,
"text": "y \\cap x = \\varnothing"
},
{
"math_id": 14,
"text": "\\not \\exists z \\; (z \\in x \\land z \\in y)"
},
{
"math_id": 15,
"text": "Mx"
},
{
"math_id": 16,
"text": "x \\in V"
},
{
"math_id": 17,
"text": "\\phi = \\text{True}"
},
{
"math_id": 18,
"text": "a_1, \\ldots, a_n"
},
{
"math_id": 19,
"text": "a_1, \\ldots, a_n {\\in} V \\to (\\phi \\leftrightarrow \\phi^V)."
},
{
"math_id": 20,
"text": "\\phi^V"
},
{
"math_id": 21,
"text": "\\forall x"
},
{
"math_id": 22,
"text": "\\exists x"
},
{
"math_id": 23,
"text": "\\forall x {\\in} V"
},
{
"math_id": 24,
"text": "\\exists x {\\in} V"
},
{
"math_id": 25,
"text": "L_{\\{\\in\\}}"
}
] | https://en.wikipedia.org/wiki?curid=8635441 |
863579 | Call-by-push-value | Intermediate language
In programming language theory, call-by-push-value (CBPV) is an intermediate language that embeds the call-by-value (CBV) and call-by-name (CBN) evaluation strategies. CBPV is structured as a polarized λ-calculus with two main types, "values" (+) and "computations" (-). Restrictions on interactions between the two types enforce a controlled order of evaluation, similar to monads or CPS. The calculus can embed computational effects, such as nontermination, mutable state, or nondeterminism. There are natural semantics-preserving translations from CBV and CBN into CBPV. This means that giving a CBPV semantics and proving its properties implicitly establishes CBV and CBN semantics and properties as well. Paul Blain Levy formulated and developed CBPV in several papers and his doctoral thesis.
Definition.
The CBPV paradigm is based on the slogan "a value is, a computation does". One complication in the presentation is distinguishing type variables ranging over value types from those ranging over computation types. This article follows Levy in using underlines to denote computations, so formula_0 is an (arbitrary) value type but formula_1 is a computation type. Some authors use other conventions, such as distinct sets of letters.
The exact set of constructs varies by author and desired use for the calculus, but the following constructs are typical:
A program is a closed computation of type formula_11, where formula_10 is a ground ADT type.
Complex values.
Expressions such as codice_17 make sense denotationally. But, following the rules above, codice_18 can only be encoded using pattern-matching, which would make it a computation, and therefore the overall expression must also be a computation, giving codice_19. Similarly, there is no way to obtain codice_20 from codice_21 without constructing a computation. When modelling CBPV equationally or in category theory, such constructs are indispensable. Levy therefore defines an extended IR, "CBPV with complex values". This IR extends let-binding to bind values within a value expression, and also to pattern-match a value with each clause returning a value expression. Besides modelling, such constructs also make writing programs in CBPV more natural.
Complex values complicate the operational semantics, in particular requiring an arbitrary decision of when to evaluate the complex value. Such a decision has no semantic significance because evaluating complex values has no side effects. Also, it is possible to syntactically convert any computation or closed expression to one of the same type and denotation without complex values. Therefore, many presentations omit complex values.
Translation.
The CBV translation produces CBPV values for each expression. A CBV function codice_0 : formula_12 is translated to codice_23 : formula_13. A CBV application codice_24 : formula_10 is translated to a computation codice_25 of type formula_11, making the order of evaluation explicit. A pattern match codice_16 is translated as codice_27. Values are wrapped with codice_28 when necessary, but otherwise remain unmodified. In some translations, sequencing may be required, such as translating codice_29 to codice_30.
The CBN translation produces CBPV computations for each expression. A CBN function codice_0 : formula_14 translates unaltered, codice_32 : formula_15. A CBN application codice_24 : formula_16 is translated to a computation codice_34 of type formula_17. A pattern match codice_16 is translated similarly to the CBV case as codice_36. ADT values are wrapped with codice_28, but codice_38 and codice_39 are also necessary on internal structure. Levy's translation assumes that codice_40, which does indeed hold.
It is also possible to extend CBPV to model call-by-need, by introducing a codice_41 construct that allows visible sharing. This construct has semantics similar to codice_42, except that with the codice_43 construct, the thunk of codice_6 is evaluated at most once.
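The operational difference between these translations can be mimicked in an ordinary language. In the Python sketch below, zero-argument functions stand in for thunks, calling one stands in for forcing, and printing stands in for an observable effect; this is an informal illustration, not Levy's formal semantics, and all names are invented.

```python
def effectful(n):
    print(f"computing {n}")   # a visible effect marks when evaluation happens
    return n

def cbn_twice(x_thunk):
    # CBN translation: the argument stays a thunk, forced at every use,
    # so the effect fires once per use of the variable.
    return x_thunk() + x_thunk()

def cbv_twice(x_thunk):
    # CBV translation: the argument computation is sequenced first and its
    # result is bound, so the effect fires exactly once.
    x = x_thunk()
    return x + x

def need_thunk(computation):
    # Call-by-need: a memoizing thunk, evaluated at most once however often
    # it is forced; this is the visible sharing described above.
    cell = []
    def force():
        if not cell:
            cell.append(computation())
        return cell[0]
    return force

cbn_twice(lambda: effectful(3))              # prints "computing 3" twice
cbv_twice(lambda: effectful(3))              # prints "computing 3" once
cbn_twice(need_thunk(lambda: effectful(3)))  # memoized: prints once
```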
Modifications.
Some authors have noted that CBPV can be simplified, by removing either the U type constructor (thunks) or the F type constructor (computations returning values). Egger and Mogelberg justify omitting U on the grounds of streamlined syntax and avoiding the clutter of inferable conversions from computations to values. This choice makes computation types a subset of value types, and it is then natural to expand function types to a full function space between values. They term their calculus the "Enriched Effects Calculus". This modified calculus is equivalent to a superset of CBPV via a bidirectional semantics-preserving translation. Ehrhard in contrast omits the F type constructor, making values a subset of computations. Ehrhard renames computations to "general types" to better reflect their semantics. This modified calculus, the "half-polarized lambda calculus", has close connections to linear logic. It can be translated bidirectionally to a subset of a fully-polarized variant of CBPV.
| [
{
"math_id": 0,
"text": "B"
},
{
"math_id": 1,
"text": "\\underline{B}"
},
{
"math_id": 2,
"text": "A \\to \\underline{B}"
},
{
"math_id": 3,
"text": "x : A"
},
{
"math_id": 4,
"text": "M : \\underline{B}"
},
{
"math_id": 5,
"text": "V : A"
},
{
"math_id": 6,
"text": "F : A \\to \\underline{B}"
},
{
"math_id": 7,
"text": "A_1"
},
{
"math_id": 8,
"text": "U \\underline{A}"
},
{
"math_id": 9,
"text": "\\underline{A}"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "F A"
},
{
"math_id": 12,
"text": "A \\to_v B"
},
{
"math_id": 13,
"text": "U (A^v \\to F B^v)"
},
{
"math_id": 14,
"text": "A \\to B"
},
{
"math_id": 15,
"text": "(U A^n) \\to X^n"
},
{
"math_id": 16,
"text": "C"
},
{
"math_id": 17,
"text": "C^n"
}
] | https://en.wikipedia.org/wiki?curid=863579 |
863720 | Reductive Lie algebra | In mathematics, a Lie algebra is reductive if its adjoint representation is completely reducible, hence the name. More concretely, a Lie algebra is reductive if it is a direct sum of a semisimple Lie algebra and an abelian Lie algebra: formula_0 there are alternative characterizations, given below.
Examples.
The most basic example is the Lie algebra formula_1 of formula_2 matrices with the commutator as Lie bracket, or more abstractly as the endomorphism algebra of an "n"-dimensional vector space, formula_3 This is the Lie algebra of the general linear group GL("n"), and is reductive as it decomposes as formula_4 corresponding to traceless matrices and scalar matrices.
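This decomposition can be made concrete: every square matrix splits uniquely into a traceless part and a scalar matrix. The short numerical sketch below demonstrates the split (an illustration only; all names are invented).

```python
import numpy as np

def reductive_split(X):
    """Split X in gl_n into a traceless part (sl_n) plus a scalar matrix (center)."""
    n = X.shape[0]
    scalar = (np.trace(X) / n) * np.eye(n)   # abelian part: multiples of identity
    return X - scalar, scalar                # traceless part has trace zero

X = np.array([[2.0, 1.0],
              [0.0, 4.0]])
S, Z = reductive_split(X)
assert np.isclose(np.trace(S), 0.0) and np.allclose(S + Z, X)
```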
Any semisimple Lie algebra or abelian Lie algebra is "a fortiori" reductive.
Over the real numbers, compact Lie algebras are reductive.
Definitions.
A Lie algebra formula_5 over a field of characteristic 0 is called reductive if any of the following equivalent conditions are satisfied:
The radical equals the center: formula_6
formula_5 is the sum of a semisimple ideal formula_7 and its center formula_8 formula_9
formula_5 is a direct sum of a semisimple Lie algebra formula_10 and an abelian Lie algebra formula_11: formula_12
formula_5 is a sum of prime ideals: formula_13
Some of these equivalences are easily seen. For example, the center and radical of formula_14 are formula_15 while if the radical equals the center, the Levi decomposition yields a decomposition formula_9 Further, simple Lie algebras and the trivial 1-dimensional Lie algebra formula_16 are prime ideals.
Properties.
Reductive Lie algebras are a generalization of semisimple Lie algebras, and share many properties with them: many properties of semisimple Lie algebras depend only on the fact that they are reductive. Notably, the unitarian trick of Hermann Weyl works for reductive Lie algebras.
The associated reductive Lie groups are of significant interest: the Langlands program is based on the premise that what is done for one reductive Lie group should be done for all.
The intersection of reductive Lie algebras and solvable Lie algebras is exactly abelian Lie algebras (contrast with the intersection of semisimple and solvable Lie algebras being trivial).
| [
{
"math_id": 0,
"text": "\\mathfrak{g} = \\mathfrak{s} \\oplus \\mathfrak{a};"
},
{
"math_id": 1,
"text": "\\mathfrak{gl}_n"
},
{
"math_id": 2,
"text": "n \\times n"
},
{
"math_id": 3,
"text": "\\mathfrak{gl}(V)."
},
{
"math_id": 4,
"text": "\\mathfrak{gl}_n = \\mathfrak{sl}_n \\oplus \\mathfrak{k},"
},
{
"math_id": 5,
"text": "\\mathfrak{g}"
},
{
"math_id": 6,
"text": "\\mathfrak{r}(\\mathfrak{g}) = \\mathfrak{z}(\\mathfrak{g})."
},
{
"math_id": 7,
"text": "\\mathfrak{s}_0"
},
{
"math_id": 8,
"text": "\\mathfrak{z}(\\mathfrak{g}): "
},
{
"math_id": 9,
"text": "\\mathfrak{g} = \\mathfrak{s}_0 \\oplus \\mathfrak{z}(\\mathfrak{g})."
},
{
"math_id": 10,
"text": "\\mathfrak{s}"
},
{
"math_id": 11,
"text": "\\mathfrak{a}"
},
{
"math_id": 12,
"text": "\\mathfrak{g} = \\mathfrak{s} \\oplus \\mathfrak{a}."
},
{
"math_id": 13,
"text": "\\mathfrak{g} = \\textstyle{\\sum \\mathfrak{g}_i}."
},
{
"math_id": 14,
"text": "\\mathfrak{s} \\oplus \\mathfrak{a}"
},
{
"math_id": 15,
"text": "\\mathfrak{a},"
},
{
"math_id": 16,
"text": "\\mathfrak{k}"
}
] | https://en.wikipedia.org/wiki?curid=863720 |
863741 | Light field | Vector function in optics
A light field, or lightfield, is a vector function that describes the amount of light flowing in every direction through every point in a space. The space of all possible "light rays" is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The term "light field" was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space.
The term "radiance field" may also be used to refer to similar, or identical concepts. The term is used in modern research such as neural radiance fields
The plenoptic function.
In geometric optics—i.e., for incoherent light and for objects larger than the wavelength of light—the fundamental carrier of light is a ray. The measure for the amount of light traveling along a ray is radiance, denoted by "L" and measured in W·sr−1·m−2; i.e., watts (W) per steradian (sr) per square meter (m2). The steradian is a measure of solid angle, and meters squared are used as a measure of cross-sectional area, as shown at right.
The radiance along all such rays in a region of three-dimensional space illuminated by an unchanging arrangement of lights is called the plenoptic function. The plenoptic illumination function is an idealized function used in computer vision and computer graphics to express the image of a scene from any possible viewing position at any viewing angle at any point in time. It is not used in practice computationally, but is conceptually useful in understanding other concepts in vision and graphics. Since rays in space can be parameterized by three coordinates, "x", "y", and "z" and two angles "θ" and "ϕ", as shown at left, it is a five-dimensional function, that is, a function over a five-dimensional manifold equivalent to the product of 3D Euclidean space and the 2-sphere.
The light field at each point in space can be treated as an infinite collection of vectors, one per direction impinging on the point, with lengths proportional to their radiances.
Integrating these vectors over any collection of lights, or over the entire sphere of directions, produces a single scalar value—the total irradiance at that point, and a resultant direction. The figure shows this calculation for the case of two light sources. In computer graphics, this vector-valued function of 3D space is called the vector irradiance field. The vector direction at each point in the field can be interpreted as the orientation of a flat surface placed at that point to most brightly illuminate it.
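As an illustration of this summation, a minimal numpy sketch with two hypothetical point-like sources (the directions and radiance values are invented for the example, and cosine-weighting over solid angle is omitted):

```python
import numpy as np

# Hypothetical unit directions toward two sources, and the radiance
# arriving along each direction (values are illustrative only).
directions = np.array([[0.0, 0.0, 1.0],    # overhead source
                       [0.6, 0.0, 0.8]])   # oblique source
radiances = np.array([100.0, 40.0])

# Each light contributes a vector with length proportional to its radiance;
# the sum is the vector irradiance at the point.
E = (radiances[:, None] * directions).sum(axis=0)

magnitude = np.linalg.norm(E)   # scalar total at the point
orientation = E / magnitude     # direction a flat surface should face
                                # to be most brightly illuminated
```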
Higher dimensionality.
Time, wavelength, and polarization angle can be treated as additional dimensions, yielding higher-dimensional functions, accordingly.
The 4D light field.
In a plenoptic function, if the region of interest contains a concave object (e.g., a cupped hand), then light leaving one point on the object may travel only a short distance before another point on the object blocks it. No practical device could measure the function in such a region.
However, for locations outside the object's convex hull (e.g., shrink-wrap), the plenoptic function can be measured by capturing multiple images. In this case the function contains redundant information, because the radiance along a ray remains constant throughout its length. The redundant information is exactly one dimension, leaving a four-dimensional function variously termed the photic field, the 4D light field or lumigraph. Formally, the field is defined as radiance along rays in empty space.
The set of rays in a light field can be parameterized in a variety of ways. The most common is the two-plane parameterization. While this parameterization cannot represent all rays, for example rays parallel to the two planes if the planes are parallel to each other, it relates closely to the analytic geometry of perspective imaging. A simple way to think about a two-plane light field is as a collection of perspective images of the "st" plane (and any objects that may lie astride or beyond it), each taken from an observer position on the "uv" plane. A light field parameterized this way is sometimes called a light slab.
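To make the light-slab picture concrete, a short numpy sketch treating a sampled slab as a 4-D array indexed [u, v, s, t] (array shapes and data are placeholders):

```python
import numpy as np

# Placeholder light slab: a (nu x nv) grid of viewpoints on the uv plane,
# each holding an (ns x nt) perspective image of the st plane.
nu, nv, ns, nt = 17, 17, 256, 256
L = np.random.rand(nu, nv, ns, nt)  # stand-in for captured radiance data

# Fixing a viewpoint (u0, v0) extracts one perspective image of the st plane.
u0, v0 = 8, 8
view = L[u0, v0]            # shape (ns, nt)

# Fixing a point (s0, t0) instead gives the radiance of that st sample as
# seen from every viewpoint -- an "angular" slice of the light field.
s0, t0 = 128, 128
angular = L[:, :, s0, t0]   # shape (nu, nv)
```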
Sound analog.
The analog of the 4D light field for sound is the sound field or wave field, as in wave field synthesis, and the corresponding parametrization is the Kirchhoff–Helmholtz integral, which states that, in the absence of obstacles, a sound field over time is given by the pressure on a plane. Thus this is two dimensions of information at any point in time, and over time, a 3D field.
This two-dimensionality, compared with the apparent four-dimensionality of light, is because light travels in rays (0D at a point in time, 1D over time), while by the Huygens–Fresnel principle, a sound wave front can be modeled as spherical waves (2D at a point in time, 3D over time): light moves in a single direction (2D of information), while sound expands in every direction. However, light travelling in non-vacuous media may scatter in a similar fashion, and the irreversibility or information lost in the scattering is discernible in the apparent loss of a system dimension.
Image refocusing.
Because the light field provides spatial and angular information, the position of the focal plane can be altered after exposure, a technique often termed "refocusing". The principle of refocusing is to obtain conventional 2-D photographs from a light field through an integral transform. The transform takes a lightfield as its input and generates a photograph focused on a specific plane.
Assuming formula_0 represents a 4-D light field that records light rays traveling from position formula_1 on the first plane to position formula_2 on the second plane, where formula_3 is the distance between the two planes, a 2-D photograph at any depth formula_4 can be obtained from the following integral transform:
formula_5,
or more concisely,
formula_6,
where formula_7, formula_8, and formula_9 is the photography operator.
In practice, this formula cannot be used directly because a plenoptic camera usually captures discrete samples of the lightfield formula_0; hence resampling (or interpolation) is needed to compute formula_10. Another problem is its high computational complexity. To compute an formula_11 2-D photograph from an formula_12 4-D light field, the complexity of the formula is formula_13.
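A minimal shift-and-add sketch of this resampling, assuming a lightfield array indexed [u, v, s, t] with shifts measured in pixels; the overall 1/α image dilation and the constant prefactor are folded into a fixed output scale, and the sign convention depends on the parameterization:

```python
import numpy as np
from scipy.ndimage import shift  # bilinear resampling of sub-aperture images

def refocus(L, alpha):
    """Approximate the photography operator: shear each sub-aperture image
    by u(1 - 1/alpha) and average over the uv samples. L is [u, v, s, t]."""
    nu, nv, ns, nt = L.shape
    uc, vc = (nu - 1) / 2.0, (nv - 1) / 2.0
    q = 1.0 - 1.0 / alpha
    photo = np.zeros((ns, nt))
    for iu in range(nu):
        for iv in range(nv):
            du, dv = (iu - uc) * q, (iv - vc) * q  # shear in (s, t), in pixels
            photo += shift(L[iu, iv], (-du, -dv), order=1, mode='nearest')
    return photo / (nu * nv)
```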
Fourier slice photography.
One way to reduce the complexity of computation is to adopt the concept of Fourier slice theorem: The photography operator formula_9 can be viewed as a shear followed by projection. The result should be proportional to a dilated 2-D slice of the 4-D Fourier transform of a light field. More precisely, a refocused image can be generated from the 4-D Fourier spectrum of a light field by extracting a 2-D slice, applying an inverse 2-D transform, and scaling. The asymptotic complexity of the algorithm is formula_14.
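A rough numpy sketch of this route, with nearest-neighbour sampling of the centered 4-D spectrum standing in for proper spectral interpolation (sign and scaling conventions again depend on the parameterization):

```python
import numpy as np

def refocus_fourier(L, alpha):
    """Fourier slice refocusing sketch: one 4-D FFT, then per depth a sheared
    2-D slice of the spectrum followed by an inverse 2-D FFT. L is [u,v,s,t]."""
    nu, nv, ns, nt = L.shape
    G = np.fft.fftshift(np.fft.fftn(L))       # centered 4-D spectrum
    ks = np.arange(ns) - ns // 2
    kt = np.arange(nt) - nt // 2
    q = 1.0 - 1.0 / alpha
    # Slice plane: (ku, kv) = q * (ks, kt), rounded to the angular grid;
    # clipping crudely models the band limit of the coarse uv sampling.
    ku = np.clip(np.rint(q * ks).astype(int) + nu // 2, 0, nu - 1)
    kv = np.clip(np.rint(q * kt).astype(int) + nv // 2, 0, nv - 1)
    slice2d = G[ku[:, None], kv[None, :],
                (ks + ns // 2)[:, None], (kt + nt // 2)[None, :]]
    return np.real(np.fft.ifft2(np.fft.ifftshift(slice2d)))
```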
Discrete focal stack transform.
Another way to efficiently compute 2-D photographs is to adopt the discrete focal stack transform (DFST). DFST is designed to generate a collection of refocused 2-D photographs, or a so-called focal stack. This method can be implemented using the fast fractional Fourier transform (FrFT).
The discrete photography operator formula_9 is defined as follows for a lightfield formula_15 sampled in a 4-D grid formula_16 formula_17, formula_18:
formula_19
Because formula_20 is usually not on the 4-D grid, DFST adopts trigonometric interpolation to compute the non-grid values.
The algorithm consists of these steps:
Methods to create light fields.
In computer graphics, light fields are typically produced either by rendering a 3D model or by photographing a real scene. In either case, to produce a light field, views must be obtained for a large collection of viewpoints. Depending on the parameterization, this collection typically spans some portion of a line, circle, plane, sphere, or other shape, although unstructured collections are possible.
Devices for capturing light fields photographically may include a moving handheld camera or a robotically controlled camera, an arc of cameras (as in the bullet time effect used in "The Matrix"), a dense array of cameras, handheld cameras, microscopes, or another optical system.
The number of images in a light field depends on the application. A light field capture of Michelangelo's statue of "Night" contains 24,000 1.3-megapixel images, which is considered large as of 2022. For light field rendering to completely capture an opaque object, images must be taken of at least the front and back. Less obviously, for an object that lies astride the "st" plane, finely spaced images must be taken on the "uv" plane (in the two-plane parameterization shown above).
The number and arrangement of images in a light field, and the resolution of each image, are together called the "sampling" of the 4D light field. Also of interest are the effects of occlusion, lighting and reflection.
Applications.
Illumination engineering.
Gershun's reason for studying the light field was to derive (in closed form) the illumination patterns that would be observed on surfaces due to light sources of various shapes positioned above these surfaces. The branch of optics devoted to illumination engineering is nonimaging optics. It extensively uses the concept of flow lines (Gershun's flux lines) and vector flux (Gershun's light vector). However, the light field (in this case the positions and directions defining the light rays) is commonly described in terms of phase space and Hamiltonian optics.
Light field rendering.
Extracting appropriate 2D slices from the 4D light field of a scene enables novel views of the scene. Depending on the parameterization of the light field and slices, these views might be perspective, orthographic, crossed-slit, general linear cameras, multi-perspective, or another type of projection. Light field rendering is one form of image-based rendering.
Synthetic aperture photography.
Integrating an appropriate 4D subset of the samples in a light field can approximate the view that would be captured by a camera having a finite (i.e., non-pinhole) aperture. Such a view has a finite depth of field. Shearing or warping the light field before performing this integration can focus on different fronto-parallel or oblique planes. Images captured by digital cameras that capture the light field can be refocused.
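A small sketch of the finite-aperture idea: average only the sub-aperture images whose uv positions fall inside a chosen aperture radius (the radius is an arbitrary assumption; a larger one gives a shallower depth of field):

```python
import numpy as np

def synthetic_aperture(L, radius):
    """Average the views inside a circular aperture on the uv plane.
    L is indexed [u, v, s, t]; returns one (ns x nt) photograph."""
    nu, nv = L.shape[:2]
    uc, vc = (nu - 1) / 2.0, (nv - 1) / 2.0
    uu, vv = np.meshgrid(np.arange(nu) - uc, np.arange(nv) - vc, indexing='ij')
    mask = uu**2 + vv**2 <= radius**2   # boolean aperture on the uv grid
    return L[mask].mean(axis=0)         # combine the selected views
```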
3D display.
Presenting a light field using technology that maps each sample to the appropriate ray in physical space produces an autostereoscopic visual effect akin to viewing the original scene. Non-digital technologies for doing this include integral photography, parallax panoramagrams, and holography; digital technologies include placing an array of lenslets over a high-resolution display screen, or projecting the imagery onto an array of lenslets using an array of video projectors. An array of video cameras can capture and display a time-varying light field. This essentially constitutes a 3D television system. Modern approaches to light-field display explore co-designs of optical elements and compressive computation to achieve higher resolutions, increased contrast, wider fields of view, and other benefits.
Brain imaging.
Neural activity can be recorded optically by genetically encoding neurons with reversible fluorescent markers such as GCaMP that indicate the presence of calcium ions in real time. Since light field microscopy captures full volume information in a single frame, it is possible to monitor neural activity in individual neurons randomly distributed in a large volume at video framerate. Quantitative measurement of neural activity can be done despite optical aberrations in brain tissue and without reconstructing a volume image, and be used to monitor activity in thousands of neurons.
Generalized scene reconstruction (GSR).
This is a method of 3D reconstruction from multiple images that creates a scene model comprising a generalized light field and a relightable matter field. The generalized light field represents light flowing in every direction through every point in the field. The relightable matter field represents the light interaction properties and emissivity of matter occupying every point in the field. Scene data structures can be implemented using neural networks and physics-based structures, among others. The light and matter fields are at least partially disentangled.
Holographic stereograms.
Image generation and predistortion of synthetic imagery for holographic stereograms is one of the earliest examples of computed light fields.
Glare reduction.
Glare arises from multiple scattering of light inside the camera body and lens optics, and it reduces image contrast. While glare has been analyzed in 2D image space, it is useful to identify it as a 4D ray-space phenomenon. Statistically analyzing the ray-space inside a camera allows the classification and removal of glare artifacts. In ray-space, glare behaves as high-frequency noise and can be reduced by outlier rejection. Such analysis can be performed by capturing the light field inside the camera, but this results in a loss of spatial resolution. Uniform and non-uniform ray sampling can be used to reduce glare without significantly compromising image resolution.
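A toy sketch of outlier rejection in ray space: for each st pixel, keep only the dimmest fraction of its angular samples before averaging (the fraction is an illustrative parameter, not a value from the literature):

```python
import numpy as np

def reduce_glare(L, keep_fraction=0.8):
    """Per-pixel robust average over angular samples: sort the uv samples of
    each st pixel and drop the brightest outliers attributed to glare."""
    nu, nv, ns, nt = L.shape
    rays = L.reshape(nu * nv, ns, nt)
    rays_sorted = np.sort(rays, axis=0)    # angular samples, dimmest first
    k = max(1, int(keep_fraction * nu * nv))
    return rays_sorted[:k].mean(axis=0)    # glare-suppressed photograph
```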
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L_{F}(s,t,u,v)"
},
{
"math_id": 1,
"text": "(u,v)"
},
{
"math_id": 2,
"text": "(s,t)"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "\\alpha F"
},
{
"math_id": 5,
"text": "\n \\mathcal{P}_{\\alpha}\\left[L_{F}\\right](s, t) =\n {1 \\over \\alpha^2 F^2}\\iint L_F\\left(u\\left(1 - \\frac{1}{\\alpha}\\right) + \\frac{s}{\\alpha}, v\\left(1 - \\frac{1}{\\alpha}\\right) + \\frac{t}{\\alpha}, u, v\\right)~dudv\n"
},
{
"math_id": 6,
"text": "\\mathcal{P}_{\\alpha}\\left[L_{F}\\right](\\boldsymbol{s})=\\frac{1}{\\alpha^{2} F^{2}} \\int L_{F}\\left(\\boldsymbol{u}\\left(1-\\frac{1}{\\alpha}\\right)+\\frac{\\boldsymbol{s}}{\\alpha}, \\boldsymbol{u}\\right) d \\boldsymbol{u}"
},
{
"math_id": 7,
"text": "\\boldsymbol{s}=(s,t)"
},
{
"math_id": 8,
"text": "\\boldsymbol{u}=(u,v)"
},
{
"math_id": 9,
"text": "\\mathcal{P}_{\\alpha}\\left[\\cdot\\right]"
},
{
"math_id": 10,
"text": " L_{F}\\left(\\boldsymbol{u}\\left(1-\\frac{1}{\\alpha}\\right)+\\frac{\\boldsymbol{s}}{\\alpha}, \\boldsymbol{u}\\right)"
},
{
"math_id": 11,
"text": "N\\times N"
},
{
"math_id": 12,
"text": "N\\times N\\times N\\times N"
},
{
"math_id": 13,
"text": "O(N^4)"
},
{
"math_id": 14,
"text": "O(N^2 \\operatorname{log}N)"
},
{
"math_id": 15,
"text": "L_{F}(\\boldsymbol {s},\\boldsymbol {u})"
},
{
"math_id": 16,
"text": "\\boldsymbol {s} = \\Delta s \\tilde{\\boldsymbol {s}},"
},
{
"math_id": 17,
"text": "\\tilde{\\boldsymbol {s}} = -\\boldsymbol {n}_{\\boldsymbol {s}}, ..., \\boldsymbol {n}_{\\boldsymbol {s}}"
},
{
"math_id": 18,
"text": "\\boldsymbol {u} = \\Delta u \\tilde{\\boldsymbol {u}}, \\tilde{\\boldsymbol {u}}=-\\boldsymbol {n}_{\\boldsymbol {u}},...,\\boldsymbol {n}_{\\boldsymbol {u}}"
},
{
"math_id": 19,
"text": "\\mathcal{P}_{q}[L](\\boldsymbol{s})=\n\\sum_{\\tilde{\\boldsymbol{u}}=-\\boldsymbol{n}_{\\boldsymbol{u}}}^{\\boldsymbol{n}_{\\boldsymbol{u}}} L(\\boldsymbol{u} q+\\boldsymbol{s}, \\boldsymbol{u}) \\Delta \\boldsymbol{u}, \n\\quad \\Delta \\boldsymbol{u}=\\Delta u\\Delta v,\n\\quad q=\\left(1-\\frac{1}{\\alpha}\\right)"
},
{
"math_id": 20,
"text": "(\\boldsymbol{u} q+\\boldsymbol{s}, \\boldsymbol{u}) "
},
{
"math_id": 21,
"text": "\\Delta s"
},
{
"math_id": 22,
"text": "\\Delta u"
},
{
"math_id": 23,
"text": "L^d_{F}(\\boldsymbol {s},\\boldsymbol {u})"
},
{
"math_id": 24,
"text": "\\boldsymbol {u}"
},
{
"math_id": 25,
"text": "R1"
},
{
"math_id": 26,
"text": "\\alpha"
},
{
"math_id": 27,
"text": "R2"
},
{
"math_id": 28,
"text": "(2{n}_{\\boldsymbol {s}}+1)"
}
] | https://en.wikipedia.org/wiki?curid=863741 |